Inhibitory action on the production of advanced glycation end products (AGEs) and suppression of free radicals in vitro by a Sri Lankan polyherbal formulation Nawarathne Kalka

Chamira Dilanka Fernando1, Diyathi Tharindhi Karunaratne1, Sachith Dilshan Gunasinghe1, M. C. Dilusha Cooray1, Prabuddhi Kanchana1, Chandani Udawatte1 and Pathirage Kamal Perera2

Published: 8 July 2016

Advanced glycation end products (AGEs) and free radicals are inflammatory mediators implicated in many diseases, such as diabetes, cancer and rheumatoid arthritis. Multi-targeted polyherbal drug systems such as Nawarathne Kalka (NK) may quench the overall effect of these mediators, as they contain combinations of phytochemicals with fewer side effects than modern medicinal drugs. The objectives of this study were to evaluate the phytochemical composition, free radical scavenging activity, cytotoxicity and inhibitory action on the formation of AGEs of an aqueous extract of NK. Total phenolic content (TPC) and total flavonoid content (TFC) were determined using the Folin–Ciocalteu method and the aluminium chloride assay, respectively. Free radical scavenging activity was assessed by the DPPH radical scavenging assay (DRSA), the phosphomolybdenum reduction antioxidant assay (PRAA) and the nitric oxide (NO) scavenging assay. The Brine Shrimp Lethality (BSL) bioassay was performed as a preliminary screen for cytotoxic activity. Inhibitory action on AGE formation was evaluated using fructose-mediated glycation of bovine serum albumin, monitored by fluorescence spectroscopy. The TPC and TFC were 75.1 ± 3.0 mg/g gallic acid equivalents and 68.7 ± 7.8 mg/g epigallocatechin gallate equivalents, respectively. The DRSA yielded an EC50 of 19.15 ± 2.24 μg mL−1 for NK. The DRSA of the NK extract was greater than that of butylated hydroxytoluene (EC50 = 96.50 ± 4.51 μg mL−1) but lower than that of L-ascorbic acid (EC50 = 5.60 ± 0.51 μg mL−1). The total antioxidant capacity of NK, as evidenced by the PRAA, was 106.4 ± 8.2 mg/g L-ascorbic acid equivalents. NK showed an EC50 of 99.3 ± 8.4 μg mL−1 in the NO scavenging assay, compared with the standard ascorbic acid (EC50 = 7.3 ± 0.3 μg mL−1). The extract showed moderate cytotoxic activity in the BSL bioassay. The extract showed effective inhibitory action on the formation of AGEs, with EC50 values of 116 ± 19 μg mL−1, 125 ± 35 μg mL−1 and 84 ± 28 μg mL−1 in data obtained over three consecutive weeks, respectively. In comparison, the reference standard aminoguanidine at a concentration of 500 μg mL−1 demonstrated 65 % inhibition of AGE formation after one week of sample incubation. The results demonstrate the potential of NK as a free radical scavenger, a moderate cytotoxic agent and an inhibitor of the formation of advanced glycation end-products.

Keywords: Advanced glycation end products; Nawarathne Kalka

The Traditional Sri Lankan System of Medicine (TSM) was established more than 3,000 years ago and has been used ever since for the treatment of various ailments [1]. In contrast to modern medicinal systems, polyherbal preparations have gained attention for their multi-targeting ability via pathways that produce fewer side effects [2]. These TSM drug systems consist of polyherbal formulations that can suppress the painful symptoms associated with various ailments such as rheumatoid arthritis, diabetes and cancer [1]. Nawarathne Kalka (NK) is one such polyherbal formulation used in TSM.
This particular preparation contains components originating from 14 different plant species and is mainly prescribed for gastrointestinal tract disorders such as diarrhea, abdominal pain, haematochezia and indigestion, as well as for rheumatoid arthritis (RA) and other inflammatory conditions [3]. It consists of Cedrus deodara (Devadara), Cuminum cyminum (Suduru), Eugenia caryophylla (Karabu), Ferula asafetida (Perunkayam), Glycyrrhiza glabra (Walmi), Myristica fragrans (Sadikka), Nigella sativa (Kaluduru), Picrorhiza kurroa (Katukarosana), Piper longum (Thippili), Trachyspermum roxburghianum (Asamodagum), Vernonia anthelmintica (Sanninayam), Zingiber officinale (Inguru), Terminalia bellirica (Bulu), Terminalia chebula (Aralu), and bees honey. The ingredients and proportions of each component in NK and the parts of the plants used for its preparation are stated in Table 1 [3].

Table 1 Ingredients and proportions (weight basis) of Nawarathne Kalka, with the part of the plant used where specified [3]:
1. Cedrus deodara (vernacular name (VN): Devadara)
2. Cuminum cyminum (VN: Suduru)
3. Eugenia caryophylla (VN: Karabu) – flower bud
4. Ferula asafetida (VN: Perunkayam)
5. Glycyrrhiza glabra (VN: Valmi)
6. Myristica fragrans (VN: Sadikka) – dried kernel of the seed
7. Nigella sativa (VN: Kaluduru)
8. Picrorhiza kurroa (VN: Katukarosana)
9. Piper longum (VN: Thippili)
10. Trachyspermum roxburghianum (VN: Asamodagum)
11. Vernonia anthelmintica (VN: Sanninayam)
12. Zingiber officinale (VN: Inguru)
13. Terminalia bellirica (VN: Bulu) – fruit (outer cover)
14. Terminalia chebula (VN: Aralu)
15. Bees honey

Diseases such as RA and diabetes are inflammation-mediated and hence require anti-inflammatory medicines to suppress the overall effects associated with inflammation. Inflammation elevates pro-inflammatory cytokines such as interleukin-17 (IL-17) and tumor necrosis factor alpha (TNF-α) [4], which subsequently initiate the secretion of further inflammatory mediators such as the cytokines IL-6 and IL-8 [5] and colony stimulating factors such as granulocyte macrophage colony stimulating factor (GM-CSF) [6]. Propagation of inflammation thus activates osteoclasts in RA cartilage and initiates osteoclastogenesis, which is common in the pathophysiology of RA [7, 8]. Diabetes-related complications such as retinopathy [9] and nephropathy [10] are also driven by similar inflammatory pathways. Accumulation of advanced glycation end-products (AGEs) resulting from protein glycation is considered to be an initiator of these complications [10]. Advanced glycation end-products are formed by non-enzymatic reactions between sugars and proteins or nucleic acids [11, 12] and are associated with vascular complications [13]. Oxidative stress is another factor that drives inflammation and can exert cytotoxic effects on tissues in the human body; hence there is a close association between oxidative stress and inflammation. The most common contributors to oxidative stress are hydroxyl radicals (•OH), nitric oxide (NO), superoxide anions (O2•−) and peroxynitrite (ONOO−), collectively known as reactive oxygen species (ROS) [14]. The ability to scavenge ROS is therefore a quality that every antioxidant/anti-inflammatory drug must possess. Simultaneous suppression of ROS formation, AGE formation and cytokine secretion is the task of a multi-targeted drug system rather than a single-targeted one.
Hence, the complex pathways with which these diseases are associated may be ameliorated by using multi-component formulations such as NK. Owing to the lack of evidence on the pharmacologically important actions of the polyherbal formulation NK towards suppression of various ailments, this study focused on investigating NK for its phytochemical composition, antioxidant capacity and inhibitory action on the formation of AGEs. Additionally, the cytotoxic effect of this herbal medicament was investigated.

2,2-Diphenyl-1-picrylhydrazyl (DPPH), glacial acetic acid, sulfanilamide, N-(1-naphthyl)-ethylenediamine dihydrochloride (NEDD), sodium nitroprusside (SNP), L-ascorbic acid, potassium dihydrogen phosphate, disodium hydrogen phosphate, fructose, bovine serum albumin (BSA) and sodium azide were purchased from Sigma-Aldrich (USA).

Preparation of aqueous extract of NK

NK was purchased from an Ayurvedic drug store. An amount of 15 g from 3 sachet packets of NK was pooled and dissolved in 400 mL of deionized water. This mixture was refluxed in the dark for 3 h. The refluxed solution was filtered using Whatman No. 1 filter paper and the filtrate was stored at 4 °C until further use. A prepared NK specimen (voucher number NK 102) was deposited at the Department of Ayurveda Pharmacology and Pharmaceutics, Institute of Indigenous Medicine, University of Colombo, Rajagiriya, Sri Lanka.

Determination of total phenolic content

The total phenolic content of the extract was determined by the Folin–Ciocalteu method [15]. The extract was diluted 50, 100 and 500 times using deionized water. Folin–Ciocalteu phenol reagent (1 N, 250 μL) was added to the sample (500 μL) and the mixture was allowed to stand at room temperature for 2 min. Sodium carbonate solution (10 %, 1.25 mL) was added and the samples were incubated for 45 min in the dark at room temperature. The absorbances of the resulting solutions were measured at 760 nm against a blank prepared in the same manner but with deionized water in place of the extract. The calibration curve was constructed using gallic acid standards (6–30 μg mL−1) and the total phenolic content of the extract was expressed as mg/g gallic acid equivalents (GAE).

Determination of flavonoid content

The flavonoid content was measured by the aluminium chloride colorimetric assay [16]. The extract was diluted 3, 4 and 5 times using deionized water. The diluted extract (100 μL) was mixed with deionized water (400 μL) and sodium nitrite (5 %, 30 μL). After 5 min, aluminium chloride (10 %, 30 μL) was added, followed by sodium hydroxide (1 M, 200 μL) at the sixth minute. The total volume was adjusted to 1000 μL with deionized water and the absorbance was measured at 510 nm against a blank prepared in a similar manner but with deionized water in place of the extract. The calibration curve was plotted using (−)-epigallocatechin gallate (EGCG) standards (300–1000 μg mL−1) and the flavonoid content was expressed as mg/g EGCG equivalents.

1,1-Diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging activity

The free radical scavenging capacity of the NK extract was assessed by the DPPH radical scavenging method, following a previously published protocol with slight modifications [17]. A concentration series was prepared by diluting the extract. DPPH reagent prepared in 96 % ethanol (100 μM, 2.0 mL) was added to the diluted extract (0.5 mL) and the mixture was allowed to stand for 30 min in the dark.
The scavenging activity at each concentration was quantified by measuring the decolourization of the resulting solutions at 517 nm. Deionized water was used as the blank. The control was prepared by mixing deionized water with DPPH. L-Ascorbic acid (1–20 μg mL−1) and butylated hydroxytoluene (BHT, 20–400 μg mL−1) were used as standard reference antioxidants. The results were expressed as percentage inhibition (%I), calculated according to equation 1 given below:

$$ \%I = \frac{A_{c} - A_{s}}{A_{c}} \times 100\,\% $$

(where Ac = absorbance of the control and As = absorbance of the sample)

The effective concentration needed to scavenge 50 % of the DPPH free radical (half maximal effective concentration, EC50) was calculated by regression analysis of the dose–response curve of percentage inhibition versus concentration for the test sample and the standards.

Phosphomolybdenum reduction antioxidant assay

The total antioxidant capacity of the extract was evaluated based on the method developed by Prieto et al. [18]. Antioxidants present in the extract reduce Mo(VI) to Mo(V), which subsequently forms a green phosphate–Mo(V) complex at acidic pH. The extract was diluted 50, 100 and 500 times. Diluted extract (0.5 mL) was combined with 2.5 mL of reagent solution (0.6 M sulfuric acid, 28 mM trisodium phosphate and 4 mM ammonium molybdate). The reaction mixture was then incubated at 95 °C for 90 min. Finally, after cooling the reaction mixture to room temperature, the absorbance was measured at 695 nm against a blank prepared in the same manner but using deionized water instead of extract. The calibration curve was constructed using L-ascorbic acid standards (25–100 μg mL−1) and the total antioxidant capacity of the extract was expressed as mg/g L-ascorbic acid equivalents.

NO scavenging activity

The NO scavenging activity of the NK extract was determined according to a previously published method [19]. Sodium nitroprusside (10 mM) solution was mixed with phosphate buffer (pH 7.4) in a ratio of 1:3 and kept for 20 min until the required aerobic conditions were obtained. Under these conditions, auto-oxidation products (nitrites/nitrates) of the NO generated by SNP are produced. The SNP–buffer mixture (2.0 mL) was added to 1.0 mL of NK (0.1–19.8 mg mL−1) and the samples were incubated for 150 min at 25 °C. Sulfanilamide (0.33 % in 20 % glacial acetic acid, 1.0 mL) was added to 0.5 mL of the previously incubated solution and allowed to stand for 5 min. Then 1.0 mL of NEDD (0.1 % w/v) was added to the mixture, which was further incubated for 30 min at 25 °C. The pink chromophore generated during diazotization of nitrite ions with sulfanilamide and coupling with NEDD was measured spectrophotometrically at 540 nm against a blank consisting of NEDD, SNP and buffer only. The control was prepared by replacing NK with phosphate buffer, which lacks an NO scavenger. L-Ascorbic acid was used as the positive control. Each analysis was performed in triplicate. The percentage inhibition (%I) of NO radicals by NK or the positive control was calculated according to equation 1.

Brine shrimp lethality bioassay

The cytotoxicity of Nawarathne Kalka was determined using the Brine Shrimp Lethality bioassay [20]. Different concentrations of the extract (3.0–18.0 mg mL−1) were prepared by diluting the extract with deionized water. Diluted test sample (0.2 mL) was added to 24-well plates.
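As an illustration of how the percentage inhibition in equation 1 and the EC50 described above can be obtained from raw readings, the following is a minimal sketch; the absorbance values and the simple linear interpolation are illustrative assumptions (the paper itself reports EC50 from regression of the dose–response curve), not part of the original method.

```python
# Sketch: percentage inhibition (Eq. 1) and EC50 by interpolation of the
# dose-response curve. All absorbance values below are hypothetical.

def percent_inhibition(a_control, a_sample):
    """%I = (Ac - As) / Ac * 100 (Eq. 1)."""
    return (a_control - a_sample) / a_control * 100.0

def ec50_by_interpolation(concentrations, inhibitions):
    """Concentration giving 50 % inhibition, by linear interpolation."""
    pairs = sorted(zip(concentrations, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50.0 <= i2:                       # 50 % lies in this interval
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50 % inhibition not bracketed by the data")

a_control = 0.820                                   # DPPH + water (hypothetical)
samples = {5: 0.690, 10: 0.540, 20: 0.390, 40: 0.210}  # ug/mL : absorbance

conc = list(samples)
inh = [percent_inhibition(a_control, a) for a in samples.values()]
print("EC50 ~", round(ec50_by_interpolation(conc, inh), 1), "ug/mL")
```

With these made-up readings the script prints an EC50 of roughly 18.7 μg/mL; in practice a non-linear fit of the full dose–response curve would be used, as in the study.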
Dimethyl sulfoxide (10 %, 0.05 mL) was added to each well and the total volume was adjusted to 2.0 mL with artificial brine solution (25 g NaCl and 2.5 g MgSO4 dissolved in 1000 mL of deionized water, pH 8). The control solution was prepared by adding dimethyl sulfoxide and adjusting the total volume to 2.0 mL with brine solution. Brine shrimp eggs were allowed to hatch in artificial brine solution for 24 h to obtain the nauplii. Ten living nauplii were transferred carefully to each well using a clean Pasteur pipette and kept for 18 h before the results were monitored. After 18 h, the number of living nauplii was counted. The experiment was conducted in triplicate. The percentage lethality (%L) was calculated according to equation 2 given below:

$$ \%L = \frac{N_{c} - N_{s}}{N_{c}} \times 100\,\% $$

(where Nc = number of living nauplii in the control sample and Ns = number of living nauplii in the test sample)

The effective concentration required to kill 50 % of the living nauplii with respect to the control (half maximal lethal dose, LD50) was calculated from the dose–response curve of %L versus concentration of the extract.

Inhibitory action on the formation of advanced glycation end-products

The inhibitory action of NK on the formation of AGEs was evaluated according to McPherson et al. (1988), with slight modifications [21]. Bovine serum albumin (BSA) solution (10 mg mL−1) was prepared in phosphate buffered saline (pH 7.4) containing sodium azide (0.02 %) to minimize microbial activity during the experiment. Fructose (500 mM, 4.0 mL) was mixed with BSA solution (5.0 mL) and NK (0.1–19.8 mg mL−1, 1.0 mL). Bovine serum albumin alone was used as the negative control. A sample containing BSA and fructose was used to induce the formation of advanced glycation end products. Samples containing only NK at the respective concentrations were also run to measure any fluorescence emission caused by endogenous substances present in NK. The fluorescence intensity of each mixture was measured at excitation and emission wavelengths of 355 nm and 460 nm, respectively. Readings were obtained each week for a period of 3 successive weeks. Aminoguanidine was used as the reference standard. Each sample was analyzed in triplicate. The percentage inhibition (%I) caused by NK on the formation of AGEs was determined by equation 3 given below:

$$ \%I = \frac{(F_{C} - F_{CB}) - (F_{S} - F_{SB})}{F_{C} - F_{CB}} \times 100\,\% $$

(where FC = fluorescence intensity of the control with fructose, FCB = fluorescence intensity of the blank of the control without fructose, FS = fluorescence intensity of the sample with fructose, and FSB = fluorescence intensity of the blank of the sample without fructose)

Results are presented as mean ± standard deviation (mean ± SD) of at least three independent experiments. Statistical analysis, including Student's t-test, was performed using Microsoft Excel. A value of p < 0.05 was considered significant.

Phenolic compounds are considered to be among the most important antioxidants and are widely distributed among various plant species. These phenols play important roles in plants, such as protection against herbivores and pathogens and regulation of cell growth and cell division [22].
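A minimal sketch of how the blank-corrected inhibition in equation 3 might be computed from the weekly fluorescence readings is shown below; all intensity values are hypothetical and the function is only an illustration of the formula, not the authors' analysis code.

```python
# Sketch: blank-corrected percentage inhibition of AGE formation (Eq. 3).
# f_c : BSA + fructose control        f_cb : BSA-only blank of the control
# f_s : BSA + fructose + NK sample    f_sb : NK-only blank of the sample
# All fluorescence intensities below are hypothetical (arbitrary units).

def age_percent_inhibition(f_c, f_cb, f_s, f_sb):
    glycation_signal = f_c - f_cb      # fluorescence due to AGEs alone
    residual_signal = f_s - f_sb       # AGE fluorescence remaining after NK treatment
    return (glycation_signal - residual_signal) / glycation_signal * 100.0

# One hypothetical week-1 reading set
print(round(age_percent_inhibition(f_c=950.0, f_cb=120.0, f_s=430.0, f_sb=85.0), 1), "% inhibition")
```

Subtracting the NK-only blank (f_sb) is what prevents any intrinsic fluorescence of the extract from being counted as AGE signal, which is why those extra samples were run.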
Polyphenols are abundant in natural products and in the human diet, and their roles in preventing chronic degenerative diseases have been demonstrated in previous studies [23–25]. Nawarathne Kalka, which combines a collection of different plant species, gave a total phenolic content of 75.1 ± 3.0 mg/g GAE. This indicates the potential of NK to prevent degenerative diseases, including rheumatoid arthritis. Flavonoids are water-soluble polyphenolic compounds that are extremely common and widespread in the plant kingdom as their glycosides [22]. The documented biological effects of dietary flavonoids include anti-inflammatory, anti-allergic, antimicrobial, hepatoprotective, antiviral, antithrombotic, cardioprotective, capillary-strengthening, antidiabetic, anticarcinogenic and antineoplastic effects [26]. The total flavonoid content of the NK extract was 68.7 ± 7.8 mg/g epigallocatechin gallate equivalents, which suggests that the flavonoids abundant in the extract may offer the aforementioned therapeutic benefits to humans. The results obtained for the phytochemical composition of the NK extract are given in Table 2.

Table 2 Phytochemical composition of the NK aqueous extract: total phenolic content, 75.1 ± 3.0 mg/g gallic acid equivalents; total flavonoid content, 68.7 ± 7.8 mg/g epigallocatechin gallate equivalents.

The 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging assay was used to determine the hydrogen-donating ability of the extract. The EC50 values obtained from the dose–response curves (Fig. 1) for the DPPH assay were 19.15 ± 2.24 μg mL−1, 5.60 ± 0.51 μg mL−1 and 96.50 ± 4.51 μg mL−1 for the NK extract, L-ascorbic acid and BHT, respectively. This indicates that the antioxidant potential of the NK extract was higher than that of BHT but lower than that of L-ascorbic acid.

Fig. 1 Dose–response curves for the NK extract and L-ascorbic acid in the DPPH radical scavenging assay

The phosphomolybdenum reduction antioxidant assay is a single-electron-transfer system that measures the capacity of an antioxidant to reduce an oxidant which changes its color when reduced. A higher degree of color formation indicates a higher reducing power of the antioxidant [27]. The NK extract demonstrated an antioxidant capacity of 106.4 ± 8.2 mg/g L-ascorbic acid equivalents in this assay. This indicates that the extract has a high reducing power towards oxidants generated in vivo as well as those arising from exogenous sources. Nitric oxide is produced mainly by several isoforms of nitric oxide synthase (NOS), principally inducible NOS, endothelial NOS and neuronal NOS [28]. Nitric oxide synthases release NO under different stimuli and for different purposes [28, 29]. Overproduction of NO is as dangerous as impaired production of NO [30–33]. In this study only the suppression of overproduction of NO by NK was considered. The NO scavenging abilities of both the positive control (L-ascorbic acid) and the NK water extract were measured, and the percentage inhibitions calculated at the respective concentrations are plotted in Fig. 2. The water extract of NK showed a higher EC50 (99.3 ± 8.4 μg mL−1) than the positive control, L-ascorbic acid (EC50 = 7.3 ± 0.3 μg mL−1). This suggests that, compared with the positive control, the water extract of NK showed moderate NO scavenging activity.
This activity may allow NK to remain neutral under conditions of low NO production while acting as a moderate NO scavenger when NO is overproduced. However, this activity should be further analyzed and researched.

Fig. 2 Dose–response curves for percentage inhibition of NO production by L-ascorbic acid and NK

The brine shrimp lethality bioassay is a rapid, inexpensive and simple bioassay for testing the bioactivity of plant extracts, and in most cases it correlates reasonably well with cytotoxic and anti-tumor properties [34]. This assay has been widely used for the isolation of bioactive compounds from plant extracts and for estimating the extent of in vivo lethality towards Artemia salina (brine shrimp) [34, 35]. Drugs derived from plants have many anticancer properties, such as topoisomerase-I and topoisomerase-II inhibition, thymidylate synthase inhibition, apoptotic effects, interaction with cyclin-dependent kinases and anti-mitotic properties, leading to lethality that depends on exposure time and dosage of the drug [36]. The aqueous extract of NK showed an LD50 value of 807.6 ± 221.0 μg mL−1, as determined from the dose–response curve depicted in Fig. 3. The extract demonstrated percentage lethality greater than 85 % at concentrations above 900 μg mL−1. This indicates that the aqueous extract of NK possesses moderate cytotoxic activity. However, toxicity studies with mammalian cancer cell lines versus normal cell lines remain to be carried out for a better understanding of any cytotoxicity that NK may selectively cause towards cancerous cells compared with normal cells.

Fig. 3 Cytotoxic effect of the NK extract on brine shrimp nauplii

Formation of AGEs is induced under conditions such as hyperglycemia, dyslipidemia and oxidative stress. This phenomenon can lead to complications related to diabetes, such as retinopathy and nephropathy. Disturbing the formation of such AGEs can therefore contribute to suppressing AGE-related complications [37]. Fluorescence emission caused by AGEs, and emission from AGEs under treatment with NK at three concentration levels over a period of three weeks, are shown in Fig. 4. A significant increase in fluorescence intensity (p < 0.05) was observed for the BSA samples treated with fructose compared with the BSA-only samples (Fig. 4). This indicates that fluorescent AGEs formed over the time period owing to the glycation of BSA by fructose [38]. However, when the BSA-plus-fructose samples were treated separately with 493.5 μg mL−1 and 1973.9 μg mL−1 concentrations of the NK extract, the fluorescence emission resulting from AGEs was significantly reduced (p < 0.05) with respect to the samples treated with BSA and fructose alone (Fig. 4). This in turn suggests that the components present in the aqueous extract of NK disrupt AGE formation. In this study NK not only suppressed the formation of fluorescent AGEs but continued to suppress their formation throughout the three weeks, as evidenced by the dose–response curves obtained during this time period (Fig. 5). The EC50 values obtained from the dose–response curves corresponding to Week 1, Week 2 and Week 3 were 116 ± 19 μg mL−1, 125 ± 35 μg mL−1 and 84 ± 28 μg mL−1, respectively. The reference standard, aminoguanidine, at a concentration of 500 μg mL−1 demonstrated 65 % inhibition of AGE formation after one week of sample incubation.
This provides support for considering NK as a possible anti-glycation drug, even though the precise mechanism by which NK suppresses the formation of AGEs is not known. Advanced glycation end-product formation begins with the non-enzymatic reaction between sugars and proteins, which forms imines (Schiff bases). These Schiff bases subsequently undergo the Amadori rearrangement and eventually become Amadori products, from which irreversible AGEs are formed [39]. Considering the reversibility of these earlier steps of AGE formation, NK might possess the ability to interrupt the pathway before the irreversible products form, and hence could remain effective as an anti-glycation drug over a long period.

Fig. 4 Fluorescence intensity versus time of sample measurement. Blank solutions containing BSA only showed fluorescence intensities at constant levels throughout the three weeks (red bars). When the blank samples were supplemented with fructose (Fr), the formation of fluorescent AGEs was induced and the fluorescence intensity increased significantly (p < 0.05) (blue bars). When samples were simultaneously treated with NK, a significant decrease (p < 0.05) in the fluorescence intensities was observed in a concentration-dependent manner (green and purple bars) compared with the sample treated with BSA and fructose

Fig. 5 Overlay of the dose–response curves for percentage inhibition of advanced glycation end-product formation by the aqueous extract of NK (Week 1–Week 3)

Several studies conducted with different types of honey have demonstrated their antioxidant properties. This is due to the compounds present, such as vitamin C, monophenolics, flavonoids and polyphenolics. Antioxidant compounds such as caffeic acid, chrysin, galangin, quercetin, acacetin, kaempferol, pinocembrin, pinobanksin and apigenin, and enzymes such as glucose oxidase and catalase, predominate in most types of honey [40]. They have received special attention due to their role in preventing diseases associated with oxidative stress such as cancer, cardiovascular diseases, inflammatory diseases and infections [40, 41]. Honey, being a main ingredient in the formulation of NK, may be responsible for part of the therapeutic potential of the formulation itself, acting synergistically with the phytoconstituents derived from the plant materials to enhance the activity of the medicament. The next most abundant ingredients in NK, Terminalia bellirica and Terminalia chebula, have been scientifically shown to possess many biological activities, including antioxidant and anti-diabetic effects [42, 43]. NK, being a polyherbal formulation comprising these two ingredients, would likewise benefit from their contribution to its overall effects. Future studies will focus on the identification and quantification of the individual compounds present in NK.

Our findings provide evidence of potent antioxidant activity, moderate NO scavenging activity and cytotoxic effects, as well as the ability to inhibit the formation of advanced glycation end products, possessed by the polyherbal formulation Nawarathne Kalka. This can be attributed to the very high levels of phenolic and flavonoid compounds present, thus supporting the use of this particular herbal remedy in the treatment of various inflammatory conditions, including arthritis, in the Traditional Sri Lankan System of Medicine.
However, further studies, including identification of the potent individual chemical components present in NK, elucidation of their mechanistic pathways of action, and clinical trials, should be conducted to understand the holistic effects of this polyherbal medicament on the human body.

%I, Percentage inhibition; •OH, Hydroxyl radicals; AGEs, Advanced glycation end products; BHT, Butylated hydroxytoluene; BSA, Bovine serum albumin; BSL, Brine Shrimp Lethality bioassay; DPPH, 2,2-diphenyl-1-picrylhydrazyl; DRSA, DPPH radical scavenging assay; EC50, Half maximal effective concentration; EGCG, (−)-epigallocatechin gallate; GAE, Gallic acid equivalents; GM-CSF, Granulocyte macrophage colony stimulating factor; IL-17, Interleukin-17; LD50, Half maximal lethal dose; NEDD, N-(1-naphthyl)-ethylenediamine dihydrochloride; NK, Nawarathne Kalka; NO, Nitric oxide; O2•−, Superoxide anions; ONOO−, Peroxynitrite; PRAA, Phosphomolybdenum reduction antioxidant assay; RA, Rheumatoid arthritis; ROS, Reactive oxygen species; SNP, Sodium nitroprusside; TFC, Total flavonoid content; TNF-α, Tumor necrosis factor alpha; TPC, Total phenolic content; TSM, Traditional Sri Lankan System of Medicine

The authors wish to thank the College of Chemical Sciences, Institute of Chemistry Ceylon for providing financial assistance to conduct this study. Professor S. A. Deraniyagala, Department of Chemistry, University of Colombo, Sri Lanka is gratefully acknowledged for donating a sample of aminoguanidine. Financial assistance to conduct this study was received from the College of Chemical Sciences, Institute of Chemistry Ceylon. Laboratory work was conducted by DTK, CDF, SDG and MCDC. CU, PKP and CDF supervised the project. Conception of the project hypothesis was by CDF, CU, PK and PKP. The manuscript was written by DTK and CDF. The manuscript was revised by CDF, CU and PKP. All authors read and accepted the final draft of the manuscript.

College of Chemical Sciences, Institute of Chemistry Ceylon, Rajagiriya, Sri Lanka
Institute of Indigenous Medicine, University of Colombo, Rajagiriya, Sri Lanka

Perera PK. Current scenario of herbal medicine in Sri Lanka. Conference proceeding, ASSOCHAM, 4th annual Herbal International Summit cum Exhibition on Medicinal & Aromatic Products, Spices and finished products (hi-MAPS), NSIC, Okhla Industrial Estate, New Delhi, India; 2012.
Parasuraman S, Thing GS, Dhanaraj SA. Polyherbal formulation: Concept of ayurveda. Pharmacogn Rev. 2014;8(16):73–80.
Illiyakperuma A. VatikaPrakarana/DeshiyaBehethGuli Kalka Potha. Panadura, Sri-Lanka: Modern Press; 1879.
Kirkham BW, Kavanaugh A, Reich K. Interleukin-17A: a unique pathway in immune-mediated diseases: psoriasis, psoriatic arthritis and rheumatoid arthritis. Immunology. 2014;141(2):133–42.
Onishi RM, Gaffen SL. Interleukin-17 and its target genes: mechanisms of interleukin-17 function in disease. Immunology. 2010;129(3):311–21.
Varas A, Valencia J, Lavocat F, Martinez VG, Thiam NN, Hidalgo L, Miossec P. Blockade of bone morphogenetic protein signaling potentiates the pro-inflammatory phenotype induced by interleukin-17 and tumor necrosis factor-α combination in rheumatoid synoviocytes. Arthritis Res Ther. 2015;17(1):1–10.
VanNieuwenhuijze AEM, Van de Loo FA, Walgreen B, Bennink M, Helsen M, et al.
Complementary action of granulocyte macrophage colony-stimulating factor and interleukin-17A induces interleukin-23, receptor activator of nuclear factor-kB ligand, and matrix metalloproteinases and drives bone and cartilage pathology in experimental arthritis: rationale for combination therapy in rheumatoid arthritis. Arthritis Res Ther. 2015;17(1):1–14.
Fischer JA, Hueber AJ, Wilson S, Galm M, Baum W, Kitson C, Schett G. Combined inhibition of TNFα and IL-17 as therapeutic opportunity for treatment in rheumatoid arthritis: Development and characterization of a novel bispecific antibody. Arthritis Rheum. 2015;67(1):51–62.
Ma K, Xu Y, Wang C, Li N, Li K, Zhang Y, Chen Q. A cross talk between class a scavenger receptor and receptor for advanced glycation end-products contributes to diabetic retinopathy. Am J Physiol Endocrinol Metab. 2014;307(12):E1153–65.
Bharti AK, Agrawal A, Agrawal S. Advanced glycation end products in progressive course of diabetic nephropathy: exploring interactive associations. Int J Pharm Sci Res. 2015;6(2):521.
Vlassara H. Recent progress in advanced glycation end products and diabetic complications. Diabetes. 1997;46:S19.
Méndez JD, Xie J, Aguilar-Hernández M, Méndez-Valenzuela V. Trends in advanced glycation end products research in diabetes mellitus and its complications. Mol Cell Biochem. 2010;341(1–2):33–41.
Stirban A, Gawlowski T, Roden M. Vascular effects of advanced glycation end products: Clinical effects and molecular mechanisms. Mol Metab. 2014;3(2):94–108.
Schieber M, Chandel NS. ROS function in redox signaling and oxidative stress. Curr Biol. 2014;24(10):R453–62.
Fernando CD, Soysa P. Total phenolic, flavonoid contents, in-vitro antioxidant activities and hepatoprotective effect of aqueous leaf extract of Atalantia ceylanica. BMC Complement Altern Med. 2014;14:395.
Zhishen J, Mengcheng T, Jianming W. The determination of flavonoid contents in mulberry and their scavenging effects on Superoxide radicals. Food Chem. 1999;64:555–9.
Fernando CD, Soysa P. Extraction Kinetics of phytochemicals and antioxidant activity during black tea (Camellia sinensis L.) brewing. Nutr J. 2015;14:74.
Prieto P, Pineda M, Aguilar M. Spectrophotometric quantitation of antioxidant capacity through the formation of a Phosphomolybdenum Complex: Specific application to the determination of vitamin E. Anal Biochem. 1999;269:337–41.
Harsha SN, Latha BV. In vitro antioxidant and in vitro anti inflammatory activity of Ruta graveolens methanol extract. Asian J Pharm Clin Res. 2012;5:32–5.
Meyer BB, Ferringi NR, Futman FJ, Jacobsen LB, Nichols DE, Mclaughlin JL. Brine shrimp: a convenient general bioassay for active plant constituents. Planta Med. 1982;5:31–4.
McPherson ID, Shilton BH, Walton PJ. Role of fructose in glycation and cross-linking of proteins. Biochemistry. 1988;27:1901–7.
Kumar S, Sandhir R, Ojha S. Evaluation of antioxidant activity and total phenol in different varieties of Lantana camara leaves. BMC Res Notes.
2014;7(1):560.
Manach C, Scalbert A, Morand C, Rémésy C, Jiménez L. Polyphenols: Food sources and bioavailability. Am J Clin Nutr. 2004;79:727–47.
Procházková D, Boušová I, Wilhelmová N. Antioxidant and prooxidant properties of flavonoids. Fitoterapia. 2011;82:513–23.
Tsao R. Chemistry and biochemistry of dietary polyphenols. Nutrients. 2010;2:1231–46.
Baiceanu E, Vlase L, Baiceanu A, Nanes M, Rusu D, Crisan G. New Polyphenols Identified in Artemisiae abrotani herba Extract. Molecules. 2015;20(6):11063–75.
Phatak RS, Hendre AS. Total antioxidant capacity (TAC) of fresh leaves of Kalanchoe pinnata. J Pharmacogn Phytochemistry. 2014;2(5):32–5.
Campbell MG, Smith BC, Potter CS, Carragher B, Marletta MA. Molecular architecture of mammalian nitric oxide synthases. Proc Natl Acad Sci. 2014;111(35):E3614–23.
Alderton WK, Cooper CH, Knowles R. Nitric oxide synthases: structure, function and inhibition. Biochem J. 2001;357:593–615.
Feihl F, Waeber B, Liaudet L. Is nitric oxide overproduction the target of choice for the management of septic shock? Pharmacol Ther. 2001;91(3):179–213.
Kolios G, Valatas V, Ward SG. Nitric oxide in inflammatory bowel disease: a universal messenger in an unsolved puzzle. Immunology. 2004;113(4):427–37.
Cannon RO. Role of nitric oxide in cardiovascular disease: focus on the endothelium. Clin Chem. 1998;44(8):1809–19.
El-Hattab AW, Hsu JW, Emrick LT, Wong LJ, Craigen WJ, Jahoor F, Scaglia F. Restoration of impaired nitric oxide production in MELAS syndrome with citrulline and arginine supplementation. Mol Genet Metab. 2012;105(4):607–14.
Krishnaraju AV, Rao TV, Sundararaju D, Vanisree M, Tsay HS, Subbaraju GV. Assessment of bioactivity of Indian medicinal plants using brine shrimp (Artemia salina) lethality assay. Int J Appl Sci Eng. 2005;3(2):125–34.
Carballo JL, Hernández-Inda ZL, Pérez P, García-Grávalos MD. A comparison between two brine shrimp assays to detect in vitro cytotoxicity in marine natural products. BMC Biotechnol. 2002;2(1):17.
Gali-Muhtasib H, Hmadi R, Kareh M, Tohme R, Darwiche N. Cell death mechanisms of plant-derived anticancer drugs: beyond apoptosis. Apoptosis. 2015;20(12):1531–62.
Yamagishi SI, Nakamura K, Matsui T, Ueda S, Noda Y, Imaizumi T. Inhibitors of advanced glycation end products (AGEs): potential utility for the treatment of cardiovascular disease. Cardiovasc Drug Rev. 2008;26(1):50–8.
Suarez G, Rajaram RA, Oronsky AL, Gawinowicz MA. Nonenzymatic glycation of bovine serum albumin by fructose (fructation). Comparison with the Maillard reaction initiated by glucose. J Biol Chem. 1989;264(7):3674–9.
Gkogkolou P, Böhm M. Advanced glycation end products: Key players in skin aging? Dermato Endocrinol. 2012;4(3):259–70.
Khalil MI, Sulaiman SA, Boukraa L. Antioxidant properties of honey and its role in preventing health disorder. Open Nutraceuticals J. 2010;3(1):6–16.
Aljadi AM, Kamaruddin MY.
Evaluation of the phenolic contents and antioxidant capacities of two Malaysian floral honeys. Food Chem. 2004;85:513–8.
Sabu MC, Kuttan R. Antidiabetic and antioxidant activity of Terminalia belerica Roxb. Indian J Exp Biol. 2009;47(4):270.
Suryaprakash DV, Sreesatya N, Avanigadda S, Vangalapati M. Pharmacological review on Terminalia chebula. Int J Res Pharm Biomed Sci. 2012;3(2):679–83.
Colonizing the galaxy by slow boating reality check

Humans like to explore and seem to have an almost instinctual need to expand. After spreading throughout the solar system and even into the Oort Cloud, they decided that humanity should follow the robot probes out into the galaxy. Without FTL, but having discovered a form of artificial gravity and having experience creating space habitats, they create a number of generation ships. These massive structures are initially given a population and crew of 30,000, with room to expand to 50,000, and each one has all the tools and manufacturing capabilities to make more generation ships and space habitats, along with the ability to terraform a planet. Strapping on a few large asteroids with ice, minerals and other things they may need in an emergency, these ships slowly make their way to the nearest solar system. Slow being a little less than half the speed of light, thanks to getting a very large boost as they start their journey. Once there, they get to work making comfortable habitats for the now increased and cramped population using the resources of the system. They spend several decades there, creating a working system of habitats and making any repairs that are needed on the generation ships. After a century or two, the generation ships, possibly a few new ones as well, get crewed by people who want to travel and move on to the next solar system to do the same thing all over again. Eventually the fleet splits into two and each one does the same thing, eventually splitting again, and again and again. If they find a planet that looks like it can be terraformed, a planet with no life, or only the most basic of bacteria which is wiped out, they get to work making the planet livable for the people who want to have a sky over their head. Any planet with multicellular life is carefully studied by probes, but left otherwise alone because the risks of contamination, allergies, etc., are too great for the ships, and terraforming them would destroy the ecosystem. Is this a realistic way to colonize and explore the universe?

reality-check space-colonization terraforming colonization exploration

Dan Clarke

The larger problem I have with generation ships is how to keep a cohesive story that spans generations. I think if you can convince the audience about the world, then sure I would believe they want to settle planets, even though I know it's not realistic. I think it's a bit harder if you allow them to produce more generation ships. Maybe it takes a certain level of infrastructure to build one, a level they can't bring with them. So they would need to settle to account for their population while they build that up... – ArtisticPhoenix Feb 1 '18 at 7:41

Please read meta.stackexchange.com/q/43478/225745 — people hate when question changes after answered. Make sure you edits are not like described. If they are, retract them and ask a follow up question. – Mołot Feb 1 '18 at 7:46

Nice question. Have you considered how these ships are going to slow down? – Tim B♦ Feb 1 '18 at 9:40

I don't think they'd take repaired generation ships away from the new solar system.
Those would stay in a stable orbit somewhere handy, where further repairs could be made as needed with the plentiful resources available, and become the core of the permanent settlement to house those people (as the answers below indicate, the overwhelming majority of the population) who don't want to live on a planet. Brand-new ships would eventually be built for the next generation (heh) of generation ships. – Monty Harder Feb 1 '18 at 14:27

@MontyHarder, any generation ship that is badly damaged would become a space habitat. But as long as it just needs some patches and basic maintenance, the engines alone would make the ships more valuable as a means of transport. – Dan Clarke Feb 1 '18 at 15:04

"Slow being a little less than half the speed of light, thanks to getting a very large boost as they start their journey."

Slow down there! At that speed it's not really a generation ship, since you can get to many other stars within the original crew's lifetime. And there are hazards to going that fast. Let's assume the ship is, say, 10x the mass of the world's largest supertanker; that's very conservative for the numbers you talk about, but it's a number to work with.

How do you slow down at all?

At 0.5c that ship would have 7.197×10^25 joules of kinetic energy you'd need to get rid of if you want to slow down. That's about 1800 times as much energy as the entire world's fossil fuel reserves. You need some kind of fuel and some plan for slowing down.

Hitting things in your path

If there's something the size and mass of a sugar cube in your path, it hits the front of your ship with the energy of the nuclear bomb dropped on Hiroshima, with all that energy pretty well focused to rip through any sane quantity of armor. And that's not the only problem. The atoms between the stars: using the figures for a cold neutral interstellar medium from Wikipedia (20–50 atoms/cm3), let's go with 25 atoms/cm3, which is 25,000,000 atoms per cubic meter. Let's imagine the ship is a nice neat cylinder. We can treat the volume of space that the ship passes through as a cylinder with a cross section equal to that of the front of the ship. Now let's look at how much it hits while traveling, say, 10 light years. Treat it as a cylinder 10 light years long with the diameter of the ship; again, let's guesstimate that the ship has a radius of 100 meters. This lets us estimate the total number of (almost all hydrogen) atoms in the path of the ship; let's assume they all hit and there are no shockwave effects. The swept volume is 946,073,047,258,080,000,000 π cubic meters (roughly 2.97×10^21 m3). Multiply by 25,000,000 atoms per cubic meter, and the mass of that many hydrogen atoms comes to 124.4 kilograms, so over the course of 10 light years the ship will impact 124.4 kg of gas atoms. For simplicity I'm assuming all hydrogen. Those atoms are hitting the front of your ship (assuming it's a big round shield with radius 100 m) at 0.5c, and the kinetic energy of 124.4 kilograms at 0.5c is 1.73×10^18 joules. I'm going to ignore time dilation because it's hard and I need to maintain my sanity. At 0.5c it takes us 20 years to travel those 10 light years, so let's convert that into the energy of the gas hitting the front of the ship each hour: 1.73×10^18 J / (20 × 365 × 24 h) = 9.8748×10^12 joules per hour, or about 2.743 GWh per hour. The front of the ship has to cope with 2.743 gigawatt-hours' worth of energy hitting it every hour. It's like having a large nuclear power plant at the front of your ship producing heat.
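For readers who want to check these numbers, here is a short Python sketch of the same back-of-envelope arithmetic. The 100 m shield radius, 25 H atoms/cm3 density and 10 light-year leg at 0.5c are the assumptions used in this answer; the script itself is an illustrative addition, not part of the original answer.

```python
import math

C = 299_792_458.0            # speed of light, m/s
LY = 9.4607e15               # one light year, m
M_H = 1.6726e-27             # mass of a hydrogen atom, kg

v = 0.5 * C                  # cruise speed (answer's assumption)
radius = 100.0               # shield radius, m (answer's assumption)
distance = 10 * LY           # leg length, m
density = 25e6               # H atoms per m^3 (25 per cm^3)

# Mass of interstellar hydrogen swept up over the leg
swept_mass = math.pi * radius**2 * distance * density * M_H

# Relativistic kinetic energy of that mass at 0.5c
gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
energy = (gamma - 1.0) * swept_mass * C**2

hours = distance / v / 3600.0              # trip time in hours (time dilation ignored)
print(f"swept mass  : {swept_mass:.1f} kg")                    # ~124 kg
print(f"total energy: {energy:.2e} J")                         # ~1.7e18 J
print(f"per hour    : {energy / hours:.2e} J "
      f"= {energy / hours / 3.6e12:.2f} GWh")                  # ~2.7 GWh per hour
```

Running it reproduces the ~124 kg of swept-up gas and the ~2.7 GWh of heating per hour quoted above.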
You have no way of getting rid of that much heat with your ship in a vacuum, and it will be melting your heat shield.

So just slow down

It's really common for writers to throw around large fractions of light speed, but without magitech shields there are massive practical problems with going that fast at all. At those speeds the fine mist of interstellar gas is enough to cook an astronaut to death just from being outside the ship unshielded, and enough to destroy any shielding made of matter within a short time. Since your ships are generation ships anyway, you almost certainly want to slow them down to something sane like 0.05c (or probably even lower if your crew want to continue to live). At least then you have some chance of stopping and some chance of surviving if you hit some grains of sand in deep space.

Putting more ice or rock on the front of the ship does not help.

Let's imagine that we put a cylinder of solid ice 100 meters thick at the front of the ship as a shield. It's an idea, I'll give you that, but let's work out how long it's likely to last at 0.5c. A cylinder of radius 100 meters and height 100 meters has a volume of 3.14159×10^6 cubic meters; that's 3,141,590 cubic meters of ice, millions of cubic meters of ice. Wolfram Alpha gives a helpful table for this. Phase change energies for 3.14159×10^6 m3 of water from 25 °C:
energy required to heat to boiling point: 9.85×10^11 kJ
energy required to convert to vapor: 7.01×10^12 kJ
energy required to heat to boiling point and convert to vapor: 8×10^12 kJ
energy released from cooling to freezing point: 3.28×10^11 kJ
energy released from converting to solid: 1.05×10^12 kJ
energy released from cooling to freezing point and converting to solid: 1.38×10^12 kJ
It's annoying that it calculates from 25 degrees C, but the energy released from cooling and the energy needed to heat can just be added together. Practically speaking I'm being very forgiving by assuming that the energy needed is the same as at sea level. To melt that much ice we need about 1.05×10^15 J; to heat it to boiling and turn it all into steam we would need about 8×10^15 J. Unfortunately the front of our ship would be receiving about 9.87×10^12 J every hour while traveling at 0.5c from impacts with the fine mist of atoms in interstellar space. From there it's just a matter of dividing. It would shield you for a little while: within about 5 days your 3 million cubic meters of ice has melted, and after 34 days your ice has all turned into steam.

But what if we use something stronger than ice!

Let's imagine that instead of 3 million cubic meters of ice we make that shield out of 3 million cubic meters of solid iron! It takes 6.11×10^15 J to melt 3 million cubic meters of solid iron. Within 26 days enough energy has hit the front of your ship to melt 3 million cubic meters of iron. This is not exactly how long your shield will last (some energy will be radiated away, some will be lost to cooking your crew, and iron may ablate in a less simplistic manner), but it's a rough ballpark figure.

At 0.5c shields are not enough. Asteroids traveling at 0.5c would melt and turn into gas in short order. I cannot stress enough how poor natural intuition is when it comes to the rigors put on anything traveling at large fractions of the speed of light.

Murphy

Good point. Taking four or five generations to get to a nearby star would be good. Thanks for doing the math.
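The shield-lifetime figures can be checked the same way. Below is a minimal sketch using the phase-change energies quoted above and the roughly 9.9×10^12 J per hour heating rate derived earlier; all inputs come from this answer, and the script is an illustrative addition rather than part of it.

```python
# Sketch: how long a 100 m thick, 100 m radius shield survives the heating
# from interstellar gas at 0.5c. Energy figures are those quoted in the answer.

POWER_PER_HOUR = 9.87e12          # J/h hitting the shield face at 0.5c

def days_to_absorb(total_energy_joules):
    """Days until the given amount of energy has been deposited."""
    return total_energy_joules / POWER_PER_HOUR / 24.0

ice_melt     = 1.05e15            # J to melt ~3.1e6 m^3 of ice
ice_boil_off = 8.0e15             # J to heat that water to boiling and boil it away
iron_melt    = 6.11e15            # J to melt ~3.1e6 m^3 of iron

print(f"ice melts in      ~{days_to_absorb(ice_melt):.1f} days")      # ~4-5 days
print(f"ice boils away in ~{days_to_absorb(ice_boil_off):.1f} days")  # ~34 days
print(f"iron melts in     ~{days_to_absorb(iron_melt):.1f} days")     # ~26 days
```

The output matches the ~5, ~34 and ~26 day figures in the answer, which is why the text above uses the ~9.9×10^12 J/h rate throughout.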
– Dan Clarke Feb 1 '18 at 14:46

@DanClarke I've added an edit, unfortunately at 0.5c the asteroids they bring along would melt. – Murphy Feb 1 '18 at 16:27

Surely some space junk and a little bit of waste heat isn't going to stop a civilization that can build engines capable of accelerating you to half the speed of light, and can create gravity. - I'm not sure which of those is more of a handwave, but I wouldn't be surprised if figuring out how to do one, tells you how to do the other. – Mazura Feb 2 '18 at 0:03

Just out of interest: Would these interstellar particles be charged? So could you use a magnetic field to deflect them? – Fels Feb 2 '18 at 10:03

I was using the numbers for a cold, neutral, interstellar medium and for that magnets don't help you much. However if you were traveling through a region with ionized gas it could help, and at the same time the ionized gas would tend to be less dense (0.2–0.5 particles/cm3). – Murphy Feb 2 '18 at 11:37

Apart from problems with generation ships, which you'll find discussed on Worldbuilding SE in other questions, there is a fundamental flaw in the reasoning:

"they get to work making the planet livable for the people who want to have a sky over their head"

The people you have described have, for hundreds of generations, been a space-dwelling people. Planets may be interesting to you, but they're just grubby, messy places with restrictions like not having easy access to space (which is, in fact, home!) and gravity defined by something you can't change, whereas your ship-dwellers (presumably) have some form of pseudo-gravity but can also get to zero-gee (and maybe all points between) easily. As they have and can manage resources without needing a planet, all they need rocks for is to build more homes (i.e. more ships). And they can get those rocks easily without bothering with dirt and bacteria-infested planets that they have to clean first. Nope, they're going to expand all right, but they're going to do it in space. Rocks? Who needs 'em. :-) And if you have the tech to build generation ships like you describe, you have the tech to make them from (for example) asteroids. You have the tech to make a sky for yourself (and what a sky - something that's controlled and safe and familiar). For you, the sky could well be that the world curves over your head in a giant cylinder. Actual sky as we know it would be unnerving - you've never seen it and for hundreds of generations no one has written a song with our version of the sky in it. No, planets are, to a space dweller, completely pointless. What would they do with planets? Visit them. Planets are nice places to park, maybe, giving you an interesting view and access to, well, nothing you don't already have in space for these guys. You can explore planets and maybe some people (the dedicated scientists or the crazy people) will want to stay down there for extended periods, but live there?!?!?!? No way! And you've grown up used to the idea of living in a sealed, controlled environment with a stable population. The biggest problem that ship will face is convincing anyone to get off! Even if they reach a destination and need repairs, they'll find a way to do that from space, extracting materials from easier-to-access sources (small moons, asteroids, etc.) and not by doing anything crazy like landing.
Every solar system is indeed an opportunity to expand, but you could do an awful lot of expansion in space without ever getting lumped on something as yucky as a planet. These guys won't leave home and home is space. The minority (and presumably there will be some weirdos like this) who want to live on a planet and become "pioneers" (imagine the derision that would be said with by people who travel anywhere they want in space!) might set up colonies, but these will be villages with bare essentials and just enough equipment to keep them going. You're not going to waste resources on these nutcases if you and your people have been thrifty and efficient space dwellers for a hundred generations. Think about a hundred generations. What were your ancestors doing and where and how did they live a hundred generations ago? Do you know? Do you care? Would you consider their way of life anything you want now? Your science (even in a generation ship) has advanced and it will be focused on the needs and desires of space dwellers. You won't even care about the intentions of the people who sent your ancient ancestors into space. You're a space dweller. And proud of it! No "Rock Clingers" on this ship. :-)

StephenG

Comments are not for extended discussion; this conversation has been moved to chat. – Monica Cellio♦ Feb 4 '18 at 22:25

Sure it is, this is the same process used by Polynesians to colonise the Pacific Islands. They had boats with everything they would need and would find a place, build. Then a few generations later when population pressure mounts they would split and some would go looking for another island group. Your idea is the same just scaled up.

Kilisi

Yeah, I edited my answer, the numbers were too extreme, at half light speed it's much more realistic. But give them much more room, over generations on long trips they need to expand or they'll be killing each other. Can't expect generations to adhere to or respect a plan made by someone long dead who sent them on a one way trip into the unknown. So make sure they have everything and are comfortable. – Kilisi Feb 1 '18 at 7:06

@DanClarke - my generation ship can hold over 2 billion ... lol – ArtisticPhoenix Feb 1 '18 at 7:13

Ten generation example, how to control population, uncontrolled with not much to do, 100000 people would probably be in the millions. – Kilisi Feb 1 '18 at 7:13

Excellent, so long as the cops don't become corrupt you have it sorted. Would make a great story if after a few generations corruption ruled the law force. – Kilisi Feb 1 '18 at 7:23

@DanClarke I think Kilisi just hit on your motivation for the "crazies" to go to a planet. In space, they cannot get away from the police force, but on the planet they are considered as good as dead and ignored. Sure, most colonists won't survive, but one day a colony will be successful--that's where your story happens. – called2voyage Feb 1 '18 at 13:29

Sure, everything seems fine. Except (and this is my opinion):

"If they find a planet that looks like it can be terraformed, a planet with no life, or only the most basic of bacteria which is wiped out, they get to work making the planet livable for the people who want to have a sky over their head."
Once you realize it will take hundreds of years aboard a "generation" ship that is custom built to sustain the population for that time, you will realize that it's unlikely they would want to settle on a rock. Especially if they have to terraform it. Often the most efficient engines have very poor thrust-to-weight ratios. It's very possible to build an engine perfectly suited to travel in space that has no utility lifting weight out of a deep gravity well. Given this, and the fact that they are now perfectly comfortable living in their "tin can" - remember, without FTL it's going to take dozens or hundreds of years to travel to even the closest stars. Even going to the nearest stars to Earth will take (at 0.5c) anywhere from 8-20 years depending on how fast you can accelerate. After spending 20 years in a perfectly controlled environment, why start over? There are no elements on a planet that are not more easily acquired in orbit (if all your infrastructure is in orbit). The only thing unique would be the life on that planet, which may or may not be poisonous. Even if it's not, would you ruin the biosphere just to dig up metal you can get by the ton in other places? At most you may want to observe and do research on this new life. See if it's compatible with your life, etc. This could take centuries and the risk ... well, you could introduce some virus or bacteria that wipes your whole population out. In short, once you have generational ships, there is no reason to settle on a planet. In fact, it costs more energy to do so. In all likelihood, they will set up an orbital habitat, with some production capability. Maybe drag an asteroid into orbit. And then send science teams down to the planet to do research every so often.

ArtisticPhoenix

I do agree with you, but there will probably be a few weirdos who want to try it out. After they get the basic habitats set up and begin really expanding the population they can spare some time and effort on a planet if there is enough interest. – Dan Clarke Feb 1 '18 at 7:13

Depends on your scale, my ideas are based on my generation ship concept. Which is almost 1,400 km in diameter. Roughly the size of Pluto. It has 10 km of atmosphere. – ArtisticPhoenix Feb 1 '18 at 7:15

My original goal was the same, then after I built my perfect ship I realized there is no point to planets. I am actually sort of stuck on that, because my ship is built to go about 400 ly, but why build such a large ship, go so far, once planets are pointless? – ArtisticPhoenix Feb 1 '18 at 7:17

Handwaving works nicely when it allows cool stuff to happen. – Dan Clarke Feb 1 '18 at 7:22

"Even going to the nearest stars to earth will take (@0.5c) anywhere from 8-20 years depending on how fast you can accelerate." Time dilation on the occupants could mean more like 7.5 years. That's not really so long. – Clumsy cat Feb 1 '18 at 12:30

In the early 1600s one hundred people settled in Jamestown. Those 100 people came from a population of 4 million UK subjects. If a colony ship has a population of 50,000, there are very poor odds that you would have enough people willing to start a colony on another planet. But suppose the population of the Earth is eight billion people. Now you'd have a pool of 200,000 colonists. You select 50,000 of those people and put them on the generation ship in suspended animation.
The generation ship is then crewed with 30,000 people to run and maintain the ship and watch over the sleeping colonists. This becomes a sleeper/seeder colony ship hybrid. A potential dark twist on this idea is the Australia solution. If a person on the generation ship commits some terrible crime, instead of execution, that person is put into suspended animation, and dumped onto the colony world. Note: My answer was based on the other answers where the colony ship takes decades to reach the next solar system. Jim WolfordJim Wolford $\begingroup$ suspended animation, is a seeder ship..... or that's typically what I hear it called .. lol $\endgroup$ – ArtisticPhoenix Feb 1 '18 at 7:44 $\begingroup$ The Australia solution is good. Far in the future, maybe hundreds of worlds are accidentally populated with humans because of this practice. $\endgroup$ – called2voyage Feb 1 '18 at 13:33 $\begingroup$ ... but the beer is rubbish. $\endgroup$ – Rupert Morrish Feb 2 '18 at 0:23 $\begingroup$ The Jamestown comparison seems odd. I'm about 55% certain the English also sent some other people to at least two or three other colonies within a similar time period. And with Jamestown alone, just two years after the first group of settlers, a (third) supply mission arrived with an additional 500-600 settlers. More likely, the primary limiters on the number of settlers were funding and the size of and quantity of ships, rather than how many potential colonists could be found. $\endgroup$ – 8bittree Feb 2 '18 at 17:35 Yes, except the part where you state If they find a planet that looks like it can be terraformed, a planet with no life, or only the most basic of bacteria which is wiped out, they get to work making the planet livable for the people who want to have a sky over their head. Any planets with multicellular life is carefully studied by probes, but left otherwise alone because the risk of contamination, allergies, etc, are too great for the ships, and terraforming them will destroy the ecosystem. Based on the reconstructed history of life on our planet, the stage which for you is OK to use is a small fraction of the total time where life is possible. This reduces the likelihood of ending on a planet, and people in cramped spaces get easily angry... Based on the history of human colonization, efforts to preserve the hosting environment/populations arise centuries after the colonization has been made. Again, people in cramped spaces get easily angry, and anybody will have an hard time explaining 50000 people that they have to stay in their glass bubble surrounded by deep and hostile space a few more centuries for the sake of preserving the slimy pink mossy blob which covers that nice planet few hundreds kilometers away. L.Dutch♦L.Dutch $\begingroup$ That would be reasonable if there was FTL, and people were used to planet life. But these people will have spent their entire life in a generation ship, seeing an open sky above their heads would likely leave most of them at least nervous. Having a shiny new habitat with plenty of elbow room, and nice safe machines making sure everything is running smoothly, would probably be their idea of comfort. Also in a hard science setting, alien planets are not going to be nice places to live for the most part. The different chemicals and biology will wreak havoc on an alien biology. 
$\endgroup$ – Dan Clarke Feb 1 '18 at 6:58 $\begingroup$ Your assuming the planet would have a better living condition then a colony, a colony sufficient enough to support a lot of people for a very long time. $\endgroup$ – ArtisticPhoenix Feb 1 '18 at 6:59 $\begingroup$ @DanClarke, what you say then makes me ask "why would they ever consider landing on a planet?" $\endgroup$ – L.Dutch♦ Feb 1 '18 at 7:13 $\begingroup$ @L.Dutch, it's not so much a big plan, as a "we have room, maybe some crazies will want to live on a planet, so throw it in." And hey watching an Earth environment growing and adapting to an alien sun, gravity and a different but survivable atmosphere and mineral content, would make a cool project. $\endgroup$ – Dan Clarke Feb 1 '18 at 7:18 Yes, it's realistic. There are a few problems that many of the respondents have overlooked. machines break. These star ships might need to last thousands of years. Think of how worn down the pyramids are - these ships have to run for twice as long. Granted, there's no pesky atmosphere to deal with, but you've still got problems with lubrication, for example. After 5,000 years, your elevators might break down. This would be no big deal in a community of billions, but even with sci-fi level fabrication and automation techniques, there will be some issues that cannot be fixed without a shipyard. If there's no ship yard, that means stopping and building one. Which leads us into the next problem: 50,000 is not enough people for specialization of labor to work properly. When they have to stop and fix something, they are going to have to check the wiki, and hope that the problem was something we anticipated 10,000 years ago. Even if every person on the ship works and studies, maybe they won't know how to fix the ship. Maybe they will have to build a new one. Could you build a new pyramid? What if you also had to build the crane you'd use to make the pyramid? Dictator-ships. Ships 'aint democracies. They have captains and strict rules - even cruise ships have big long lists of do's and don'ts. How long would you like to live on a cruise ship before you wanted off? What if you want 4 kids? Or what if you don't like the weird-ass religion that your ancestors made up on the journey? What if you've got some recessive gene that the captain has decided to purge? To conclude: Only a tiny fraction of people ever get on colony ships, but when the time comes, they always get off. $\begingroup$ That's why the ships go out in groups, once they reach their destination there's a few hundred thousand, with the databases, robots, tools and manufacturing plants they need, not just the single ship. And if there is a major problem on one ship they can outside help. I wouldn't think it would be common, as moving between the ships would be difficult, but by staying in constant contact with each other it wouldn't be impossible. As you say it would be tight and if they have two or three major problems, they would be in serious trouble, but no risk no glory. $\endgroup$ – Dan Clarke Feb 1 '18 at 14:53 $\begingroup$ @james There is an atmosphere -- work out the density of particles hitting it from the interstellar medium. It makes a hurricane force wind seem like nothing. $\endgroup$ – Yakk Feb 1 '18 at 14:55 $\begingroup$ Imagine living on a sea cruise ship that will not see port for 10+ years. I speculate that within the first month people will be trying to take the lifeboats on day trips. Within a year the lifeboats will be hacked into expedition vessels. 
Human nature is to alter their environment, even if the environment provides everything you need. We couldn't leave the space station alone. A generation ship will not remain a single ship for long, it will quickly evolve into a flotilla, with opposing factions, opposing destinations, and at .5c speeds, quickly out of communication range. $\endgroup$ – slomobile Feb 2 '18 at 2:31 $\begingroup$ With multiple ships you could get interesting problems like, ship 13 has been unable to accelerate for a few days, and is now way behind the group. Should the group slow to keep everyone together? Well of course not. Screw ship 13! $\endgroup$ – joeytwiddle Feb 2 '18 at 7:12 Considering the preconditions you have given, which basically say that you are not interested in technological problems at all, surely it is a realistic way! It's your story. You have to find problems (like pointed out in many of the other answers, for example, "why would they want to live on a planet") and solve them for your version of humanity. Solving those problems would be the point of interest of your novel. E.g., the argument that people who have lived in space for 100 generations see no point to settle on a planet could be resolved this way: assume that the spaceship technology is barely fun enough to keep everyone from suiciding. No heroic extra-vehicular activities in race boats for fun; no action-packed alien missions, nothing of that sort. Just boringly slogging along. Maybe they don't regularly play with gravity, no "free fall sex" escapades or anything of that sort. Maybe there does not develop a sense of infinite freedom in space (which there is nothing of, for us, right now). They have no limitless space available to them personally, but just a clunky, degrading, half-lit, stale-air metal can in which they are constantly reminded of death on the other side of the wall. Maybe, to make those generation ships work, they need very strict hierarchies/duties to keep them afloat at all; and part of the incentive to go down to the planet is that small, like-minded groups can go to vastly separated regions to do their own thing. Maybe they bring along pictures/books/films of the earth which turn into some half-religious planet cult; being allowed to live on a planet could from the beginning be made out to be the highest climax of everyone's life. Finally, think about how great of an adventure space is for us today; you can be pretty sure that lots of boys and girls fantasize of spending some time on even our limited versions of space travel/space stations. After 100 generations, maybe in your world, it is simply reversed - boys and girls are just bored to death by their spaceship and really looking forward to adventures on a planet. And so on. Whatever reason anyone of us could think about to make it unlikely that your plan works, gives you a reason to make your story interesting. AnoEAnoE $\begingroup$ I am interested in the tech side, which is why I put it up here. But you are right that the social side interests me more and would form the larger part of the stories. Very good answer, especially about the planet. $\endgroup$ – Dan Clarke Feb 1 '18 at 14:57 The Journey In 2 Halves We already have an 'artificial gravity' per your requirement, which also sets the time to arrival; acceleration at ~1g. For the first half of the journey you accelerate, then you turn it around and decelerate for the remainder. Per relativity, this will feel exactly like being in a 1g environment on a planet. 
There is a neat derivation of the time to arrival via this mode for various destinations on this page, with the result that it would take no more than 30 years to get anywhere you have the data to bother exploring, even popping to the next galaxy. Note that in the Earth frame it would take you a lot longer to get there, but time dilation being what it is, that doesn't matter to the skyfarer. Generational attitudes, settlers and mariners Some of the other answers assume people would prefer to stay in space, on the assumption they are there for many generations, and it's hard to get out of the gravity well and leave a planet if you land on it. If it's just 1 generation then I suspect a large fraction of the population would be keen to start their life on the planet; these settlers would see the destination as their opportunity, mirroring those who journeyed to settle the Americas. We generational mariners But the gravity well is deep, so it would make sense for a number to remain on the ship in a wide orbit and pick a new location to head for. These mariners would never land, carrying on to the next planet. After a few generations there would be essentially some who had been on the ship for generations, had never wanted to settle on a planet. If they had a good stock of varied genetic material (e.g. a basic sperm bank) to maintain a decent gene pool, there is no real human limitation on this approach. I would wonder, however, whether some such ships would stop bothering with planets. Perhaps there would be reverse-Moana figures who sought to revive the idea of settling when they discover the ship's origins. Moon-miners One reason not to just forever flit through the stars is the need for physical resources; reserves of water and minerals which can replenish the exploring ship. It would make sense to harvest some such resources at each system where a settler-division happens; either by sending the settlers down with a rocket which could return material, or more likely by mining smaller moons. This mining stage could easily take a number of years to construct the equipment, refine the material and return the extracts and equipment back to the ship. During this long goodbye, perhaps some of the mariners would change their mind and opt to settle. Colonial control A key part of the settlement of the Americas was the need to control the settlers; in this instance, the home planet would want to know the settlers weren't going to return to destroy them. There is a tension here; on one hand the colony ship experiences less time than Earth; only decades could pass for them while centuries pass on Earth, so Earth can expect to have superior technology. But, this fact would make the settlers defensive; more people from Earth would likely arrive every few decades, bringing new technology, new diseases, new threats and new opportunities. I think there is an argument for the possibility that the government licensing the explorers to leave Earth with such a ship would require that the settlers were compelled to plant some kind of doomsday device on their new planet to ensure their future cooperation, and obviate the need to impose an external threat. The Overtakers Consider a ship heading to its second destination planet; subjectively 50 years have passed, but from Earth's perspective several centuries have done. A more recent ship has overtaken the first, by not stopping at the first planet, and thus when the first ship arrives it finds the planet already settled by a century-old settlement. 
Do they settle, but elsewhere on the planet? Or do they carry on to another planet where they could be even further behind? Newer technologies would inevitably also permit later departures to arrive earlier; better g-suits, genomic adjustments, cryo, take your pick. What is the protocol? What are the rules, thousands of light-years into uncaring space? The First Man For A Hundred Years Earlier I mentioned an onboard sperm bank as an easy way to ensure a wide gene pool, which has an interesting side-effect: The more fertile men on board, the more the gene pool is narrowed (donor eggs would require a surrogate, so the men would most likely reproduce with the women on board). So the longer you plan to live in space, the fewer men you want on board on a purely pragmatic level. Potent men begin an immediate biological countdown. This could easily lead in some instances to entirely female crews; where perhaps they have had to change destination a few times, and it is safer for the onboard society to just have female offspring for a while. Eventually, perhaps, a boy would be born by accident or by design. Alternatively, a larger male population could exist if the destination is assured but be required to leave on the landing raft, so that the onward exploration could continue unimpeded. Some of the descendants of the settling ships would inevitably dream of the stars, of the people who continued on. With the passage of time some would build their own rockets and starships to follow the mariners, and with the joys of time dilation they could actually meet some of these historical figures, for whom proper time seeps only slowly into the margins of their existence. Phil HPhil H There are a few realism flaws. First, 50% of the speed of light is insanely fast for a macroscopic object. Second, taking only a few centuries to create new generation ships. Slow them down to 1% or even 0.1% of the speed of light. Spend many 1000s of years expanding over a new system. You'll still colonize the galaxy in the blink of a cosmic eye. The current most practical form of interstellar colonization looks like star wisps -- ridiculously light von Neumann probes launched using a type-2 civilization's power output. Possibly coming to a stop using huge mirrors and lasers fired from the source system. The huge investment and tiny payload (traveling at a very small fraction of c, as the interstellar medium is quite dense at fast speeds) then has to be able to replicate itself in the target system and industrialize it. It can carry data with it (uploaded consciousnesses possibly included, or entire biospheres of data) and when industrialization is well underway can deploy this data; or, if civilizations are sufficiently long-lived, it can build an antenna and get it beamed after the fact. If it takes a probe 10 years to produce a duplicate and the probe weighs 2 grams, converting 0.01% of the solar system's mass into probes is 2E23 kg or E27 probes, which is 27/3*10 = 90 doublings, or just under 1 thousand years. At that point, the system is going to be close to a type-2 civilization and would be capable of launching another star wisp at a small fraction of c. It might also be able to catch a colony ship traveling at a small fraction of c. A type 2 civilization has E26 watts of power. 1 year of output is E33 J. At 1% C, kinetic energy is $$E_k = mc^2(\frac{1}{\sqrt{1-\frac{v^2}{c^2}}} -1)$$ $$E_k = mc^2(\frac{1}{\sqrt{1-0.01^2}} -1)$$ $$E_k = (.00005) mc^2$$ or 1 part in 20,000 of the mass-energy of the target. 
This means we can speed up or stop 2E20 kg over a period of 1 year if we have 100% efficiency. Ceres is E21 kg. So a type 2 civilization can launch something roughly the size of Ceres at 1% of the speed of light, and another type 2 civilization can stop it at the other side, assuming they can deliver the momentum over a distance of 1% of 1 light year. 5% of the speed of light makes this: 30 times more energy for the same mass. It also makes the distance you have to project the energy 5 times further and hence 25 times harder (tyranny of inverse-square). At 50% of the speed of light $$E_k = mc^2(\frac{1}{\sqrt{1-0.5^2}} -1)$$ $$E_k = (.15) mc^2$$ 120 times more energy than 5%, and 10 times further energy projection (which is 100 times harder). The reason why the star wisp has to be as small as possible is that most of the mass you'll launch will be in the form of mirrors and lenses and light sails. You need to brake, which means you need momentum in the opposite direction. You shoot off lenses/and mirrors, then reflect light from your launch laser back onto the star wisp's light sail. These mirrors are pushed further out (and never stop), but you can get a tiny star wisp to stop with ridiculous energy expenditure over "short" interstellar distances. YakkYakk $\begingroup$ That could be a bit too slow for my liking, but yeah, maybe 5-10% would be OK, still slow but not a crawl. $\endgroup$ – Dan Clarke Feb 1 '18 at 14:47 $\begingroup$ @DanClarke 5% light speed means that interstellar helium are alpha particles. Getting to 5% light speed requires either total conversion engines/antimatter engines and a huge amount of raw fuel and/or a type 2 civilization at the launch star. Stopping a large ship at 5% light speed requires total matter-energy conversion, or a type 2 civilization at the destination star. Space is big. I guess if your robot probes have already von-neuman industrialized the target system it might be possible. $\endgroup$ – Yakk Feb 1 '18 at 14:54 $\begingroup$ at this point humanity would be a type 2. The slowing down will be the problem which requires some thought. I do like the idea of sending out special one use von-neuman probes a few decades before hand to start everything up, that could work nicely. $\endgroup$ – Dan Clarke Feb 1 '18 at 15:07 getting a very large boost as they start their journey. This is an issue. When travelling in space slowing down and speeding up are more or less the same problem. There is pretty much negligible friction to slow you down again. So if there is some special "boost" that it gets from its home system that put it over its maximum natural speed, how it going to stop? It makes most sense to accelerate continuously for the first half of the journey, then decelerate for the second half. Clumsy catClumsy cat $\begingroup$ Good point, should have thought of that myself. $\endgroup$ – Dan Clarke Feb 1 '18 at 14:54 $\begingroup$ One possible answer to that: Don't slow down the ship, just slow down the people and the essentials they need for colonising. Some of the other heavy stuff you used for the journey can keep sailing on past the target solar system. (Maybe even faster than before if you "push off it".) Of course, in this scenario you lose a lot of your ship's equipment. You'd need to build a whole new ship from scratch for the next journey. $\endgroup$ – joeytwiddle Feb 2 '18 at 7:08 $\begingroup$ @joeytwiddle Yeh, that makes sense. 
Makes for a dramatic visual too :) $\endgroup$ – Clumsy cat Feb 2 '18 at 10:31 Dr Bob Enzmann wrote extensively about this sort of thing. He is still around.look him up. Reasons for wanting off the ship: Vast open spaces. as Much population as you can handle. and the 2 word answer? Soylent Green! Neal ClearyNeal Cleary $\begingroup$ Perhaps you could cite specific passages relevant to your answer? This site discourages answers in the form of links elsewhere, without explanation. $\endgroup$ – rek Feb 1 '18 at 19:34 $\begingroup$ I agree with rek. Please don't just say "This guy wrote about this subject, look him up"; that's not an answer. If Enzmann's writing contains an answer to the question, please provide a more detailed summary of it. $\endgroup$ – F1Krazy Feb 1 '18 at 19:45 Look at Larry Niven's Outsiders - they lived a little bit like this. There are technological issues as well as social issues with the whole concept, but most of them come from an extremely limited, anthropomorphic point of view. Just because humans, in our current state of development, could not maintain such a society, nor handle the technological challenges, does not make it an absolute impossibility. In fact, much good sci-fi comes from exactly that kind of projection. "IF" we were able to overcome such-and-such a technological or social obstacle, what would be the fallout? pdanespdanes $\begingroup$ This does not provide an answer to the question. Once you have sufficient reputation you will be able to comment on any post; instead, provide answers that don't require clarification from the asker. - From Review $\endgroup$ – Ash Feb 2 '18 at 13:51 $\begingroup$ The question, as I understood it, was whether the OP's concept was practical. The Outsiders are an example of a society that lives just that way. The question of whether or not it is realistic is purely speculative in all cases - nobody knows of any such existing examples. Other people brought up various technical objections, I simply referred to a story line where it worked. $\endgroup$ – pdanes Feb 2 '18 at 14:10 You're real problem is between the statements "don't have FTL" and "do have artificial gravity." Real linear artificial gravity violates F=ma "equal and opposite reaction" and once you've done that, the speed of light changes from a speed limit to a curio. Chances are the next problem of getting all your energy back into a usable form when you hit The Big Red Stop Button (lever) will be solved too. Chris SeveranceChris Severance $\begingroup$ Well, we currently don't (and may never) have FTL, yet can use centrifuges to create artificial gravity. As the question never specified any specific type of artificial gravity, assuming it means "Real linear artificial gravity" isn't an answer. I also don't get how invalidating a classical force equation immediately invalidates one of the cornerstones of non-classical physics... $\endgroup$ – Mithrandir24601 Feb 4 '18 at 11:20 $\begingroup$ @Mithrandir24601 thanks for responding, I couldn't have said it better. With the size of these ships centrifugal force wouldn't leave the passengers feeling nauseated, like it would on a smaller vessel. $\endgroup$ – Dan Clarke Feb 4 '18 at 22:12 I would drop the notion that your technology includes artificial gravity (beyond spinning) and engines that can get you to half the speed of light. Which as many point out limits time for the generations and leads to the problem of hitting stuff. 
I would assume a spinning cylinder of a decent size, with the interior designed to provide space, light and "nature", and the center axis acting as a source of light and "rain". Your vessel has decks below (outside of the inner space) that provide space for farming, then industrial tools, then storage; lastly, the outermost deck would be flooded with water, not only to protect the occupants from cosmic radiation but also to make certain that the ship has enough water to make it across the vast expanse of interstellar space. I would increase crew/passenger size to 100,000 with an expected expansion to 300,000. I would also make the on-board time between stars about the same as the remaining lifespan of a 20-something first-generation person; 50 to 60 years seems reasonable, so you would have grandparents who remember green hills, blue skies and a horizon that bent down instead of up. 10% light speed tops makes for a reasonable on-board trip time. I'd go with something along the lines of ion engines: low thrust that continues for a very long time. Instead of turning the ship you would have thrusters pointing bow-ward at an angle, so your deceleration time would need to be longer than your acceleration time. This way you can use both an ice plow/shield and a Bussard ram scoop collector to provide more fuel. Ion thrusters are weak - 5.4 newtons of thrust is the latest and most powerful ion engine. One can convert newtons to gravity... but I'm not doing it here. I can pretty much tell you that you would be pulling about 0.01 g, which would hardly be felt; the spinning craft would exert a greater force on people than the continual thrust. Your ship could get up to a decent velocity given lots of time, but half the speed of light will bring way too many other technical questions to mind, as previously explored. 300,000 colonist/terraformer/ship builders arriving in a stellar system is a good start. One could argue that the ship is feeling crowded by this time, so passengers and crew would be hungry for more space. And humans are really good at making new humans, so after 20-40 years at this new star system, given enough space, humans could be at the half-million mark plus. They would need to stay in that system long enough to make enough humans to fill two ships... plus leave behind enough colonists to make a new world, make more ships - whatever direction they want. Figure another three generations of time (say 60 years) and 1.5 kids per parent and, magically, the human population is at or slightly above one million. To give you an idea of how fast humans can make more humans, just look at population growth from 1900 (about 1.6 billion humans) to 2000 (about 6.1 billion humans). So yes, your basic idea makes sense: humans making more humans, and the desire to find fresh, untouched worlds to explore. Perhaps groups of like-minded people are off to make their own worlds by their own rules... As for size, Rendezvous with Rama might be helpful - not only with size, but with interior design to handle space travel. Bowyn Aerrow
EURASIP Journal on Information Security Research | Open | Published: 29 May 2018 Secrecy outage of threshold-based cooperative relay network with and without direct links Khyati Chopra ORCID: orcid.org/0000-0001-6218-53011, Ranjan Bose1 & Anupam Joshi2 EURASIP Journal on Information Securityvolume 2018, Article number: 7 (2018) | Download Citation In this paper, we investigate the secrecy outage performance of a dual-hop decode-and-forward (DF) threshold-based cooperative relay network, both with and without the direct links between source-eavesdropper and source-destination. Without assuming that all the relays can always perfectly decode, here we consider that only those relays who satisfy predetermined threshold can correctly decode the message. We have investigated the outage probability of optimal relay selection scheme, when either full instantaneous channel state information (ICSI) or statistical channel state information (SCSI) of all the links is available. We have shown that CSI knowledge at the transmitter can improve secrecy, and the amount of improvement for the outage probability is more when the required rate is low and for low operating SNR. Asymptotic and diversity gain analysis of the secrecy outage for both the single relay and multi-relay system is obtained, when average SNRs of source-relay and relay-destination links are equal or unequal. We have shown that the improvement in predetermined threshold, eavesdropper channel quality, direct links, and required secrecy rate significantly affects the secrecy performance of the system. Cooperative communication plays a promising role to expand the coverage of wireless networks, save uplink transmit power of source due to highly constrained wireless resources [1–3], and increase the spatial diversity without increasing the number of antennas [4]. Due to the broadcast nature of wireless medium, these cooperative networks are susceptible to eavesdropping, where the unintended receiver (eavesdropper) might overhear transmissions from the source and hence, can potentially cause great threat to secure wireless communication [1, 5]. Wireless security has traditionally relied on data encryption and decryption techniques at various layers, but key distribution becomes a major challenge in these cryptographic algorithms [6]. Recently, physical layer security (PLS) or information-theoretic security has emerged as an alternate paradigm for secure wireless cooperative communications [3, 6]. This line of work was pioneered by Wyner [5], where he introduced the degraded wiretap channel (DWTC) model by exploiting the physical characteristics of wireless channels [1, 6] and defined the concept of secrecy capacity. A positive secrecy capacity can only be achieved when an eavesdropper's channel is a degraded version of the main (or legitimate) channel. A survey on PLS is presented in [7], with technical challenges and recent advances. Authors in [8] have investigated the PLS for a wireless ad hoc network with numerous eavesdroppers and legitimate transmitter-receiver pairs. The PLS for a spectrum-sharing system has been examined in [9], which consists of multiple source-destination pairs. Physical layer security for full duplex communications is discussed in [10], with self-interference mitigation. Node cooperation is also introduced in PLS to improve the performance of secure communication by overcoming the wireless channel impairments [1, 2, 11]. 
Authors in [12] have investigated cooperative beamforming and user selection techniques to improve the security of a cooperative relaying network and have explored the concept of cooperative diversity gain, namely, adapted cooperative diversity gain (ACDG), which can be used to evaluate the security level. Authors in [11] have investigated secrecy outage of dual-hop amplify-and-forward (AF) relay system with relay selection without the knowledge of eavesdropper's instantaneous channel state information (ICSI). Authors in [13] have presented the secrecy outage probability of AF multi-antenna relay network in presence of an eavesdropper. Comprehensive study of secrecy transmission in decode-and-forward (DF) relay networks subjected to slow fading is presented in [3], and secrecy throughput of the two-hop transmission is maximized under secrecy outage constraint. Diversity is an effective technique to combat the performance degradation in wireless communication systems caused due to fading. Cooperative diversity is incorporated in a multi-path fading environment with the help of relay nodes, to improve the communication reliability and throughput [14]. Maximal ratio combining (MRC) and selection combining (SC) are two diversity combining techniques, where the relayed signal, as well as, the signal from the source are combined to obtain the diversity gain [14], and to enable higher transmission rates and robustness against channel variations due to fading. Cooperative jamming is introduced in [6, 15] where, in order to confuse the eavesdroppers, the source transmits the encoded signal and weighted jamming signal is transmitted by relays. The optimal routing policy that minimizes the cost with secrecy outage probability constraint over multi-hop fading wiretap networks is discussed in [16]. Authors in [17] have investigated physical layer secrecy performance of multi-hop decode-and-forward relay networks with multiple passive eavesdroppers over Nakagami-m fading channels. Cooperative multicast scheme which allows the users to function as relays is presented in [18], and the secure outage behavior of this scheme is studied. Secrecy outage probability of multicast cooperative relay network is also studied in [19], in the presence of multi-destination and multi-eavesdropper nodes. Authors in [15] have discussed the outage probability and outage secrecy rate in wireless relay channels using cooperative jamming, assuming that the eavesdropper channels follow a zero-mean Gaussian distribution with known covariance. The problem of physical layer security in a large-scale multiple-input multiple-output (LSMIMO) relaying system is studied in [20], and the impact of imperfect channel state information (CSI) in AF and DF classical relaying schemes is also investigated. Secrecy performance of full-duplex relay (FDR) networks is explored in [21]. Relay selection schemes are introduced based on ICSI to improve the diversity gain in secure cooperative multi-relay system [1, 2, 22]. The combined use of relays and jamming for AF and DF protocols [1, 6] to improve security has also been addressed extensively by authors in [2]. For secure communication in DF relay networks, outage optimal relay selection strategy using destination-based jamming is discussed in [6]. Authors in [23] have studied the impact of both maximal ratio combining and relay selection on the physical layer security in wireless communication systems over Rayleigh fading channel. 
The average secrecy rate was analyzed in [24] for the optimal relay selection scheme in DF relaying systems. Authors in [25] have investigated secrecy outage performance for partial relay selection schemes in cooperative systems. Authors in [26] have analyzed the secrecy outage performance of underlay cognitive radio networks with optimal relay selection over Nakagami-m fading channels. Secrecy performance of threshold-based DF cooperative cognitive networks is extensively discussed in [27], with optimal relay selection scheme. The opportunistic relay selection schemes were proposed in [2, 28, 29] taking into account the quality of relay-eavesdropper links, and it was demonstrated that the proposed relay selection schemes can significantly improve the secrecy outage probability. Single opportunistic relay selection scheme, which selects the relay that maximizes the system secrecy capacity for secure communication, in a cooperative system with multiple full-duplex decode-and-forward relays is presented in [30]. The PLS problem of cognitive DF relay networks is presented in [31], for Nakagami-m fading channels by using an opportunistic relay selection. Beamforming scheme with opportunistic relaying for wireless security under AF and DF strategies have been discussed in [32]. In most of the prior works, neither threshold-based relaying nor direct link between source-eavesdropper and source-destination is taken into account [1, 2, 6, 22, 29]. However, due to the broadcast nature of wireless medium, the direct link is likely to exist in practice. Hence, in contrast to the above study, our work investigates the secrecy outage for a dual-hop threshold-based cooperative DF relay network, both with and without the direct link between source-eavesdropper and source-destination. Also, threshold-based relaying is taken into account, where without assuming that all the relays can always perfectly decode, we consider that only those relays who meets predetermined threshold can correctly decode the message [27]. We have used DF protocol instead of AF in our study for the secrecy performance analysis of dual-hop threshold-based cooperative relay system. The study on DF and AF protocols has been done in the prior literature [33–35]. In comparison to AF, the bit error rate (BER) performance of DF scheme is better [33–35]. On the other hand, AF relaying technique is much simpler as compared to DF, as the complexity of a DF scheme is significantly higher due to its full processing capability [33–35]. Since we have considered threshold-based relaying in our paper, DF protocol is employed, such that the relay first decodes the source message, then compares it with the required threshold, and only if the threshold is met, the message is correctly decoded and forwarded by the relay node. AF protocol is not applicable for threshold-based schemes. The main contributions of our study are summarized as follows: Outage probability analysis of the cooperative threshold-based DF relay system is presented without assuming that all the relays can always correctly decode. We have shown that the secrecy outage performance can be affected by link quality of both source-relay and relay-destination. Without assuming that the direct transmissions are absent owing to deep shadow fading or large distance between nodes, the expression for secrecy outage of DF threshold-based cooperative relay network is derived, both with and without the direct link between source-eavesdropper and source-destination. 
We have shown that the improvement in predetermined threshold, eavesdropper channel quality, and required secrecy rate significantly affects the outage performance of the system. Secrecy outage probability is evaluated for optimal relay selection scheme, when either ICSI or SCSI is known for cooperative DF threshold-based dual-hop relay system. We have shown that CSI knowledge at the transmitter can improve secrecy, and the amount of improvement for the outage probability is more when the required rate is low and for low operating SNR. Asymptotic and diversity gain analysis of the secrecy outage for both the single relay and multi-relay cooperative system with optimal relay selection is obtained, when average SNRs of source-relay and relay-destination links are equal or unequal. The remainder of this study is organized as follows. The system model is described in Section 2. Outage probability expressions are evaluated for threshold-based single cooperative relay system, both with and without direct link in Section 3. In Section 4, outage probability is studied for optimal relay selection scheme. Asymptotic and diversity gain analysis is presented in Section 5. In Section 6, numerical results are discussed and finally, we conclude this study in Section 7. We consider the system model, consisting of a source S, a destination D, an eavesdropper E and N number of DF relays R i , i∈[1,2..,N] which work in a dual-hop mode as depicted in the Fig. 1. We assume there is also a direct S−D and S−E link due to the broadcast nature of wireless medium, and the communication takes place with the help of a single cooperative relay. We have derived the expression for secrecy outage probability of this dual-hop DF threshold-based cooperative relay network, both with and without the direct link between source-eavesdropper and source-destination. Threshold-based relaying is taken into account, where without assuming that the relay can always perfectly decode, we consider that only if the the received SNR at the relay meets predetermined threshold, illustrated as γ-th for S−R i link, it can correctly decode the message from the source [27, 36]. When none of the relays could perfectly decode the message from source, i.e, all relays have S−R i link SNR lower than the threshold, then, only direct communication between S−D and S−E takes place. The links between various nodes works in half-duplex mode and are modeled as flat Rayleigh flat fading channels, which are mutually independent but not identical [2, 22]. Dual-hop cooperative threshold-based multi-relay system The SNR between any two arbitrary nodes a and b, denoted as Γ ab , is given by [36] $$\begin{array}{*{20}l} \Gamma_{ab} = \frac{P_{a}|h_{ab}|^{2} }{N_{0_{b}}}, \end{array} $$ where P a is the transmitted power at node a, $N_{0_{b}}$ is the noise variance of the additive white Gaussian noise (AWGN) at b. As h ab is Rayleigh distributed, Γ ab is exponentially distributed with mean 1/β ab [37], expressed as $\Gamma _{ab} \sim \mathcal {E} \left (\beta _{ab}\right)$, where β ab is the parameter of the exponentially distribution. For the random variable Z, which is exponentially distributed with parameter β ab , the CDF is given as $$\begin{array}{*{20}l} F_{Z}(z) &= \mathbb{P}[Z \leq z] \\ &=1-e^{-z\beta_{ab}}, \end{array} $$ and the corresponding PDF is obtained by differentiating (2) with respect to z as $$\begin{array}{*{20}l} f_{Z}(z) =\beta_{ab}e^{-z\beta_{ab}}. 
\end{array} $$ For MRC, the random variable Z is the sum of two random variables A and B, i.e., Z=A+B where A and B are exponentially distributed with parameters β ab and $\beta _{a'b'}\phantom {\dot {i}\!}$, the CDF is given as $$\begin{array}{*{20}l} F_{Z}(z) &= \mathbb{P}[A+B \leq z] \\ &= \mathbb{P}[A \leq z -B] \\ &= 1 - \frac{\beta_{a'b'}e^{-z\beta_{ab}}}{\beta_{a'b'}-\beta_{ab}} - \frac{\beta_{ab}e^{-z\beta_{a'b'}}}{\beta_{ab}-\beta_{a'b'}}, \end{array} $$ $$\begin{array}{*{20}l} f_{Z}(z) = \frac{\beta_{a'b'}\beta_{ab}e^{-z\beta_{ab}}}{\beta_{a'b'}-\beta_{ab}} + \frac{\beta_{ab}\beta_{a'b'}e^{-z\beta_{a'b'}}}{\beta_{ab}-\beta_{a'b'}}. \end{array} $$ The S−R i channels $ h_{sr_{i}} $, R i −D channels $ h_{r_{i}d} $, R i −E channels $ h_{r_{i}e} $, S−D channels h sd , and S−E channels h se , ∀i∈[1,2..,N], are slowly varying Rayleigh flat fading channels [38]. Let P s and $ P_{r_{i}} $ denote the average powers used at source and relay R i , respectively. Also, let $N_{sr_{i}} $, $ N_{r_{i}d} $, $ N_{r_{i}e} $, N sd , and N se denote the variances of additive white Gaussian noise of S−R i , R i −D, R i −E, S−D and S−E links, respectively. The SNRs $ \Gamma _{sr_{i}} $, $ \Gamma _{r_{i}d} $, $ \Gamma _{r_{i}e} $, Γ sd and Γ se are exponentially distributed given as $ \Gamma _{sr_{i}} = \frac {P_{s}|h_{sr_{i}}|^{2}}{N_{sr_{i}}}$, $ \Gamma _{r_{i}d} = \frac {P_{r_{i}}|h_{r_{i}d}|^{2}}{N_{r_{i}d}} $, $ \Gamma _{r_{i}e} = \frac {P_{r_{i}}|h_{r_{i}e}|^{2}}{N_{r_{i}e}} $, $ \Gamma _{sd} = \frac {P_{s}|h_{sd}|^{2}}{N_{sd}} $ and $ \Gamma _{se} = \frac {P_{s}|h_{se}|^{2}}{N_{se}} $ with average values $ 1/ \beta _{sr_{i}} $, $ 1/ \beta _{r_{i}d} $, $ 1/ \alpha _{r_{i}e} $, 1/β sd , and 1/α se , respectively where $ \beta _{sr_{i}} $, $ \beta _{r_{i}d} $, $ \alpha _{r_{i}e} $, β sd , and α se are the parameters of the exponential distribution. An outage event occurs when the instantaneous secrecy rate is lower than the required secrecy rate of the cooperative relay system, given as R s where, R s >0 and $\phantom {\dot {i}\!}\rho =2^{2R_{s}}$ [6, 22, 27]. We have used ρ for direct mapping of required secrecy rate R s , and the probability of successful occurrence of this outage event is called outage probability P o , which is a key metric in evaluating the performance of physical-layer security [38]. The achievable secrecy rate is the difference between the capacity of main link and that of wiretap link [1, 5, 38] $$\begin{array}{*{20}l} C^{d}_{s} \triangleq \frac{1}{2}\left[\log_{2}\left(\frac{1+\Gamma^{d}_{M}}{1+\Gamma^{d}_{E}}\right)\right]^{+} \end{array} $$ where $ C^{d}_{s} $ is the secrecy capacity when both S−D and S−E direct link exists, $ \Gamma ^{d}_{M} = \Gamma _{r_{i}d} + \Gamma _{sd} $ is the maximal ratio combined SNR of the main link at D and $ \Gamma ^{d}_{E} = \Gamma _{r_{i}e}+\Gamma _{se} $ is the maximal ratio combined SNR of the eavesdropper link at E. The term 1/2 here denotes that to complete this dual-hop transmission process, two time phase are required. The message transmitted by the source is decoded at the relay, whose threshold is satisfied in the first phase. In the second phase, one of the relay is selected to re-encode and forward the message to the destination. 
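As a quick numerical check of the combining model, the closed-form CDF in (4) for the MRC output Z = A + B can be compared against simulation. The short Python sketch below does this at a single evaluation point; the rates beta_ab = 0.2 and beta_a'b' = 0.5 and the point z = 4 are arbitrary illustrative values, not parameters taken from the system under study.

import numpy as np

# Arbitrary illustrative exponential parameters (rate beta, so the mean SNR is 1/beta)
beta_ab, beta_apbp = 0.2, 0.5          # beta_{ab} and beta_{a'b'}, assumed unequal
z = 4.0                                # point at which the CDF is evaluated
n = 10**6                              # number of Monte Carlo samples

rng = np.random.default_rng(1)
A = rng.exponential(1.0 / beta_ab, n)      # exponential with mean 1/beta_{ab}
B = rng.exponential(1.0 / beta_apbp, n)    # exponential with mean 1/beta_{a'b'}

# Empirical CDF of the MRC output Z = A + B
empirical = np.mean(A + B <= z)

# Closed-form CDF from (4)
closed_form = (1.0
               - beta_apbp * np.exp(-z * beta_ab) / (beta_apbp - beta_ab)
               - beta_ab * np.exp(-z * beta_apbp) / (beta_ab - beta_apbp))

print(f"empirical: {empirical:.4f}   closed form (4): {closed_form:.4f}")

For unequal parameters the two values agree to within Monte Carlo error; the equal-parameter case would instead require the limiting (Erlang) form of the sum distribution.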
From (6), when the relay node does not meet the predetermined threshold due to shadow fading [39], the secrecy capacity is defined as $ C^{d'}_{s} $ where $ \Gamma ^{d'}_{M} = \Gamma _{sd} $ is the combined SNR of the main link at D, and $ \Gamma ^{d'}_{E} = \Gamma _{se} $ is the SNR of the eavesdropper link at E. We have investigated three scenarios in our study where first is when the direct link between both source-eavesdropper and source-destination exists, as discussed above. The second is when the direct link only between S and E is considered assuming that the direct link between S and D is absent owing to deep shadow fading or large distance between nodes. From (6), when the relay node meets the predetermined threshold, the secrecy capacity is defined as $ C^{se}_{s} $ where $ \Gamma ^{se}_{M} = \Gamma _{r_{i}d} $ is the SNR of the main link at D, and $ \Gamma ^{se}_{E} = \Gamma _{r_{i}e} + \Gamma _{se} $ is the maximal ratio combined SNR of the eavesdropper link at E and when the relay node does not meet the predetermined threshold due to shadow fading [39], the secrecy capacity is defined as $ C^{se'}_{s} $, where only direct link between source-eavesdropper exists and $ \Gamma ^{se'}_{E} = \Gamma _{se} $ is the SNR of the eavesdropper link at E. The third scenario is when no direct link between S−D and S−E is considered assuming that the direct links between S−D and S−E are absent owing to deep shadow fading or large distance between nodes [1, 2, 22]. From (6), when the relay node meets the predetermined threshold, the secrecy capacity is defined as $ C^{nd}_{s} $ where $ \Gamma ^{nd}_{M} = \Gamma _{r_{i}d} $ is the SNR of the main link at D, and $ \Gamma ^{nd}_{E} = \Gamma _{r_{i}e} $ is the SNR of the eavesdropper link at E and when the relay node does not meet the predetermined threshold due to shadow fading [39], no relay is selected for communication. Secrecy outage probability analysis of single relay system This section deals with the evaluation of the expression for secrecy outage probability of DF threshold-based dual-hop cooperative relay network, in the three scenarios discussed in our study. Each scenario is divided into two probabilistic instances where in the first instance, we consider that the message is decoded successfully [28, 29], as the SNR at the relay node satisfies the predetermined threshold while, in the second instance we consider that the SNR at the relay node does not meet the predetermined threshold. Direct link between both S−D and S−E We evaluate outage probability for single ith relay in the first scenario where the direct link between both S−D and S−E exists as $$\begin{array}{*{20}l} &{P}_{o}^{i}(R_{s})\\ &= \mathbb{P}\left[C^{d}_{s} < R_{s} | \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} \right]\mathbb{P}[\Gamma_{sr_{i}} \geq \gamma_{{\text{th}}}] \\ &\quad+ \mathbb{P}\left[C^{d'}_{s} < R_{s} |\Gamma_{sr_{i}} < \gamma_{{\text{th}}} \right]\mathbb{P}\left[\Gamma_{sr_{i}} < \gamma_{{\text{th}}}\right] \\ &= \mathbb{P}\left[\frac{1}{2}\left[\log_{2}\left(\frac{1+ \Gamma^{d}_{M}}{1+ \Gamma^{d}_{E}}\right)\right] < R_{s} \left\rvert\right. \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} \right] \\ & \quad\times\mathbb{P}\left[\Gamma_{sr_{i}} \geq \gamma_{{\text{th}}}\right] + \mathbb{P}\left[\frac{1}{2}\left[\log_{2}\left(\frac{1+ \Gamma^{d'}_{M}}{1+ \Gamma^{d'}_{E}}\right)\right] < R_{s} \left\rvert\right. \Gamma_{sr_{i}} < \gamma_{{\text{th}}} \right] \\ & \quad \times \mathbb{P}\left[\Gamma_{sr_{i}} < \gamma_{{\text{th}}}\right] \\ &=\mathbb{P}[\! 
(\underbrace{\Gamma_{r_{i}d} + \Gamma_{sd})}_{{\text{MRC}}} < (\rho - 1)+ \rho (\underbrace{\Gamma_{r_{i}e} + \Gamma_{se})}_{{\text{MRC}}} | \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} ]\left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \\ & \quad + \mathbb{P}[\Gamma_{sd} < (\rho - 1)+ \rho\Gamma_{se} | \Gamma_{sr_{i}} < \gamma_{{\text{th}}} ]\left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \end{array} $$ By substituting the PDF and CDF of MRC diversity scheme from (4) and (5) in (7) and after some algebraic simplifications, the outage probability expression is obtained as $$\begin{array}{*{20}l} {P}_{o}^{i}(R_{s}) &= \left(1- \frac{e^{-\left(\rho-1\right)\beta_{r_{i}d}}\beta_{sd}\alpha_{r_{i}e}\alpha_{se}}{\left(\beta_{sd}-\beta_{r_{i}d}\right)\left(\alpha_{se}-\alpha_{r_{i}e}\right)\left(\rho\beta_{r_{i}d} + \alpha_{r_{i}e}\right)}\right. \\ & \:\:\:\:\: - \frac{e^{-\left(\rho-1\right)\beta_{sd}}\beta_{r_{i}d}\alpha_{r_{i}e}\alpha_{se}}{\left(\beta_{r_{i}d}-\beta_{sd}\right)\left(\alpha_{se}-\alpha_{r_{i}e}\right)\left(\rho\beta_{sd} + \alpha_{r_{i}e}\right)} \\ & \:\:\:\:\: -\frac{e^{-\left(\rho-1\right)\beta_{r_{i}d}}\beta_{sd}\alpha_{r_{i}e}\alpha_{se}}{\left(\beta_{sd}-\beta_{r_{i}d}\right)\left(\alpha_{r_{i}e}-\alpha_{se}\right)\left(\rho\beta_{r_{i}d} + \alpha_{se}\right)}\\ & \:\:\:\:\: \left.-\frac{e^{-\left(\rho-1\right)\beta_{sd}}\beta_{r_{i}d}\alpha_{r_{i}e}\alpha_{se}}{\left(\beta_{r_{i}d}-\beta_{sd}\right)\left(\alpha_{r_{i}e}-\alpha_{se}\right)\left(\rho\beta_{sd} + \alpha_{se}\right)}\right)\left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right)\\ & \:\:\:\:\: + \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right)\left(1-\frac{\alpha_{se} e^{-\beta_{sd}(\rho-1)}}{ \rho \beta_{sd} + \alpha_{se} }\right). \end{array} $$ Direct link only between S and E We evaluate outage probability for single ith relay in the second scenario where only the direct link between S and E exists as $$\begin{array}{*{20}l} {}{P}_{o}^{i}(R_{s}) &= \mathbb{P}\left[C^{se}_{s} < R_{s} | \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} \right]\mathbb{P}[\Gamma_{sr_{i}} \geq \gamma_{{\text{th}}}] \\ & \:\:\:\:\: + \mathbb{P}\left[C^{se'}_{s} < R_{s} |\Gamma_{sr_{i}} < \gamma_{{\text{th}}} \right]\mathbb{P}\left[\Gamma_{sr_{i}}< \gamma_{{\text{th}}}\right] \\ &= \mathbb{P}\left[\frac{1}{2}\left[\log_{2}\left(\frac{1+ \Gamma^{se}_{M}}{1+ \Gamma^{se}_{E}}\right)\right] < R_{s} \left\rvert\right. \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} \right]\times \\ & \:\:\:\:\:\mathbb{P}\left[\Gamma_{sr_{i}} \geq \gamma_{{\text{th}}}\right] + \mathbb{P}\left[\frac{1}{2}\left[\log_{2}\left(\frac{1+ \Gamma^{se'}_{M}}{1+ \Gamma^{se'}_{E}}\right)\right] < R_{s} \left\rvert\right. \Gamma_{sr_{i}} < \gamma_{{\text{th}}} \right] \\ & \:\:\:\:\: \times \mathbb{P}\left[\Gamma_{sr_{i}} < \gamma_{{\text{th}}}\right] \\ &=\mathbb{P}[ (\Gamma_{r_{i}d}) < (\rho - 1)+ \rho (\underbrace{\Gamma_{r_{i}e} + \Gamma_{se}}_{{\text{MRC}}}) | \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} ]\left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \\ & \:\:\:\:\: + \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \end{array} $$ $$\begin{array}{*{20}l} {}{P}_{o}^{i}(R_{s}) &=\! 
\left(\!1- \frac{e^{-\left(\rho-1\right)\beta_{r_{i}d}}\alpha_{r_{i}e}\alpha_{se}}{\left(\alpha_{se}-\alpha_{r_{i}e}\right)\left(\rho\beta_{r_{i}d} + \alpha_{r_{i}e}\right)}- \frac{e^{-\left(\rho-1\right)\beta_{r_{i}d}}\alpha_{r_{i}e}\alpha_{se}}{\left(\alpha_{r_{i}e}-\alpha_{se}\right)\left(\rho\beta_{r_{i}d} + \alpha_{se}\right)}\right) \\ & \:\:\:\:\: \times \left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) + \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \end{array} $$ No direct link between both S−D and S−E We have evaluated the outage probability of a threshold-based DF relaying system without any direct links for the two cases. The two cases discussed in this section are as follows. In the first case, no CSI knowledge is available at the transmitter. The transmission rate cannot thus be adapted by the transmitter to get the positive secrecy. The condition of positive secrecy, i.e., Γ M >Γ E is not imposed in this case and $ {P}_{o}^{i}(R_{s}) $ evaluation is independent of CSI. In the second case, CSI is completely known at the transmitter. The transmission rate can thus be adapted by the transmitter to get the positive secrecy. When CSI knowledge is available at the transmitter, the condition of positive secrecy, i.e., Γ M >Γ E is imposed while evaluating the $ {P}_{o}^{i}(R_{s})$ for cooperative system. Usually, through a feedback link from the receiver to the transmitter, there can be the availability of channel information for a particular link at the transmitter. However, this feedback channel has to be of high-capacity, which cannot be always maintained. Each case is further divided into two probabilistic instances. The relay can decode correctly in the first instance, as the predetermined threshold SNR is achieved [27]. The predetermined threshold SNR is not achieved in the second instance by the relay, and thus the source information is not forwarded [27]. When $ \Gamma _{sr_{i}} < \gamma _{{\text {th}}} $, the $ {P}_{o}^{i}(R_{s}) $ becomes unity; irrespective of the full CSI knowledge available at the transmitter or not, i.e., $ \mathbb {P}\left [C_{s} < R_{s} \cap \Gamma _{M} > \Gamma _{E} | \Gamma _{sr_{i}} < \gamma _{{\text {th}}} \right ] = 1 $. No knowledge of CSI at transmitter We evaluate outage probability for single ith relay in the third scenario where, no direct link between S−D and S−E exists and full CSI knowledge is not available at the transmitter $$\begin{array}{*{20}l} {P}_{o}^{i}(R_{s}) &= \mathbb{P}\left[C^{nd}_{s} < R_{s} | \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} \right]\mathbb{P}[\Gamma_{sr_{i}} \geq \gamma_{{\text{th}}}] \\ & \:\:\:\:\: + \mathbb{P}\left[C^{nd'}_{s} < R_{s} |\Gamma_{sr_{i}} < \gamma_{{\text{th}}} \right]\mathbb{P}\left[\Gamma_{sr_{i}} < \gamma_{{\text{th}}}\right] \\ &= \mathbb{P}\left[\frac{1}{2}\left[\log_{2}\left(\frac{1+ \Gamma^{nd}_{M}}{1+ \Gamma^{nd}_{E}}\right)\right] < R_{s} \left\rvert\right. 
\Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} \right] \\ &\quad \times\mathbb{P}\left[\Gamma_{sr_{i}} \geq \gamma_{{\text{th}}}\right] + \mathbb{P}\left[\Gamma_{sr_{i}} < \gamma_{{\text{th}}}\right] \\ &=\mathbb{P}[ (\Gamma_{r_{i}d}) < (\rho - 1)+ \rho (\Gamma_{r_{i}e}) | \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} ]\left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \\ & \:\:\:\:\: + \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right)\\ &= \left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right)\left(1-\frac{\alpha_{r_{i}e} e^{-\beta_{r_{i}d}(\rho-1)}}{ \rho \beta_{r_{i}d} + \alpha_{r_{i}e} }\right) \\ & \:\:\:\:\: + \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \end{array} $$ CSI completely known at transmitter We evaluate outage probability for single ith relay in the third scenario where no direct link between S−D and S−E exists, and full CSI knowledge is available at the transmitter as $$\begin{array}{*{20}l} {}{P}_{o}^{i}(R_{s}) &= \mathbb{P}\left[C^{nd}_{s} < R_{s} \cap \Gamma_{M} > \Gamma_{E} | \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} \right]\mathbb{P}[\Gamma_{sr_{i}} \geq \gamma_{{\text{th}}}] \\ & \:\:\:\:\: + \mathbb{P}\left[C^{nd'}_{s} < R_{s} \cap \Gamma_{M} > \Gamma_{E} |\Gamma_{sr_{i}} < \gamma_{{\text{th}}} \right]\mathbb{P}\left[\Gamma_{sr_{i}} < \gamma_{{\text{th}}}\right] \\ &= \mathbb{P}\left[\frac{1}{2}\left[\log_{2}\left(\frac{1+ \Gamma^{nd}_{M}}{1+ \Gamma^{nd}_{E}}\right)\right] < R_{s} \cap \Gamma_{M} > \Gamma_{E} \left\rvert\right. \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} \right] \\ & \quad \times\mathbb{P}\left[\Gamma_{sr_{i}} \geq \gamma_{{\text{th}}}\right] + \mathbb{P}\left[\Gamma_{sr_{i}} < \gamma_{{\text{th}}}\right] \\ &=\mathbb{P}[\! (\Gamma_{r_{i}d}) \!<\! (\rho - 1)\!+ \rho (\Gamma_{r_{i}e}) \cap \Gamma_{M} > \Gamma_{E} | \Gamma_{sr_{i}} \geq \gamma_{{\text{th}}} ]\left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \\ & \quad + \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right)\\ &= \left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right)\left(\frac{\alpha_{r_{i}e}}{\beta_{r_{i}d} + \alpha_{r_{i}e} }-\frac{\alpha_{r_{i}e} e^{-\beta_{r_{i}d}(\rho-1)}}{ \rho \beta_{r_{i}d} + \alpha_{r_{i}e} }\right) \\ & \quad + \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \end{array} $$ In contrast to the prior literature, where the direct link between the source-eavesdropper and source-destination is not taken into account [1, 2, 22], we have derived the expression for secrecy outage probability of DF threshold-based dual-hop cooperative relay network, both with and without the direct link between source-eavesdropper and source-destination as discussed in our study. Secrecy outage analysis of relay selection scheme In this section, the secrecy outage probability analysis of optimal relay selection (OS) scheme for dual-hop threshold-based DF cooperative multi-relay system is presented, under the no direct link scenario [27]. Optimal selection: ICSI of all the links is known In the optimal relay selection scheme for cooperative multi-relay system [22, 29], the relay that maximizes the secrecy capacity of system is selected to forward the source data. In this case, ICSI of all the links is available. The relay is taken to be selected if predetermined threshold is satisfied, and P is taken as the number of relays which are selected. When the predetermined threshold is not satisfied, the relay is not selected and Q is taken as the number of relays which are not selected. 
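Before deriving its outage probability, the selection step itself can be summarized in a few lines of Python. This is only an illustrative sketch of the rule described above, with an assumed helper name and arbitrary example SNR values: relays failing the threshold γth are excluded, and among the survivors the relay with the largest instantaneous secrecy capacity is chosen.

import numpy as np

def os_outage_event(g_sr, g_rd, g_re, gamma_th, Rs):
    # g_sr, g_rd, g_re: instantaneous SNRs of the N relays on the S-R, R-D and R-E links
    decoding_set = np.where(g_sr >= gamma_th)[0]   # the P relays that meet the threshold
    if decoding_set.size == 0:                     # no relay decodes: outage in this scenario
        return True
    # Secrecy capacity (6) with Gamma_M = Gamma_{r_i d} and Gamma_E = Gamma_{r_i e} (no direct
    # links); the [.]^+ clipping does not change the comparison since Rs > 0
    Cs = 0.5 * np.log2((1.0 + g_rd[decoding_set]) / (1.0 + g_re[decoding_set]))
    return bool(np.max(Cs) < Rs)                   # outage if even the best relay is below Rs

# Arbitrary illustrative draw for N = 3 relays (the average SNRs are assumptions)
rng = np.random.default_rng(0)
N, gamma_th, Rs = 3, 1.0, 0.5
g_sr = rng.exponential(10.0, N)   # mean 1/beta_{sr_i} = 10
g_rd = rng.exponential(10.0, N)   # mean 1/beta_{r_i d} = 10
g_re = rng.exponential(2.0, N)    # mean 1/alpha_{r_i e} = 2
print("outage event:", os_outage_event(g_sr, g_rd, g_re, gamma_th, Rs))

Averaging this indicator over many independent channel draws gives a Monte Carlo estimate of the outage probability that is derived in closed form next.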
The probability that the maximum of some independent random variable is less than some quantity, is the probability that all the independent random variables are less than that quantity [27]. The final summation is done over the set S, where S is the set of all possible combinations of relay i∈[1,2..,N]. Considering the fact that an outage event occurs when the secrecy capacity becomes less than the desired secrecy rate R s , we can evaluate the outage probability of this OS scheme in the third scenario where no direct link between S−D and S−E exists and full CSI knowledge is not available at the transmitter as $$\begin{array}{*{20}l} {}{P}_{o}^{OS}(R_{s}) &= \sum\limits_{S}\left[\left(\prod_{\substack {\forall i \in [1,P]\\ \text{selected}}}\mathbb{P}[\Gamma_{sr_{i}}\geq\gamma_{{\text{th}}}] \right)\left(\prod_{\substack {\forall j \in [1,Q]\\ \text{not selected}}} \mathbb{P}[\Gamma_{sr_{j}} < \gamma_{{\text{th}}}]\right) \right. \\ & \quad\times\left.\mathbb P\left[\max_{\substack {\forall i \in [1,P]\\ \text{selected}}} \{C_{s}\} < R_{s} | \Gamma_{sr_{i}}\geq \gamma_{{\text{th}}} \right]\right]\\ &= \sum_{S} \left[\left(\prod_{i=1}^{P} \left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \right) \left(\prod_{j=1}^{Q} \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{j}}}}\right)\right) \right. \\ & \quad \times\left. \prod_{i=1}^{P} \mathbb P\left[C_{s} < R_{s} | \Gamma_{sr_{i}}\geq \gamma_{{\text{th}}} \right]\right]\\ &= \sum_{S} \left[\left(\prod_{i=1}^{P} \left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \right) \left(\prod_{j=1}^{Q} \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{j}}}}\right)\right) \right. \\ &\quad \times\left. \prod_{i=1}^{P} \left(1-\frac{\alpha_{r_{i}e} e^{-\beta_{r_{i}d}(\rho-1)}}{ \rho \beta_{r_{i}d} + \alpha_{r_{i}e} }\right)\right] \end{array} $$ Similarly, we can also evaluate the outage probability of this OS scheme in the third scenario where no direct link between S−D and S−E exists and full CSI knowledge is available at the transmitter as $$\begin{array}{*{20}l} {P}_{o}^{OS}(R_{s}) &= \sum\limits_{S} \left[\left(\prod_{i=1}^{P} \left(e^{{-\gamma_{{\text{th}}}\beta_{sr_{i}}}}\right) \right) \left (\prod_{j=1}^{Q} \left(1-e^{{-\gamma_{{\text{th}}}\beta_{sr_{j}}}}\right)\right) \right. \\ & \quad \times\left. \prod_{i=1}^{P} \left(\frac{\alpha_{r_{i}e}}{\beta_{r_{i}d} + \alpha_{r_{i}e}}-\frac{\alpha_{r_{i}e} e^{-\beta_{r_{i}d}(\rho-1)}}{ \rho \beta_{r_{i}d} + \alpha_{r_{i}e} }\right)\right] \end{array} $$ Optimal selection: SCSI of all the links is known We have examined another relay selection scheme where no knowledge of instantaneous channel state information is required [22, 27]. This relay selection method has been proposed in [22], and it requires only the statistical information of all the links for secrecy outage probability measurement. This relay selection method is the optimal one, only when no knowledge of ICSI is available except statistical information. In this scheme, the relay for which the secrecy outage probability of system becomes minimum is selected [22]. The secrecy outage probabilities, $ {P}_{o}^{i}(R_{s}) $ of all the individual single relay systems can be first measured, and then we can find the optimal relay i∗ [22]. It can be expressed mathematically as $$\begin{array}{*{20}l} i^{*} &=\arg\min_{i\in[1,\cdots, N]}\left({P}_{o}^{i}(R_{s}) \right). \end{array} $$ Since ICSI is not required, power consumption is reduced as no complex channel measurements are necessary. 
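Both selection rules above can be implemented by brute-force enumeration of the decoding sets in (13)–(14) together with the statistics-only rule (15). This is an illustrative Python sketch rather than the authors' code; all identifiers are mine, and the per-relay closed forms are the ones given above for the no-direct-link scenario.

```python
import math
from itertools import product

def per_relay_terms(R_s, gamma_th, beta_sr, beta_rd, alpha_re, full_csi=False):
    """Return (P[relay decodes], P[C_s < R_s | relay decodes]) for one relay."""
    rho = 2.0 ** (2.0 * R_s)
    p_dec = math.exp(-gamma_th * beta_sr)
    tail = alpha_re * math.exp(-beta_rd * (rho - 1.0)) / (rho * beta_rd + alpha_re)
    p_cond = (alpha_re / (beta_rd + alpha_re) - tail) if full_csi else (1.0 - tail)
    return p_dec, p_cond

def outage_optimal_selection_icsi(R_s, gamma_th, betas_sr, betas_rd, alphas_re, full_csi=False):
    """Secrecy outage of optimal relay selection with ICSI of all links:
    enumerate every decoding set S (which relays pass gamma_th); within a set,
    an outage requires every decoded relay to be in outage (product term)."""
    n = len(betas_sr)
    terms = [per_relay_terms(R_s, gamma_th, betas_sr[i], betas_rd[i], alphas_re[i], full_csi)
             for i in range(n)]
    total = 0.0
    for decoded in product([True, False], repeat=n):   # all 2^N decoding sets
        prob_set = 1.0
        prob_all_in_outage = 1.0                       # empty product = 1 when no relay decodes
        for i, ok in enumerate(decoded):
            p_dec, p_cond = terms[i]
            prob_set *= p_dec if ok else (1.0 - p_dec)
            if ok:
                prob_all_in_outage *= p_cond
        total += prob_set * prob_all_in_outage
    return total

def select_relay_scsi(R_s, gamma_th, betas_sr, betas_rd, alphas_re, full_csi=False):
    """Statistical-CSI selection (15): pick the relay whose own outage probability is minimal."""
    def single(i):
        p_dec, p_cond = per_relay_terms(R_s, gamma_th, betas_sr[i], betas_rd[i],
                                        alphas_re[i], full_csi)
        return p_dec * p_cond + (1.0 - p_dec)
    outages = [single(i) for i in range(len(betas_sr))]
    best = min(range(len(outages)), key=outages.__getitem__)
    return best, outages[best]
```

The enumeration over decoding sets costs 2^N terms, which is negligible for the small relay counts (N = 2 to 4) considered in the numerical results later in the paper.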
Unlike the ICSI, the channel statistics do not change appreciably over time, so relay selection based on them is a one-time process. Under severe resource constraints, such as limited power and computational capability, this selection scheme can still improve the secrecy performance [22]. The OS scheme based on ICSI performs better, since its improvement comes from exploiting the instantaneous channel knowledge of the system [22], whereas only the SCSI of the system is available to this scheme. This scheme is useful in networks where the CSI of the eavesdropper is not available at every time instant and where, owing to power limitations, the ICSI of the other nodes cannot be fed back to the decision-making node at every instant. Asymptotic and diversity analysis In this section, the asymptotic and diversity analysis of the dual-hop threshold-based DF cooperative relay network is presented for the scenario in which there is no direct link between both S−D and S−E. When the SNRs of the S−R i and/or R i −D links are asymptotically increased relative to the eavesdropper's link, the behavior of the secrecy outage becomes important for system design. We discuss the following two significant cases: (1) when the S−R i and R i −D link average SNRs are equal for all i and tend to infinity together, i.e., $1/\beta _{sr_{i}}=1/\beta _{r_{i}d}=1/\beta \rightarrow \infty $, which is called the balanced case, and (2) when only one of the S−R i or R i −D link average SNRs tends to infinity for all i, i.e., $1/\beta _{sr_{i}}$ is fixed and $1/\beta _{r_{i}d}=1/\beta \rightarrow \infty $, or $1/\beta _{r_{i}d}$ is fixed and $1/\beta _{sr_{i}}=1/\beta \rightarrow \infty $, which is called the unbalanced case [22, 27]. Single balanced relay case The $ {P}_{o}^{i}(R_{s}) $ for the single DF relaying system is evaluated both when full CSI knowledge is not available and when it is available at the transmitter. For the balanced case, when $1/\beta _{sr_{i}}=1/\beta _{r_{i}d} = 1/\beta \rightarrow \infty $, the $ {P}_{o}^{i}(R_{s}) $ for the single DF relaying balanced system without CSI knowledge at the transmitter in (11) is expressed as $$\begin{array}{*{20}l} {P}_{o}^{i}(R_{s}) &= \frac{\beta_{r_{i}d}\left(\rho + \alpha_{r_{i}e}\left(\rho-1\right)\right)}{\alpha_{r_{i}e}} + \gamma_{{\text{th}}}\beta_{sr_{i}} \\ &= \beta\left[\frac{\rho + \alpha_{r_{i}e}\left(\rho-1\right) }{\alpha_{r_{i}e}}+ \gamma_{{\text{th}}}\right]\\ &= \frac{1}{\frac{1 }{\beta}}\left[\frac{\rho}{\alpha_{r_{i}e}} + \left(\rho-1\right) + \gamma_{{\text{th}}}\right] \end{array} $$ For the balanced case, when $1/\beta _{sr_{i}}=1/\beta _{r_{i}d} = 1/\beta \rightarrow \infty $, the $ {P}_{o}^{i}(R_{s}) $ for the single DF relaying balanced system with CSI knowledge at the transmitter in (12) is expressed as $$\begin{array}{*{20}l} {P}_{o}^{i}(R_{s}) &= \frac{\beta_{r_{i}d}\left((\rho-1) + \alpha_{r_{i}e}\left(\rho-1\right)\right)}{\alpha_{r_{i}e}} + \gamma_{{\text{th}}}\beta_{sr_{i}} \\ &= \beta\left[\frac{(\rho-1) + \alpha_{r_{i}e}\left(\rho-1\right) }{\alpha_{r_{i}e}}+ \gamma_{{\text{th}}}\right]\\ &= \frac{1}{\frac{1 }{\beta}}\left[\frac{(\rho-1)}{\alpha_{r_{i}e}} + \left(\rho-1\right) + \gamma_{{\text{th}}}\right] \end{array} $$ We can interpret from (16) and (17) that the secrecy outage probability is inversely proportional to the main channel SNR 1/β and tends to zero as 1/β tends to infinity. It increases with the required threshold γth, the eavesdropper channel SNR ($1/\alpha _{r_{i}e}$), and the desired secrecy rate R s .
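As a quick check of (16) and (17), the asymptotic expressions can be compared against the exact single-relay formulas (11) and (12) as the main-channel SNR grows. The sketch below is mine, not the paper's; the parameter values are purely illustrative.

```python
import math

def exact_outage_balanced(R_s, gamma_th, beta, alpha_re, full_csi=False):
    """Exact single-relay outage, expressions (11)/(12), with beta_sr = beta_rd = beta."""
    rho = 2.0 ** (2.0 * R_s)
    p_dec = math.exp(-gamma_th * beta)
    tail = alpha_re * math.exp(-beta * (rho - 1.0)) / (rho * beta + alpha_re)
    p_cond = (alpha_re / (beta + alpha_re) - tail) if full_csi else (1.0 - tail)
    return p_dec * p_cond + (1.0 - p_dec)

def asymptotic_outage_balanced(R_s, gamma_th, beta, alpha_re, full_csi=False):
    """High-SNR approximations: (16) for no CSI, (17) for full CSI."""
    rho = 2.0 ** (2.0 * R_s)
    lead = (rho - 1.0) if full_csi else rho
    return beta * (lead / alpha_re + (rho - 1.0) + gamma_th)

if __name__ == "__main__":
    R_s, g_th = 1.0, 10 ** (3 / 10)      # R_s = 1.0, gamma_th = 3 dB
    alpha = 1.0 / (10 ** (3 / 10))       # eavesdropper average SNR 3 dB
    for snr_db in (10, 20, 30, 40):
        beta = 1.0 / (10 ** (snr_db / 10))
        ex = exact_outage_balanced(R_s, g_th, beta, alpha)
        asym = asymptotic_outage_balanced(R_s, g_th, beta, alpha)
        print(f"{snr_db:>2} dB  exact={ex:.3e}  asymptotic={asym:.3e}")
```

At high SNR the two columns converge and the exact value falls by roughly one decade per 10 dB, which anticipates the diversity-order discussion that follows.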
Diversity order is a critical measure to observe how fast the outage probability decreases when SNR tends to infinity. Hence, the effect of the increase in number of relays on the outage probability can also be intuitively understood. The diversity order [1] can be defined as $$\begin{array}{*{20}l} D = -\underset{\text{SNR} \rightarrow \infty}{\text{lim}} \frac{\log {P}_{o} (\text{SNR}) }{\log (\text{SNR})}, \end{array} $$ where P o (SNR) is the secrecy outage probability given by function of SNR=1/β. We can show that using this definition, diversity order of (16) and (17) can be obtained as one. The power of SNR in the denominator of (16) and (17) is same as the diversity order D. It is also depicted by the slope of curve in the log graph. As there is no relay selection, it is intuitive that diversity order of one is achieved by this single cooperative relay system. Single unbalanced relay case The behavior of outage probability is studied for this unbalanced case, both when full CSI knowledge is not available and when available at the transmitter. The $ {P}_{o}^{i}(R_{s}) $ is evaluated by asymptotically increasing the average SNR of the R i −D link and keeping the average SNR of the S−R i link fixed, i.e., when $1/\beta _{sr_{i}}$ is fixed and $1/\beta _{r_{i}d}=1/\beta \rightarrow \infty $. The $ {P}_{o}^{i}(R_{s}) $ for single DF relaying unbalanced system without CSI knowledge at the transmitter in (11) is expressed as $$\begin{array}{*{20}l} {P}_{o}^{i}(R_{s}) &= \left[1- e^{-\gamma_{{\text{th}}}\beta_{sr_{i}}}\right]+ \frac{1}{\frac{1}{\beta}}\left[\frac{e^{-\gamma_{{\text{th}}}\beta_{sr_{i}}} \left(\rho + \left(\rho - 1 \right)\alpha_{r_{i}e}\right)}{\alpha_{r_{i}e}}\right] \end{array} $$ Also, the behavior of outage probability is studied by asymptotically increasing the average SNR of the S−R i link and keeping the average SNR of the R i −D link fixed, i.e., when $1/\beta _{r_{i}d}$ is fixed and $1/\beta _{sr_{i}}=1/\beta \rightarrow \infty $. The $ {P}_{o}^{i}(R_{s}) $ is given as $$\begin{array}{*{20}l} {P}_{o}^{i}(R_{s}) &= \left[ 1- \frac{\alpha_{r_{i}e}e^{-\beta_{r_{i}d}\left(\rho-1 \right)}}{\rho\beta_{r_{i}d} + \alpha_{r_{i}e} }\right]+ \frac{1}{\frac{1}{\beta}}\left[\frac{\gamma_{{\text{th}}}\alpha_{r_{i}e} e^{-\beta_{r_{i}d}\left(\rho-1 \right)}}{\rho\beta_{r_{i}d} + \alpha_{r_{i}e} }\right] \end{array} $$ The $ {P}_{o}^{i}(R_{s}) $ is evaluated by asymptotically increasing the average SNR of the R i −D link and keeping the average SNR of the S−R i link fixed, i.e., when $1/\beta _{sr_{i}}$ is fixed and $1/\beta _{r_{i}d}=1/\beta \rightarrow \infty $. 
The $ {P}_{o}^{i}(R_{s})$ for single DF relaying unbalanced system with CSI knowledge at the transmitter in (12) is expressed as $$\begin{array}{*{20}l} {P}_{o}^{i}(R_{s}) &= \left[1- e^{-\gamma_{{\text{th}}}\beta_{sr_{i}}}\right]+ \frac{1}{\frac{1}{\beta}}\left[\frac{e^{-\gamma_{{\text{th}}}\beta_{sr_{i}}} \left((\rho -1) + \left(\rho - 1 \right)\alpha_{r_{i}e}\right)}{\alpha_{r_{i}e}}\right] \end{array} $$ $$\begin{array}{*{20}l} {P}_{o}^{i}(R_{s}) &= \left[ \frac{\alpha_{r_{i}e}}{\beta_{r_{i}d} + \alpha_{r_{i}e}}- \frac{\alpha_{r_{i}e}e^{-\beta_{r_{i}d}\left(\rho-1 \right)}}{\rho\beta_{r_{i}d} + \alpha_{r_{i}e} }\right]\\&\quad+ \frac{1}{\frac{1}{\beta}}\left[\frac{\gamma_{{\text{th}}}\alpha_{r_{i}e} e^{-\beta_{r_{i}d}\left(\rho-1 \right)}}{\rho\beta_{r_{i}d} + \alpha_{r_{i}e}} + \frac{\gamma_{{\text{th}}}\beta_{r_{i}d}}{\alpha_{r_{i}e} + \beta_{r_{i}d} }\right] \end{array} $$ The asymptotic outage probability is expressed as a summation of an asymptotically varying term with 1/β and a constant quantity. We can observe that asymptotically varying term is dominating at low SNR, but at high SNR it vanishes. We can also infer from (19) to (22) that due to fixing average SNR of any hop, unbalance is caused in dual-hop cooperative relay system. Hence, the secrecy outage is limited to a constant, even if we infinitely increase the average SNR of the other hop [22, 27]. Optimal balanced relay selection case Asymptotic expression of the outage probability for optimal relay selection in the balanced case can be evaluated both when full CSI knowledge is not available and when available at the transmitter. The $ P_{o}^{OS}(R_{s}) $ for DF optimal relaying balanced system without CSI knowledge at the transmitter in (13) is expressed as $$\begin{array}{*{20}l} P_{o}^{OS}(R_{s})&= \prod_{i=1}^{N} P_{o}^{i}(R_{s})\\ &=\frac{1}{\frac{1}{\beta^{N}} }\prod_{i=1}^{N}\left[\frac{\rho}{\alpha_{r_{i}e}} + \left(\rho-1\right) + \gamma_{{\text{th}}}\right]. \end{array} $$ The $ P_{o}^{OS}(R_{s}) $ for DF optimal relaying balanced system with CSI knowledge at the transmitter in (14) is expressed as $$\begin{array}{*{20}l} P_{o}^{OS}(R_{s})&= \prod_{i=1}^{N} P_{o}^{i}(R_{s})\\ &=\frac{1}{\frac{1}{\beta^{N}} }\prod_{i=1}^{N}\left[\frac{(\rho -1)}{\alpha_{r_{i}e}} + \left(\rho-1\right) + \gamma_{{\text{th}}}\right]. \end{array} $$ Comparing (23) and (24) with (16) and (17), we can see that for optimal relay selection scheme, asymptotic expression for secrecy outage probability is given by the product of asymptotic expressions of individual single cooperative relay system. We can also see that the denominator in (23) and (24) contains power of N at main channel SNR=1/β and thus, using (18) diversity order D=N is obtained. We conclude that, when we choose a single cooperative relay from a set of N relays, the diversity order of N is achieved, which is also intuitive [22, 27]. Optimal unbalanced relay selection case The outage probability for DF optimal relaying unbalanced system can be evaluated both when full CSI knowledge is not available and when available at the transmitter. When $1/\beta _{sr_{i}}$ is fixed and $1/\beta _{r_{i}d}=1/\beta \rightarrow \infty $, for all i=1,⋯,N, for optimal relay selection scheme, the outage probability tends to be a constant value in the unbalanced case. 
The $ P_{o}^{OS}(R_{s}) $ for DF optimal relaying unbalanced system without CSI knowledge at the transmitter in (13) is expressed as $$\begin{array}{*{20}l} P_{o}^{OS}(R_{s})&=\prod_{i=1}^{N} P_{o}^{i}(R_{s}) \\ &=\prod_{i=1}^{N} \left[1- e^{-\gamma_{{\text{th}}}\beta_{sr_{i}}}\right]. \end{array} $$ Also, when $1/\beta _{r_{i}d}$ is fixed and $1/\beta _{sr_{i}}=1/\beta \rightarrow \infty $, for all i=1,⋯,N, for optimal relay selection scheme, the outage probability tends to be a constant value in the unbalanced case and is given as $$\begin{array}{*{20}l} P_{o}^{OS}(R_{s})&=\prod_{i=1}^{N} P_{o}^{i}(R_{s}) \\ &=\prod_{i=1}^{N} \left[ 1- \frac{\alpha_{r_{i}e}e^{-\beta_{r_{i}d}\left(\rho-1 \right)}}{\rho\beta_{r_{i}d} + \alpha_{r_{i}e} }\right]. \end{array} $$ When $1/\beta _{sr_{i}}$ is fixed and $1/\beta _{r_{i}d}=1/\beta \rightarrow \infty $, for all i=1,⋯,N, for optimal relay selection scheme, the outage probability tends to be a constant value in the unbalanced case. The $ P_{o}^{OS}(R_{s}) $ for DF optimal relaying unbalanced system with CSI knowledge at the transmitter in (14) is expressed as $$\begin{array}{*{20}l} P_{o}^{OS}(R_{s})&=\prod_{i=1}^{N} P_{o}^{i}(R_{s}) \\ &=\prod_{i=1}^{N} \left[ \frac{\alpha_{r_{i}e}}{\beta_{r_{i}d} + \alpha_{r_{i}e} }- \frac{\alpha_{r_{i}e}e^{-\beta_{r_{i}d}\left(\rho-1 \right)}}{\rho\beta_{r_{i}d} + \alpha_{r_{i}e} }\right]. \end{array} $$ Here, asymptotic varying terms are not shown, which can also be obtained as in (19)–(22). Comparing (25)–(28) with (19)–(22), we can observe that the constant value of secrecy outage probability is the product of constant values of individual single cooperative relay system for optimal relay selection scheme. As each constant value of the outage probability in (19)–(22) is less than unity, the performance is always improved by optimal relay selection [22, 27]. The prior literature does not take into account the effect of S−R i link quality, but in our study, we have considered the effect of both S−R i and R i −D link quality for complete performance analysis [27]. This section presents the analytical results of a threshold-based dual-hop DF cooperative relay network that exactly matches with the simulation results. Noise power is assumed to be the same at all the nodes. To cover feasible range of required secrecy rate, both low and high desired rate of R s =0.1 and R s =2.0 are considered. Figure 2 shows the comparative analysis of the outage probability P o (R s ) of single ith relay with total SNR 1/β, as expressed in (11) and (12) for the balanced case under the scenario when no direct link is present, both with and without the availability of channel knowledge at the transmitter. The figure is plotted with different R s =0.1,1.0, and 2.0 and fixed γth=3 dB and 1/α re =1/α=3 dB. It can be observed that CSI knowledge can improve secrecy; the amount of improvement for the P o (R s ) is more when the required rate is low and for low operating SNR. Also, outage probability increases in function of R s . Corresponding asymptotic analysis as expressed in (16) and (17) is also shown by solid straight lines passing through the curves. 
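A Figure-2-style comparison can be regenerated from expressions (11) and (12) alone. The following Python sketch is mine, not the authors'; it assumes numpy and matplotlib are available and uses the same parameter values quoted above (R s = 0.1, 1.0, 2.0; γth = 3 dB; 1/α = 3 dB).

```python
import math
import numpy as np
import matplotlib.pyplot as plt

def outage_balanced(R_s, gamma_th, beta, alpha_re, full_csi):
    # Single-relay secrecy outage, no direct links, beta_sr = beta_rd = beta
    rho = 2.0 ** (2.0 * R_s)
    p_dec = math.exp(-gamma_th * beta)
    tail = alpha_re * math.exp(-beta * (rho - 1.0)) / (rho * beta + alpha_re)
    p_cond = (alpha_re / (beta + alpha_re) - tail) if full_csi else (1.0 - tail)
    return p_dec * p_cond + (1.0 - p_dec)

snr_db = np.arange(0, 41, 2)
gamma_th = 10 ** (3 / 10)          # 3 dB predetermined threshold
alpha = 1.0 / (10 ** (3 / 10))     # 1/alpha = 3 dB eavesdropper average SNR

for R_s in (0.1, 1.0, 2.0):
    for full_csi, style in ((False, "--"), (True, "-")):
        p_o = [outage_balanced(R_s, gamma_th, 1.0 / (10 ** (s / 10)), alpha, full_csi)
               for s in snr_db]
        plt.semilogy(snr_db, p_o, style,
                     label=f"R_s={R_s}, {'CSI' if full_csi else 'no CSI'}")

plt.xlabel("Main channel average SNR 1/beta (dB)")
plt.ylabel("Secrecy outage probability P_o(R_s)")
plt.legend(fontsize=8)
plt.grid(True, which="both", alpha=0.3)
plt.show()
```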
Comparison of outage probability with 1/β under no direct link scenario both with and without CSI for R s =0.1,1.0, and 2.0; and γth=3 dB of single balanced relay system Figure 3 shows the comparison of outage probability P o (R s ) of single ith relay with total SNR 1/β, as expressed in (8), (10) and (11) under three scenarios: (1) with direct link between both S−D and S−E, (2) with direct link only between S and E, and (3) with no direct link between both S−D and S−E. This figure has been plotted with different relays to eavesdropper average SNR $ 1/\alpha _{r_{i}e} = 1/\alpha = 6$ and 9 dB, desired secrecy rate R s =1.0, and fixed γth=3 dB. It is observed from the figure that the outage probability is maximum for the case when only S−E link is present which is intuitive and least for the case when there is no direct link between both S−D and S−E. Also, increase in eavesdropper channel quality increases the outage probability of the system for all three scenarios. Comparison of outage probability under three scenarios: (1) with direct link between both S−D and S−E, (2) with direct link only between S and E, and (3) with no direct link between both S−D and S−E for 1/α=6 and 9 dB, R s =1.0, and γth=3 dB of single balanced relay system Figure 4 shows the outage probability P o (R s ) of single ith relay with total SNR 1/β, as expressed in (11) under the scenario when direct link is not present between both S−D and S−E. This figure has been plotted with different relays to eavesdropper average SNR $ 1/\alpha _{r_{i}e} = 1/\alpha = 3, 6$, and 9 dB, γth=3 and 6 dB, and fixed desired secrecy rate R s =1.0. It is observed from the figure that the improvement in predetermined threshold value γth, increases the outage probability of the system. This observation holds true for other two scenarios also. The corresponding asymptotic analysis as given in (16) is depicted by straight solid lines crossing through the curves. It can be observed from the plot that the spacing between asymptotic solid straight lines for γth=3 dB and γth=6 dB, at a given P o (R s ), is more for low eavesdropper average SNR 1/α=3 dB and subsequently decreases for 1/α=6 dB and 1/α=9 dB. Hence, we can interpret that increase in predetermined threshold value γth degrades the outage probability more, when eavesdropper average SNR is low, than when eavesdropper average SNR is high. Also, it is observed that increase in eavesdropper channel quality increases the outage probability of the system. Outage probability with no direct link between both S−D and S−E for 1/α=3,6, and 9 dB, γth=3 and 6 dB and R s =1.0 of single balanced relay system Figure 5 shows the outage probability P o (R s ) of single ith relay, as expressed in (11) for the unbalanced case under the scenario when direct link is not present between both S−D and S−E with average SNR of $ 1/\beta _{sr_{i}} = 1/\beta $ at different $ 1/\beta _{r_{i}d} = 25, 30,$ and 35 dB with $ 1/\alpha _{r_{i}e} = 1/\alpha = 6$ dB, γth=3 dB, desired secrecy rate R s =1.0, and it is also plotted for the unbalanced case with average SNR of $ 1/\beta _{r_{i}d} = 1/\beta $ at different $ 1/\beta _{sr_{i}} = 25, 30,$ and 35 dB with $ 1/\alpha _{r_{i}e} = 1/\alpha = 6$ dB and fixed γth = 3 dB, desired secrecy rate R s =1.0. It is observed that P o (R s ) tends to be a fixed constant, which is derived in (19) and (20) for a given $ 1/\beta _{r_{i}d} $ or $ 1/\beta _{sr_{i}} $ even if 1/β increases. The fixed constants which are derived in (19) and (20) are shown with horizontal dashed line. 
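The outage floors visible in Figure 5 are simply the constant terms of (19)–(22), so they can be computed directly. The sketch below is illustrative only; the function names are mine, and the parameter values mirror the Figure 5 setup described above.

```python
import math

def floor_fixed_sr(gamma_th, beta_sr):
    """Floor when 1/beta_rd -> infinity with the S-R average SNR fixed
    (constant term of (19)/(21)): the relay simply fails to decode."""
    return 1.0 - math.exp(-gamma_th * beta_sr)

def floor_fixed_rd(R_s, beta_rd, alpha_re, full_csi=False):
    """Floor when 1/beta_sr -> infinity with the R-D average SNR fixed
    (constant term of (20)/(22))."""
    rho = 2.0 ** (2.0 * R_s)
    tail = alpha_re * math.exp(-beta_rd * (rho - 1.0)) / (rho * beta_rd + alpha_re)
    return (alpha_re / (beta_rd + alpha_re) - tail) if full_csi else (1.0 - tail)

if __name__ == "__main__":
    g_th = 10 ** (3 / 10)
    alpha = 1.0 / (10 ** (6 / 10))       # 1/alpha = 6 dB, as in the Figure 5 setup
    for fixed_db in (25, 30, 35):
        b = 1.0 / (10 ** (fixed_db / 10))
        print(f"1/beta_sr = {fixed_db} dB fixed -> floor {floor_fixed_sr(g_th, b):.3e}")
        print(f"1/beta_rd = {fixed_db} dB fixed -> floor {floor_fixed_rd(1.0, b, alpha):.3e}")
```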
From the flooring of the curves, we can infer that the outage probability is constrained by whichever of the S−R i or R i −D link qualities remains fixed. We can also observe from the plot that the asymptotically varying terms of (19) and (20), depicted by straight solid lines, cross the dashed lines exactly at the point beyond which the average SNR of one hop exceeds that of the other hop [22, 27]. The flooring of the curves can also be analysed using the results of [40], where it is shown that the secrecy outage probability has a floor because the average secrecy capacity has a ceiling as the transmit SNR improves. Outage probability with no direct link between both S−D and S−E for 1/α=6 dB, γth=3 dB, and R s =1.0 with $ 1/\beta _{sr_{i}}= 25, 30$, and 35 dB and $ 1/\beta _{r_{i}d}= 25, 30$, and 35 dB of single unbalanced relay system Figure 6 shows the outage probability P o (R s ) of the optimal relay selection scheme when either ICSI or SCSI is known, as given in (13) and (15), for the cooperative relay system. The figure is plotted for different numbers of relays N=2,3, and 4 for the balanced case under the scenario in which no direct link is present between both S−D and S−E, against the total SNR 1/β. This figure has been plotted with a fixed desired secrecy rate R s =1.0, γth=3 dB, and different relay-to-eavesdropper average SNRs $ 1/\alpha _{r_{i}e} = 1/\alpha = 12, 9, 6$, and 3 dB. It is clearly observed from the figure that P o (R s ) decreases as the number of relays N increases. Relay selection improves the performance of the multi-relay cooperative system with an increasing number of relays when the ICSI of the system is known. When the ICSI of the system is not available and only the SCSI of the system is known, the secrecy performance can either remain the same or improve as the number of relays increases, depending on the channel characteristics. For this particular numerical analysis, we have shown that when only the SCSI of the system is known, the secrecy performance improves with an increasing number of relays. Here, out of N relays, we select the relay for which the secrecy outage probability of the system is minimum. The secrecy performance with only the SCSI of the system is worse than that with the ICSI of the system, which is intuitive, since the improvement comes from exploiting the instantaneous channel information of the system. Outage probability of balanced optimal relay selection scheme when either ICSI or SCSI is known with no direct link between both S−D and S−E for N=2,3, and 4; R s =1.0; 1/α=12,9,6, and 3 dB; and γth=3 dB In this paper, we have evaluated the secrecy outage probability of the cooperative threshold-based DF dual-hop relay system, both with and without the direct links between source-eavesdropper and source-destination, and without assuming that all the relays can always decode perfectly. We have shown that increases in the desired secrecy rate, the eavesdropper channel quality, and the predetermined threshold significantly degrade the outage performance of the system. We have provided the asymptotic and diversity gain analysis of the secrecy outage for both the single-relay and the multi-relay system with OS, when the average SNRs of the source-relay and relay-destination links are equal or unequal. The secrecy outage probability is evaluated for the OS scheme when either ICSI or SCSI is known, and we have shown that the secrecy performance improves with an increase in the number of relays.
We have also demonstrated that CSI knowledge at the transmitter can improve secrecy, and the amount of improvement for the outage probability is more when the required rate is low and for low operating SNR. Y Zou, X Wang, W Shen, Optimal relay selection for physical-layer security in cooperative wireless networks. IEEE J. Sel. Areas Commun. 31(10), 2099–2111 (2013). VNQ Bao, N Linh-Trung, M Debbah, Relay selection schemes for dual-hop networks under security constraints with multiple eavesdroppers. IEEE Trans. Wirel. Commun.12(12), 6076–6085 (2013). T-X Zheng, H-M Wang, F Liu, MH Lee, Outage constrained secrecy throughput maximization for DF relay networks. IEEE Trans. Commun.63(5), 1741–1755 (2015). TE Hunter, S Sanayei, A Nosratinia, Outage analysis of coded cooperation. IEEE Trans. Inf. Theory. 52(2), 375–391 (2006). AD Wyner, The wire-tap channel. Bell Syst. Tech. J. 54(8), 1355–1387 (1975). I Krikidis, JS Thompson, S McLaughlin, Relay selection for secure cooperative networks with jamming. IEEE Trans. Wirel. Commun. 8(10), 5003–5011 (2009). Y Zou, J Zhu, X Wang, L Hanzo, A survey on wireless security: technical challenges, recent advances, and future trends.Proc. IEEE. 104(9), 1727–1765 (2016). TX Zheng, HM Wang, J Yuan, Z Han, MH Lee, Physical layer security in wireless ad hoc networks under a hybrid full-/half-duplex receiver deployment strategy. IEEE Trans. Wirel. Commun. 16(6), 3827–3839 (2017). Y Zou, Physical-layer security for spectrum sharing systems. IEEE Trans. Wirel. Commun. 16(2), 1319–1329 (2017). F Zhu, F Gao, T Zhang, K Sun, M Yao, Physical-layer security for full duplex communications with self-interference mitigation. IEEE Trans. Wirel. Commun. 15(1), 329–340 (2016). A Jindal, C Kundu, R Bose, Secrecy outage of dual-hop AF relay system with relay selection without eavesdropper's CSI. IEEE Commun. Lett. 18(10), 1759–1762 (2014). TM Hoang, TQ Duong, HA Suraweera, C Tellambura, HV Poor, Cooperative beamforming and user selection for improving the security of relay-aided systems. IEEE Trans. Commun. 12:, 5039–5051 (2015). K-S Hwang, M Ju, in Proceedings IEEE International Conference on Communications (ICC). Secrecy outage probability of amplify-and-forward transmission with multi-antenna relay in presence of eavesdropper (IEEE, 2014), pp. 5408–5412. J Hu, NC Beaulieu, Performance analysis of decode-and-forward relaying with selection combining. IEEE Commun. Lett. 11(6), 489–491 (2007). J Li, S Luo, AP Petropulu, in Proceedings IEEE Global Communications Conference (GLOBECOM). Outage secrecy rate in wireless relay channels using cooperative jamming (IEEE, 2012), pp. 2438–2443. S Tomasin, Routing over multi-hop fading wiretap networks with secrecy outage probability constraint. IEEE Commun. Lett. 18(10), 1811–1814 (2014). D-D Tran, N-S Vo, T-L Vo, D-B Ha, in Proceedings IEEE 29th International Conference on Advanced Information Networking and Applications Workshops (WAINA). Physical layer secrecy performance of multi-hop decode-and-forward relay networks with multiple eavesdroppers (IEEE, 2015), pp. 430–435. X Wang, YXu M Tao, Outage analysis of cooperative secrecy multicast transmission. IEEE Wirel. Commun. Lett. 2(3), 161–164 (2014). ER Alotaibi, KA Hamdi, in Proceedings IEEE Wireless Communications and Networking Conference (WCNC). Secrecy outage probability of relay networking in multiple destination and eavesdropper scenarios (IEEE, 2014), pp. 2390–2395. X Chen, L Lei, H Zhang, C Yuen, in Proceedings IEEE International Conference on Communications (ICC). 
On the secrecy outage capacity of physical layer security in large-scale MIMO relaying systems with imperfect CSI (IEEE, 2014), pp. 2052–2057. H Alves, G Brante, R Demo Souza, DB da Costa, M Latva-aho, in Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). On the performance of full-duplex relaying under phy security constraints (IEEE, 2014), pp. 3978–3981. C Kundu, S Ghose, R Bose, Secrecy outage of dual-hop regenerative multi-relay system with relay selection. IEEE Trans. Wirel. Commun. 14(8), 4614–4625 (2015). T Li, T Zhang, B Zhong, Z Zhang, AV Vasilakos, in Proceedings IEEE 26th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). Physical layer security via maximal ratio combining and relay selection over Rayleigh fading channel (IEEE, 2015), pp. 612–616. C Cai, Y Cai, W Yang, W Yang, in Proceedings International Conference on Wireless Communications and Signal Processing. Average secrecy rate analysis with relay selection using decode-and-forward strategy in cooperative networks (IEEE, 2013), pp. 1–4. Y Zhou, G Pan, T Li, H Liu, C Tang, Y Chen, Secrecy outage performance for partial relay selection schemes in cooperative systems. IET Commun.9(16), 1980–1987 (2015). H Lei, H Zhang, IS Ansari, Z Ren, G Pan, KA Qaraqe, MS Alouini, On Secrecy Outage of Relay Selection in Underlay Cognitive Radio Networks Over Nakagami-m Fading Channels. IEEE Trans. Cogn. Commun. Netw. 3(4), 614–627 (2017). K Chopra, R Bose, A Joshi, Secrecy performance of threshold-based decode-and-forward cooperative cognitive radio network. IET Commun.11(9), 1396–1406 (2017). FS Al-Qahtani, C Zhong, HM Alnuweiri, Opportunistic relay selection for secrecy enhancement in cooperative networks. IEEE Trans. Commun. 63(5), 1756–1770 (2015). I Krikidis, Opportunistic relay selection for cooperative networks with secrecy constraints. IET Commun.4(15), 1787–1791 (2010). B Van Nguyen, K Kim, in Proceedings IEEE International Workshop on Information Forensics and Security (WIFS). Single relay selection for secure communication in a cooperative system with multiple full-duplex decode-and-forward relays (IEEE, 2015), pp. 1–6. R Zhao, Y Yuan, L Fan, YC He, Secrecy performance analysis of cognitive decode-and-forward relay networks in Nakagami-m fading channels. IEEE Trans. Commun. 65(2), 549–563 (2017). MZI Sarkar, T Ratnarajah, Z Ding, Beamforming with opportunistic relaying for wireless security. IET Commun.8(8), 1198–1210 (2014). Z Pengyu, Y Jian, C Jianshu, W Jian, Y Jin, in Proceedings IEEE Wireless Communications and Networking Conference (WCNC). Analyzing Amplify-and-Forward and Decode-and-Forward Cooperative Strategies in Wyner's Channel Model (IEEE, 2009), pp. 1–5. T Wang, in Proceedings 5th IEEE International Conference on Broadband Network & Multimedia Technology (IC-BNMT). Comparison of the energy efficiency for decode-and-forward and amplify-and-forward two-way relaying (IEEE, 2013), pp. 232–236. G Levin, S Loyka, in Proceedings 22nd International Zurich Seminar on Communications (IZS). Amplify-and-forward versus decode-and-forward relaying: which is better? (Eidgenössische Technische Hochschule Zurich, 2012), pp. 123–126. S Ghose, C Kundu, R Bose, Secrecy performance of dual-hop decode-and-forward relay system with diversity combining at the eavesdropper. IET Commun.10(8), 904–914 (2016). J Proakis, Digital Communications, 4th edn (McGraw-Hill, New York, 2001). 
J Barros, MR Rodrigues, in Proceedings IEEE International Symposium on Information Theory. Secrecy capacity of wireless channels (IEEE, 2006), pp. 356–360. T Lu, P Liu, S Panwar, in Proceedings IEEE 81st Vehicular Technology Conference (VTC Spring). Shining a light into the darkness: How cooperative relay communication mitigates correlated shadow fading (IEEE, 2015), pp. 1–7. H Lei, IS Ansari, G Pan, B Alomair, M-S Alouini, Secrecy capacity analysis over α−μ fading channels. IEEE Commun. Lett. 21(6), 1445–1448 (2017). We would like to show our gratitude to Dr. Robin Gandhi, University of Nebraska Omaha, USA, for sharing his pearls of wisdom with us during the course of this research. We are also immensely grateful to the reviewers for their salient observations and insights that significantly improved the manuscript. There is no funding available for this research work. Dept. of Electrical Engineering, Indian Institute of Technology, Delhi, New Delhi-110016, India Khyati Chopra & Ranjan Bose Dept. of Computer Science, University of Maryland, Baltimore County, Baltimore, 21250, MD, USA Anupam Joshi KC conceived this diversity combining study and participated in its numerical analysis. RB participated in the analysis of this algorithm study and performed the simulations. AJ also participated in the analysis of this algorithm study and coordination. All authors read and approved the final manuscript. Correspondence to Khyati Chopra. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Decode-forward relay Dual-hop Relay selection Secrecy outage probability Secrecy capacity Threshold based
Application of Newton's Laws This page provides the chapter on the application of Newton's laws from the "DOE Fundamentals Handbook: Classical Physics," DOE-HDBK-1010-92, U.S. Department of Energy, June 1992. Force and Weight Force can be thought of simply as a push or pull, but is more clearly defined as any action on a body that tends to change the velocity of the body. Weight is a force exerted on an object due to the object's position in a gravitational field. In the study of forces, the student must make valid assumptions called for in the formulation of real problems. The ability to understand and make use of the correct assumptions in the formulation and solution of engineering problems is certainly one of the most important abilities of a successful operator. One of the objectives of this manual is to provide an opportunity to develop this ability through the study of the fundamentals and the analysis of practical problems. An effective method of attack on all engineering problems is essential. The development of good habits in formulating problems and in representing their solutions will prove to be a valuable asset. Each solution should proceed with a logical sequence of steps from hypothesis to conclusion, and its representation should include a clear statement of the following parts, each clearly defined: a) given data, b) results desired, c) necessary diagrams, d) calculations, and e) answers and conclusions. Many problems become clear and straightforward once they are begun with a logical and disciplined method of attack. The subject of classical physics is based on surprisingly few fundamental concepts and involves mainly the application of these basic relations to a variety of situations. Newton's laws of motion are some of the fundamental concepts used in the study of force and weight. Force is defined as a vector quantity that tends to produce an acceleration of a body in the direction of its application. Changing the body's velocity causes the body to accelerate. Therefore, force can be mathematically defined as given by Newton's second law of motion (Equation 4-1). F = ma F = force on object (Newton or lbf) m = mass of object (kg or lbm) a = acceleration of object (m/sec2 or ft/sec2) Force is characterized by its point of application, its magnitude, and its direction. A force that is actually distributed over a small area of the body upon which it acts may be considered a concentrated force if the dimensions of the area involved are small compared with other pertinent dimensions. Two or more forces may act upon an object without affecting its state of motion. For example, a book resting upon a table has a downward force acting on it caused by gravity and an upward force exerted on it from the table top. These two forces cancel and the net force of the book is zero. This fact can be verified by observing that no change in the state of motion has occurred. Weight is a special application of the concept of force. It is defined as the force exerted on an object by the gravitational field of the earth, or more specifically the pull of the earth on the body.
$$ W = {mg \over g_c} $$ W = weight (lbf) m = mass (lbm) of the object g = the local acceleration of gravity (32.17 ft/sec2) gc = a conversion constant employed to facilitate the use of Newton's second law of motion with the English system of units and is equal to 32.17 ft-lbm/lbf-sec2 Note that gc has the same numerical value as the acceleration of gravity at sea level. The mass of a body is the same wherever the body is, whether on the moon or on the earth. The weight of a body, however, depends upon the local acceleration of gravity. Thus, the weight of an object is less on the moon than on the earth because the local acceleration of gravity is less on the moon than on the earth. Calculate the weight of a person with a mass of 185 lbm. $$ \begin{eqnarray} W &=& {mg \over g_c} \nonumber \\ &=& { (185 ~\text{lbm}) \left(32.17 ~{\text{ft} \over \text{sec}^2}\right) \over 32.17 ~{ \text{ft-lbm} \over \text{lbf-sec}^2 } } \nonumber \\ &=& 185 ~\text{lbf} \end{eqnarray} $$ Calculate the weight of a person with a mass of 185 lbm on the moon. Gravity on the moon is 5.36 ft/sec2. $$ \begin{eqnarray} W &=& {mg \over g_c} \nonumber \\ &=& { (185 ~\text{lbm}) \left(5.36 ~{\text{ft} \over \text{sec}^2}\right) \over 32.17 ~{ \text{ft-lbm} \over \text{lbf-sec}^2 } } \nonumber \\ &=& 28.19 ~\text{lbf} \end{eqnarray} $$ With the idea of mass and weight understood, especially their differences, the concept of gravitational force is more easily explained. Any object that is dropped will accelerate as it falls, even though it is not in physical contact with any other body. To explain this, the idea of gravitational force was developed, resulting in the concept that one body, such as the earth, exerts a force on another body, even though they are far apart. The gravitational attraction of two objects depends upon the mass of each and the distance between them. This concept is known as Newton's law of gravitation, which was introduced in an earlier chapter. Free-Body Diagrams In studying the effect of forces on a body it is necessary to isolate the body and determine all forces acting upon it. This method of using a free-body diagram is essential in understanding basic and complex force problems. In solving a problem involving forces it is essential that Newton's laws are carefully fixed in mind and that these principles are applied literally and exactly. In applying these principles it is essential that the body be isolated from all other bodies so that a complete and accurate account of all forces which act on this body may be considered. The diagram of such an isolated body with the representation of all external forces acting on it is called a Free-Body Diagram. It has long been established that the free-body-diagram method is the key to the understanding of engineering problems. This is because the isolation of a body is the tool that clearly separates cause and effect and focuses our attention to the literal application of a principle. Consider the book resting on the table in Figure 1. Although the book is stationary, two forces are acting on the book to keep it stationary. One is the weight (W) of the book exerting a force down on the table. The other is the force exerted up by the table to hold the book in place. This force is known as the normal force (N) and is equal to the weight of the book. A normal force is defined as any perpendicular force with which any two surfaces are pressed against each other. 
The free-body diagram for this situation, illustrated on the right side in Figure 1, isolates the book and presents the forces acting on the object. Figure 1: Book on a Table Constructing a Free-Body Diagram In constructing a free-body diagram the following steps are usually followed. Step 1. Determine which body or combination of bodies is to be isolated. The body chosen will usually involve one or more of the desired unknown quantities. Step 2. Next, isolate the body or combination of bodies chosen with a diagram that represents its complete external boundaries. Step 3. Represent all forces that act on the isolated body as applied by the removed contacting and attracting bodies in their proper positions in the diagram of the isolated body. Do not show the forces that the object exerts on anything else, since these forces do not affect the object itself. Step 4. Indicate the choice of coordinate axes directly on the diagram. Pertinent dimensions may also be represented for convenience. Note, however, that the free-body diagram serves the purpose of focusing accurate attention on the action of the external forces; therefore, the diagram should not be cluttered with excessive information. Force arrows should be clearly distinguished from other arrows to avoid confusion. For this purpose colored pencils may be used. When these steps are completed a correct free-body diagram will result, and the student can apply the appropriate equations to the diagram to find the proper solution. The car in Figure 2 is being towed by a force of some magnitude. Construct a free-body diagram showing all the forces acting on the car. Figure 2: Car Following the steps to construct a free-body diagram (shown in Figure 3), the object (the car) is chosen and isolated. All the forces acting on the car are represented with proper coordinate axes. Those forces are: Fapp - The force applied to tow the car FK - The frictional force that opposes the applied force due to the weight of the car and the nature of the surfaces (the car's tires and the road) W - The weight of the car N - The normal force exerted by the road on the car Figure 3: Free-Body Diagram The frictional force (FK) is a force that opposes the direction of motion. This force is explained in more detail in the chapter on types of forces. To solve this practical problem, the student would assign values for each force as determined by data given in the problem. After assigning a sign convention (e.g., + for forces upward and to the right, − for forces downward and to the left), the student would sum all forces to find the net force acting on the body. Using this net force information and appropriate equations, the student could solve for the requested unknowns. A variation would be to have the student find an unknown force acting on the body given sufficient information about the other forces acting on the body. The student will learn to solve specific examples using free-body diagrams in a later chapter. Some advanced free-body diagrams for various types of systems are shown in Figure 4. Figure 4: Various Free-Body Diagrams Force Equilibrium Knowledge of the forces required to maintain an object in equilibrium is essential in understanding the nature of bodies at rest and in motion. Net Force When forces act on an object, the result may be a change in the object's state of motion. If certain conditions are satisfied, however, the forces may combine to maintain a state of equilibrium or balance. 
To determine if a body is in equilibrium, the overall effect of all the forces acting on it must be assessed. All the forces that act on an object result in essentially one force that influences the object's motion. The force which results from all the forces acting on a body is defined as the net force. It is important to remember that forces are vector quantities. When analyzing various forces you must account for both the magnitude (displacement) of the force as well as the direction in which the force is applied. As described in the previous chapter, this is best done using a free-body diagram. To understand this more clearly, consider the book resting on the table in section A of Figure 5. Figure 5: Net Force The book remains stationary resting on the table because the table exerts a normal force upward equal to the weight of the book. Therefore, the net force on the book is zero. If a force is applied to the book (section B of Figure 5), and the effect of friction is neglected, the net force will be equal to the applied force, and the book will move in the direction of the applied force. The free-body diagram in section C of Figure 5 shows that the weight (W) of the book is canceled by the normal force (N) of the table since they are equal in magnitude but opposite in direction. The resultant (net) force is therefore equal to the applied force (FAPP). Since an object in equilibrium is considered to be in a state of balance, it can be surmised that the net force on the object is equal to zero. That is, if the vector sum of all the forces acting on an object is equal to zero, then the object is in equilibrium. Newton's first law of motion describes equilibrium and the effect of force on a body that is in equilibrium. That law states "An object remains at rest (if originally at rest) or moves in a straight line with a constant velocity if the net force on it is zero." Newton's first law of motion is also called the law of inertia. Inertia is the tendency of a body to resist a change in its state of motion. The first condition of equilibrium, a consequence of Newton's first law, may be written in vector form, "A body will be in translational equilibrium if and only if the vector sum of forces exerted on a body by the environment equals zero." For example, if three forces act on a body it is necessary for the following to be true for the body to be in equilibrium. F1 + F2 + F3 = 0 This equation may also be written as follows. Σ F = 0 This sum includes all forces exerted on the body by its environment. The vanishing of this vector sum is a necessary condition, called the first condition of equilibrium, that must be satisfied in order to ensure translational equilibrium. In three dimensions (x,y,z), the component equations of the first condition of equilibrium are: Σ FX = 0 Σ FY = 0 Σ FZ = 0 This condition applies to objects in motion with constant velocity and to bodies at rest or in static equilibrium (referred to as STATICS). Applying the knowledge that an object in equilibrium has a net force equal to zero, the following example can be solved: The object in Figure 6 has a weight of 125 lbf. The object is suspended by cables as shown. Calculate the tension (T1) in the cable at 30° with the horizontal. Figure 6: Hanging Object The tension in a cable is the force transmitted by the cable. The tension at any point in the cable can be measured by cutting a suitable length from it and inserting a spring scale. 
Since the object and its supporting cables are motionless (i.e., in equilibrium), we know that the net force acting on the intersection of the cables is zero. The fact that the net force is zero tells us that the sum of the x-components of T1, T2, and T3 is zero, and the sum of the y-components of T1, T2, and T3 is zero. Σ Fx = T1.x + T2.x + T3.x = 0 Σ Fy = T1.y + T2.y + T3.y = 0 The tension T3 is equal to the weight of the object, 125 lbf. The x and y components of the tensions can be found using trigonometry (e.g., sine function). Substituting known values into the second equation above yields the following. $$ \begin{eqnarray} \Sigma F_y = (T_1 \sin 30^{\circ}) + (T_2 \sin 180^{\circ}) + (T_3 \sin 270^{\circ}) &=& 0 \nonumber \\ (T_1)(0.5) + (T_2)(0) + (125 ~\text{lbf})(-1) &=& 0 \nonumber \\ 0.5 ~T_1 - 125 ~\text{lbf} &=& 0 \nonumber \\ 0.5 ~T_1 &=& 125 ~\text{lbf} \nonumber \\ T_1 &=& 250 ~\text{lbf} \end{eqnarray} $$ A simpler method to solve this problem involves assigning a sign convention to the free-body diagram and examining the direction of the forces. By choosing (+) as the upward direction and (−) as the downward direction, the student can determine by examination that 1) the upward component of T1 is +T1sin30°, 2) the tension T3 is −125 lbf, and 3) T2 has no y- component. Therefore, using the same equation as before, we obtain the following. $$ \begin{eqnarray} \Sigma F_y = (T_1 \sin 30^{\circ}) - 125 ~\text{lbf} &=& 0 \nonumber \\ 0.5 ~T_1 &=& 125 ~\text{lbf} \nonumber \\ T_1 &=& 250 ~\text{lbf} \end{eqnarray} $$ If the sum of all forces acting upon a body is equal to zero, that body is said to be in force equilibrium. If the sum of all the forces is not equal to zero, any force or system of forces capable of balancing the system is defined as an equilibrant. A 2000 lbm car is accelerating (on a frictionless surface) at a rate of 2 ft/sec2. What force must be applied to the car to act as an equilibrant for this system? a. Draw a free-body diagram. b. A Force, F2, MUST be applied in the opposite direction to F1 such that the sum of all forces acting on the car is zero. Σ Forces = F1 + F2 + N + W = 0 c. Since the car remains on the surface, forces N and W are in equal and opposite directions. Force F2 must be applied in an equal and opposite direction to F1 in order for the forces to be in equilibrium. $$ \begin{eqnarray} F_2 = F_1 = {ma \over g_c} &=& (2000 ~\text{lbm} \times 2 ~\text{ft/sec}^2) \div 32.17 ~{\text{ft-lbm} \over \text{lbf-sec}^2} \nonumber \\ &=& 124 ~\text{lbf} \end{eqnarray} $$ Types of Force When determining how an object reacts to a force or forces, it is important to understand the different types of forces that may act on the object. The previous section discussed the equilibrium of forces as they act on bodies. Recalling that a force is defined as a vector quantity that tends to produce an acceleration of a body in the direction of its application, it is apparent that the student must be acquainted with the various types of forces that exist in order to construct a correct free-body diagram and apply the appropriate equation. A force is applied either by direct mechanical contact or by remote action. Tensile and Compressive Forces In discussing the types of forces, a simple rule is used to determine if the force is a tensile or a compressive force. If an applied force on a member tends to pull the member apart, it is said to be in tension. If a force tends to compress the member, it is in compression. 
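The worked examples in this chapter (the weight of a 185 lbm person on the earth and the moon, the 30° cable tension, and the equilibrant for the accelerating car) all reduce to one-line applications of F = ma/gc. The short Python sketch below is only an illustration, not part of the handbook; the constant gc and the numbers come from the examples above, while the function names are mine.

```python
import math

G_C = 32.17          # ft-lbm / (lbf-sec^2), conversion constant for English units
G_EARTH = 32.17      # ft/sec^2, local acceleration of gravity at sea level
G_MOON = 5.36        # ft/sec^2, acceleration of gravity on the moon

def weight_lbf(mass_lbm: float, g_ft_s2: float = G_EARTH) -> float:
    """W = m*g/gc in English engineering units."""
    return mass_lbm * g_ft_s2 / G_C

def cable_tension_lbf(weight_lbf_: float, angle_deg: float) -> float:
    """Vertical equilibrium of the hanging-object example:
    only T1*sin(angle) carries the weight because the other cable is horizontal."""
    return weight_lbf_ / math.sin(math.radians(angle_deg))

def equilibrant_lbf(mass_lbm: float, accel_ft_s2: float) -> float:
    """Force needed to balance F = m*a/gc for the accelerating-car example."""
    return mass_lbm * accel_ft_s2 / G_C

print(weight_lbf(185.0))               # ~185 lbf on the earth
print(weight_lbf(185.0, G_MOON))       # ~28.2 lbf on the moon
print(cable_tension_lbf(125.0, 30.0))  # 250 lbf, the tension T1 found above
print(equilibrant_lbf(2000.0, 2.0))    # ~124 lbf, the equilibrant found above
```

Note that in the cable example only the y-components are needed, because T2 is horizontal; that is why T1 sin 30° alone must balance the 125 lbf weight.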
It should also be mentioned that ropes, cables, etc., that are attached to bodies can only support tensile loads, and therefore such objects are in tension when placed on the free-body diagram. In addition, when a fluid is involved, it should be understood that fluid forces are almost always compressive forces. Another type of force often used in classical physics is the force resulting from two surfaces in contact, where one of the surfaces is attempting to move parallel to or over the other surface. Such forces are referred to as friction forces. There are two types of friction forces: those due to dry friction, sometimes called Coulomb friction, and those resulting from fluid friction. Fluid friction develops between layers of fluid moving at different velocities. This type of frictional force is used in considering problems involving the flow of fluids through pipes. Such problems are covered in the Fundamentals Manual on fluid flow. In this section, problems involving rigid bodies which are in contact along dry surfaces are considered. The laws of dry friction are best understood by the following experiment. A block of weight W is placed on a horizontal plane surface (see Figure 9). The forces acting on the block are its weight W and the normal force N of the surface. Since the weight has no horizontal component, the normal force of the surface also has no horizontal component; the reaction is therefore normal to the surface and is represented by N in part (a) of the figure. Suppose now, that a horizontal force P is applied to the block (see part (b)). If P is small, the block will not move. Some other horizontal force must therefore exist which balances P. This other force is the static-friction force F, which is actually the resultant of a great number of forces acting over the entire surface of contact between the block and the plane. The nature of these forces is not known exactly, but it is generally assumed that these forces are due to the irregularities of the surfaces in contact and also to molecular action. Figure 9: Frictional Forces If the force P is increased, the friction force F also increases, continuing to oppose P, until its magnitude reaches a certain maximum value FM (see part (c) of Figure 9). If P is further increased, the friction force cannot balance it any more, and the block starts sliding. As soon as the block has been set in motion, the magnitude of F drops from FM to a lower value FK. This is because there is less interpenetration between the irregularities of the surfaces in contact when these surfaces move with respect to one another. From then on, the block keeps sliding with increasing velocity (i.e., it accelerates) while the friction force, denoted by FK and called the kinetic-friction force, remains approximately constant. Experimental evidence shows that the maximum value FM of the static-friction force is proportional to the normal component N of the reaction of the surface, as shown in Equation 4-5. FM = μS N The term μS is a constant called the coefficient of static friction. Similarly, the magnitude FK of the kinetic-friction force may be expressed in the following form. FK = μK N The term μK is a constant called the coefficient of kinetic friction. The coefficients of friction, μS and μK, do not depend upon the area of the surfaces in contact. Both coefficients, however, depend strongly on the nature of the surfaces in contact. 
Since they also depend upon the exact condition of the surfaces, their value is seldom known with an accuracy greater than 5 percent. It should be noted that frictional forces are always opposite in direction to the motion (or impending motion) of the object. An object moving at constant speed in a circle is not in equilibrium. Although the magnitude of the linear velocity is not changing, the direction of velocity is continually changing. Since a change in direction requires acceleration, an object moving in a circular path has a constant acceleration towards the center of the circular path. Recalling Newton's second law of motion, F = ma, a force is required to cause acceleration. Therefore, to have constant acceleration towards the center of the circular path, there must be a net force acting towards the center. This force is known as centripetal force. Without this force, an object will move in a straight line. Figure 10 illustrates the centripetal force. Figure 10: Centripetal Force Centrifugal Force Another force, which appears to be opposite the direction of motion, is the centrifugal force acting on an object that follows a curved path. This force appears to be a force directed away from the center of the circular path. This is actually a fictitious force, but is an apparent force that is used to describe the forces present due to an object's rotation. To better understand centripetal and centrifugal forces, consider that a string is attached to the plane in Figure 10. As the plane rotates about the center, the string places a centripetal force on the plane. This causes the plane's velocity to change in direction, thus causing it to travel in a circle. Figure 11: Centrifugal Force The apparent outward force, centrifugal force, seems to pull the plane away from the center shown in Figure 11. This is the same apparent outward force one feels when riding in a car when the car travels in a circle. It can be proven that centrifugal force is not an actual force by cutting the string. In doing so, the plane will fly off in a straight line that is tangent to the circle at the velocity it had the moment the string was cut. If there were an actual centrifugal force present, the plane would not fly away in a line tangent to the circle, but would fly directly away from the circle (see Figure 12). Figure 12: Loss of Centripetal Force
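As a closing illustration, the friction and circular-motion relations of this chapter can be evaluated the same way. The sketch below is mine rather than the handbook's: FM = μS N and FK = μK N come from the text, the v²/r form of centripetal acceleration is the standard expression (not written out explicitly in this chapter), and the numerical values are arbitrary examples.

```python
G_C = 32.17  # ft-lbm / (lbf-sec^2)

def static_friction_max_lbf(mu_s: float, normal_lbf: float) -> float:
    """Maximum static-friction force, FM = mu_s * N."""
    return mu_s * normal_lbf

def kinetic_friction_lbf(mu_k: float, normal_lbf: float) -> float:
    """Kinetic-friction force once sliding has started, FK = mu_k * N."""
    return mu_k * normal_lbf

def centripetal_force_lbf(mass_lbm: float, speed_ft_s: float, radius_ft: float) -> float:
    """Net inward force for uniform circular motion, F = m*v^2/(r*gc) in English units."""
    return mass_lbm * speed_ft_s ** 2 / (radius_ft * G_C)

# Illustrative numbers only (coefficients and geometry are not from the handbook):
print(static_friction_max_lbf(0.5, 100.0))          # push needed to start a 100 lbf-normal block moving
print(kinetic_friction_lbf(0.4, 100.0))             # friction opposing the block once it slides
print(centripetal_force_lbf(3200.0, 44.0, 200.0))   # car on a 200 ft curve at 44 ft/sec
```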
cell death & disease ALKBH5 suppresses tumor progression via an m6A-dependent epigenetic silencing of pre-miR-181b-1/YAP signaling axis in osteosarcoma Ye Yuan1,2,3,4 na1, Gege Yan1 na1, Mingyu He1 na1, Hong Lei1, Linqiang Li5, Yang Wang2,3, Xiaoqi He1, Guanghui Li1, Quan Wang1, Yuelin Gao1, Zhezhe Qu1, Zhongting Mei1, Zhihua Shen2,3, Jiaying Pu2,3, Ao Wang2,3, Wei Zhao1,2, Huiwei Jiang1,2, Weijie Du1 & Lei Yang1 Cell Death & Disease volume 12, Article number: 60 (2021) ALKBH5 is the main enzyme for m6A-based demethylation of RNAs, and it has been implicated in many biological and pathophysiological processes. Here, we aimed to explore the potential involvement of ALKBH5 in osteosarcoma and to decipher the underlying cellular/molecular mechanisms. We found that downregulated levels of the demethylase ALKBH5 were correlated with increased m6A methylation in osteosarcoma cells/tissues compared with normal osteoblast cells/tissues. ALKBH5 overexpression significantly suppressed osteosarcoma cell growth, migration, and invasion, and triggered cell apoptosis. In contrast, inhibition of ALKBH5 produced the opposite effects. ALKBH5 silencing enhanced the m6A methylation of pre-miR-181b-1 and YAP mRNA, which exert oncogenic functions in osteosarcoma. Moreover, upregulation of YAP or downregulation of mature miR-181b-5p markedly attenuated the anti-tumor activities caused by ALKBH5. Further results revealed that m6A-methylated pre-miR-181b-1 was subsequently recognized by the m6A-binding protein YTHDF2, which mediates its RNA degradation, whereas methylated YAP transcripts were recognized by YTHDF1, which promotes their translation. Therefore, ALKBH5-based m6A demethylation suppressed osteosarcoma progression through m6A-based direct and indirect regulation of YAP. Thus, ALKBH5 overexpression might be considered a new approach of replacement therapy for osteosarcoma treatment. Osteosarcoma is one of the most common primary solid malignancies of bone, primarily affecting teenagers and young adults1,2. Standard treatments for patients include chemotherapy and surgery. Survival has increased considerably owing to advanced treatment strategies3; however, there is still no known way to prevent the disease. Thus, it is urgent to gain insight into the underlying mechanisms and to develop new therapeutic agents against osteosarcoma. N6-methyladenosine (m6A) is an abundant modification of messenger RNAs (mRNAs) in eukaryotes4,5,6. The effects of m6A modification on RNA are determined by the interplay between m6A methyltransferases (writers), demethylases (erasers), and binding proteins (readers). Recently, the key components of the m6A writers have been identified, including a stable heterodimer core complex of methyltransferase-like 3—methyltransferase-like 14 (METTL3-METTL14) that functions in cellular m6A deposition on mammalian nuclear RNAs, as well as Wilms' tumor 1-associating protein (WTAP), a splicing factor that interacts with this complex and affects this methylation7,8. The m6A erasers, including fat mass and obesity-associated protein (FTO) and ALKB homolog 5 (ALKBH5), remove the m6A modification from RNA, which is otherwise recognized by m6A readers such as YTH N6-methyladenosine RNA binding protein 1 (YTHDF1) and insulin-like growth factor 2 mRNA binding protein 1 (IGF2BP1)9, etc. It has been reported that m6A-based modification exerts diverse biological functions10,11,12,13,14. For instance, FTO acts as an oncogenic factor in acute myeloid leukemia (AML)15.
ALKBH5 has been shown to be involved in pancreatic cancer16 and glioblastoma17, and to impact male mouse fertility18. However, the functions and underlying mechanisms of m6A modification in human osteosarcoma remain largely unknown. Here, we report that ALKBH5-induced m6A demethylation inhibits human osteosarcoma tumor cell growth, migration, and invasion through m6A-based post-transcriptional regulation of pre-miR-181b-1 and the oncogenic transcriptional co-activator Yes-associated protein 1 (YAP).

The m6A demethylase ALKBH5 is downregulated in human osteosarcoma

We first quantified m6A contents by m6A ELISA and immunofluorescence (IF) assays in the human osteosarcoma cell lines U2OS, Saos2, and 143B and the human osteoblast (hOB) cell line hFOB1.19. The results showed that m6A contents were significantly increased in osteosarcoma cells (Fig. 1A–C). Moreover, demethylase ALKBH5 mRNA was significantly decreased, inversely correlating with the m6A content, in all three osteosarcoma cell lines compared with hOB cells, whereas METTL3, METTL14, WTAP, and FTO were unchanged (Fig. 1D). Meanwhile, immunostaining confirmed a significant decrease of ALKBH5 in the U2OS, Saos2, and 143B osteosarcoma cell lines compared with hOB cells (Fig. 1E, F). Furthermore, lower protein expression of ALKBH5 was detected in human osteosarcoma tissues compared with normal bone tissues (Fig. 1G). We further applied immunohistochemistry (IHC) assays to measure ALKBH5 protein expression in osteosarcoma tissue microarrays (TMAs) containing 102 tissue cores (Fig. 1H, I and Supplementary Fig. S1). Significantly lower ALKBH5 protein expression was detected in malignant osteosarcoma cores, especially in the IVB stage, the most advanced stage of osteosarcoma, compared with normal bone tissues (Fig. 1H, I). Kaplan–Meier survival analysis from The Cancer Genome Atlas (TCGA) data set (http://www.oncolnc.org/) showed that patients with high ALKBH5 expression exhibited superior survival, while patients with low ALKBH5 expression exhibited a poor survival rate (Fig. 1J). The above results demonstrated that ALKBH5 is generally downregulated in human osteosarcoma and that the m6A modification it mediates may have predominant roles in this disease.

Fig. 1: Increased m6A modification level together with the reduced expression of demethylase ALKBH5 in human osteosarcoma. A m6A ELISA experiments (n = 3) showing the increase of the global m6A level in RNAs in human osteosarcoma cell lines compared with the human osteoblast (hOB) cell line. B Representative confocal microscopy images with m6A (red) and DAPI (blue) of human osteosarcoma cells compared with hOB cells (n = 7, Bar: 25 μm). C Bar graph showing the quantification of mean fluorescence intensity of m6A-positive cells. D Expression of individual m6A modifiers in human osteosarcoma cells compared with hOB cells. E Immunostaining for anti-ALKBH5 in hOB cells and osteosarcoma cells (n = 8, Bar: 25 μm). F Bar graph showing the quantification of mean fluorescence intensity of ALKBH5-positive cells. G Western blot showing the protein expression of ALKBH5 in normal tissues and osteosarcoma tissues. H Immunohistochemistry (IHC) analysis of ALKBH5 protein expression on tissue microarrays (TMAs) composed of benign bone tissue (n = 2), IIA stage osteosarcoma (n = 24), IIB stage osteosarcoma (n = 66) and IVB stage osteosarcoma (n = 10) tumor cores. Representative IHC images (magnification ×80) are presented (upper, Bar: 50 μm). I Bar graph representing the percentage of ALKBH5-positive cells (lower).
J Kaplan–Meier survival curve indicating the difference in survival rate between patients with high and low ALKBH5 expression. Data are expressed as mean ± SEM. *P < 0.05; **P < 0.01; ***P < 0.001.

ALKBH5-dependent m6A demethylation of RNAs severely impacted the growth and motility of osteosarcoma cells

To determine whether ALKBH5-regulated m6A modification has a role in osteosarcoma cells, we conducted gain-of-function and loss-of-function studies. As depicted in Fig. 2A, B, the transfection efficiency of ALKBH5 plasmids or siRNA was confirmed by qRT-PCR and western blot. Next, we examined the effect of ALKBH5 on cell proliferation, migration, and invasion. Indeed, ALKBH5 overexpression remarkably inhibited the proliferation, invasion, and migration of U2OS cells in the EdU staining, wound-healing migration, and Transwell invasion assays, while inhibition of ALKBH5 induced the opposite effects (Fig. 2C–E). In addition, the percentages of both early and late apoptotic cells based on Annexin V/PI staining were significantly increased upon overexpression of ALKBH5, while little effect was observed upon ALKBH5 knockdown (Fig. 2F). Moreover, elevated ALKBH5 decreased, and depleted ALKBH5 increased, the colony-formation capacity of U2OS osteosarcoma cells (Fig. 2G). In line with the results for U2OS, these effects of ALKBH5 were further confirmed in another osteosarcoma cell line, Saos2 (Supplementary Fig. S2).

Fig. 2: Tumor inhibition impact of ALKBH5 in human osteosarcoma cells. A, B qRT-PCR (n = 3) and western blot were performed to confirm the transfection efficiency of ALKBH5 in U2OS cells. C Effects of forced expression of ALKBH5 (upper) and ALKBH5 silencing (lower) on U2OS cell proliferation were tested by EdU staining (Bar: 25 μm, n = 3). D Transwell assays showing the invasion ability (Bar: 150 μm, n = 4). Bar graph representing the quantification of invasive cells. E Migration ability was detected by wound-healing assay at 0 and 24 h, respectively, after transfection. Bar graph displaying the mean relative distance of migrated cells (Bar: 200 μm, n = 5). F Annexin V-FITC/PI staining analysis of cell apoptosis after transfection of ALKBH5 plasmid or siRNA for 24 h. G The effects of ALKBH5 on colony-formation ability (n = 3). Bar graph illustrating the quantitative analysis. Data are expressed as mean ± SEM. *P < 0.05; **P < 0.01; ***P < 0.001.

Identification of the ALKBH5/m6A―pre-miR-181b-1/miR-181b-5p―YAP axis as a novel pathway leading to osteosarcoma tumor suppression

As shown above, we have demonstrated the importance of ALKBH5-dependent m6A demethylation of RNAs for osteosarcoma tumor suppression with both gain- and loss-of-function approaches. Next, we went on to gain further insight into the mechanisms accounting for our findings. It has been reported that, in addition to protein-coding genes, non-coding RNAs function as gene regulators during the progression of bone cancer19,20. MiRNA processing is also specifically regulated in osteosarcoma. However, no reports have shown the biological function of m6A modification during miRNA processing in osteosarcoma. Therefore, we used an m6A epitranscriptomic microarray to identify miRNA precursors whose m6A modification is regulated by ALKBH5 in control- and ALKBH5-overexpression-transfected U2OS cells (Fig. 3A). Of 773 pre-miRNAs detected by the microarray, we identified 11 pre-miRNAs with a >20% decrease in m6A methylation (>1.2-fold decrease) in ALKBH5-overexpressing cells relative to control cells.
The top 10 ALKBH5-mediated m6A-demethylated pre-miRNAs are listed in Table 1. Notably, among these pre-miRNAs, pre-miR-181b-1 methylation was markedly decreased upon overexpression of ALKBH5. More importantly, pre-miR-181b-1 sequences are broadly conserved across species. These findings suggest pre-miR-181b-1 as a potential target of ALKBH5 action in osteosarcoma. Next, we confirmed the effect of ALKBH5 overexpression on the m6A level of pre-miR-181b-1 using gene-specific m6A-qPCR. As shown in Fig. 3B, we observed a strong enrichment of pre-miR-181b-1 in the m6A-RIP but not in the IgG-IP fractions. In addition, both pre-miR-181b-1 and mature miR-181b-5p were much lower in osteosarcoma cells than in hOB cells (Fig. 3C). Overexpression of ALKBH5 produced significant increases in the expression levels of both pre-miR-181b-1 and miR-181b-5p in U2OS cells. On the contrary, inhibition of ALKBH5 produced the opposite effects (Fig. 3D). As expected, miR-181b-5p mimics resulted in a decrease in cell migration (Fig. 3E) and cell proliferation (Fig. 3F). Moreover, downregulation of miR-181b-5p (AMO-181b-5p) partly rescued the decreased cell migration and proliferation caused by ALKBH5 overexpression in U2OS cells (Fig. 3G, H).

Fig. 3: ALKBH5 weakens the m6A methylation modification of pre-miR-181b-1 and enhances the expression levels of both pre-miR-181b-1 and miR-181-5p. A m6A-RIP microarray analysis (upper) showing inhibitory effects of ALKBH5 on m6A methylation of pre-miRNAs relative to the control group. Two potential m6A sites of pre-miR-181b-1 predicted by the SRAMP program (lower). B m6A methylation modification of miR-181-5p detected by gene-specific m6A assay. C pre-miR-181b-1 and miR-181-5p endogenous levels in osteosarcoma cell lines compared with hOB cells. D qRT-PCR analysis revealed the effect of ALKBH5 overexpression or knockdown on pre-miR-181b-1 and miR-181-5p expression. E Wound-healing assay performed at 0 and 24 h, respectively, after transfection with NC or miR-181-5p mimics. Bar graph representing mean relative distance of migrated cells (Bar: 200 μm, n = 4). F Representative images of EdU staining in U2OS cells with or without miR-181-5p mimics. Bar graph quantifying the percentage of EdU-positive cells (Bar: 25 μm, n = 5). G Migration ability of U2OS after transfection with ALKBH5 plasmids and/or co-transfection with miR-181-5p inhibitor (AMO-181-5p) (Bar: 200 μm, n = 4). H EdU staining showing the reversing effects of AMO-181-5p on cell proliferation (Bar: 25 μm, n = 5). Data are expressed as mean ± SEM. *P < 0.05; **P < 0.01; ***P < 0.001 (vs. the first group). ###P < 0.001 (vs. the second group).

Table 1 Top 10 pre-miRNAs with ALKBH5-mediated m6A demethylation.

We then searched for candidate target genes by computational prediction. In this way, we identified Yes-associated protein 1 (YAP) as a potential target gene of miR-181b-5p (Fig. 4A). It has been reported that YAP is an oncogene with important roles in the development of multiple tumors21,22. Increased expression of YAP can significantly facilitate the malignant transformation of cells. We next confirmed that overexpression of miR-181b-5p indeed directly repressed the expression of its target gene YAP in osteosarcoma cells (Fig. 4B). In addition, we found that ALKBH5 overexpression markedly reduced both mRNA and protein levels of YAP in U2OS cells (Fig. 4C), whereas ALKBH5 suppression elevated YAP expression (Fig. 4D). Next, we further confirmed the effects of YAP on osteosarcoma cell growth.
Silencing of YAP by siRNA dramatically suppressed the proliferation, invasion, migration, and colony-forming abilities of U2OS cells (Fig. 4E–H). On the contrary, overexpression of YAP produced the opposite effects in osteosarcoma cells (Supplementary Fig. S3).

Fig. 4: YAP is the critical target gene of miR-181-5p in human OS cells. A Sequence alignment showing the complementarity between miR-181-5p and the YAP gene with the potential binding sites (seed site). The red bases indicate the seed site and the vertical lines represent the base-pairing between miR-181-5p and YAP. B The change of YAP-protein level after transfection of miR-181-5p mimics. C qRT-PCR (n = 3, left) and western blot (right) showing the decreased expression of YAP with ectopic expression of ALKBH5 (n = 3). D Expression of YAP at mRNA (left) and protein (right) levels with or without ALKBH5 silencing. E EdU staining for evaluation of the influence of YAP knockdown on the proliferation of U2OS cells (Bar: 25 μm, n = 5). F Representative images of invasive cells on the membrane for Transwell assay (Bar: 150 μm, n = 4). G Migration ability was detected by wound-healing assay at 0 and 24 h, respectively, with or without YAP silencing (Bar: 200 μm, n = 5). H Representative images of the colony-formation assay with or without YAP silencing (n = 3). Data are expressed as mean ± SEM. **P < 0.01; ***P < 0.001.

To establish the relationship between ALKBH5 and YAP, we further analyzed the effects on cell growth of co-transfection with the ALKBH5 and YAP overexpression plasmids in osteosarcoma cells. As shown in Fig. 5A, cell proliferation was significantly higher in the co-transfection group than with ALKBH5 overexpression alone. Furthermore, YAP counteracted the inhibitory effects of ALKBH5 on the invasion and migration of U2OS cells (Fig. 5B, C). In line with the findings above, co-transfection of ALKBH5 and YAP markedly increased the percentage of live cells, decreased apoptotic cells (Fig. 5D), and recovered the ability of colony formation (Fig. 5E). Next, we assessed the in vivo effectiveness of ALKBH5-mediated m6A demethylation using an osteosarcoma xenograft mouse model. ALKBH5 overexpression reduced osteosarcoma tumor growth, as evidenced by lower tumor volumes and weights, and this effect was abrogated by co-transfection of overexpressed YAP (Fig. 5F–I).

Fig. 5: YAP abrogates the inhibitory effects of ALKBH5 on both human osteosarcoma cell viability and tumor growth in the xenograft mouse model. A EdU staining showing the reversing effects of YAP on the proliferation inhibition of U2OS cells induced by ALKBH5 overexpression (Bar: 25 μm, n = 5). B The effects of YAP on the weakening of ALKBH5 anti-invasion ability detected by Transwell assay (Bar: 150 μm, n = 6). C Cell metastasis analyzed by migration assays (Bar: 200 μm, n = 7). D Annexin V-FITC/PI staining analysis of apoptotic cells (n = 3). E Representative images of colony-formation cells (n = 3). F 143B cells were transfected with empty plasmid (control), ALKBH5 plasmid alone, or ALKBH5 together with YAP plasmid, and then injected into female nude mice. Representative photographs of the gross 143B tumors 8 weeks after injection. The red square marks the location of the tumor (n = 3). G Image showing the comparison of the excised tumor size of 143B xenografts in nude mice. H Curve diagram showing the tumor volume measured once every 2 weeks throughout the experiment. I Weight of tumor tissues removed from nude mice.
Data are expressed as mean ± SEM. *P < 0.05; **P < 0.01; ***P < 0.001 (vs. the first group). ###P < 0.001 (vs. the second group).

Because ALKBH5-mediated m6A demethylation appeared to increase pre-miR-181b-1 expression, we hypothesized that pre-miR-181b-1 is a target of YTHDF2 (Fig. 6A), the m6A reader protein that promotes the decay of m6A-methylated RNAs23. Consistent with our hypothesis, we observed a strong enrichment of pre-miR-181b-1 in the YTHDF2-IP fractions (Fig. 6B). We then silenced YTHDF2 expression by siRNA, as confirmed by qRT-PCR (Fig. 6C) and western blot (Fig. 6D). YTHDF2 knockdown increased the expression of both pre-miR-181b-1 and miR-181b-5p in U2OS cells (Fig. 6E, F). Moreover, the tumor-suppressive effects of ALKBH5 overexpression were further enhanced by siYTHDF2 (Fig. 6G, H). The above data indicated that ALKBH5-mediated pre-miR-181b-1 m6A demethylation has a key role in osteosarcoma.

Fig. 6: m6A reader YTHDF2 positively regulates pre-miR-181b-1 stabilization. A Schematic diagram displaying that YTHDF2 promotes pre-miR-181b-1 stabilization by competing with ALKBH5 for binding to pre-miR-181b-1. B RIP assay showing that the anti-YTHDF2 antibody efficiently captured miR-181-5p transcripts. C, D Transfection efficiency of YTHDF2 silencing confirmed via qRT-PCR and western blot. E, F Effects of siYTHDF2 on both pre-miR-181b-1 (E) and miR-181-5p (F) expression. G, H Migration (G, Bar: 200 μm, n = 4) and proliferative abilities (H, Bar: 25 μm, n = 5) of U2OS after transfection with ALKBH5 plasmids and/or co-transfection with siYTHDF2. Data are expressed as mean ± SEM. ***P < 0.001 (vs. the first group). ###P < 0.001 (vs. the second group).

Identification of YAP mRNA as a direct target of ALKBH5 in osteosarcoma

Interestingly, according to the sequence-based SRAMP m6A modification site predictor (http://www.cuilab.cn/sramp), we observed that the mRNA of the YAP gene carries nine potential m6A modification sites (Fig. 7A and Supplementary Fig. S4). We next clarified whether ALKBH5 could directly regulate the m6A methylation and degradation of YAP. We performed gene-specific m6A-qPCR to examine the m6A modification of YAP transcripts. The m6A abundance in YAP mRNA was markedly decreased upon ALKBH5 overexpression in the m6A-RIP group but not in the IgG-RIP group (Fig. 7B). Additionally, siALKBH5 enhanced the stability of YAP mRNA in the presence of the transcription inhibitor actinomycin D (ActD) compared with the siNC group in U2OS cells (Fig. 7C). Meanwhile, siALKBH5 inhibited the degradation of YAP in the presence of the translation inhibitor cycloheximide (CHX) (Fig. 7D). However, as shown in Fig. 7E, F, ALKBH5 overexpression produced the opposite effects, resulting in a significant decrease in YAP-mRNA stability and an increase in YAP-protein degradation in osteosarcoma cells.

Fig. 7: ALKBH5 directly regulates mRNA and protein stability of YAP. A Nine potential m6A sites of YAP mRNA predicted by the SRAMP program. B Schematic diagram illustrating the procedure of gene-specific m6A-qPCR on the left. Change of m6A modification in specific regions of YAP transcripts with ALKBH5 overexpression in U2OS cells on the right (n = 4). C qRT-PCR showing YAP transcript stability in ActD-treated cells after transfection with ALKBH5 siRNA (n = 3). D YAP-protein stability after ALKBH5 silencing. Cells were harvested at 0 and 8 h after CHX treatments. E qRT-PCR showing YAP transcript stability in ActD-treated cells after transfection with ALKBH5 plasmids (n = 3).
F YAP-protein stability after ALKBH5 overexpression. Cells were harvested at 0 and 8 h after CHX treatments. Data are expressed as mean ± SEM. *P < 0.05; **P < 0.01; ***P < 0.001.

Given that the above results revealed that ALKBH5-mediated m6A demethylation inhibits YAP expression, we hypothesized that methylated YAP transcripts are potential targets of YTHDF1, the m6A reader protein promoting the translation of methylated transcripts24. The abundance of YAP mRNA was markedly increased in the YTHDF1-RIP group compared with the IgG-RIP group (Fig. 8A), suggesting that YTHDF1 can recognize the m6A-modification sites in YAP mRNA. Next, we found that YTHDF1 siRNA (siYTHDF1) decreased YAP-protein levels (Fig. 8B). Overexpressed YTHDF1 led to increased YAP in the presence of ALKBH5 overexpression in U2OS cells (Fig. 8C). As expected, upregulation of YTHDF1 could partially reverse the inhibitory effects of ALKBH5 on U2OS cell proliferation, invasion, migration, and colony formation, as well as its pro-apoptotic effect (Fig. 8D–H). In line with the results for U2OS, we also observed similar effects of overexpressed YTHDF1 on cell proliferation and migration in another osteosarcoma cell line, Saos2 (Supplementary Fig. S5).

Fig. 8: m6A-dependent translational enhancement of YAP is positively associated with YTHDF1. A RIP-qPCR analysis of the interaction between YAP and YTHDF1 in U2OS cells (left, n = 3). Diagram illustrating YTHDF1 replacing ALKBH5 in binding to YAP (right). B YTHDF1 knockdown impairs the translation of YAP. C YTHDF1 overexpression reverses the decrease of YAP caused by ALKBH5, tested via western blot. D EdU staining showing the reversing effects of YTHDF1 on the proliferation inhibition caused by ALKBH5 overexpression (Bar: 25 μm, n = 5). E Cell migration ability analyzed by migration assays at 24 h after forced expression of ALKBH5 or co-transfection with YTHDF1 plasmids. F The effects of YTHDF1 on the weakening of ALKBH5 anti-invasion ability detected by Transwell assay (Bar: 150 μm, n = 5). G Representative images of colony-formation cells (n = 3). H Annexin V/PI staining of U2OS cells analyzed by FACS. I Graphic abstract: ALKBH5 regulates the progression of osteosarcoma by mediating m6A modification of pre-miR-181b-1 and YAP. Data are expressed as mean ± SEM. *P < 0.05, ***P < 0.001 (vs. the first group). ###P < 0.001 (vs. the second group).

Discussion

The present study generated a number of new findings. First, m6A demethylase ALKBH5 expression levels are decreased, and m6A methylation is substantially increased, in human osteosarcoma. Second, ALKBH5 exerts tumor-suppressive effects, as its overexpression inhibits osteosarcoma cell growth, migration, and invasion, and its silencing produces the opposite effects. Third, m6A modification of RNAs likely promotes the tumor progression induced by ALKBH5 inhibition; specifically, m6A methylation of pre-miR-181b-1 in the nucleus causes considerable downregulation of mature miR-181b-5p in the cytoplasm, which may at least partially account for the tumor growth. Fourth, our results further demonstrated that YAP is a major target gene of miR-181b-5p, and thus ALKBH5 downregulates the YAP level through increasing pre-miR-181b-1/miR-181b-5p. Furthermore, we found that ALKBH5 directly reduces m6A methylation of YAP, suppressing its mRNA stability and translation and thereby its cellular levels. These findings, therefore, suggest that abnormal downregulation of ALKBH5 is likely one of the mechanisms underlying osteosarcoma (Fig. 8I).
Under this framework, we propose that ALKBH5 is a tumor-suppressor gene and that ALKBH5 overexpression might be a new replacement-therapy approach for the treatment of human osteosarcoma. Recent studies have demonstrated that the m6A demethylases are dysregulated in several malignant tumors. Li et al. reported that FTO promotes non-small cell lung cancer (NSCLC)25 and breast tumor progression through increasing the expression of USP726 and inhibiting BNIP327, respectively. However, before the present study, it was unclear whether m6A demethylases exert effects on osteosarcoma. We revealed for the first time that overexpression of the demethylase ALKBH5 leads to downregulation of the YAP level through m6A demethylation of its transcripts. In contrast, silencing of ALKBH5 increased m6A methylation, resulting in upregulation of YAP. Consistent with our findings, Song et al. demonstrated that the m6A methyltransferase METTL3 is increased and promotes osteosarcoma progression by regulating the m6A level of LEF1 and activating the Wnt/β-catenin signaling pathway28. Yet, in addition to RNA methyltransferases and demethylases, m6A modification exerts biological functions via its interplay with binding proteins. It has been previously reported that recognition of m6A mRNA sites by IGF2BP proteins enhances mRNA stability29, and recognition of m6A by YTHDF1 results in enhanced protein synthesis24. Several studies have revealed the roles of m6A binding proteins in the development of multiple cancers. For instance, SRY (sex-determining region Y)-box 2 (SOX2) is a downstream gene of METTL3, and its expression positively correlates with METTL3 and IGF2BP2 in colorectal carcinoma (CRC)30. Silencing YTHDF1 significantly inhibited Wnt/β-catenin pathway activity in CRC26. However, it also remains unclear whether these binding proteins function in osteosarcoma. Our data showed decreases in both mRNA and protein expression of YAP after ALKBH5 overexpression in osteosarcoma cells. We then confirmed for the first time that YAP mRNA is a direct target of YTHDF1, which promotes the translation of m6A-methylated YAP transcripts. Interestingly, we found that ALKBH5 mRNA levels were much higher in Saos2 cells than in U2OS and 143B cells, while the protein levels of ALKBH5 were similar in all three cell lines. As we know, many complicated post-transcriptional mechanisms are involved in the translation of mRNA into protein31. We assume that ALKBH5 mRNA may be modified, for example by methylation, so that its stability differs in Saos2 cells. Moreover, ALKBH5 protein stability may be another factor. A pre-miRNA is generally exported by Exportin-5 from the nucleus to the cytoplasm, where its hairpin loop structure is further cleaved by the RNase III enzyme Dicer to generate a mature miRNA32. Once a mature miRNA is incorporated into the RNA-induced silencing complex (RISC), the expression of its target genes is repressed. Several studies have revealed the roles of miRNAs in osteosarcoma. For instance, miR-379 suppresses osteosarcoma progression by targeting PDK133. MiR-491 inhibits osteosarcoma lung metastasis and chemoresistance by targeting αB-crystallin34. Our m6A-RIP-microarray data demonstrated that pre-miR-181b-1 is m6A methylated and that this methylation is reduced by ALKBH5, whose overexpression suppressed osteosarcoma tumor growth. Here, we found that pre-miRNAs can be methylated in the nucleus, leading to reduced biogenesis of mature miRNAs.
Yet, it remains unknown exactly how the methylation of pre-miR-181b-1 in the nucleus affects its maturation in the cytosol. This issue merits detailed future studies. A previous study reported that recognition of m6A mRNA sites by YTHDF2 results in mRNA degradation23, mainly starting with shortening of the poly(A) tail and subsequent decay. However, decay of m6A-containing RNA may also start with 5′-decapping or endo-cleavage35. We must admit, however, that the precise mechanisms by which YTHDF2 acts on pre-miRNAs are at present unknown. It is known that YAP is a potent oncogene and one of the main effectors of the Hippo-YAP/TAZ tumor-suppressor pathway controlling cell proliferation and apoptosis36. Recent studies indicated that YAP/TAZ is essential for cancer initiation or growth in most solid tumors37,38. Additionally, YAP/TAZ have also been pursued as therapeutic targets in various cancers38,39. Several studies have demonstrated that YAP oncogenic function is modulated by multiple cellular factors in cancers. For instance, TNF receptor-associated factor 6 (TRAF6) promoted the migration and colony formation of pancreatic cancer cells through the regulation of YAP40. Neurotrophic receptor tyrosine kinase 1 (NTRK1) inhibition suppressed YAP-driven transcription, cancer cell proliferation, and migration41. Downregulating MK5 expression inhibited the survival of YAP-activated cancer cell lines and mouse xenograft models42. More importantly, a study demonstrated that YAP is highly expressed in both human and mouse osteosarcoma tissues43. Moreover, the Hippo signaling pathway has an essential role in chemoresistance as well. Of the Hippo pathway members, activated YAP/TAZ confers resistance to chemotherapeutic drugs in tumor cells37. Notably, YAP has been proposed as a potential target for reducing osteosarcoma chemoresistance44. An intriguing new finding here is that YAP can be directly m6A methylated in the nucleus, leading to enhanced mRNA stability, translation, and cellular levels in human osteosarcoma. Collectively, our results suggest for the first time that ALKBH5 is an anti-tumor or pro-apoptotic factor, acting at least partially by suppressing YAP expression through dual mechanisms: direct m6A demethylation of YAP and indirect downregulation of the YAP level due to demethylation of pre-miR-181b-1. We have also demonstrated that pre-miRNA methylation can be regulated by the ALKBH5 mechanism in the nucleus, leading to significant alterations of pre-miRNA maturation in the cytosol.

Materials and methods

Cell culture and treatment The human osteosarcoma cell line U2OS was cultured in Dulbecco's modified Eagle medium (DMEM) (Life Technologies Corporation, California, USA) supplemented with 10% fetal bovine serum (FBS) (Biological Industries, Israel). Saos2 was grown in McCOY'S 5A medium (HyClone, California, USA) supplemented with 15% fetal bovine serum. 143B was cultivated in RPMI Medium 1640 basic medium (ThermoFisher Scientific, Massachusetts, USA). The human osteoblast (hOB) cell line hFOB1.19 was cultured in DMEM/F-12 (1:1) basic medium (ThermoFisher Scientific, Massachusetts, USA) supplemented with 10% fetal bovine serum. All cell lines were maintained in an incubator at 37 °C in an atmosphere containing 5% CO2, except hFOB1.19 cells, which were incubated at 33.5 °C. All cell lines were tested for mycoplasma contamination. Cells were treated with cycloheximide (CHX, Cat# HY-12320, MedChemExpress, China) added to the medium at 100 μg/mL for 0 and 8 h before harvesting.
M6A enzyme-linked immunosorbent assay (ELISA) The m6A RNA methylation assay kit (Cat#ab185912; Abcam, Cambridge, UK) was used to measure the m6A content in total RNA following the manufacturer's protocol. Briefly, 400 ng of RNA was coated onto assay wells and incubated at 37 °C for 90 min. Capture antibody solution (50 μL) and detection antibody solution (50 μL) were then added to the assay wells and incubated separately for 60 and 30 min at room temperature (RT). The m6A levels were quantified using a microplate reader at a wavelength of 450 nm.

Immunofluorescence (IF) 2 × 10^5 cells were cultured in a glass-bottom cell culture dish for 24 h and washed with PBS three times. Cells were fixed with 4% paraformaldehyde at RT for 15 min and then washed three times with PBS, permeabilized with 0.3% Triton X-100 (Sigma-Aldrich) for 15 min, and then blocked with goat serum. After washing twice with PBS, cells were treated with primary antibodies against m6A (1:200 dilution, Synaptic Systems, Germany) and ALKBH5 (1:200 dilution, Millipore, Billerica, USA) and incubated at 4 °C overnight. Finally, cells were incubated with secondary antibody at RT. Fluorescent images were visualized using a confocal microscope (Fv10i).

RNA isolation and quantitative real-time PCR (qRT-PCR) Total RNA was isolated using the miRNeasy Mini Kit (Cat# 217004; QIAGEN, Germany) according to the manufacturer's protocol. Briefly, cells were disrupted and homogenized with QIAzol Lysis Reagent at RT for 5 min. Then chloroform was added to the samples, followed by vigorous shaking for 15 s. After centrifugation at 12,000 × g at 4 °C for 15 min, the upper aqueous phase was transferred to a new collection tube and mixed thoroughly with ethanol. A 700 μL sample was pipetted into an RNeasy Mini column, followed by centrifugation at 8000 × g at RT for 15 s, and the flow-through was discarded. After washing sequentially with buffer RWT and buffer RPE, RNA was dissolved in RNase-free water. A total RNA sample of 500 ng was reverse transcribed into cDNA using the High Capacity cDNA Reverse Transcription Kit (Cat# 00676299; ThermoFisher Scientific, Waltham, USA). Amplification and detection were performed using a 7500HT Fast Real-Time PCR System (Applied Biosystems) with SYBR Green PCR Master Mix (Cat# 31598800; Roche). GAPDH was used as an endogenous control. For miRNA analyses, U6 was used as an internal standard control. miRNA primers were obtained from RiboBio (Guangzhou, China). Reactions were run in triplicate.
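The section above names GAPDH (or U6 for miRNAs) as the endogenous control and notes triplicate reactions, but it does not spell out how relative expression was computed. Before the primer list below, here is a minimal sketch of the widely used 2^-ΔΔCt (Livak) calculation, shown purely as an assumption about the analysis rather than as the authors' confirmed pipeline; all Ct values are invented.

from statistics import mean

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene versus the control condition by the 2^-ddCt method.

    Each argument is a list of Ct values from replicate wells; the reference gene is
    the endogenous control (e.g., GAPDH for mRNAs or U6 for miRNAs).
    """
    d_ct_sample = mean(ct_target) - mean(ct_reference)       # normalize to the reference gene
    d_ct_control = mean(ct_target_ctrl) - mean(ct_reference_ctrl)
    dd_ct = d_ct_sample - d_ct_control                        # normalize to the control condition
    return 2 ** (-dd_ct)

if __name__ == "__main__":
    # Hypothetical triplicates: YAP in ALKBH5-overexpressing versus control U2OS cells.
    fold = relative_expression(
        ct_target=[26.1, 26.3, 26.0],          # YAP, ALKBH5-overexpressing
        ct_reference=[18.2, 18.1, 18.3],       # GAPDH, ALKBH5-overexpressing
        ct_target_ctrl=[24.9, 25.0, 25.1],     # YAP, control
        ct_reference_ctrl=[18.0, 18.2, 18.1],  # GAPDH, control
    )
    print(f"Relative YAP expression (fold change vs. control): {fold:.2f}")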
The primer pairs used in our PCR analysis are listed below (forward and reverse primer sequences, 5′–3′):

METTL3-F: CGACGGAAGTATCGCTTGTCA
METTL3-R: TTCACCGAGGTCAGCAGTATG
METTL14-F: GTCTTAGTCTTCCCAGGATTGTTT
METTL14-R: AATTGATGAGATTGCAGCACC
FTO-F: GACCTGTCCACCAGATTTTCA
FTO-R: AGCAGAGCAGCATACAACGTA
ALKBH5-F: ACTGAGCACAGTCACGCTTCC
ALKBH5-R: GCCGTCATCAACGACTACCAG
WTAP-F: TTACCTTTCCCACTCACTGCT
WTAP-R: AGATGACTTTCCTTCTTCTCCA
YAP-F: TGCGTAGCCAGTTACCA
YAP-R: GGTGCCACTGTTAAGGA
YTHDF1-F: ACCTGTCCAGCTATTACCCG
YTHDF1-R: TGGTGAGGTATGGAATCGGAG
YTHDF2-F: CAGGCATCAGTAGGGCAACA
YTHDF2-R: TTATGACCGAACCCACTGCC
GAPDH-F: AGCCACATCGCTCAGACAC
GAPDH-R: GCCCAATACGACCAAATCC

Tissue microarrays (TMAs) and Immunohistochemistry (IHC) analysis Osteosarcoma tissue microarrays were purchased from the Bioaitech Company (Xi'an, China), comprising 2 normal bone tissue cores and 100 malignant osteosarcoma cores. The slide was baked at 60 °C for 30 min, followed by antigen retrieval in Tris-EDTA buffer (pH 9.0): the slide was brought to a boil over medium heat for 10 min, left for 5 min with the heat off, and washed with PBS three times for 5 min each. Endogenous peroxidase was blocked with 3% H2O2-methanol at RT for 10 min, followed by washing with PBS three times for 5 min each. The sections were blocked with normal non-immune animal serum at RT for 10 min; the serum was then removed, a drop of anti-ALKBH5 (1:300) was added, and the sections were incubated at 4 °C overnight. They were then washed with 0.1% Tween-20 in PBS three times for 5 min each. Biotin-labeled sheep anti-mouse/rabbit IgG was added and incubated at RT for 10 min, followed by washing with 0.1% Tween-20 in PBS three times for 5 min each. Streptavidin-peroxidase was added and incubated at RT for 10 min. DAB working solution was applied for 5 min and the reaction was stopped by washing with distilled water. After hematoxylin counterstaining, washing, and differentiation, the slide was blued by thorough washing, followed by routine dehydration, clearing, and mounting with neutral gum. The percentage of ALKBH5-positive cells was counted in 5 (×400) high-power fields (upper, lower, left, right, and middle) under the microscope, and the mean values were then calculated.

Western blot analysis Western blot analysis was performed as previously described45. Briefly, osteosarcoma cell lines were lysed in cell lysis buffer (Cat# P0013B; Beyotime Biotechnology, Shanghai, China) supplemented with PMSF protease inhibitor on ice for 30 min, followed by centrifugation at 13,500 × g at 4 °C for 15 min. The protein concentration was quantified using a BCA Protein Assay Kit (Cat# P0010S; Beyotime Biotechnology) following the manufacturer's instructions. The protein sample (50 µg) was separated on a polyacrylamide gel, transferred to a nitrocellulose membrane, and then blocked with 5% fat-free dry milk at RT for 1 h.
Then, the membrane was incubated with a rabbit anti-YAP antibody (1:1000; Cat# D8H1X; Cell Signaling Technology), a rabbit anti-ALKBH5 antibody (1:1000; Cat# ABE547; Millipore, Billerica, USA), a rabbit anti-YTHDF1 antibody (1:1000; Cat# 17479-1-AP; Proteintech, Wuhan, China), a rabbit anti-YTHDF2 antibody (1:1000; Cat# 17479-1-AP; Proteintech), a mouse anti-Tubulin antibody (1:1000; Cat# abs830032; Absin, Shanghai, China), a mouse anti-β-actin antibody (1:1000; Cat# sc-47778; Santa Cruz Biotechnology, Dallas, Texas, United States), or a mouse anti-GAPDH antibody (1:500; Cat# abs830030; Absin) at 4 °C overnight. A secondary incubation step was carried out with monoclonal anti-rabbit IgG (1:5000; Cat# ab97051; Abcam) or monoclonal anti-mouse IgG (1:5000; Cat# ab6789; Abcam) at RT for 1 h. Western blot bands were imaged with an Odyssey CLx and quantified with LI-COR Image Studio Software (LI-COR Biosciences, Lincoln, NE, USA).

siRNA transfection Cells were transfected to knock down the expression of ALKBH5, YAP, YTHDF1, YTHDF2, or miR-181b-5p using Lipofectamine TM 3000 Transfection Reagent (Cat# L3000-015; Invitrogen, California, USA) according to the manufacturer's instructions. Briefly, 2 × 10^5 cells were seeded in a 6-well plate so that they would be approximately 70% confluent at the time of transfection. 6 μL Lipofectamine TM 3000 reagent and 40 nM siRNA (Gene Pharma, Shanghai, China) were each diluted in 125 μL Opti-MEM (Cat# 31985-070; Gibco, Grand Island, USA) medium. After mixing and incubating separately for 2 min, the two reagents were combined, incubated for another 10 min, and then added to the cells. Subsequent experimental measurements were performed 24 h after transfection. The siRNA and miR-181b-5p inhibitor sequences used are as follows:

ALKBH5 siRNA: 5′-CUGCGCAACAAGUACUUCUTT-3′
YAP siRNA: 5′-GGUGAUACUAUCAACCAAATT-3′
YTHDF1 siRNA: 5′-CCUGCUCUUCAGCGUCAAUTT-3′
YTHDF2 siRNA: 5′-AAGGACGUUCCCAAUAGCCAATT-3′
miR-181b-5p inhibitor (AMO): 5′-ACCCACCGACAGCAAUGAAUGUU-3′

Plasmid transfection ALKBH5- and YAP-carrying plasmids for overexpression were constructed by Cyagen (Suzhou, China). The YTHDF1-carrying plasmid was obtained from Genechem (Shanghai, China). The miR-181b-5p mimic was obtained from Gene Pharma. Cells were transfected with 500 ng plasmid using Lipofectamine TM 3000 Transfection Reagent according to the manufacturer's protocols. Cells were collected 24 h after transfection. The miR-181b-5p mimic sequence used is 5′-AACAUUCAUUGCUGUCGGUGGGU-3′.

Ethynyl-2-deoxyuridine (EdU) staining assay The EdU Apollo DNA in vitro kit (Ribobio, Guangzhou, China) was used to detect cell proliferation. Cells were plated into a glass-bottom cell culture dish (NEST, Hong Kong, China) at a density of 2.0 × 10^5. Briefly, cells were fixed with 4% paraformaldehyde (m/v) for 30 min, followed by incubation with 30 μM EdU at 37 °C for 90 min. After permeabilization in 0.5% Triton X-100, the Apollo staining solution was added to the cell culture medium for 30 min in the dark. Finally, the cells were incubated with 20 μg/mL 4′,6-diamidino-2-phenylindole (DAPI) for 10 min.
The EdU index (%) was calculated as the average ratio of the number of EdU-positive cells to total cells in five randomly selected areas under the confocal laser scanning microscope (FV10i).

Invasion assays 24 mm Transwell® chambers (Corning #3412, USA) were used to assess cell invasive ability according to the manufacturer's protocol. 5 × 10^4 cells transfected with plasmid or siRNAs were resuspended in 200 μL serum-free DMEM and seeded in the upper chamber. DMEM containing 10% FBS was added to the lower chamber. After 24 h, cells that had migrated through the membrane were stained with 0.1% crystal violet (Beyotime Biotechnology, China) for 15 min and counted using light microscopy (ECLIPSE TS100, Nikon).

Migration assays Cells were plated into 6-well culture plates at a density of 2.5 × 10^5 cells/mL. When the confluence of cells reached 70%, a wound was created by scraping the cells with a 200 μL pipette tip. Cells were washed with PBS and then transfected with siRNAs or plasmid. Images were captured at 0 and 24 h after wounding with standard light microscopy (ECLIPSE TS100, Nikon, Japan). The wound area was measured using ImageJ software (National Institutes of Health (NIH), USA).

Colony-formation assay Cells transfected with targeted siRNA or plasmid were seeded in a six-well plate at a density of 1000 cells per well and cultured in a humidified atmosphere containing 5% CO2 at a constant temperature of 37 °C to form colonies. Two weeks later, cells were fixed with 100% methanol and stained with 0.1% crystal violet, each for 20 min. Colonies were air-dried and counted. The experiments were repeated three times.

Apoptosis assay The FITC Annexin V Apoptosis Detection Kit I (BD Pharmingen, Cat# 556547) was used to detect apoptosis according to the manufacturer's instructions. Briefly, cells were harvested, washed twice with precooled PBS, and then centrifuged at 1500 rpm for 5 min. Cells were resuspended in 300 μL 1× Binding Buffer; 5 μL FITC-labeled Annexin V and 5 μL propidium iodide (PI) were added to the cell suspension and stained for 15 min. Data were analyzed with CytExpert software.

Human m6A epitranscriptomic microarray analysis Total RNA samples were extracted from ALKBH5-overexpressing U2OS cells and the corresponding negative control cells. The samples were incubated with m6A antibody for immunoprecipitation (IP). The modified RNAs were eluted from the immunoprecipitated magnetic beads as the "IP" fraction, and the unmodified RNAs were recovered from the supernatant as the "Sup" fraction. The RNAs were labeled with Cy5 and Cy3, respectively, as cRNAs in separate reactions using the Arraystar Super RNA Labeling Kit (Arraystar, Rockville, USA). The cRNAs were combined and hybridized onto the Arraystar Mouse Epitranscriptomic Microarray (8×60K, Arraystar). After washing the slides, the arrays were scanned in two-color channels by an Agilent Scanner G2505C. Raw intensities of IP (Cy5-labeled) and Sup (supernatant, Cy3-labeled) were normalized to the average of the log2-scaled spike-in RNA intensities. After spike-in normalization, the probe signals having Present (P) or Marginal (M) QC flags in at least 1 out of 2 samples were retained as the "All Targets Value" in the Excel sheet for determination of the m6A methylation level and m6A quantity. The m6A methylation level was calculated as the percentage of modification based on the IP (Cy5-labeled) and Sup (Cy3-labeled) normalized intensities. The m6A quantity, indicating the degree of m6A methylation of RNAs, was calculated based on the IP-normalized intensities.
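The spike-in normalization and the percentage-of-modification summary described above can be sketched in a few lines of code. The exact arithmetic used by the Arraystar pipeline is not given in the text, so the formulas below (log2 intensity minus the mean log2 spike-in intensity, and %modified = IP/(IP + Sup) × 100) are assumptions for illustration only, as are all the numbers; the >1.2-fold filter mentioned in the next subsection is included for completeness.

import math

def normalize(raw_intensity, spike_in_raw):
    """log2-transform a probe intensity and subtract the mean log2 spike-in intensity."""
    spike_mean = sum(math.log2(x) for x in spike_in_raw) / len(spike_in_raw)
    return math.log2(raw_intensity) - spike_mean

def percent_modified(ip_norm, sup_norm):
    """Assumed percentage of modification from normalized IP (Cy5) and Sup (Cy3) signals."""
    ip_lin, sup_lin = 2 ** ip_norm, 2 ** sup_norm      # back to the linear scale
    return 100.0 * ip_lin / (ip_lin + sup_lin)

if __name__ == "__main__":
    spike_ins = [500.0, 620.0, 480.0]                  # hypothetical spike-in intensities
    control = percent_modified(normalize(900.0, spike_ins), normalize(2100.0, spike_ins))
    alkbh5_oe = percent_modified(normalize(400.0, spike_ins), normalize(2600.0, spike_ins))
    fold_decrease = control / alkbh5_oe
    print(f"control %modified = {control:.1f}, ALKBH5-OE %modified = {alkbh5_oe:.1f}, "
          f"fold decrease = {fold_decrease:.2f}, passes >1.2-fold filter: {fold_decrease > 1.2}")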
Differentially m6A-methylated RNAs were identified by filtering for fold changes >1.2.

m6A quantification m6A RNA methylation was quantified with the m6A RNA methylation assay kit (Cat# ab185912; Abcam, Cambridge, UK) following the manufacturer's protocol. Total RNA samples of 400 ng for each group were used to determine the percentage of m6A. The absorbance was measured at 450 nm using a microplate reader and the percentage of m6A in total RNA was calculated using the following equation: $${\mathrm{m}}^6{\mathrm{A}}\% = \frac{{\left( {{\mathrm{SampleOD}} - {\mathrm{NCOD}}} \right) \div S}}{{\left( {{\mathrm{PCOD}} - {\mathrm{NCOD}}} \right) \div P}} \times 100\%$$ where S represents the amount of input sample RNA in ng, and P the amount of input positive control in ng.

Methylated RNA immunoprecipitation (MeRIP)-qPCR The Magna MeRIP Kit (Cat# CR203146; Millipore, Massachusetts, USA) was used according to the manufacturer's instructions to examine m6A modification on specific genes. Cells were washed twice with ice-cold PBS, harvested, and collected by centrifugation at 1500 rpm at 4 °C for 5 min. After the supernatant was removed, the cells were mixed with 100 μL RIP lysis buffer and the lysate was incubated on ice for 5 min. The cell preparation was then stored at −80 °C. m6A antibody (5 μg) was added to a tube containing magnetic beads, followed by rotation at RT for 30 min. The beads were washed twice with RIP wash buffer and resuspended in 900 μL RIP immunoprecipitation buffer, which was mixed with 100 μL cell lysate cleared by centrifugation at 14,000 rpm at 4 °C for 10 min. After rotation at 4 °C overnight, the beads were washed with high-salt buffer, followed by extraction with RIP wash buffer. RNA enrichment was analyzed by qRT-PCR.

RNA stability assay Cells were seeded in a 6-well plate and transfected with the desired constructs as described above. After 24 h of transfection, cells were treated with actinomycin D (5 μg/mL; Cat# HY-17559; SIGMA-ALDRICH) for 0, 3, and 6 h before collection. Total RNA was isolated for qRT-PCR analysis.

Xenograft tumorigenesis model Three-week-old BALB/c female nude mice were purchased from Beijing Vital River Laboratory Animal Technology Limited Company (Beijing, China) and randomized into three groups. 5 × 10^6 143B cells were subcutaneously injected into the mice, and the tumor volume was assessed every 2 weeks. Eight weeks after injection, the animals were killed. The xenograft tumors were harvested and the tumor volumes were calculated by the standard formula: length × width^2/2. All animal studies were approved by the Animal Care and Use Committee of Harbin Medical University. The investigators were blinded to the group allocation during the experiments of the study.

Statistical analysis Data are expressed as mean ± SEM. Statistical analyses were performed using GraphPad Prism 5 software and analyzed with Student's t-test (two-tailed). All experiments were independently repeated at least three times, with similar results obtained. *P < 0.05; **P < 0.01; ***P < 0.001.

References Ottaviani, G. & Jaffe, N. The epidemiology of osteosarcoma. Cancer Treat. Res. 152, 3–13 (2009). Mirabello, L., Troisi, R. J. & Savage, S. A. Osteosarcoma incidence and survival rates from 1973 to 2004: data from the Surveillance, Epidemiology, and End Results Program. Cancer 115, 1531–1543 (2009). Yuan, G., Chen, J., Wu, D. & Gao, C. Neoadjuvant chemotherapy combined with limb salvage surgery in patients with limb osteosarcoma of Enneking stage II: a retrospective study. OncoTargets Ther.
10, 2745–2750 (2017). Cao, G., Li, H. B., Yin, Z. & Flavell, R. A. Recent advances in dynamic m6A RNA modification. Open Biol. 6, 160003 (2016). Yue, Y., Liu, J. & He, C. RNA N6-methyladenosine methylation in post-transcriptional gene expression regulation. Genes Dev. 29, 1343–1355 (2015). Desrosiers, R., Friderici, K. & Rottman, F. Identification of methylated nucleosides in messenger RNA from Novikoff hepatoma cells. Proc. Natl Acad. Sci. USA 71, 3971–3975 (1974). Liu, J. et al. A METTL3-METTL14 complex mediates mammalian nuclear RNA N6-adenosine methylation. Nat. Chem. Biol. 10, 93–95 (2014). Ping, X. L. et al. Mammalian WTAP is a regulatory subunit of the RNA N6-methyladenosine methyltransferase. Cell Res. 24, 177–189 (2014). Yang, Y., Hsu, P. J., Chen, Y. S. & Yang, Y. G. Dynamic transcriptomic m(6)A decoration: writers, erasers, readers and functions in RNA metabolism. Cell Res. 28, 616–624 (2018). Pan, Y., Ma, P., Liu, Y., Li, W. & Shu, Y. Multiple functions of m(6)A RNA methylation in cancer. J. Hematol. Oncol. 11, 48 (2018). Lan, Q. et al. The critical role of RNA m(6)A methylation in cancer. Cancer Res. 79, 1285–1292 (2019). Lin, X. et al. RNA m(6)A methylation regulates the epithelial mesenchymal transition of cancer cells and translation of Snail. Nat. Commun. 10, 2065 (2019). Yang, S. et al. m(6)A mRNA demethylase FTO regulates melanoma tumorigenicity and response to anti-PD-1 blockade. Nat. Commun. 10, 2782 (2019). Huang, H., Weng, H. & Chen, J. m(6)A modification in coding and non-coding RNAs: roles and therapeutic implications in cancer. Cancer Cell 37, 270–288 (2020). Li, Z. et al. FTO plays an oncogenic role in acute myeloid leukemia as a N(6)-methyladenosine RNA demethylase. Cancer Cell 31, 127–141 (2017). Cho, S. H. et al. ALKBH5 gene is a novel biomarker that predicts the prognosis of pancreatic cancer: a retrospective multicohort study. Ann. Hepato-Biliary-Pancreat. Surg. 22, 305–309 (2018). Dixit, D., Xie, Q., Rich, J. N. & Zhao, J. C. Messenger RNA methylation regulates glioblastoma tumorigenesis. Cancer Cell 31, 474–475 (2017). Zheng, G. et al. ALKBH5 is a mammalian RNA demethylase that impacts RNA metabolism and mouse fertility. Mol. Cell 49, 18–29 (2013). Wu, Y. et al. Circular RNA circTADA2A promotes osteosarcoma progression and metastasis by sponging miR-203a-3p and regulating CREB3 expression. Mol. Cancer 18, 73 (2019). Yang, Z. et al. Circular RNAs: regulators of cancer-related signaling pathways and potential diagnostic biomarkers for human cancers. Theranostics 7, 3106–3117 (2017). Zhang, X. et al. The role of YAP/TAZ activity in cancer metabolic reprogramming. Mol. Cancer 17, 134 (2018). Maugeri-Sacca, M. & De Maria, R. The Hippo pathway in normal development and cancer. Pharmacol. Ther. 186, 60–72 (2018). Wang, X. et al. N6-methyladenosine-dependent regulation of messenger RNA stability. Nature 505, 117–120 (2014). Wang, X. et al. N(6)-methyladenosine modulates messenger RNA translation efficiency. Cell 161, 1388–1399 (2015). Li, J. et al. The m6A demethylase FTO promotes the growth of lung cancer cells by regulating the m6A level of USP7 mRNA. Biochem. Biophys. Res. Commun. 512, 479–485 (2019). Bai, Y. et al. YTHDF1 regulates tumorigenicity and cancer stem cell-like activity in human colorectal carcinoma. Front. Oncol. 9, 332 (2019). Niu, Y. et al. RNA N6-methyladenosine demethylase FTO promotes breast tumor progression through inhibiting BNIP3. Mol. Cancer 18, 46 (2019). Miao, W., Chen, J., Jia, L., Ma, J. & Song, D. 
The m6A methyltransferase METTL3 promotes osteosarcoma progression by regulating the m6A level of LEF1. Biochem. Biophys. Res. Commun. 516, 719–725 (2019). Huang, H. et al. Recognition of RNA N(6)-methyladenosine by IGF2BP proteins enhances mRNA stability and translation. Nat. Cell Biol. 20, 285–295 (2018). Zhou, X. et al. YAP aggravates inflammatory bowel disease by regulating M1/M2 macrophage polarization and gut microbial homeostasis. Cell Rep. 27, 1176–1189.e1175 (2019). Greenbaum, D., Colangelo, C., Williams, K. & Gerstein, M. Comparing protein abundance and mRNA expression levels on a genomic scale. Genome Biol. 4, 117 (2003). Lund, E. & Dahlberg, J. E. Substrate selectivity of exportin 5 and Dicer in the biogenesis of microRNAs. Cold Spring Harb. Symp. Quant. Biol. 71, 59–66 (2006). Li, Z., Shen, J., Chan, M. T. & Wu, W. K. MicroRNA-379 suppresses osteosarcoma progression by targeting PDK1. J. Cell. Mol. Med. 21, 315–323 (2017). Wang, S. N. et al. miR-491 inhibits osteosarcoma lung metastasis and chemoresistance by targeting alphaB-crystallin. Mol. Ther. 25, 2140–2149 (2017). Du, H. et al. YTHDF2 destabilizes m(6)A-containing RNA through direct recruitment of the CCR4-NOT deadenylase complex. Nat. Commun. 7, 12626 (2016). Ahmed, A. A., Mohamed, A. D., Gener, M., Li, W. & Taboada, E. YAP and the Hippo pathway in pediatric cancer. Mol. Cell. Oncol. 4, e1295127 (2017). Zanconato, F., Cordenonsi, M. & Piccolo, S. YAP/TAZ at the roots of cancer. Cancer Cell 29, 783–803 (2016). Lee, J. Y. et al. YAP-independent mechanotransduction drives breast cancer progression. Nat. Commun. 10, 1848 (2019). Zanconato, F., Battilana, G., Cordenonsi, M. & Piccolo, S. YAP/TAZ as therapeutic targets in cancer. Curr. Opin. Pharmacol. 29, 26–33 (2016). Li, J. A. et al. TRAF6 regulates YAP signaling by promoting the ubiquitination and degradation of MST1 in pancreatic cancer. Clin. Exp. Med. 19, 211–218 (2019). Yang, X. et al. NTRK1 is a positive regulator of YAP oncogenic function. Oncogene 38, 2778–2787 (2019). Seo, J. et al. MK5 regulates YAP stability and is a molecular target in YAP-driven cancers. Cancer Res. https://doi.org/10.1158/0008-5472.CAN-19-1339 (2019). Chan, L. H. et al. Hedgehog signaling induces osteosarcoma development through Yap1 and H19 overexpression. Oncogene 33, 4857–4866 (2014). Wang, D. Y. et al. Hippo/YAP signaling pathway is involved in osteosarcoma chemoresistance. Chin. J. Cancer 35, 47 (2016). Yeh, C. M. et al. Melatonin as a potential inhibitory agent in head and neck cancer. Oncotarget 8, 90545–90556 (2017). This work was supported by grants from the National Natural Science Fund of China (81972117), Natural Science Foundation of Heilongjiang Province of China for outstanding youth (YQ2020H019), College of Pharmacy, Harbin Medical University Excellent Young Talents Funding (2019-YQ-13), Heilongjiang Innovative Talent Training Fund for Young Teachers (UNPYSCT-2017073), Special Fund for the Development of Local Colleges and Universities supported by the Central Finance (Outstanding Young Talents Support Project, 0103/30011190007), and Youth Reserve Talent Fund of Harbin Science and Technology Bureau (2017RAQXJ163). 
These authors contributed equally: Ye Yuan, Gege Yan, Mingyu He Department of Orthopedics at The First Affiliated Hospital, and Department of Pharmacology at College of Pharmacy (The Key Laboratory of Cardiovascular Medicine Research, Ministry of Education), Harbin Medical University, 150086, Harbin, China Ye Yuan, Gege Yan, Mingyu He, Hong Lei, Xiaoqi He, Guanghui Li, Quan Wang, Yuelin Gao, Zhezhe Qu, Zhongting Mei, Wei Zhao, Huiwei Jiang, Weijie Du & Lei Yang Department of Pharmacy, The Second Affiliated Hospital of Harbin Medical University, 150086, Harbin, China Ye Yuan, Yang Wang, Zhihua Shen, Jiaying Pu, Ao Wang, Wei Zhao & Huiwei Jiang Department of Clinical pharmacology, College of Pharmacy, Harbin Medical University, 150086, Harbin, China Ye Yuan, Yang Wang, Zhihua Shen, Jiaying Pu & Ao Wang Research Unit of Noninfectious Chronic Diseases in Frigid Zone, Chinese Academy of Medical Sciences, 2019RU070, Harbin, China Ye Yuan Department of General Surgery, The First Affiliated Hospital of Harbin Medical University, 150001, Harbin, China Linqiang Li Y.Y., G.Y., M.H., H.L, L.L., Y.W., X.H., G.L., Q.W., Y.G., Z.Q., Z.M., Z.S., J.P., A.W., W.Z., and H.J. performed research; Y.Y., G.Y., M.H., and H.L. analyzed data; Y.Y., W.D., and L.Y. designed the study and wrote the manuscript. Correspondence to Weijie Du or Lei Yang. Written informed consent was obtained from all participants in accordance with the Declaration of Helsinki. All the collection of specimens and animal handling in this study was reviewed and approved by the Medical Ethics Committee of the Second Affiliated Hospital of Harbin Medical University (KY2018-185). Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Edited by G. Ciliberto Figure S1 Supplementary Figure Legends Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Yuan, Y., Yan, G., He, M. et al. ALKBH5 suppresses tumor progression via an m6A-dependent epigenetic silencing of pre-miR-181b-1/YAP signaling axis in osteosarcoma. Cell Death Dis 12, 60 (2021). https://doi.org/10.1038/s41419-020-03315-x
CommonCrawl
Computer Science > Data Structures and Algorithms Title:Deterministic algorithms for the Lovasz Local Lemma: simpler, more general, and more parallel Authors:David G. Harris (Submitted on 17 Sep 2019) Abstract: The Lovasz Local Lemma (LLL) is a keystone principle in probability theory, guaranteeing the existence of configurations which avoid a collection $\mathcal B$ of "bad" events which are mostly independent and have low probability. In its simplest "symmetric" form, it asserts that whenever a bad-event has probability $p$ and affects at most $d$ bad-events, and $e p d < 1$, then a configuration avoiding all $\mathcal B$ exists. A seminal algorithm of Moser & Tardos (2010) gives nearly-automatic randomized algorithms for most constructions based on the LLL. However, deterministic algorithms have lagged behind. We address three specific shortcomings of the prior deterministic algorithms. First, our algorithm applies to the LLL criterion of Shearer (1985); this is more powerful than alternate LLL criteria and also removes a number of nuisance parameters and leads to cleaner and more legible bounds. Second, we provide parallel algorithms with much greater flexibility in the functional form of of the bad-events. Third, we provide a derandomized version of the MT-distribution, that is, the distribution of the variables at the termination of the MT algorithm. We show applications to non-repetitive vertex coloring, independent transversals, strong coloring, and other problems. These give deterministic algorithms which essentially match the best previous randomized sequential and parallel algorithms. Comments: This superseded arXiv:1807.06672 Subjects: Data Structures and Algorithms (cs.DS) Cite as: arXiv:1909.08065 [cs.DS] (or arXiv:1909.08065v1 [cs.DS] for this version) From: David Harris [view email] [v1] Tue, 17 Sep 2019 20:02:39 UTC (49 KB) cs.DS
CommonCrawl
CanaDAM 2019 SFU Harbour Centre, May 29 - 31, 2019 canadam.math.ca/2019 Schedule - Contributed Minisymposia Please note that schedules are subject to change without notice, particularly changes within a given session.

Analytic and Probabilistic Techniques in Combinatorics - Part I (CM1)
Analytic and Probabilistic Techniques in Combinatorics - Part II (CM2)
Average Graph Parameters - Part I (CM3)
Average Graph Parameters - Part II (CM4)
Bootstrap Percolation (CM5)
Colourings and homomorphisms (CM6)
Covering Arrays - Part I (CM7)
Covering Arrays - Part II (CM8)
Design Theory - Part I (CM9)
Design Theory - Part II (CM10)
Design Theory - Part III (CM11)
Elegant and Discrete Mathematics (CM12)
Finite Fields in Discrete Mathematics - Part I (CM13)
Finite Fields in Discrete Mathematics - Part II (CM14)
Finite Geometries and Applications (CM15)
Graph Polynomials - Part I (CM16)
Graph Polynomials - Part II (CM17)
Graph Searching Games - Part I (CM18)
Graph Searching Games - Part II (CM19)
Graph Structure and Algorithms (CM20)
Matching Theory (CM21)
Minisymposium in honor of Frank Ruskey's 65th birthday (CM22)
Optimization, Geometry and Graphs (CM23)
Structured families of graphs and digraphs: characterizations, algorithms and partition problems (CM24)
Symmetry in Graphs - Part I (CM25)
Symmetry in Graphs - Part II (CM26)

Org: Jan Volec (Emory University and Universitat Hamburg) Contemporary combinatorics is an exciting and rapidly growing discipline on the frontier of mathematics and computer science. Many new techniques in combinatorics rely on applications of tools from other mathematical areas such as algebra, analysis and probability. In the last decade, various novel methods have emerged. For example, recent works in the probabilistic method culminated with the celebrated container method, which answered many long-standing open problems; new developments of algebraic techniques were crucial in settling famous conjectures in design theory and number theory; and analytic approaches to Szemerédi's regularity lemma served as the cornerstone of graph limits, which then spun off into techniques for large networks and the development of flag algebras. In this mini-symposium, we aim to bring together researchers in combinatorics to present further developments and applications of these methods, and to talk about completely new approaches. We will discuss relevant open problems, exchange research ideas, and initiate new collaborations.
10:30 - 10:50 Debsoumya Chakraborti (Carnegie Mellon University), Extremal Graphs With Local Covering Conditions, Canfor Policy Room 1600 10:55 - 11:15 Joonkyung Lee (Universitat Hamburg), On triangulated common graphs, Canfor Policy Room 1600 11:20 - 11:40 Jon Noel (University of Warwick), Cycles of length three and four in tournaments, Canfor Policy Room 1600 11:45 - 12:05 Yanitsa Pehova (University of Warwick), Decomposing graphs into edges and triangles, Canfor Policy Room 1600 12:10 - 12:30 Florian Pfender (University of Colorado Denver), 5-Cycles in Graphs, Canfor Policy Room 1600 15:30 - 15:50 Robert Hancock (Masaryk University), Some results in 1-independent percolation, Canfor Policy Room 1600 15:55 - 16:15 Guilherme Oliveira Mota (Universidade Federal do ABC), The multicolour size-Ramsey number of powers of paths, Canfor Policy Room 1600 16:20 - 16:40 Robert Šámal (Charles University), A rainbow version of Mantel's Theorem, Canfor Policy Room 1600 16:45 - 17:05 Maryam Sharifzadeh (University of Warwick), Graphons with minimum clique density, Canfor Policy Room 1600 Org: Lucas Mol and Ortrud Oellermann (University of Winnipeg) Probably the oldest and most well-known average graph parameter, the average distance of a graph - also known as the Wiener index, dates back to 1947. Of particular interest is the close correlation of the Wiener index of the molecular graph and the chemical properties of the substance such as the boiling point, viscosity and surface tension. In this minisymposium results on various average graph parameters such as the average distance in a digraph, the average order of subtrees of trees and some of its generalizations, as well as the average connectivity of graphs and digraphs are presented. 10:30 - 10:50 Lucas Mol (University of Winnipeg), The Mean Subtree Order and the Mean Connected Induced Subgraph Order, Sauder Industries Policy Room 2270 10:55 - 11:15 Stephan Wagner (Stellenbosch University), Extremal subtree densities of trees, Sauder Industries Policy Room 2270 11:20 - 11:40 Hua Wang (Georgia Southern University), Average distance between leaves and peripheral vertices, Sauder Industries Policy Room 2270 11:45 - 12:05 Pengyu Liu (Simon Fraser University), A polynomial metric on rooted binary tree shapes, Sauder Industries Policy Room 2270 15:30 - 15:50 Stijn Cambie (Radboud University), Asymptotic resolution of a question of Plesník, Sauder Industries Policy Room 2270 15:55 - 16:15 Peter Dankelmann (University of Johannesburg), The average distance of maximal planar graphs, Sauder Industries Policy Room 2270 16:20 - 16:40 Suil O (State University of New York, Korea), Average connectivity and average edge-connectivity in graphs, Sauder Industries Policy Room 2270 16:45 - 17:05 Ortrud Oellermann (University of Winnipeg), The average connectivity of minimally $2$-connected graphs, Sauder Industries Policy Room 2270 Org: Natasha Morrison (Instituto National de Matemática Pura e Aplicada) and Jonathan Noel (University of Warwick) Bootstrap percolation is a process on graphs which models real world phenomena including the dynamics of ferromagnetism and the spread of opinions in a social network. Topics covered in this minisymposium include recent breakthroughs on old and difficult problems alongside some of the most exciting new research directions in the area. 
Tuesday May 28 15:30 - 15:50 Janko Gravner (University of California, Davis), Polluted Bootstrap Percolation, McCarthy Tetrault Lecture Room 2245 15:55 - 16:15 Lianna Hambardzumyan (McGill University), Polynomial method and graph bootstrap percolation, McCarthy Tetrault Lecture Room 2245 16:20 - 16:40 David Sivakoff (The Ohio State University), Bootstrap percolation on Cartesian products of lattices with Hamming graphs, McCarthy Tetrault Lecture Room 2245 16:45 - 17:05 Ivailo Hartarsky (École normale supérieure de Lyon), The second term for two-neighbour bootstrap percolation in two dimensions, McCarthy Tetrault Lecture Room 2245 Org: Gary MacGillivray (University of Victoria) The talks focus on aspects of graph colouring and homomorphisms including fractional colourings, oriented colourings, geometric homomorphisms and reconfiguration problems. 10:30 - 10:50 Debra Boutin (Hamilton College), Geometric Homomorphisms and the Geochromatic Number, Scotiabank Lecture Room 1315 10:55 - 11:15 Richard Brewster (Thompson Rivers University), The complexity of signed graph homomorhpisms, Scotiabank Lecture Room 1315 11:20 - 11:40 Christopher Duffy (University of Saskatchewan), Colourings, Simple Colourings, and a Connection to Bootstrap Percolation, Scotiabank Lecture Room 1315 11:45 - 12:05 John Gimbel (University of Alaska), Bounds on the fractional chromatic number of a graph., Scotiabank Lecture Room 1315 12:10 - 12:30 Jae-Baek Lee (Kyungpook National University), Reconfiguring Reflexive Digraphs, Scotiabank Lecture Room 1315 Org: Lucia Moura (University of Ottawa) and Brett Stevens (Carleton University) A covering array with $N$ rows, $k$ columns, $v$ symbols and strength $t$ is an $N \times k$ array with entries from a $v$-ary alphabet such that each of its subarrays with $t$ columns contains every $t$-tuple of the alphabet at least once as a row. Covering arrays have gained a lot of attention in the theory of combinatorial designs and in applications to software and network testing. Classical covering arrays and their many generalizations have interesting relations to areas of combinatorics such as extremal set theory, finite fields, graph homomorphisms, covering codes and combinatorial group testing. Methods for their construction range from recursive and algebraic to probabilistic and computational. In this two-part mini-symposium, we have a collection of talks highlighting current research on various aspects of covering arrays. 10:30 - 10:50 Brett Stevens (Carleton University), Introduction to covering arrays, Canfor Policy Room 1600 10:55 - 11:15 Yasmeen Akhtar (Arizona State University, USA), Constructing High Index Covering Arrays and Their Application to Design of Experiments, Canfor Policy Room 1600 11:20 - 11:40 Kirsten Nelson (Carleton University), Constructing covering arrays from interleaved sequences, Canfor Policy Room 1600 11:45 - 12:05 Myra B. Cohen (Iowa State University), Learning to Build Covering Arrays with Hyperheuristic Search, Canfor Policy Room 1600 A covering array with $N$ rows, $k$ columns, $v$ symbols and strength $t$ is an $N\times k$ array with entries from a $v$-ary alphabet such that each of its subarrays with $t$ columns contains every $t$-tuple of the alphabet at least once as a row. Covering arrays have gained a lot of attention in the theory of combinatorial designs and in applications to software and network testing. 
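To make the covering-array definition above concrete (independently of the constructions discussed in these talks), here is a small, hypothetical brute-force checker; the 5 x 4 binary example it verifies is a strength-2 covering array.

```python
from itertools import combinations

def is_covering_array(array, v, t):
    """Check that `array` (N rows of length k with entries in {0,...,v-1})
    is a covering array of strength t: every choice of t columns must
    contain every t-tuple over the alphabet in at least one row."""
    k = len(array[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        if len(seen) < v ** t:          # some t-tuple is missing
            return False
    return True

# Example: N = 5 rows, k = 4 columns, v = 2 symbols, strength t = 2.
ca = [
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
]
print(is_covering_array(ca, v=2, t=2))  # True
```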
Classical covering arrays and their many generalizations have interesting relations to areas of combinatorics such as extremal set theory, finite fields, graph homomorphisms, covering codes and combinatorial group testing. Methods for their construction range from recursive and algebraic to probabilistic and computational. In this two-part mini-symposium, we have a collection of talks highlighting current research on various aspects of covering arrays. 15:30 - 15:50 Lucia Moura (University of Ottawa), Getting hyper with covering arrays, Canfor Policy Room 1600 15:55 - 16:15 Anant Godbole (East Tennessee State University, USA), Covering Arrays for Some Equivalence Classes of Words, Canfor Policy Room 1600 16:20 - 16:40 Muhammad Javed (Ryerson University), Sequence Covering Arrays, Canfor Policy Room 1600 16:45 - 17:05 André Castoldi (Universidade Tecnológica Federal do Paraná, Brazil), Bounds on Covering Codes in Rosenbloom-Tsfasman Spaces using Ordered Covering Arrays, Canfor Policy Room 1600 Org: Andrea Burgess (University of New Brunswick), Peter Danziger (Ryerson University) and David Pike (Memorial University of Newfoundland) 2019 marks the 175th anniversary of the birth of F\'{e}lix Walecki, who did pioneering work in design theory, particularly in factorizations and cycle decompositions of the complete graph. In addition to celebrating this event, this minisymposium brings together leading and emerging researchers in combinatorial design theory to share their results pertaining to designs and related structures, their properties and applications. Wednesday May 29 15:30 - 15:50 Esther Lamken (California Institute of Technology), Constructions and uses of incomplete pairwise balanced designs, Canadian Pacific Lecture Room 1530 15:55 - 16:15 Peter Dukes (University of Victoria), Packings of 4-cliques in complete graphs, Canadian Pacific Lecture Room 1530 16:20 - 16:40 Flora Bowditch (University of Victoria), Localized Structure in Graph Decompositions, Canadian Pacific Lecture Room 1530 16:45 - 17:05 Iren Darijani (Memorial University of Newfoundland), k-colourings of star systems, Canadian Pacific Lecture Room 1530 10:30 - 10:50 Marco Buratti (Università degli Studi di Perugia), Cyclic designs: some selected topics, McLean Management Studies Lab 2945 10:55 - 11:15 Saad El-Zanati (Illinois State University), On edge orbits and hypergraph designs, McLean Management Studies Lab 2945 11:20 - 11:40 Francesca Merola (Università Roma Tre), Cycle systems of the complete multipartite graph, McLean Management Studies Lab 2945 11:45 - 12:05 Mateja Sajna (University of Ottawa), On the Honeymoon Oberwolfach Problem, McLean Management Studies Lab 2945 12:10 - 12:30 Sibel Ozkan (Gebze Technical University), On The Hamilton-Waterloo Problem and its Generalizations, McLean Management Studies Lab 2945 15:30 - 15:50 Doug Stinson (University of Waterloo), Constructions of optimal orthogonal arrays with repeated rows, McLean Management Studies Lab 2945 15:55 - 16:15 Brett Stevens (Carleton University), Affine planes with ovals for blocks, McLean Management Studies Lab 2945 16:20 - 16:40 Trent Marbach (Nankai University), Balanced Equi-n-squares, McLean Management Studies Lab 2945 16:45 - 17:05 Hadi Kharighani (University of Lethbridge), Unbiased Orthogonal Designs, McLean Management Studies Lab 2945 Org: Karen Meagher (University of Regina) Discrete math is famous for being an area of mathematics where the problems are easy to state, but difficult to prove. 
This session will focus on results where the problems are easy to state, but the solutions are surprisingly elegant and give deeper insight into the mathematics behind the problem. The speakers will each describe an elegant new result in their field. The talks will focus on key ideas in the proofs and the intriguing aspects of their results. The goal is to offer some entry points into modern algebraic combinatorics, enumerative combinatorics, graph theory, and extremal set theory. 15:30 - 15:50 Karen Meagher (University of Regina), All 2-transitive groups have the Erdos-Ko-Rado Property, McLean Management Studies Lab 2945 15:55 - 16:15 Marni Mishna (Simon Fraser University), On the complexity of the cogrowth sequence, McLean Management Studies Lab 2945 16:20 - 16:40 Jessica Striker (North Dakota State University), Bijections - Marvelous, Mysterious, and Missing, McLean Management Studies Lab 2945 16:45 - 17:05 Steph van Willigenburg (Univeristy of British Columbia), The positivity of trees, McLean Management Studies Lab 2945 17:10 - 17:30 Hanmeng (Harmony) Zhan (Université de Montréal), Some elegant results in algebraic graph theory, McLean Management Studies Lab 2945 Org: Petr Lisonek (Simon Fraser University) and Daniel Panario (Carleton University) In this minisymposium several topics in discrete mathematics where finite fields play an important role are presented. The talks show the use of finite fields to construct combinatorial objects and to prove interesting results in areas such as designs, graphs, Latin squares, cryptography, Boolean functions, codes and sequences, algebraic curves and finite geometries, among others. 10:30 - 10:50 Daniel Panario (Carleton University), Finite Fields in Discrete Mathematics, Sauder Industries Policy Room 2270 10:55 - 11:15 Thais Bardini Idalino (University of Ottawa), Embedding cover-free families and cryptographical applications, Sauder Industries Policy Room 2270 11:20 - 11:40 Daniele Bartoli (University of Perugia), More on exceptional scattered polynomials, Sauder Industries Policy Room 2270 11:45 - 12:05 Claudio Qureshi (University of Campinas), Dynamics of Chebyshev polynomials over finite fields, Sauder Industries Policy Room 2270 12:10 - 12:30 Anne Canteaut (Inria Paris), Searching for APN permutations with the butterfly construction, Sauder Industries Policy Room 2270 15:30 - 15:50 Sihem Mesnager (University of Paris VIII), On good polynomials over finite fields for optimal locally recoverable codes, Sauder Industries Policy Room 2270 15:55 - 16:15 Lucas Reis (University of Sao Paulo), Permutations of finite sets from an arithmetic setting, Sauder Industries Policy Room 2270 16:20 - 16:40 Daniel Katz (California State University, Northridge), Nonvanishing minors and uncertainty principles for Fourier analysis over finite fields, Sauder Industries Policy Room 2270 16:45 - 17:05 Ariane Masuda (City University of New York), Functional Graphs of R\'edei Functions, Sauder Industries Policy Room 2270 17:10 - 17:30 Petr Lisonek (Simon Fraser University), Maximally non-associative quasigroups, Sauder Industries Policy Room 2270 Org: Sam Mattheus (Vrije Universiteit Brussel) Finite geometries is the research field in which finite incidence structures, often defined over finite fields, are investigated. Among the structures of interest are vector spaces and projective spaces, generalized polygons and others. The study of these structures and their substructures is the central topic in this area for several reasons. 
Plenty of these substructures are investigated for their intrinsic importance and interest, others are investigated because of their relation to other research areas such as coding theory, graph theory and even number theory. In this symposium we will have a mix of both, presenting purely geometrical problems, graph theoretical problems with geometrical roots, applications to coding theory and even an application in number theory over finite fields. 10:30 - 10:50 Sam Mattheus (Vrije Universiteit Brussel), Number theory in finite fields from a geometrical point of view, Cominco Policy Room 1415 10:55 - 11:15 Jozefien D'haeseleer (Universiteit Gent), Projective solids pairwise intersecting in at least a line, Cominco Policy Room 1415 11:20 - 11:40 Jan De Beule (Vrije Universiteit Brussel), A lower bound on the size of linear sets on a projective line of finite order, Cominco Policy Room 1415 11:45 - 12:05 Lins Denaux (Universiteit Gent), Small weight code words in the code of points and hyperplanes of PG(n,q), Cominco Policy Room 1415 12:10 - 12:30 Lisa Hernandez Lucas (Vrije Universiteit Brussel), Dominating sets in finite generalized quadrangles, Cominco Policy Room 1415 Org: Danielle Cox (Mount Saint Vincent University) and Christopher Duffy (University of Saskatchewan) Polynomials are powerful mathematical models. Many combinatorial sequences can be investigated via their associated generating polynomial. The study of graph polynomials can be found in the literature of many combinatorial problems. For instance, one can investigate combinatorial sequences associated with graph properties, such as independence or domination, by looking at the analytic properties of the associated generating polynomial. Other combinatorial problems, such as network reliability and graph colouring, are modelled using polynomials. This two part mini-symposium will highlight interesting new results related to the study of graph polynomials. 10:30 - 10:50 Iain Beaton (Dalhousie University), Independence Equivalence Class of Paths and Cycles, Canfor Policy Room 1600 10:55 - 11:15 Ben Cameron (Dalhousie University), The Maximum Modulus of an Independence Root, Canfor Policy Room 1600 11:20 - 11:40 Mackenzie Wheeler (University of Victoria), Chromatic Uniqueness of Mixed Graphs, Canfor Policy Room 1600 11:45 - 12:05 Lucas Mol (University of Winnipeg), The Subtree Polynomial, Canfor Policy Room 1600 12:10 - 12:30 Lise Turner (University of Waterloo), Convergence of Coefficients of the Rank Polynomial in Benjamini-Schramm Convergent Sequences of Graphs, Canfor Policy Room 1600 Org: Danielle Cox (Mount Saint Vincent University) and Christopher Duffy (University of Saskatchewan) 15:30 - 15:50 David Wagner (University of Waterloo), Ursell inequalities for random spanning trees, Canfor Policy Room 1600 15:55 - 16:15 Christopher Duffy (University of Saskatchewan), The Oriented Chromatic Polynomial, Canfor Policy Room 1600 16:20 - 16:40 Nicholas Harvey (University of British Columbia), Computing the Independence Polynomial in Shearer's Region for the Lovasz Local Lemma, Canfor Policy Room 1600 16:45 - 17:05 Danielle Cox (Mount Saint Vincent University), Optimal Graphs for Domination Polynomials, Canfor Policy Room 1600 Org: Anthony Bonato (Ryerson University) and Danielle Cox (Mount Saint Vincent University) In graph searching games such as Cops and Robbers, agents must capture or slow an intruder loose on a network. The rules of the game dictate how the players move and how capture occurs.
The associated optimization parameter in Cops and Robbers is the cop number, which measures how many cops are needed for a guaranteed capture. The study of the cop number has led to a number of unsolved problems, ranging from Meyniel's conjecture to Schroeder's conjecture on graphs with bounded genus. Cops and Robbers is only one graph searching game among many others, and graph searching intersects with algorithmic, structural, and probabilistic graph theory. Other recent graph searching games and processes that have generated interest are Zombies and Survivors, localization, graph burning, and Firefighting. The proposed minisymposium brings together leading researchers in graph searching, who will present state-of-the-art research in this direction. 10:30 - 10:50 Anthony Bonato (Ryerson University), Bounds and algorithms for graph burning, Sauder Industries Policy Room 2270 10:55 - 11:15 Nancy Clarke (Acadia University), $\ell$-Visibility Cops and Robber, Sauder Industries Policy Room 2270 11:20 - 11:40 Sean English (Ryerson University), Catching Robbers Quickly and Efficiently, Sauder Industries Policy Room 2270 11:45 - 12:05 Natasha Komarov (St. Lawrence University), Containing a robber on a graph, Sauder Industries Policy Room 2270 15:30 - 15:50 Bill Kinnersley (University of Rhode Island), Cops and Lawless Robbers, Sauder Industries Policy Room 2270 15:55 - 16:15 Kerry Ojakian (Bronx Community College (C.U.N.Y.)), Graphs that are cop-win, but not zombie-win, Sauder Industries Policy Room 2270 16:20 - 16:40 Pawel Pralat (Ryerson University), Zero Forcing Number of Random Regular Graphs, Sauder Industries Policy Room 2270 16:45 - 17:05 Ladislav Stacho (Simon Fraser University), Efficient Periodic Graph Traversal on Graphs with a Given Rotation System, Sauder Industries Policy Room 2270 Org: Kathie Cameron (Wilfrid Laurier University) and Shenwei Huang (Nankai University) Graph algorithms are at the core of discrete mathematics and computer science. They play an increasingly critical role in fundamental research as well as real applications. In this mini-symposium, we will hear a variety of exciting developments on complexity of graph problems such as colouring, $\chi$-bounds, clique minors, and hamiltonian cycles, and on structure of important classes of graphs and digraphs. 10:30 - 10:50 Kathie Cameron (Wilfrid Laurier University), Hadwiger's Conjecture for (Cap, Even Hole)-Free Graphs, Cominco Policy Room 1415 10:55 - 11:15 Owen Merkel (University of Waterloo), An optimal $\chi$-Bound for ($P_6$, diamond)-free graphs, Cominco Policy Room 1415 11:20 - 11:40 Juraj Stacho (Google Zurich), 3-colorable Subclasses of $P_8$-free Graphs, Cominco Policy Room 1415 11:45 - 12:05 César Hernández Cruz (CINVESTAV Mexico), On the Pancyclicity of $k$-quasi-transitive Digraphs of Large Diameter, Cominco Policy Room 1415 12:10 - 12:30 Pavol Hell (Simon Fraser University), Bipartite Analogues of Comparability and Co-comparability Graphs, Cominco Policy Room 1415 Org: Nishad Kothari (University of Campinas) Matching Theory pertains to the study of perfect matchings in graphs. It is one of the oldest branches of graph theory that finds many applications in combinatorial optimization, and that continues to inspire new results. For several problems in Matching Theory, such as counting the number of perfect matchings, one may restrict attention to `matching covered' or `1-extendable' graphs --- connected graphs in which each edge lies in some perfect matching.
Lov\'asz and Plummer (1986) provide a comprehensive treatment of the subject in their book ``Matching Theory''. Since then, a lot more work has been done to further our understanding of the structure of $1$-extendable graphs, as well as their generalization `$k$-extendable' graphs --- connected graphs in which every matching of size $k$ may be extended to a perfect matching. In this minisymposium, we shall cover some of the recent developments in this beautiful area that continues to blossom. 10:30 - 10:50 Marcelo Carvalho (Federal University of Mato Grosso do Sul (UFMS)), Birkhoff--von Neumann Graphs that are PM-compact, McCarthy Tetrault Lecture Room 2245 10:55 - 11:15 Nishad Kothari (University of Campinas (UNICAMP)), Constructing $K_4$-free bricks that are Pfaffian, McCarthy Tetrault Lecture Room 2245 11:20 - 11:40 Phelipe Fabres (Federal University of Mato Grosso do Sul (UFMS)), Minimal Braces, McCarthy Tetrault Lecture Room 2245 11:45 - 12:05 Michael Plummer (Vanderbilt University), Distance Matching in Planar Triangulations: some new results, McCarthy Tetrault Lecture Room 2245 12:10 - 12:30 Robert Aldred (University of Otago), Asymmetric Distance Matching Extension, McCarthy Tetrault Lecture Room 2245 Org: Torsten Mütze (TU Berlin) and Joe Sawada (University of Guelph) Frank Ruskey turned 65 last year, and the goal of this minisymposium is to honor his scientific achievements in discrete mathematics and theoretical computer science, by bringing together collaborators, colleagues, academic descendants and friends on this occasion. The talks center around combinatorial algorithms, Gray codes, Venn diagrams, and other discrete topics that are close to Frank's own contributions. 10:30 - 10:50 Joe Sawada (University of Guelph), From 3/30 on Frank's midterm to a career in Academia, Sauder Industries Policy Room 2270 10:55 - 11:15 Gary MacGillivray (University of Victoria), Using combinatorial algorithms to search for golf schedules, Sauder Industries Policy Room 2270 11:20 - 11:40 Alejandro Erickson (University of Victoria), Tatami Tilings in a Template for Teaching to Teenagers, Sauder Industries Policy Room 2270 11:45 - 12:05 Gara Pruesse (Vancouver Island University), Linear Extensions of Posets -- Gray codes, fast generation algorithms, and a long-standing conjecture, Sauder Industries Policy Room 2270 12:10 - 12:30 Torsten Mütze (TU Berlin), Combinatorial generation via permutation languages, Sauder Industries Policy Room 2270 Org: Bruce Shepherd (UBC) This session links topics in optimization arising in geometric and graphical settings. 10:30 - 10:50 Coulter Beeson (UBC), Revisiting the Core of Papadimitriou's Multi-Flow Game, Scotiabank Lecture Room 1315 10:55 - 11:15 Will Evans (UBC), Minimizing Interference Potential Among Moving Entities, Scotiabank Lecture Room 1315 11:20 - 11:40 David Hartvigsen (Notre Dame), Finding Triangle-free 2-factors, Revisited, Scotiabank Lecture Room 1315 11:45 - 12:05 Venkatesh Srinivasan (UBC), Scalable Misinformation Prevention in Social Networks, Scotiabank Lecture Room 1315 12:10 - 12:30 Tamon Stephen (SFU), On the Circuit Diameter Conjecture, Scotiabank Lecture Room 1315 Org: César Hernández-Cruz (CINVESTAV, Mexico) There are many graph and digraph families that can be characterized by forbidding the existence of certain substructures, e.g., induced subgraphs or minors. Two main questions naturally arise for these families: Can they be recognized efficiently? Is the characterization useful to solve hard problems efficiently in these classes? 
This session is devoted to the study of such graph and digraph families, their characterization theorems, and how their structure is useful to solve, or approximate, vertex partition problems (colourings, homomorphisms, vertex arboricity) efficiently. 15:30 - 15:50 Sebastián González Hermosillo de la Maza (Simon Fraser University), Arboricity and feedbacks sets in cographs, Fletcher Challenge Theatre 1900 15:55 - 16:15 Seyyed Aliasghar Hosseini (Simon Fraser University), The evolution of the structure of ABC-minimal trees, Fletcher Challenge Theatre 1900 16:20 - 16:40 Jing Huang (University of Victoria), Graph and digraph classes arising from list homomorphism problems, Fletcher Challenge Theatre 1900 16:45 - 17:05 Mahdieh Malekian (Simon Fraser University), The structure of graphs with no $H$-immersion, Fletcher Challenge Theatre 1900 Org: Joy Morris (University of Lethbridge) Symmetry in graphs has both beauty and practical implications, and typically involves the actions of permutation groups on the vertices and/or on the edges. This minisymposium will explore recent work on symmetry in graphs. Talks will emphasise situations where symmetries are limited in some way (for example, removing symmetries by colouring vertices or edges, or studying graphs that only admit specified symmetries). This is part 1 of 2. 10:30 - 10:50 Debra Boutin (Hamilton College), New Techniques in the Cost of 2-Distinguishing Hypercubes, Canfor Policy Room 1600 10:55 - 11:15 Karen Collins (Wesleyan University), The distinguishing number of posets and lattices, Canfor Policy Room 1600 11:20 - 11:40 Richard Hammack (Virginia Commonwealth University), Edge-transitive direct products of graphs, Canfor Policy Room 1600 11:45 - 12:05 Bohdan Kivva (University of Chicago), Minimal degree of the automorphism group of primitive coherent configurations, Canfor Policy Room 1600 12:10 - 12:30 Florian Lehner (University of Warwick), On symmetries of vertex and edge colourings of graphs, Canfor Policy Room 1600 15:30 - 15:50 Michael Giudici (University of Western Australia), Arc-transitive bicirculants, Canfor Policy Room 1600 15:55 - 16:15 Klavdija Kutnar (University of Primorska), Hamilton paths of cubic vertex-transitive graphs, Canfor Policy Room 1600 16:20 - 16:40 Joy Morris (University of Lethbridge), Almost all Cayley digraphs are DRRs, Canfor Policy Room 1600 16:45 - 17:05 Gabriel Verret (University of Auckland), An update on the Polycirculant Conjecture, Canfor Policy Room 1600
CommonCrawl
Photoshop CC 2015 Version 17 Activation With Key Free Download [Mac/Win] Posted on June 30, 2022 June 30, 2022 by levekry Photoshop CC 2015 Version 17 Crack+ Free Registration Code Free 2022 [New] * Adobe Photoshop CS4 Format Guide (www.adobe.com/uk/creativecloud/photoshop/cs4-formatguide.pdf) * Photoshop CS4 Workflow Guide ( * Photoshop CS4 New Features and Key Concept Guide ( * The Missing Manual: Learning Photoshop CS4 by Example ( * Learning Photoshop CS4 ( * Tech Book: Photo Manipulations with Adobe Photoshop CS4 ( * Eureka: From Start to Finish, Photoshop CS4 Unleashed ( * Photoshop CS4 Workflow: The Basics (www.amazon.co.uk/Photoshop-Workflow-basics-Shelley-Barbour/dp/0596001215/ref=sr_1_2?ie=UTF8&s=books&qid=1221722551&sr=8-2) * Photoshop CS4 New Features and Key Concept Guide: Photo Manipulations ( * Photoshop CS4 Video Tutorials: From Beginner to Expert (www.youtube.com/user/trailofbits) * Photoshop CS4 for Creative Pros: Mastering the Adobe Photoshop CS4 Workflow ( Photoshop CC 2015 Version 17 Keygen Full Version [Mac/Win] What is a graphics editor? A graphics editor is a software package used to modify images. It is a general term that covers a lot of different types of software, from photo retouching to publishing. Graphics editors are not like paint programs or drawing programs. They manipulate images in a different way than these other types of programs. They contain layers, which allow you to combine an image with other layers and modify them in different ways at the same time. Unlike other programs, you can also manipulate layers on a pixel-by-pixel basis. You can change the colors of particular elements on specific pixels of an image using the color palette. You can also add text and other effects to the image. You can change the size and position of text, resize objects, and add effects to the image. You can also define the opacity of any part of the image. Adobe Photoshop Elements Image Editing Software Adobe Photoshop Elements is a software package that is designed for both novice and professional photographers, as well as professionals and hobbyists who want to modify and create graphic images. Adobe Photoshop Elements is designed to allow a photographer to modify an image, perform retouching and apply effects without having to use Adobe Photoshop. You can use this software to perform advanced photo editing and to make design images. Photoshop Elements is also designed to make it easier for inexperienced people to use the software. Adobe Photoshop Elements contains a variety of features and options for modifying and creating images and graphics. Some of the features of Photoshop Elements are similar to those of Adobe Photoshop, but there are a lot of differences. This software package is designed for photographers, graphic designers, web designers, Discord emoji creators and memers. Photoshop Elements is available as a software package with a DVD disc or as a software package available online. Adobe Photoshop Elements 18 Adobe Photoshop Elements is a simple graphics editing software package that is designed for photographers. It is simple to use and is designed to make image editing easy. Photoshop Elements is also designed to make it easier for people to learn how to use it. The software is available for both PC and Mac. Photoshop Elements is a software package that is a traditional version of the software called Photoshop. Adobe Photoshop Elements comes with many features designed to improve the effectiveness of creating and modifying images. 
You can easily remove background objects or use a particular color. You can also perform magic wand selection of objects on an image. You can also select and a681f4349e Photoshop CC 2015 Version 17 With Product Key he could achieve that great object of making America prosperous, happy, and free. If the country was at all inclined to form a coalition with Italy, they feared this might be achieved by compensating the Italian for abandonment of the San Giorgio Pacific Railway, and they were probably right. They feared that it would have been politically necessary to replace the undermanned and incompetent Frémont's army by an army commanded by a star the Union could boast. They feared, too, that the Mexican War might be the prelude to a war with the great coalition of Europe's great powers, which, it was assumed, had, after all, been the object of the war. The results of that war were still not decisive, as the bankers and leading merchants were already making clear, but when the smoke of battle had cleared there was to be no forgetting the exhilaration of seeing for the first time ever the American flag floating over the Capitol, and other Washington buildings that had seemed impregnable. Jefferson Davis, meanwhile, was emboldened by a belief that he could reach any goal he chose, and was disappointed when his plans in Virginia and the West (which he dismissed as "want of thought or resources") were thwarted. James Buchanan, whether he agreed with his conclusions or not, readily agreed to everything the Republicans wanted. These men were effective, but they had no real chance. They had to compete with the fortunes of the most famous and most successful political party in the world. The Republicans were now the party of Abraham Lincoln. Jefferson Davis and his followers were on the horns of a dilemma. The more he tried to cement a personal friendship with Lincoln, the more the president-elect distrusted him. Jefferson Davis could have had the presidency without sacrificing the support of his powerful backers in the extreme South, but he chose to take the risk. He never did get Lincoln's trust.CHARLOTTE, N.C. (AP) — The Charlotte Hornets announced their 55-man training camp roster on Monday, and here's a look at the players who earned spots: ___ FIRST TEAM Quincy Pondexter Trevor Ariza Alex Guererro Jeff Taylor Terry Rozier Nicolas Batum Nicolas Batum Lance Stephenson Lance Stephenson Nicolas Batum Nicolas Batum Q: What happens when "Table stats" turns off, in PostgreSQL? What happens when the "Table stats" tab in the PostgreSQL 9.3 GUI is off? Does it store the stats in the table meta information? A: Don't worry: it's just writing a tuple to the shared_buffers buffer, exactly as described in Table stats: Table stats are written to shared_buffers. A: It writes an entry in shared_buffers. c_{2\pi i}}; {2\pi i})] = \exp(-\pi i \sigma^{2}/2)$. In the following we work with the transfer function $\omega$. We denote the $s$-transform of $f$ by $F(s)=\omega^{*}(s)f(s)$. 
The expectation of the radar waveform under the sampling model can be written in the following form $$\begin{aligned} \mathbb{E}\left[f_{t}(y_{1},y_{2}) \right] &= \mathbb{E} \left[f(y_{1},y_{2}) | (y_{1},y_{2})\in\mathbb{T}\times\mathbb{R} \right] \\ &= \int_{\mathbb{R}} f(y_{1},y_{2})\omega(y_{1},y_{2})dy_{1}dy_{2}\\ &= \mathbb{E}_{\omega} \left[\mathbb{E}\left[f(y_{1},y_{2}) | y_{1},y_{2}\in\mathbb{T}\right]|_{(y_{1},y_{2})\in \mathbb{R}^{2}} \right].\end{aligned}$$ Equation can be written in the following form $$\begin{aligned} \mathbb{E}\left[f_{t}(y_{1},y_{2}) \right] &= \mathbb{E}_{\omega} \left[F(e^{ -\sigma^{2}/2}y_{1},e^{ -\sigma^{2}/2}y_{2})|y
System Requirements For Photoshop CC 2015 Version 17: Windows 8.1, 8, 7, Vista, or XP; 3.4 GHz quad-core processor; 2 GB of RAM; 12 GB free hard drive space; DirectX 11 compatible graphics card with a RAM of 1 GB; sound card required; Internet access (preferably Wi-Fi); DirectX 11 compatible device. You can download the game through the main menu's "download" tab, which you can access by pressing START on your keyboard.
CommonCrawl
A model for the biomass–density dynamics of seagrasses developed and calibrated on global data
Vasco M. N. C. S. Vieira1 (ORCID: orcid.org/0000-0001-9858-6254), Inês E. Lopes1 & Joel C. Creed2
BMC Ecology volume 19, Article number: 4 (2019)
Seagrasses are foundation species in estuarine and lagoon systems, providing a wide array of services for the ecosystem and the human population. Understanding the dynamics of their stands is essential in order to better assess natural and anthropogenic impacts. It is usually considered that healthy seagrasses aim to maximize their stand biomass (g DW m−2), which may be constrained by resource availability, i.e., the local environment sets a carrying capacity. Recently, this paradigm has been tested and reassessed, and it is believed that seagrasses actually maximize their efficiency of space occupation, i.e., aim to reach an interspecific boundary line (IBL), as quickly as possible. This requires that they simultaneously grow in biomass and iterate new shoots to increase density. However, this strategy depresses their biomass potential. To comply with this new paradigm, we developed a seagrass growth model that updates the carrying capacities for biomass and shoot density from the seagrass IBL at each time step. The use of joint biomass and density growth rates enabled parameter estimation with twice the sample sizes and made the model less sensitive to episodic error in either of the variables. The use of instantaneous growth rates enabled the model to be calibrated with data sampled at widely different time intervals. We used data from 24 studies of six seagrass species scattered worldwide. The forecasted allometric biomass–density growth trajectories fit these observations well. Maximum growth and decay rates were found consistently for each species. The growth rates varied seasonally, matching previous observations. State-of-the-art models predicting both biomass and shoot density in seagrass have not previously incorporated our observation across many seagrass species that dynamics depend on current state relative to the IBL. Our model better simulates the biomass–density dynamics of seagrass stands while shedding light on its intricacies. However, it is only valid for established patches where dynamics involve space-filling, not for colonization of new areas.
Seagrasses are dominant primary producers in coastal systems, and particularly in estuarine and lagoon ecosystems. Worldwide, seagrasses provide a wide array of ecosystem services that vary substantially with geographical location and the morphological and demographic characteristics of the species [1]. By inhabiting the coastline, seagrasses are subject to negative terrestrial human-mediated impacts. The most frequent is eutrophication, which affects seagrasses directly through the deleterious effect of pollutants and indirectly by promoting blooms of opportunistic and epiphytic algae that may shade and smother seagrass stands [2,3,4,5,6]. Decreases in biomass, shoot density and growth rates are common consequences [7,8,9,10,11]. Seagrasses have a modular construction. Buried in the sediment, the rhizomes elongate and laterally grow new nodes with shoots. The internode length depends on the species and on its growth mode and clonal-growth plasticity, as was demonstrated in the reanalysis by Vieira et al. [12] of the Dadae Bay case study [13]. Multiple shoots can occur on a single rhizome, with their appearance restricted to rhizome nodes.
Although species specific, the clonal growth of seagrass stands usually takes two stages [14]: during the earlier years of patch formation the stands elongate their rhizomes in a Diffusion-Limited Aggregation model to occupy the available substrate. Once the patch is established, it changes to an Eden strategy aiming at spreading to the neighbouring areas. Within the saturated patch, new space only becomes available upon the death of old shoots. Renton et al. [15] explicitly modelled the survival and growth of rhizomes and shoots to optimize transplant strategies for restoration. A different approach has been preferred when modelling established stands to quantify their primary production and total biomass. Plus et al. [16] estimated shoot density, above-ground biomass and below-ground biomass using a set of differential equations with a linear structure that ignored the environmental carrying capacity. Irrespective of the stand's developmental stage and modelling approach, the environmental factors most commonly influencing seagrass growth rate are temperature, irradiance and concentration of inorganic nutrients [8, 14,15,16,17,18].
Biomass–density relations that may relate to yield became central to plant demography in the 1950's [19,20,21]. Significant insights into the dynamics of plant stands can be inferred from bi-logarithmic plots with log10D in the x axis and log10B in the y axis, where D is density in numbers of individuals (ramets) per unit area (ind m−2) and B is stand biomass per unit area (g DW m−2). The time trajectory of a monospecific even-aged stand under crowded conditions is named the "intraspecific dynamic biomass–density relation", or alternatively the "self-thinning line". While the stands endure active growth, crowding induces mortality of the weaker, which in turn opens space for the growth of the fitter. This iterative process generates a line with negative slope reflecting the environmental carrying capacity and degree of intraspecific competition [20, 21] (Fig. 1). Above any self-thinning line is placed a boundary line that no stand or species can pass and reflects the maximum possible efficiency of space occupation [12, 20,21,22] (Fig. 1). This boundary line is termed the Interspecific Boundary Line (IBL) and is given by log10B = β0 + β1∙log10D. The IBL for terrestrial plants has coefficients β0 = 4.87 and β1 = − 0.33 [21]. Recently, algae were demonstrated to occupy space more efficiently than plants [22], with the algae IBL exhibiting coefficients β0 = 6.69 and β1 = − 0.67 placed above the plant IBL (Fig. 1). Nevertheless, there was a threshold of log10B ≈ 5 that neither algae nor plants were able to cross [22]. The perpendicular distance from each algal stand to their boundary reflected its specific efficiency of space occupation and was used to discriminate among taxa, functional groups, clonality or latitude [22]. Soon after, Vieira et al. [12] demonstrated that seagrasses are also limited by their own IBL. With coefficients β0 = 4.569 and β1 = − 0.438, the seagrass IBL was placed far below the algae and plant boundaries (Fig. 1).
Fig. 1 Biomass–density relations. Theoretical schematic of self-thinning under different resource levels and observed interspecific boundary line (IBL) of algae, terrestrial plants and seagrasses
Self-thinning does not apply to many clonal algae and plants because modules (ramets) are physically interconnected allowing the sharing of acquired resources and offsetting competition [23,24,25,26,27,28,29].
Although not necessarily self-thinning, seagrasses [12], terrestrial clonal plants [23] and clonal algae [22] have been demonstrated to be limited by their respective IBL, as are non-clonal macrophytes. Therefore, it is both possible and legitimate to use their stands' distances to their IBL as estimators of their efficiencies of space occupation. When doing such estimation, Vieira et al. [12] found that seagrasses tend to develop biomass and shoot density in a trajectory approximately perpendicular to their IBL. Hence, when the environment is favourable, seagrass stands grow approaching their IBL by simultaneously increasing shoot density and stand biomass (Fig. 2). On the other hand, when the environment is unfavourable, seagrass stands shrink back and depart their IBL by simultaneously decreasing their shoot density and stand biomass (Fig. 2). This particular temporal biomass–density scenario suggests that seagrasses (i) grow to maximize the efficiency of space occupation and not just biomass, and (ii) aim at the quickest route to maximize this efficiency.
Fig. 2 Biomass–density relations of seagrasses. Observed (obs) and estimated by the allometric instantaneous growth model (model) or the isometric null hypothesis (H0)
In this study we developed a model for the growth of established seagrass stands that mimicked the observed patterns mentioned above. It required that the growth model solved simultaneously for biomass and density considering one carrying capacity for each of these properties. To develop such a model, we nested logistic functions for the stand biomass and shoot density with the carrying capacities iteratively updated by selecting the IBL coordinates closest to the current stand coordinates. This represents a new paradigm in modelling seagrass meadows as former models ignored (i) the coordinated biomass and shoot density growth, (ii) the existence of carrying capacities for biomass and shoot density, and (iii) their dependency on the efficiency of space occupation. We calibrated a model for each of the six studied species. Their simulations were analysed regarding their ecological implications as well as comparisons among species.
Vieira et al. [12] gathered data comprising the biomass and shoot density presented in 32 studies of ten seagrass species distributed worldwide. The Halodule wrightii data was provided by Dr. Joel Creed and Dr. Kenneth Dunton. The data from Plus et al. [17] was provided by Dr. Martin Plus. The remaining data were retrieved from the respective publications. The compilation of data, carried out during the years 2017 and 2018, used the Google search engine as well as the search engines in the webpages of all cited publications, and included the keywords 'biomass', 'density', 'seagrass' and the species scientific denominations. Vieira et al. [12] also searched the publication listings of the most cited authors in the subject and the reference lists of the cited works. This data was provided as Additional file 1 associated with that publication. Here, we used a sub-set of this data comprising the biomass and shoot density presented in 24 studies of six seagrass species. These were the species for which the existence of time series data allowed the determination of growth rates fundamental for this modelling. All data used are included in Fig. 2. The software estimating the parameters and running the model is provided as Additional file 1.
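Given the IBL coefficients quoted above, the efficiency of space occupation of a stand can be summarized by its perpendicular distance to the relevant boundary in the bi-logarithmic plane. The following is a minimal sketch of that calculation, with our own naming and an invented example stand; it is not the software of Additional file 1.

```python
import math

# IBL coefficients quoted above (log10 B = beta0 + beta1 * log10 D)
IBL = {
    "plants":   (4.87,  -0.33),
    "algae":    (6.69,  -0.67),
    "seagrass": (4.569, -0.438),
}

def distance_to_ibl(biomass, density, group="seagrass"):
    """Perpendicular distance (in log10 units) from a stand to its IBL.
    biomass in g DW m-2, density in shoots (or individuals) m-2.
    Positive values measure how far the stand sits below its boundary;
    values near zero indicate near-maximal efficiency of space occupation."""
    beta0, beta1 = IBL[group]
    b, d = math.log10(biomass), math.log10(density)
    # distance from the point (d, b) to the line b = beta0 + beta1 * d
    return (beta0 + beta1 * d - b) / math.sqrt(1 + beta1 ** 2)

# Example: an invented stand with 200 g DW m-2 and 1000 shoots m-2
print(round(distance_to_ibl(200, 1000), 2))   # about 0.87 log10 units below the seagrass IBL
```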
The biomass–density instantaneous growth model
Following the bulk literature on biomass–density relations, the biomass (B in g DW m−2) and density (D in shoots m−2) were replaced by b = log10B and d = log10D. Coincidentally, these correspond to instantaneous rates (although these traditionally use base e), allowing the application of linear algebra to non-linear processes, and thus standardizing per-day growth rates (i.e., ∆b/∆t and ∆d/∆t) that in their original studies related to quite different time intervals. This advantage of instantaneous over finite rates has made them the most suited for studies in fisheries [30, 31] and evolutionary [32, 33] ecology.
Depending on the environmental conditions, the stands approached or departed the seagrass IBL along a path roughly corresponding to the central tendency observed for each species (Fig. 2). This was estimated by Principal Components Analysis (PCA) based on the biomass–density covariance matrix. PCA is a Type II regression, a class of methods (also including reduced major axis, RMA) that has been demonstrated to be better suited for data without a hierarchical structure and/or with approximate x and y variances [34,35,36], as is the case of biomass–density relations [20, 37, 38]. PCA and RMA tend to be complementary, with one excelling where the other fails. However, when applied to biomass–density data, PCA often performs better than RMA [39,40,41]. With these seagrass data, both methods were generally equally good, and RMA performed conspicuously less well only when applied to Cymodocea nodosa (Ucria) Ascherson (1870). Having decided to use PCA, the central tendency was given by the dominant principal component, i.e., the one with the larger eigenvalue. Its slope (i.e., α1 = ∆b/∆d) was taken from its eigenvector, with the b loading corresponding to ∆b and the d loading corresponding to ∆d. The angle θ between the central tendency and the horizontal d axis was estimated from the slope, i.e., θ = arctg(∆b/∆d). This angle weights the allometry in the biomass growth relative to the density growth. Larger θ implies more biomass grown per unit increase in shoot density. The θ under the null hypothesis (H0) of isometric biomass–density growth was estimated for comparison. In this case the increase in the stand's biomass per area is exclusively a consequence of iteration of new shoots without any increase in individual biomass. Obviously, shoots are not "born" at adult size, so we assume that the time frame for growth to adult size is rapid relative to new shoot production. This isometric biomass–density growth represents a situation where a cohort of shoots reaches a fixed adult size before the emergence of the next cohort. Because the axes of the biomass–density plot are in logarithmic scales, the slope of the b:d central tendency (α1) observed for each species represents the exponent in their allometric relation \(B = 10^{\alpha_{0}} D^{\alpha_{1}}\). Under the isometric null hypothesis this exponent is 1, leading to θ = 0.785. Consequently, irrespective of the species, θ > 0.785 implied an allometric biomass–density growth due to older shoots continuing to increase their biomass.
The carrying capacities Kb and Kd were taken from the IBL (Fig. 3) in two situations: (i) at each iteration of the instantaneous growth model, and (ii) during model calibration, for the estimation of the growth parameter r.
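Before turning to the carrying capacities, the PCA step just described can be sketched as follows; this is a hypothetical numpy-based illustration with invented example data, not the authors' software.

```python
import numpy as np

def central_tendency_theta(biomass, density):
    """Estimate the slope (alpha1) and angle (theta) of the biomass-density
    central tendency by PCA on the covariance matrix of (d, b) = (log10 D, log10 B).
    Under isometric growth theta = pi/4 ~ 0.785; larger values indicate allometry."""
    b = np.log10(np.asarray(biomass, dtype=float))
    d = np.log10(np.asarray(density, dtype=float))
    cov = np.cov(np.vstack([d, b]))          # 2 x 2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    load_d, load_b = eigvecs[:, -1]          # dominant principal component
    alpha1 = load_b / load_d                 # slope = delta-b / delta-d
    theta = np.arctan(alpha1)
    return alpha1, theta

# Example with invented paired observations (g DW m-2, shoots m-2)
B = [40, 80, 150, 260, 400]
D = [300, 500, 900, 1400, 2100]
a1, th = central_tendency_theta(B, D)
print(round(a1, 2), round(th, 2))
```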
Thus, Kb and Kd corresponded to the intersection of the IBL with a straight line passing by the stand's location during the iteration and preserving the slope (and thus, also the θ) previously estimated from the central tendency:
$$K_{d} = \frac{b - \beta_{0} - \alpha_{1} d}{\beta_{1} - \alpha_{1}}$$ (1)
$$K_{b} = \beta_{0} + \beta_{1} K_{d}$$ (2)
Fig. 3 Iterative update of the biomass and density carrying capacities
In the core of the biomass (B) and shoot-density (D) growth models were exponential growth functions where the B and D one time step ahead were given by B_{t+1} = R_B∙B_t and D_{t+1} = R_D∙D_t. Consequently, the interval growth rates corresponded to R_B = B_{t+1}/B_t and R_D = D_{t+1}/D_t. Changing units to b and d led to R_B = 10^{∆b} and R_D = 10^{∆d}, with ∆b = b_{t+1} − b_t and ∆d = d_{t+1} − d_t. This adaptation enabled the application of the logistic growth function to instantaneous growth rates preserving its typical sigmoidal-shaped curve (Eqs. 3 and 4). The θ estimated for each species described the proportionality between its biomass and density growth rates, allowing the model to include a single general growth rate (r), i.e., the biomass-specific rate rb = r∙sinθ and the density-specific rate rd = r∙cosθ.
$$\frac{\Delta b}{\Delta t} = r \cdot b\left( \frac{K_{b} - b}{K_{b}} \right)\sin \theta$$ (3)
$$\frac{\Delta d}{\Delta t} = r \cdot d\left( \frac{K_{d} - d}{K_{d}} \right)\cos \theta$$ (4)
During model calibration, the instantaneous logistic growth functions were linearized (Eqs. 5 and 6). Scaling b and d to their estimated carrying capacities yielded the dimensionless quantities b/Kb and d/Kd, most often ranging from 0 to 1 although small negative values also occurred from very small biomasses and/or densities. Kb and Kd were previously estimated from Eqs. (1) and (2). The solution in Eqs. (5) and (6), with both the horizontal (x) and vertical (y) axis in units of day−1, allowed the merging of biomass and density data into a single estimation of r, increasing its accuracy. In this case, r is both the slope and the intercept of the regression line.
$$\frac{\Delta b}{\Delta t} \cdot \frac{1}{b \cdot \sin \theta} = r\left( \frac{K_{b} - b}{K_{b}} \right) = r - r\frac{b}{K_{b}}$$ (5)
$$\frac{\Delta d}{\Delta t} \cdot \frac{1}{d \cdot \cos \theta} = r\left( \frac{K_{d} - d}{K_{d}} \right) = r - r\frac{d}{K_{d}}$$ (6)
This model structure has, apparently, four parameters: the species-specific biomass–density central tendency (θ), the biomass–density joint growth rate (r), and the biomass and density carrying capacities, respectively Kb and Kd. However, Kb and Kd are not true parameters, rather being iterated from the seagrass IBL, a "universal" boundary line common to all seagrass species. Thus, its β0 and β1 coefficients are universal constants and not parameters to be calibrated. At present these constants were estimated from data of only 10 species [12], but hopefully future studies will provide a more comprehensive dataset to establish a better placement of this IBL and the value of its coefficients.
We tested the advantage of our model by comparing it with the state-of-the-art in modelling the dynamics of seagrass meadows. This was MEZO-1D, with explicit independent parameterization of shoot density and above-ground biomass, applied to Z. noltii in the Thau Lagoon [16]. Then, we ran our model in operational mode using this same Z. noltii data. The Kd, Kb and growth rate (r) were estimated for each time interval from the observed biomass and density using Eqs. (1, 2, 5 and 6).
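To make Eqs. (1)-(6) concrete, the following is a minimal sketch, in our own code and with invented parameter values, of the carrying-capacity update, one instantaneous-logistic step, and a per-interval estimate of r; the paper instead pools all biomass and density points into a single regression, and the original software is in its Additional file 1.

```python
import math

BETA0, BETA1 = 4.569, -0.438      # seagrass IBL constants quoted above

def carrying_capacities(b, d, alpha1):
    """Eqs. (1)-(2): intersection of the IBL with the line of slope alpha1
    (the species' central tendency) passing through the stand's (d, b)."""
    Kd = (b - BETA0 - alpha1 * d) / (BETA1 - alpha1)
    Kb = BETA0 + BETA1 * Kd
    return Kb, Kd

def step(b, d, r, theta, dt=1.0):
    """Eqs. (3)-(4): one logistic step of dt days for b = log10 B and
    d = log10 D, with the carrying capacities re-derived from the IBL
    at every step (the iterative update of Fig. 3)."""
    Kb, Kd = carrying_capacities(b, d, math.tan(theta))
    b_next = b + r * b * (Kb - b) / Kb * math.sin(theta) * dt
    d_next = d + r * d * (Kd - d) / Kd * math.cos(theta) * dt
    return b_next, d_next

def interval_rate(b0, d0, b1, d1, theta, dt):
    """Per-interval estimate of r from Eqs. (5)-(6): one biomass-based and
    one density-based estimate, averaged here for simplicity."""
    Kb, Kd = carrying_capacities(b0, d0, math.tan(theta))
    rb = ((b1 - b0) / dt) / (b0 * math.sin(theta)) / (1 - b0 / Kb)
    rd = ((d1 - d0) / dt) / (d0 * math.cos(theta)) / (1 - d0 / Kd)
    return 0.5 * (rb + rd)

# Example: start a stand at 50 g DW m-2 and 400 shoots m-2 (invented values)
# and let it grow for ~3 months with theta = 0.9 and r = 0.02 day-1.
b, d = math.log10(50), math.log10(400)
for _ in range(90):
    b, d = step(b, d, r=0.02, theta=0.9)
print(round(10 ** b, 1), round(10 ** d, 1))   # biomass and density after 90 days
```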
The biomass and density were forecasted for the next time step, i.e., b_{t+1} = b_t + Δb and d_{t+1} = d_t + Δd. The b_t and d_t were the observed biomass and density while the Δb and Δd were estimated solving Eqs. 3 and 4 for Δb and Δd.
Excepting Z. marina, the biomass–density growth of all other tested seagrasses was largely allometric (Table 1), meaning that biomass increased both from the emergence of new shoots and the growth of old shoots. In Z. marina, the biomass–density growth was almost isometric. These results were independent of the time interval between consecutive samples. The median interval for Z. marina, Z. japonica and C. nodosa was roughly 1 month. For Z. noltii, H. wrightii and T. testudinum the bulk of the intervals varied among 2, 3 and 4 months. The biomass–density growth trajectories simulated by the allometric (instantaneous growth) and isometric (null hypothesis) models were generally largely different (Fig. 2). These differences depended on how far the starting point was from the seagrass IBL. With Z. marina, the starting point needed to be far below the IBL for the allometric (instantaneous growth model) and isometric (null hypothesis) models to yield conspicuously different trajectories. Otherwise, their trajectories were very similar. By disregarding the biomass growth of older shoots, the simulations of isometric biomass–density growth reached the seagrass IBL (i.e., the carrying capacity) over-estimating densities while under-estimating biomasses. The variation of θ among species was relatively narrow (Table 1). Nevertheless, the smaller θ (Z. marina), the median θ (T. testudinum) and the larger θ (C. nodosa) were found among the species with larger shoots reared at smaller densities, demonstrating that the allometry in the biomass–density growth was independent of species morphotypes.
Table 1 Instantaneous growth model parameters
The calibration of the instantaneous growth model showed that each species is systematically bounded within a minimum (decay) and a maximum (growth) rate, beyond which observations are scarce (Table 1 and Fig. 4). The estimated maximum rates report the best performance of each species observed on a regular basis, enabling most species to attain the seagrass IBL, i.e., to reach their carrying capacities, in just a few months (Fig. 4). These maxima occur consistently (i.e., fit the same line) along the full range of observed stand biomasses and densities (i.e., along the b/Kb and d/Kd axis), thus corroborating the adequacy of this modelling approach in describing stand dynamics. Comparing among species, the maximum rates were unrelated to shoot size and shoot density. Both larger and smaller maxima were found among the species with larger shoots reared at smaller densities (Table 1). The estimated decay rates occurred consistently along the full range of observed stand biomasses and densities, represented the worst performance of each species observed on a regular basis, and resulted in most species shrinking far away from the seagrass IBL in just a few months (Fig. 4). The maximum decay rates were also unrelated to morphometry. Both larger and smaller maxima were found among the species with larger shoots reared at smaller densities (Table 1). Maximum growth and decay rates were of similar magnitudes. Nevertheless, the episodic occurrence of faster decay rates should relate to adverse extreme events. Bounded within the maximum growth and decay rates, for some species (particularly for C. nodosa and T.
testudinum) it was easy to identify a seasonal dynamic cycling through growth, peak, decay and trough of the stands' biomass–density relation (Fig. 5). For other species the seasonal pattern may be blurred by the spatial variability. Model calibration. Inferred for six seagrass species using data retrieved from stands worldwide. Data relative to biomass (triangle) or density (circle) Growth seasonality. Inferred for six seagrass species using data retrieved from stands worldwide The resulting instantaneous growth models simulated well the dynamics of the six tested seagrass species. All trajectories forecasted in the biomass–density plot fit the central tendencies of their respective species (Fig. 2). Nevertheless, it is unclear whether the sensitivity of the trajectories to the initial conditions matched reality or constituted a model weakness. On the one hand, the observation of a clear oblique pattern towards the IBL suggests some sort of control mechanism with negative feedback keeping some species in their respective biomass–density narrow bands. On the other hand, the scatter around the central tendencies, particularly large in some species, casts doubt on the existence or on the efficacy of such a control mechanism. Both models captured the seasonal variation observed in Z. noltii in the Thau Lagoon: b and d both varied by 1 unit, and predictions were generally within 0.2 units (Fig. 6). This seasonal variation was achieved in MEZO-1D through a time-varying carrying capacity set indirectly through resource availability, hence producing smooth seasonal variation. Our model, instead, generated dynamic increases and declines in seagrass through time-varying r (i.e., negative in fall and winter, positive in spring and summer), while Kd and Kb varied in time but always along the IBL. Part of our model fit was derived from implementation in operational mode, thus preventing error propagation and amplification through time; MEZO-1D applied in operational mode (always updated from observed rather than predicted values) would likely also fit better. Nevertheless, our model correctly tracked the contributions of shoot loss and smaller size at the end of the time series, whereas MEZO-1D overestimated biomass per area and underestimated shoot density. Seagrass demographic models. Left panels have the b and d time series yield by our model run in operational mode and of MEZO-1D in long range forecast. Right panels have model validation Our model and findings are only valid for established stands (or patches), where the below ground system is already spread through the whole surface and the occupation of the space available above-ground is only a consequence of the shoot dynamics (their growth and mortality). This limitation is a consequence of our model being based on the biomass–density relation and Interspecific Boundary Line estimated from (and for) established stands. These fundamental ecological principles applied to terrestrial plants [19,20,21] and algae [22] are also only valid for established stands, and in the generality of the plant and algae cases the colonization of the free surface is by the dispersal of seedlings or sporelings instead of rhizome elongation. Seagrass stands that are not fully established require long term expansion of their rhizomes to fully occupy the free surfaces [14]. Consequently, seagrass populations are unstable when their below-ground system is harmed or destroyed. 
Even damaging their clonal integrity or just severing the apical meristem can be enough to significantly reduce their production of shoots, leafs and biomass [42]. Contrasting with the large timeframe required for the establishment of new stands (or patches), our results demonstrate that, when their below ground system is established and healthy, seagrasses quickly attain their maximum efficiency of space occupation. Even starting from poor conditions (i.e., low biomass and/or low density), the simultaneous increase in shoot density and biomass gets seagrasses up to their maximum efficiency in a few months. Contributing for such a quick response may be the occurrence of dormant shoots ready to develop upon the physiological perception of favourable environments in those species that have them [43] and clonal shoot production rates that are inversely related to shoot density in other species [44]. We therefore conclude that: (i) the health of the stand's below-ground system is a key aspect for the stability of seagrass stands, and (ii) fast vs. slow growing species only makes sense when addressing the below-ground growth. But once this is established, all species can grow their shoot density and above-ground biomass to their carrying capacities in just a few months. Our results confirm that postulated by Vieira et al. [12] that seagrasses are programmed to maximizing their efficiency of space occupation (i.e., approach their seagrass IBL) as quickly as possible by simultaneously adjusting biomass and shoot density. For accurate simulations of this adjustment the correct allometric biomass–density growth algorithm is fundamental. Disregarding the simultaneous adjustment of biomass and shoot density, or using the wrong allometric relation, inevitably leads to extremely biased estimates of biomass, density and their carrying capacities. Distinct seagrass species show different patterns in their simultaneous growth in biomass and in shoot density. Z. marina was the only species whose stands showed almost isometric growth, implying that the addition of biomass resulted mainly from the emergence of new shoots. In all other tested species the biomass–density growth was allometric, implying that the addition of biomass resulted both from the emergence of new shoots and the continuous growth (or weight increment) of old shoots. Our model contributed with developments that are fundamental for the modelling of this biomass and shoot density dynamics of seagrasses. The MEZO-1D [16] is, to our knowledge, the state-of-the-art in modelling seagrass stands. Yet, it disregards (i) the coordinated growth of biomass and shoot density, (ii) the existence of carrying capacities for these two properties, and (iii) the carrying capacities being set by the efficiency of space occupation. Ideally, seagrass demographic models like the MEZO-1D should be merged with ours, and here we put forward a way to do it: the general framework of our model must be preserved as the ultimate carrying capacity is set by the IBL (the maximum possible efficiency of space occupation) and not by nutrients or light. Nevertheless, these other factors do set a secondary carrying capacity, enabling the stands to approximate the IBL (r > 0) or leading them to depart from it (r < 0). The most obvious solution is setting a secondary carrying capacity for biomass (bK), then evaluate the stands placement relative to it (b) and scale the growth rate to this differential i.e., r∝bK − b. 
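A minimal sketch of how such a coupling could be wired into the logistic form used above is given below; the secondary biomass capacity bK, the scaling constant λ and the numbers are illustrative assumptions, and only the proportionality r ∝ bK − b comes from the text.

```python
import numpy as np

def resource_limited_rate(b, bK, lam=0.05):
    # Proposed coupling: r > 0 (stand approaches the IBL) while b < bK,
    # r < 0 (stand departs from the IBL) once resources limit growth (b > bK).
    return lam * (bK - b)

def biomass_step(b, Kb, bK, theta, dt, lam=0.05):
    # Feed the resource-limited rate into the logistic form of Eq. (3).
    r = resource_limited_rate(b, bK, lam)
    return b + r * b * (Kb - b) / Kb * np.sin(theta) * dt

Kb, theta = 2.3, np.radians(55.0)        # illustrative IBL capacity and angle
for b, bK in [(1.5, 2.0), (1.9, 1.6)]:   # below vs. above the resource capacity
    print(b, bK, round(biomass_step(b, Kb, bK, theta, dt=30.0), 3))
```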
This way, a stand grows towards the IBL while it is not being limited by resources (i.e., r = λ(bK − b) > 0, with λ a positive scaling constant) but departs the IBL when it is being limited by resources (i.e., r = λ(bK − b) < 0). This simple solution also postulates that stands with smaller shoots summing up to lower stand biomass are less constrained by resources. This dynamic is supported by the results presented in Fig. 5, where positive and negative r often changed seasonally. So, despite the advances brought about by our model, much improvement is still possible and required. Better quality data, particularly with finer temporal resolution, should allow better calibration and assessment of the seasonal dynamics. Another fundamental aspect for the development of the current model is its sensitivity to initial conditions. It is uncertain whether this represents reality or is a mathematical flaw. The large scatter around the central tendency of each species biomass–density plot suggests that at least part of this sensitivity is real. It is also reasonable to expect that the different biomass–density strategies reflect the different morphological and physiological limits of each species. The presence of dormant shoots ready to develop upon the physiological perception of favourable environments in Thalassia testudinum [43] may be one specific differential with a strong influence on the balance between the coordinated growth in biomass and shoot density. The existence of dormant shoots has also been suggested though not confirmed in Cymodocea nodosa [43]; in this study this species was observed to have the strongest biomass dominance in the biomass–density coordinated growth (see Table 1), implying that its seasonal variation in biomass greatly surpassed its seasonal variation in density. It might be that "waking-up" dormant shoots and getting them to resume growth of leaves could explain the efficiency of space occupation by Cymodocea nodosa. This may also help explain the narrow biomass–density band occupied by Cymodocea nodosa, as was observed both in this study and by Vieira et al. [12]. For the calibration of the current model it is fundamental to know whether the counts of shoot-density in the data included (or not) dormant shoots. For the development of more comprehensive mechanistic models the presence of dormant shoots should be considered. Vieira et al. [12] demonstrated that seagrasses generally followed the same seasonal pattern, with the spring and summer corresponding to the favourable season and the autumn and winter corresponding to the unfavourable season, but still a differentiation occurred among seagrasses in their maximum efficiency of space occupation. Together, our study and the study by Vieira et al. [12] unveiled further details about the biomass–density dynamics of seagrasses. By the end of the favourable season the seagrass stands may already be at (or close to) their carrying capacities set by their seagrass IBL, while unbounded by any light or nutrient availability. In these cases the stands do not grow further simply because it is physically impossible for them to occupy more space, and not because the environmental conditions are less adequate. On the other hand, our results showed that seagrasses require some months to attain their IBL, and some species require more time than others. The question is raised as to whether the lower maximum efficiencies observed in some species by Vieira et al. 
[12] are a direct limitation of their morphology or result from growth rates that were too slow for the short favourable season. The latter case may explain our results, and those by Vieira et al. [12], demonstrating that Z. marina and T. testudinum have a potential efficiency of space occupation better than that reported so far. It may be that in the studied sites, following the harsher winters, the favourable summers did not last long enough for the stands to grow to their maximum. Our model, built on the new paradigm about the joint biomass–density dynamics of seagrasses, sheds light on the intricacy of their ecology. Consequently, it simulates the dynamics of seagrass stands better than its predecessors, which mostly focused on either one of these demographic variables. The few that simultaneously accounted for both biomass and density failed to consider their coordinated growth and carrying capacities. However, our model is only valid for established patches where dynamics involve space-filling, not colonization of new areas. Furthermore, its correct estimation of biomass, density and their carrying capacities requires an accurate knowledge of the allometric biomass–density growth. The application of our model demonstrated that seagrass beds at low density have the potential to increase stand biomass rapidly under favorable environmental conditions. Consequently, preventing total loss of meristems is the key to the preservation of seagrass stands, and anchoring shoots at low density in suitable environments should promote rapid restoration. The enhanced knowledge generated by our model is valuable for future research while its enhanced predictive ability is valuable for management efforts. DW: dalgal : perpendicular distance to the algae IBL dgrass : perpendicular distance to the seagrass IBL IBL: interspecific boundary line PCA: principal components analysis RMA: reduced major axis Nordlund LM, Koch EW, Barbier EB, Creed JC. Seagrass ecosystem services and their variability across genera and geographical regions. PLoS ONE. 2016;11(10):e0163091. https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pone.0163091. Valiela I, McClelland J, Hauxwell J, Behr PJ, Hersh D, Foreman K. Macroalgal blooms in shallow estuaries: controls and ecophysiological and ecosystem consequences. Limnol Oceanogr. 1997;42:1105–18. Burkholder JM, Tomasko DA, Touchette BW. Seagrasses and eutrophication. J Exp Mar Biol Ecol. 2007;350:46–72. Brun FG, Olivé I, Malta EJ, Vergara JJ, Hernández I, Pérez-Lloréns J. Increased vulnerability of Zostera noltii to stress caused by low light and elevated ammonium levels under phosphate deficiency. Mar Ecol Prog Ser. 2008;365:67–75. Pergent G, Boudouresque C-F, Dumay O, Pergent-Martini C, Wyllie-Echeverria S. Competition between the invasive macrophyte Caulerpa taxifolia and the seagrass Posidonia oceanica: contrasting strategies. BMC Ecol. 2008;8:20. https://0-doi-org.brum.beds.ac.uk/10.1186/1472-6785-8-20. Thomsen MS, Wernberg T, Engelen AH, Tuya F, Vanderklift MA, Holmer M, et al. A meta-analysis of seaweed impacts on seagrasses: generalities and knowledge gaps. PLoS ONE. 2012;7(1):e28595. Cabaço S, Machás R, Santos R. Biomass–density relationships of the seagrass Zostera noltii: a tool for monitoring anthropogenic nutrient disturbance. Estuar Coast Shelf Sci. 2007;2007(74):557–64. Cabaço S, Machás R, Vieira V, Santos R. Impacts of urban wastewater discharge on seagrass meadows (Zostera noltii). Estuar, Coast. Shelf Sci. 2008;78:1–13. 
Cabaço S, Apostolaki ET, Garcıa-Marín P, Gruber R, Hernandez I, Martınez-Crego B, et al. Effects of nutrient enrichment on seagrass population dynamics: evidence and synthesis from the biomass–density relationships. J Ecol. 2013;101:1552–62. Romero J, Martínez-Crego B, Alcoverro T, Pérez M. A multivariate index based on the seagrass Posidonia oceanica (POMI) to assess ecological status of coastal waters under the water framework directive (WFD). Mar Pollut Bull. 2007;55:196–204. García-Marín P, Cabaço S, Hernández I, Vergara JJ, Silva J, Santos R. Multi-metric index based on the seagrass Zostera noltii (ZoNI) for ecological quality assessment of coastal and estuarine systems in SW Iberian Peninsula. Mar Pollut Bull. 2013;68:46–54. Vieira VMNCS, Lopes IE, Creed JC. The biomass–density relationship in seagrasses and its use as an ecological indicator. BMC Ecol. 2018;18:44. https://0-doi-org.brum.beds.ac.uk/10.1186/s12898-018-0200-1. Lee SW, Bae KJ. Temporal dynamics of subtidal Zostera marina and intertidal Zostera japonica on the southern coast of Korea. Mar Ecol. 2006;27:133–44. Sintes T, Marbà N, Duarte CM, Kendrick GA. Nonlinear processes in seagrass colonisation explained by simple clonal growth rules. Oikos. 2005;108:165–75. https://0-doi-org.brum.beds.ac.uk/10.1111/j.0030-1299.2005.13331.x. Renton M, Airey M, Cambridge ML, Kendrick GA. Modelling seagrass growth and development to evaluate transplanting strategies for restoration. Ann Bot. 2011;108(6):1213–23. Plus M, Chapelle A, Ménesguen A, Deslous-Paoli J-M, Auby I. Modelling seasonal dynamics of biomasses and nitrogen contents in a seagrass meadow (Zostera noltii Hornem.): application to the Thau lagoon (French Mediterranean coast). Ecol Model. 2003;161:213–38. Plus M, Deslous-Paoli J-M, Dagault F. Factors influencing primary production of seagrass beds (Zostera noltii Hornem.) in the Thau lagoon (French Mediterranean coast). J Exp Mar Biol Ecol. 2001;259:63–84. Lee K-S, Park SR, Kim YK. Effects of irradiance, temperature, and nutrients on growth dynamics of seagrasses: a review. J Exp Mar Biol Ecol. 2007;350:144–75. Yoda K, Kira T, Ogawa H, Hozumi K. Self-thinning in overcrowded pure stands under cultivated and natural conditions (Intraspecific competition among higher plants. XI). J Biol. 1963;14:107–29. Weller DE. A reevaluation of the −3/2 power rule of plant self-thinning. Ecol Monogr. 1987;57:23–43. Scrosati RA. The interspecific biomass–density relationship for terrestrial plants: where do clonal red seaweeds stand and why? Ecol Lett. 2000;3:191–7. Creed J, Vieira VMNCS, Norton TA, Caetano D. A meta-analysis shows that seaweeds surpass plants, setting life-on-Earth's limit for biomass packing. BMC Ecol. 2019. https://0-doi-org.brum.beds.ac.uk/10.1186/s12898-019-0218-z. Hutchings MJ. Weight–density relationships in ramet populations of clonal perennial herbs, with special reference to the 3/2 power law. J Ecol. 1979;67:21–33. Westoby M. The self-thinning rule. Adv Ecol Res. 1984;14:167–225. de Kroon H, Kalliola R. Shoot dynamics of the giant grass Gynerium sagittatum in Peruvian Amazon floodplains, a clonal plant that does show self-thinning. Oecologia. 1995;101:124–31. Lazo ML, Chapman ARO. Components of crowding in a modular seaweed: sorting through the contradictions. Mar Ecol Prog Ser. 1998;174:257–67. Scrosati R, Servière-Zaragoza E. Ramet dynamics for the clonal seaweed Pterocladiella capillacea (Rhodophyta): a comparison with Chondrus crispus and with Mazzaella cornucopiae (Gigartinales). J Phycol. 2000;36:1061–8. 
Steen H, Scrosati R. Intraspecific competition in Fucus serratus and F. evanescens (Phaeophyceae: Fucales) germlings: effects of settlement density, nutrient concentration, and temperature. Mar Biol. 2004;144:61–70. Rivera M, Scrosati R. Self-thinning and size inequality dynamics in a clonal seaweed (Sargassum lapazeanum, Phaeophyceae). J Phycol. 2008;44:45–9. Allen MS, Miranda LE, Brock RE. Implications of compensatory and additive mortality to the management of selected sportfish populations. Lakes Reserv Res Manag. 1998;3:67–79. Allen MS, Walter CJ, Myers R. Temporal trends in largemouth Bass mortality, with fishery implications. North Am J Fish Manag. 2008;28:418–27. Vieira VMNCS, Engelen AH, Huanel OR, Guillemin ML. Haploid females in the isomorphic biphasic life-cycle of Gracilaria chilensis excel in survival. BMC Evol Biol. 2018;18:174. https://0-doi-org.brum.beds.ac.uk/10.1186/s12862-018-1285-z. Vieira VMNCS, Engelen AH, Huanel OR, Guillemin M-L. Differentiation of haploid and diploid fertilities in Gracilaria chilensis affect ploidy ratio. BMC Evolutionary Biology, in press. Pearson K. On lines and planes of closest fit to systems of points in space. Philos Mag. 1901;2:559–72. Draper NR. Straight line regression when both variables are subject to error. In: Proceedings of the 1991 Kansas State University Conference on Applied Statistics in Agriculture. 1992;1–18. Smith RJ. Use and misuse of reduced major axis for line-fitting. Am J Phys Anthropol. 2009;140:476–86. Scrosati R. On the analysis of self-thinning among seaweeds. J Phycol. 1997;33:1077–9. https://0-doi-org.brum.beds.ac.uk/10.1046/j.1529-8817.2000.00041.x. Scrosati R. Review of studies on biomass–density relationships (including self-thinning lines) in seaweeds: main contributions and persisting misconceptions. Phycol. Res. 2005;53:224–33. Vieira VMNCS, Creed J. Estimating significances of differences between slopes: a new methodology and software. Comput Ecol Softw. 2013;3(3):44–52. Vieira VMNCS, Creed J. Significances of differences between slopes: an upgrade for replicated time series. Comput Ecol Softw. 2013;3(4):102–9. Vieira VMNCS, Creed J, Scrosati RA, Santos A, Dutschke G, Leitão F, et al. On the choice of linear regression algorithms. Annu Res Rev Biol. 2016;10(3):1–9. Terrados J, Duarte CM, Kenworthy WJ. Is the apical growth of Cymodocea nodosa dependent on clonal integration? Mar Ecol Prog Ser. 1997;158:103–10. van Tussenbroek BI, Galindo CA, Marquez J. Dormancy and foliar density regulation in Thalassia testudinum. Aquat Bot. 2000;68:281–95. Ruesink JL, et al. Life history and morphological shifts in an intertidal seagrass following multiple disturbances. J Exp Mar Biol Ecol. 2012;424–425:25–31. VV collected the data, developed the models and software, designed and performed the data analysis, interpreted the results and wrote the article. JCC collected the data, interpreted the results and reviewed the article. IEL collected the data and reviewed the article. All authors read and approved the final manuscript. Our gratitude to the researchers who shared their data with us. Their contributions were fundamental for the success of our study. The dataset supporting the conclusions of this article is included within Additional file 2 provided by Vieira et al. [12]. VV and IEL were funded by ERDF Funds of the Competitiveness Factors Operational Programm -COMPETE and national funds of the FCT-Foundation for Science and Technology (FCT/MCTES (PIDDAC)) under the project UID/EEA/50009/2019. 
This work was supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (JCC, Ciências do Mar 1137/2010); Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro (JCC, FAPERJ-E-26/111.574/2014 and E26/201.286/2014); and Conselho Nacional de Desenvolvimento Científico e Tecnológico (JCC, CNPq- 307117/2014-6). The funders took no part in the design of the study, in the collection, analysis, and interpretation of data, and in writing the manuscript. MARETEC, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais, 1049-001, Lisbon, Portugal Vasco M. N. C. S. Vieira & Inês E. Lopes Departamento de Ecologia, Instituto de Biologia Roberto Alcântara Gomes, Universidade do Estado do Rio de Janeiro, Rua São Francisco Xavier 524, Rio de Janeiro, RJ, 20559-900, Brazil Joel C. Creed Search for Vasco M. N. C. S. Vieira in: Search for Inês E. Lopes in: Search for Joel C. Creed in: Correspondence to Vasco M. N. C. S. Vieira. 12898_2019_221_MOESM1_ESM.m Additional file 1. Matlab executable file running the biomass-density seagrass growth model with parameters calibrated to six seagrass species scattered worldwide. Vieira, V.M.N.C.S., Lopes, I.E. & Creed, J.C. A model for the biomass–density dynamics of seagrasses developed and calibrated on global data. BMC Ecol 19, 4 (2019) doi:10.1186/s12898-019-0221-4 Accepted: 16 January 2019 Above-ground biomass Cymodoceae Halodule Logistic growth Thalassia Zostera
Frolov, Sergei Anatol'evich Total publications: 18 (18) in MathSciNet: 13 (13) in zbMATH: 7 (7) in Web of Science: 18 (18) in Scopus: 18 (18) Cited articles: 17 Citations in Math-Net.Ru: 79 Citations in Web of Science: 312 Citations in Scopus: 413 Presentations: 10 This page: 983 Abstract pages: 3411 Full texts: 1414 Candidate of physico-mathematical sciences http://www.mathnet.ru/eng/person21230 List of publications on Google Scholar https://zbmath.org/authors/?q=ai:frolov.sergei-anatolevich https://elibrary.ru/author_items.asp?authorid=5760 inSPIRE personal page (High Energy Physics (HEP) information system) Full list of publications: | scientific publications | by years | by types | by times cited in WoS | by times cited in Scopus | common list | 1. Chantelle Esper, Sergey Frolov, "$T\overline{T}$ deformations of non-relativistic models", JHEP, 2021:6 (2021), 101 , 34 pp. ; 2. Sergey A. Frolov, "$T\overline T$ Deformation and the Light-Cone Gauge", Proc. Steklov Inst. Math., 309 (2020), 107–126 (cited: 12) 3. S. A. Frolov, "$T\bar{T}$, $\tilde{J}J$, $JT$ and $\tilde{J}T$ deformations", J. Phys. A, 53:2 (2020), 25401 , 25401 pp. (cited: 16); 4. Sergey Frolov, "Free field representation of the ZF algebra of the $SU(N)\times SU(N)$ PCF model", J. Phys. A, 50:37 (2017), 374001 , 45 pp. (cited: 3) (cited: 3) 5. G. Arutyunov, S. Frolov, B. Hoare, R. Roiban, A. A. Tseytlin, "Scale invariance of the $\eta$-deformed $AdS_5\times S^5$ superstring, $T$-duality and modified type II equations", Nuclear Phys. B, 903 (2016), 262–303 (cited: 113) (cited: 123) 6. S. Frolov, M. Heinze, G. Jorjadze, J. Plefka, "Static gauge and energy spectrum of single-mode strings in $\mathrm{AdS}_5\times \mathrm{S}^5$", J. Phys. A, 47 (2014), 085401 , 20 pp. (cited: 12) (cited: 11) 7. G. Arutyunov, R. Borsato, S. Frolov, "$S$-matrix for strings on $\eta$-deformed $\mathrm{AdS_5\times S^5}$", JHEP, 2014, no. 4, 2 , 23 pp. (cited: 97) (cited: 151) 8. G. E. Arutyunov, S. A. Frolov, "Virasoro amplitude from the $S^N{\mathbf R}^{24}$-orbifold sigma model", Theoret. and Math. Phys., 114:1 (1998), 43–66 (cited: 51) (cited: 53) 9. S. A. Frolov, "Physical phase space of the lattice Yang–Mills theory and moduli space of flat connections on a Riemann surface", Theoret. and Math. Phys., 113:1 (1997), 1289–1298 (cited: 1) (cited: 1) 10. G. E. Arutyunov, S. A. Frolov, L. O. Chekhov, "$R$-matrix quantization of the elliptic Ruijsenaars–Schneider model", Theoret. and Math. Phys., 111:2 (1997), 536–562 (cited: 1) (cited: 1) 11. A. A. Slavnov, S. A. Frolov, C. V. Sochichiu, "$SO(N)$-invariant Wess–Zumino action and its quantization", Theoret. and Math. Phys., 105:2 (1995), 1407–1425 (cited: 2) 12. A. A. Slavnov, S. A. Frolov, "Canonical quantization of anomalous theories", Theoret. and Math. Phys., 92:3 (1992), 1038–1046 (cited: 1) 13. S. A. Frolov, "BRST quantization of gauge theories in Hamiltonian-like gauges", Theoret. and Math. Phys., 87:2 (1991), 464–477 (cited: 1) (cited: 1) 14. A. A. Slavnov, S. A. Frolov, "Lagrangian BRST quantization and unitarity", Theoret. and Math. Phys., 85:3 (1990), 1237–1255 (cited: 1) 15. S. A. Frolov, "Hamiltonian BRST quantization of an antisymmetric tensor field", Theoret. and Math. Phys., 76:2 (1988), 886–890 (cited: 2) (cited: 2) 16. A. A. Slavnov, S. A. Frolov, "Quantization of non-Abelian antisymmetric tensor field", Theoret. and Math. Phys., 75:2 (1988), 470–477 (cited: 10) (cited: 13) 17. A. A. Slavnov, S. A. Frolov, "Propagator of Yang–Mills field in light-cone gauge", Theoret. and Math. 
Phys., 73:2 (1987), 1158–1165 (cited: 14) (cited: 16) 18. A. A. Slavnov, S. A. Frolov, "Quantization of Yang–Mills fields in the $A_0=0$ gauge", Theoret. and Math. Phys., 68:3 (1986), 880–884 (cited: 7) (cited: 6) Presentations in Math-Net.Ru 1. TTbar deformation and the light-cone gauge S. A. Frolov Seminar of the Department of Theoretical Physics, Steklov Mathematical Institute of RAS 2. Free field representation of the Zamolodchikov-Faddeev algebra and form factors of the $SU(N) \times SU(N)$ Principal Chiral Field model 3. Static gauge and energy spectrum of single-mode strings in $AdS5\timesS5$ 4. $S$-matrix for strings on $\eta$-deformed $AdS_5 \times S_5$ S. A. Frolov, G. E. Arutyunov 5. Free field representation and form factors of the chiral Gross-Neveu model Sergey Frolov 6. О $q$-деформациях зеркального термодинамического анзаца Бете 7. Scaling dimensions from mirror 8. Интегрируемость в AdS-CFT соответствии 9. The Zamolodchikov–Faddeev algebra for strings on $AdS_5\times S^5$ 10. Lax pair for strings in Lunin–Maldacena background Trinity College, Dublin Steklov Mathematical Institute of Russian Academy of Sciences, Moscow
Problems in Mathematics

Linearly Independent Vectors and the Vector Space Spanned By Them

Problem 141

Let $V$ be a vector space over a field $K$. Let $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$ be linearly independent vectors in $V$. Let $U$ be the subspace of $V$ spanned by these vectors, that is, $U=\Span \{\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n\}$. Let $\mathbf{u}_{n+1}\in V$. Show that $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly independent if and only if $\mathbf{u}_{n+1} \not \in U$.

Proof.

$(\implies)$ Suppose that the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly independent. If $\mathbf{u}_{n+1}\in U$, then $\mathbf{u}_{n+1}$ is a linear combination of $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$. Thus, we have
\[\mathbf{u}_{n+1}=c_1\mathbf{u}_1+c_2\mathbf{u}_2+\cdots+c_n \mathbf{u}_n\]
for some scalars $c_1, c_2, \dots, c_n \in K$. However, this implies that we have a nontrivial linear combination
\[c_1\mathbf{u}_1+c_2\mathbf{u}_2+\cdots+c_n \mathbf{u}_n-\mathbf{u}_{n+1}=\mathbf{0}.\]
This contradicts the assumption that $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly independent. Hence $\mathbf{u}_{n+1} \not \in U$.

$(\impliedby)$ Suppose now that $\mathbf{u}_{n+1} \not \in U$. If the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly dependent, then there exist $c_1, c_2, \dots, c_n, c_{n+1}\in K$, not all zero, such that
\[c_1\mathbf{u}_1+c_2\mathbf{u}_2+\cdots+c_n \mathbf{u}_n+c_{n+1}\mathbf{u}_{n+1}=\mathbf{0}.\]
We claim that $c_{n+1} \neq 0$. If $c_{n+1}=0$, then we have
\[c_1\mathbf{u}_1+c_2\mathbf{u}_2+\cdots+c_n \mathbf{u}_n=\mathbf{0}\]
and since $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$ are linearly independent, we must have $c_1=c_2=\cdots=c_n=0$. This means that all $c_i$ are zero, which contradicts our choice of the $c_i$. Thus $c_{n+1} \neq 0$. Then we have
\[\mathbf{u}_{n+1}=\frac{-c_1}{c_{n+1}}\mathbf{u}_1+\frac{-c_2}{c_{n+1}}\mathbf{u}_2+\cdots+\frac{-c_n}{c_{n+1}}\mathbf{u}_n.\]
(Note: we needed to check $c_{n+1} \neq 0$ before dividing by it.) This implies that $\mathbf{u}_{n+1}$ is a linear combination of the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$, and thus $\mathbf{u}_{n+1} \in U$, a contradiction. Therefore, the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly independent.
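As a quick numerical sanity check of this criterion, the rank comparison below tests whether appending a vector to a spanning set increases the dimension; the concrete vectors are arbitrary examples and NumPy is used only for illustration.

```python
import numpy as np

u1 = np.array([1.0, 0.0, 1.0])
u2 = np.array([0.0, 1.0, 2.0])          # u1, u2 are linearly independent
U = np.column_stack([u1, u2])            # columns span U = Span{u1, u2}

def independent_with(U, w):
    """True iff appending w increases the rank, i.e. w is not in the span of U's columns."""
    return np.linalg.matrix_rank(np.column_stack([U, w])) > np.linalg.matrix_rank(U)

w_in = 3 * u1 - 2 * u2                   # lies in U  -> the enlarged set is dependent
w_out = np.array([0.0, 0.0, 1.0])        # not in U   -> the enlarged set is independent
print(independent_with(U, w_in))         # False
print(independent_with(U, w_out))        # True
```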
Water model

A water model is defined by its geometry, together with other parameters such as the atomic charges and Lennard-Jones parameters. In computational chemistry, classical water models are used for the simulation of water clusters, liquid water, and aqueous solutions with explicit solvent. The models are determined from quantum mechanics, molecular mechanics, experimental results, and combinations of these. To reproduce specific properties of the water molecule, many types of model have been developed. In general, these can be classified by three criteria: (i) the number of interaction points, called sites; (ii) whether the model is rigid or flexible; (iii) whether the model includes polarization effects. An alternative to the explicit water models is to use an implicit solvation model, also known as a continuum model, an example of which would be the COSMO Solvation Model or the Polarizable continuum model (PCM).

Simple water models

The rigid models are the simplest water models and rely on non-bonded interactions. In these models, bonding interactions are implicitly treated by holonomic constraints. The electrostatic interaction is modeled using Coulomb's law and the dispersion and repulsion forces using the Lennard-Jones potential.[1][2] The potential for models such as TIP3P and TIP4P is represented by

{\displaystyle E_{ab}=\sum _{i}^{{\text{on }}a}\sum _{j}^{{\text{on }}b}{\frac {k_{C}q_{i}q_{j}}{r_{ij}}}+{\frac {A}{r_{{\text{O}}{\text{O}}}^{12}}}-{\frac {B}{r_{{\text{O}}{\text{O}}}^{6}}}}

where kC, the electrostatic constant, has a value of 332.1 Å·kcal/mol in the units commonly used in molecular modeling; qi and qj are the partial charges relative to the charge of the electron; rij is the distance between two atoms or charged sites; and A and B are the Lennard-Jones parameters. The charged sites may be on the atoms or on dummy sites (such as lone pairs). In most water models, the Lennard-Jones term applies only to the interaction between the oxygen atoms. The figure below shows the general shape of the 3- to 6-site water models. The exact geometric parameters (the OH distance and the HOH angle) vary depending on the model.

2-site

A 2-site model of water based on the familiar three-site SPC model (see below) has been shown to predict the dielectric properties of water using site-renormalized molecular fluid theory.[3]

3-site

Three-site models have three interaction points corresponding to the three atoms of the water molecule. Each site has a point charge, and the site corresponding to the oxygen atom also has the Lennard-Jones parameters. Since 3-site models achieve high computational efficiency, they are widely used for many applications of molecular dynamics simulations. Most of these models use a rigid geometry matching that of actual water molecules. An exception is the SPC model, which assumes an ideal tetrahedral shape (HOH angle of 109.47°) instead of the observed angle of 104.5°. The table below lists the parameters for some 3-site models.
                          TIPS[4]    SPC[5]     TIP3P[6]   SPC/E[7]
r(OH), Å                  0.9572     1.0        0.9572     1.0
HOH, deg                  104.52     109.47     104.52     109.47
A × 10−3, kcal Å12/mol
B, kcal Å6/mol
q(O)                      −0.80      −0.82      −0.834     −0.8476
q(H)                      +0.40      +0.41      +0.417     +0.4238

The SPC/E model adds an average polarization correction to the potential energy function:

{\displaystyle E_{pol}={\frac {1}{2}}\sum _{i}{\frac {(\mu -\mu ^{0})^{2}}{\alpha _{i}}}}

where μ is the dipole of the effectively polarized water molecule (2.35 D for the SPC/E model), μ0 is the dipole moment of an isolated water molecule (1.85 D from experiment), and αi is an isotropic polarizability constant, with a value of 1.608 × 10−40 F m2. Since the charges in the model are constant, this correction just results in adding 1.25 kcal/mol (5.22 kJ/mol) to the total energy. The SPC/E model results in a better density and diffusion constant than the SPC model. The TIP3P model implemented in the CHARMM force field is a slightly modified version of the original. The difference lies in the Lennard-Jones parameters: unlike TIP3P, the CHARMM version of the model places Lennard-Jones parameters on the hydrogen atoms too, in addition to the one on oxygen. The charges are not modified.[8]

Flexible SPC water model

The flexible simple point charge water model (or flexible SPC water model) is a re-parametrization of the three-site SPC water model.[9][10] The SPC model is rigid, whilst the flexible SPC model is flexible. In the model of Toukan and Rahman, the O-H stretching is made anharmonic and thus the dynamical behavior is well described. This is one of the most accurate three-center water models that does not take polarization into account. In molecular dynamics simulations it gives the correct density and dielectric permittivity of water.[11] Flexible SPC is implemented in the MDynaMix and Abalone programs.

Other models
Ferguson (flex. SPC)
CVFF (flex.)
MG (flexible and dissociative)

4-site

The four-site models have four interaction points, obtained by adding one dummy atom near the oxygen along the bisector of the HOH angle of the three-site models (labeled M in the figure). The dummy atom only has a negative charge. This model improves the electrostatic distribution around the water molecule. The first model to use this approach was the Bernal-Fowler model published in 1933, which may also be the earliest water model. However, the BF model doesn't reproduce well the bulk properties of water, such as density and heat of vaporization, and is therefore only of historical interest. This is a consequence of the parameterization method; newer models, developed after modern computers became available, were parameterized by running Metropolis Monte Carlo or molecular dynamics simulations and adjusting the parameters until the bulk properties are reproduced well enough. The TIP4P model, first published in 1983, is widely implemented in computational chemistry software packages and often used for the simulation of biomolecular systems. There have been subsequent reparameterizations of the TIP4P model for specific uses: the TIP4P-Ew model, for use with Ewald summation methods; the TIP4P/Ice, for simulation of solid water ice; and TIP4P/2005, a general parameterization for simulating the entire phase diagram of condensed water.
BF[12] TIPS2[13] TIP4P-Ew[14] TIP4P/Ice[15] TIP4P/2005[16] 0.96 0.9572 0.9572 0.9572 0.9572 0.9572 105.7 104.52 104.52 104.52 104.52 104.52 r(OM), Å 0.15 0.15 0.15 0.125 0.1577 0.1546 560.4 695.0 600.0 656.1 857.9 731.3 q(M) −0.98 −1.07 −1.04 −1.04844 −1.1794 −1.1128 +0.49 +0.535 +0.52 +0.52422 +0.5897 +0.5564 TIP4PF (flexible) The 5-site models place the negative charge on dummy atoms (labeled L) representing the lone pairs of the oxygen atom, with a tetrahedral-like geometry. An early model of these types was the BNS model of Ben-Naim and Stillinger, proposed in 1971, soon succeeded by the ST2 model of Stillinger and Rahman in 1974. Mainly due to their higher computational cost, five-site models were not developed much until 2000, when the TIP5P model of Mahoney and Jorgensen was published. When compared with earlier models, the TIP5P model results in improvements in the geometry for the water dimer, a more "tetrahedral" water structure that better reproduces the experimental radial distribution functions from neutron diffraction, and the temperature of maximum density of water. The TIP5P-E model is a reparameterization of TIP5P for use with Ewald sums. BNS[17] ST2[17] TIP5P[18] TIP5P-E[19] 1.0 1.0 0.9572 0.9572 r(OL), Å 1.0 0.8 0.70 0.70 LOL, deg 77.4 238.7 544.5 554.3 q(L) −0.19562 −0.2357 −0.241 −0.241 +0.19562 +0.2357 +0.241 +0.241 RL, Å RU, Å Note, however, that the BNS and ST2 models do not use Coulomb's law directly for the electrostatic terms, but a modified version that is scaled down at short distances by multiplying it by the switching function S(r): S ( r i j ) = { 0 , if r i j ≤ R L ( r i j − R L ) 2 ( 3 R U − R L − 2 r i j ) ( R U − R L ) 2 , if R L ≤ r i j ≤ R U 1 , if R U ≤ r i j {\displaystyle S(r_{ij})={\begin{cases}0,&{\mbox{if }}r_{ij}\leq R_{L}\\{\frac {(r_{ij}-R_{L})^{2}(3R_{U}-R_{L}-2r_{ij})}{(R_{U}-R_{L})^{2}}},&{\mbox{if }}R_{L}\leq r_{ij}\leq R_{U}\\1,&{\mbox{if }}R_{U}\leq r_{ij}\end{cases}}} Therefore the RL and RU parameters only apply to BNS and ST2. A 6-site model that combines all the sites of the 4- and 5-site models was developed by Nada and van der Eerden.[20] Originally designed to study water/ice systems, however has a very high melting temperature[21] The effect of explicit solute model on solute behavior in bimolecular simulations has been also extensively studied. It was shown that explicit water models affected the specific solvation and dynamics of unfolded peptides while the conformational behavior and flexibility of folded peptides remained intact.[22] MB model. A more abstract model resembling the Mercedes-Benz logo that reproduces some features of water in two-dimensional systems. It is not used as such for simulations of "real" (i.e., three-dimensional) systems, but it is useful for qualitative studies and for educational purposes.[23] Coarse-grained models. One- and two-site models of water have also been developed.[24] In coarse grain models, each site can represent several water molecules. Computational cost The computational cost of a water simulation increases with the number of interaction sites in the water model. The CPU time is approximately proportional to the number of interatomic distances that need to be computed. For the 3-site model, 9 distances are required for each pair of water molecules (every atom of one molecule against every atom of the other molecule, or 3 × 3). For the 4-site model, 10 distances are required (every charged site with every charged site, plus the O-O interaction, or 3 × 3 + 1). 
For the 5-site model, 17 distances are required (4 × 4 + 1). Finally, for the 6-site model, 26 distances are required (5 × 5 + 1). When using rigid water models in molecular dynamics, there is an additional cost associated with keeping the structure constrained, using constraint algorithms (although with bond lengths constrained it is often possible to increase the time step).

See also
Water (properties)
Water (data page)
Water dimer
Force field implementation
Molecular mechanics
Software for molecular mechanics modeling

References
Dyer KM; Perkyns JS; Stell G; Pettitt BM. Site-Renormalized molecular fluid theory: on the utility of a two-site model of water. Mol. Phys. 2009, 107, 423-431.
Jorgensen, W. L. Quantum and statistical mechanical studies of liquids. 10. Transferable intermolecular potential functions for water, alcohols, and ethers. Application to liquid water. J. Am. Chem. Soc. 1981, 103, 335-340.
H.J.C. Berendsen, J.P.M. Postma, W.F. van Gunsteren, and J. Hermans, In Intermolecular Forces, edited by B. Pullman (Reidel, Dordrecht, 1981), p. 331.
Jorgensen, W. L.; Chandrasekhar, J.; Madura, J. D.; Impey, R. W.; Klein, M. L. Comparison of simple potential functions for simulating liquid water. J. Chem. Phys. 1983, 79, 926-935.
H. J. C. Berendsen, J. R. Grigera, and T. P. Straatsma. The Missing Term in Effective Pair Potentials. J. Phys. Chem. 1987, 91, 6269-6271.
MacKerell, A. D., Jr.; Bashford, D.; Bellott, R. L.; Dunbrack, R. L., Jr.; Evanseck, J. D.; Field, M. J.; Fischer, S.; Gao, J.; Guo, H.; Ha, S.; Joseph-McCarthy, D.; Kuchnir, L.; Kuczera, K.; Lau, F. T. K.; Mattos, C.; Michnick, S.; Ngo, T.; Nguyen, D. T.; Prodhom, B.; Reiher, W. E., III; Roux, B.; Schlenkrich, M.; Smith, J. C.; Stote, R.; Straub, J.; Watanabe, M.; Wiorkiewicz-Kuczera, J.; Yin, D.; Karplus, M. All-Atom Empirical Potential for Molecular Modeling and Dynamics Studies of Proteins. J. Phys. Chem. 1998, 102, 3586-3616.
Bernal, J. D.; Fowler, R.H. J. Chem. Phys. 1933, 1, 515.
Jorgensen, W. L. Revised TIPS for simulations of liquid water and aqueous solutions. J. Chem. Phys. 1982, 77, 4156-4163.
H. W. Horn, W. C. Swope, J. W. Pitera, J. D. Madura, T. J. Dick, G. L. Hura, and T. Head-Gordon. Development of an improved four-site water model for biomolecular simulations: TIP4P-Ew. J. Chem. Phys. 2004, 120, 9665-9678.
J. L. F. Abascal, E. Sanz, R. García Fernández, and C. Vega. A potential model for the study of ices and amorphous water: TIP4P/Ice. J. Chem. Phys. 2005, 122, 234511.
J. L. F. Abascal and C. Vega. A general purpose model for the condensed phases of water: TIP4P/2005. J. Chem. Phys. 2005, 123, 234505.
F.H. Stillinger, A. Rahman, Improved simulation of liquid water by molecular dynamics. J. Chem. Phys. 1974, 60, 1545-1557.
Mahoney, M. W.; Jorgensen, W. L. A five-site model liquid water and the reproduction of the density anomaly by rigid, non-polarizable models. J. Chem. Phys. 2000, 112, 8910-8922.
Rick, S. W. A reoptimization of the five-site water potential (TIP5P) for use with Ewald sums. J. Chem. Phys. 2004, 120, 6085-6093.
H. Nada, J.P.J.M. van der Eerden, J. Chem. Phys. 2003, 118, 7401.
Abascal et al.
P. Florova, P. Sklenovsky, P. Banas, M. Otyepka. J. Chem. Theory Comput. 2010, 6, 3569–3579.
K. A. T. Silverstein, A. D. J. Haymet, and K. A. Dill. A Simple Model of Water and the Hydrophobic Effect. J. Am. Chem. Soc. 1998, 120, 3166-3175.
S. Izvekov, G. A. Voth. Multiscale coarse graining of liquid-state systems. J. Chem. Phys. 2005, 123, 134105.

Retrieved from "https://en.formulasearchengine.com/index.php?title=Water_model&oldid=251963"
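The pair potential and the distance counting described in the article can be made concrete with a short sketch. The function below follows the E_ab expression for a rigid 3-site model; the charges in the demo are the TIP3P values from the table, the geometry is an arbitrary configuration, and the Lennard-Jones A and B passed in are placeholder magnitudes only, not the published TIP3P parameters.

```python
import numpy as np

K_C = 332.1  # electrostatic constant in the units used in the article (Å·kcal/mol)

def pair_energy_3site(sites_a, q_a, sites_b, q_b, A, B, o=0):
    """E_ab for two rigid 3-site waters: Coulomb over all 3x3 site pairs
    plus a single O-O Lennard-Jones term (oxygen is site index o)."""
    e = 0.0
    for ra, qa in zip(sites_a, q_a):
        for rb, qb in zip(sites_b, q_b):
            e += K_C * qa * qb / np.linalg.norm(ra - rb)
    r_oo = np.linalg.norm(sites_a[o] - sites_b[o])
    return e + A / r_oo**12 - B / r_oo**6

# TIP3P-like geometry (r_OH = 0.9572 Å, HOH = 104.52 deg) and charges from the table
ang = np.radians(104.52)
mol = np.array([[0.0, 0.0, 0.0],                                      # O
                [0.9572, 0.0, 0.0],                                   # H
                [0.9572 * np.cos(ang), 0.9572 * np.sin(ang), 0.0]])   # H
q = np.array([-0.834, 0.417, 0.417])
mol_b = mol + np.array([3.0, 0.0, 0.0])      # second molecule, shifted by 3 Å

# A and B below are placeholder magnitudes, NOT the published TIP3P values
print(pair_energy_3site(mol, q, mol_b, q, A=6.0e5, B=6.0e2))
# 3 x 3 = 9 distances enter this call, matching the count quoted for 3-site models
```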
Exponential decay of Lebesgue numbers October 2012, 32(10): 3787-3800. doi: 10.3934/dcds.2012.32.3787 The existence of uniform attractors for 3D Brinkman-Forchheimer equations Yuncheng You 1, , Caidi Zhao 2, and Shengfan Zhou 3, Department of Applied Mathematics, Shanghai Normal University, Shanghai 200234, China Department of Mathematics and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035, China Department of Mathematics, Zhejiang Normal University, Jinhua, 321004, China Received April 2011 Revised April 2012 Published May 2012 The longtime dynamics of the three dimensional (3D) Brinkman-Forchheimer equations with time-dependent forcing term is investigated. It is proved that there exists a uniform attractor for this nonautonomous 3D Brinkman-Forchheimer equations in the space $\mathbb{H}^1(\Omega)$. When the Darcy coefficient $\alpha$ is properly large and $L^2_b$-norm of the forcing term is properly small, it is shown that there exists a unique bounded and asymptotically stable solution with interesting corollaries. Keywords: uniform attractor., Brinkman-Forchheimer equation, asymptotic dynamics. Mathematics Subject Classification: Primary: 35B40, 35B41, 35Q35; Secondary: 46E25, 20C2. Citation: Yuncheng You, Caidi Zhao, Shengfan Zhou. The existence of uniform attractors for 3D Brinkman-Forchheimer equations. Discrete & Continuous Dynamical Systems, 2012, 32 (10) : 3787-3800. doi: 10.3934/dcds.2012.32.3787 A. V. Babin and M. I. Vishik, "Attractors of Evolution Equations," Translated and revised from the 1989 Russian original by Babin, Studies in Mathematics and its Applications, 25, North-Holland Publishing Co., Amsterdam, 1992. Google Scholar O. Çelebi, V. Kalantarov and D. Uğurlu, On continuous dependence on coefficients of the Brinkman-Forchheimer equations, Applied Mathematics Letters, 19 (2006), 801-807. doi: 10.1016/j.aml.2005.11.002. Google Scholar V. V. Chepyzhov and M. I. Vishik, "Attractors for Equations of Mathematical Physics," AMS Colloquium Publications, 49, AMS, Providence, RI, 2002. Google Scholar M. Firdaouss, J.-L. Guermond and P. Le Quéré, Nonlinear corrections to Darcy's law at low Reynolds numbers, Journal of Fluid Mechanics, 343 (1997), 331-350. doi: 10.1017/S0022112097005843. Google Scholar F. Franchi and B. Straughan, Continuous dependence and decay for the Forchheimer equations, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci., 459 (2003), 3195-3202. doi: 10.1098/rspa.2003.1169. Google Scholar T. Giorgi, Derivation of the Forchheimer law via matched asymptotic expansions, Transport in Porous Media, 29 (1997), 191-206. doi: 10.1023/A:1006533931383. Google Scholar N. Ju, Existence of global attractor for the three-dimensional modified Navier-Stokes equations, Nonlinearity, 14 (2001), 777-786. doi: 10.1088/0951-7715/14/4/306. Google Scholar V. K. Kalantarov and S. Zelik, Smooth attractor for the Brinkman-Forchheimer equations with fast growing nonlinearities, preprint, 2011, arXiv:1101.4070. Google Scholar S. Lu, Attractors for nonautonomous 2D Navier-Stokes equations with less regular normal forces, J. Differential Equations, 230 (2006), 196-212. doi: 10.1016/j.jde.2006.07.009. Google Scholar S. Lu, H. Wu and C. Zhong, Attractors for nonautonomous 2D Navier-Stokes equations with normal external forces, Disc. Cont. Dyn. Syst., 13 (2005), 701-719. doi: 10.3934/dcds.2005.13.701. Google Scholar Y. Ouyang and L. Yan, A note on the existence of a global attractor for the Brinkman-Forchheimer equations, Nonlinear Analysis, 70 (2009), 2054-2059. 
doi: 10.1016/j.na.2008.02.121. Google Scholar L. E. Payne, J. C. Song and B. Straugham, Continuous dependence and convergence results for Brinkman and Forchheimer models with variable viscosity, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci., 455 (1999), 2173-2190. doi: 10.1098/rspa.1999.0398. Google Scholar L. E. Payne and B. Straugham, Convergence and continuous dependence for the Brinkman-Forchheimer equations, Studies in Applied Mathematics, 102 (1999), 419-439. doi: 10.1111/1467-9590.00116. Google Scholar R. Rosa, The global attractor for the 2D Navier-Stokes flow on some unbounded domains, Nonlinear Analysis, 32 (1998), 71-85. doi: 10.1016/S0362-546X(97)00453-7. Google Scholar G. R. Sell and Y. You, "Dynamics of Evolutionary Equations," Applied Mathematical Sciences, 143, Springer-Verlag, New York, 2002. Google Scholar A. Shenoy, Non-Newtonian fluid heat transfer in porous media, Adv. Heat transfer, 24 (1994), 101-190. doi: 10.1016/S0065-2717(08)70233-8. Google Scholar B. Straughan, "Stability and Wave Motion in Porous Media," Applied Mathematical Sciences, 165, Springer, New York, 2008. Google Scholar D. Ugurlu, On the existence of a global attractor for the Brinkman-Forchheimer equations, Nonlinear Analysis, 68 (2008), 1986-1992. doi: 10.1016/j.na.2007.01.025. Google Scholar S. Whitaker, The Forchheimer equation: A theoretical development, Transport in Porous Media, 25 (1996), 27-62. doi: 10.1007/BF00141261. Google Scholar B. Wang and S. Lin, Existence of global attractors for the three-dimensional Brinkman-Forchheimer equation, Math. Meth. Appl. Sci., 31 (2008), 1479-1495. doi: 10.1002/mma.985. Google Scholar C. Zhao and S. Zhou, L$^2$-compact uniform attractors for a non-autonomous incompressible non-Newtonian fluid with locally uniformly integrable external forces in distribution space, J. Math. Phys., 48 (2007), 12 pp. doi: 10.1063/1.2709845. Google Scholar C. Zhao and S. Zhou, Pullback attractors for a non-autonomous incompressible non-Newtonian fluid, J. Differential Equations, 238 (2007), 394-425. doi: 10.1016/j.jde.2007.04.001. Google Scholar Wenjing Liu, Rong Yang, Xin-Guang Yang. Dynamics of a 3D Brinkman-Forchheimer equation with infinite delay. Communications on Pure & Applied Analysis, 2021, 20 (5) : 1907-1930. doi: 10.3934/cpaa.2021052 Xin-Guang Yang, Lu Li, Xingjie Yan, Ling Ding. The structure and stability of pullback attractors for 3D Brinkman-Forchheimer equation with delay. Electronic Research Archive, 2020, 28 (4) : 1395-1418. doi: 10.3934/era.2020074 Kush Kinra, Manil T. Mohan. Convergence of random attractors towards deterministic singleton attractor for 2D and 3D convective Brinkman-Forchheimer equations. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021061 Varga K. Kalantarov, Sergey Zelik. Smooth attractors for the Brinkman-Forchheimer equations with fast growing nonlinearities. Communications on Pure & Applied Analysis, 2012, 11 (5) : 2037-2054. doi: 10.3934/cpaa.2012.11.2037 Manil T. Mohan. Optimal control problems governed by two dimensional convective Brinkman-Forchheimer equations. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021020 Qiangheng Zhang, Yangrong Li. Regular attractors of asymptotically autonomous stochastic 3D Brinkman-Forchheimer equations with delays. Communications on Pure & Applied Analysis, 2021, 20 (10) : 3515-3537. doi: 10.3934/cpaa.2021117 Timir Karmakar, Meraj Alam, G. P. Raja Sekhar. Analysis of Brinkman-Forchheimer extended Darcy's model in a fluid saturated anisotropic porous channel. 
Anomaly detection in quasi-periodic energy consumption data series: a comparison of algorithms
Proceedings of the Energy Informatics.Academy Conference 2022 (EI.A 2022)
Niccolò Zangrando1, Piero Fraternali1, Marco Petri1, Nicolò Oreste Pinciroli Vago1 & Sergio Luis Herrera González1
The diffusion of domotics solutions and of smart appliances and meters enables the monitoring of energy consumption at a very fine level and the development of forecasting and diagnostic applications. Anomaly detection (AD) in energy consumption data streams helps identify data points or intervals in which the behavior of an appliance deviates from normality and may prevent energy losses and breakdowns. Many statistical and learning approaches have been applied to the task, but a systematic comparison of their performance on data sets with different characteristics is still needed. This paper focuses on anomaly detection in quasi-periodic energy consumption data series and contrasts 12 statistical and machine learning algorithms tested in 144 different configurations on 3 data sets containing the power consumption signals of fridges. The assessment also evaluates the impact of the length of the series used for training and of the size of the sliding window employed to detect the anomalies. The generalization ability of the top five methods is also evaluated by applying them to an appliance different from that used for training. The results show that classical machine learning methods (Isolation Forest, One-Class SVM and Local Outlier Factor) outperform the best neural methods (GRU/LSTM autoencoder and multistep methods) and generalize better when applied to detect the anomalies of an appliance different from the one used for training.
Appliance-level energy consumption monitoring is a core component of the control system of smart buildings (Shah et al. 2019; Shaikh et al. 2014). The consumption data can be either directly collected with such devices as smart plugs, or inferred with non-intrusive load monitoring (NILM) algorithms able to break down the household aggregate consumption signal into the contributions of individual appliances (Azizi et al. 2021). The analysis of energy consumption data series enables forecasting and diagnostic applications, such as load prediction (Amasyali and El-Gohary 2018), anomaly detection (AD) (Fan et al. 2018) and predictive maintenance (Cheng et al. 2020). AD in temporal data series is the task of identifying data points or intervals in which the time series deviates from normality. AD finds application in different fields such as healthcare, where it applies to the analysis of clinical images (Schlegl et al. 2019) and of ECG data (Chauhan and Vig 2015), cybersecurity, where it is used for malware identification (Sanz et al. 2014), manufacturing, where it helps monitor machines and prevent breakdowns (Kharitonov et al. 2022), and in the utility industry, where it supports the early identification of critical events such as appliance malfunctioning (Mishra et al. 2020) and water leakage (Seyoum et al. 2017; Muniz and Gomes 2022). In the energy field, AD may be combined with energy load forecasting to improve accuracy (Koukaras et al. 2021), or integrated as a component for detecting non-nominal energy fluctuations to enhance decision making in energy transfer between microgrids (An interdisciplinary 2021).
Energy consumption time series can be collected from home appliances and building systems with complex periodic or quasi-periodic behavior, such as coolers, water heaters and fridges, which present specific challenges when performing anomaly detection. Machine learning and neural models trained on normal data may overfit with respect to the length of the period. This phenomenon makes the model sensitive even to small variations of the cycle duration, which can happen during normal functioning (Liu et al. 2020). As a consequence, the detector may emit a high number of false positive alerts when such small variations occur and may also degrade its performance noticeably when used to detect anomalies of an appliance of the same type but with a different cycle duration. The literature on AD in temporal data series still lacks a systematic comparison of algorithms belonging to different families on quasi-periodic data sets. Therefore the development of an AD application in such a scenario still has to confront design decisions such as the choice of the most effective algorithm, the minimum duration of the time series to use for training, the minimum size of the signal prediction/reconstruction window needed to identify the anomalous behavior, and the portability of the chosen algorithm from one appliance to another one with "similar" behavior. This paper tries to fill the gap in the literature about AD in quasi-periodic time series by systematically comparing the performances of 12 algorithms representative of different families of approaches. The experiments were performed on 3 distinct data sets regarding the power consumption of fridges. The aim of the experiments is to address the following questions:
Q1 How do the selected algorithms compare in the AD task on quasi-periodic time series under multiple performance metrics?
Q2 For the algorithms that require training, what is the relationship between the length of the training series and the performances?
Q3 For the algorithms that exploit a window-based approach for the prediction, what is the relationship between the length of the window and the performances?
Q4 What is the generalization capability of the methods? How does performance degrade when a method trained on an appliance is tested on the time series produced by a distinct appliance of the same type?
The essential findings can be summarized as follows:
The classical ML algorithms Isolation Forest (ISOF), One-Class SVM (OC-SVM), and Local Outlier Factor (LOF) outperform the best neural models (GRU/LSTM autoencoder and multistep methods).
Two weeks of training data are sufficient for most methods, with the multistep approaches attaining a modest improvement if one month of data is used.
The length of the prediction/reconstruction window has a different impact on neural and non-neural methods.
ISOF and OC SVM are less dependent on the training set with respect to the neural models, which show a noticeable performance decay when tested on an appliance different from the one used for training.
The top result of all the experiments is attained by ISOF on the Fridge3 time series, trained with a sub-sequence of length equal to one month and with a window size of 2 \(\times\) period: Precision = 0.947, Recall = 0.965, \(\hbox {F}_{1}\) score = 0.956.
The above-mentioned findings can help better understand the requirements and performances of AD algorithms on quasi-periodic data series so as to design more effective household energy consumption applications, e.g., by equipping the mobile apps that are nowadays bundled with smart plug products with functionalities for consumption monitoring, energy saving recommendations and alerting of potential appliance malfunctioning. The rest of the article is organised as follows: Section "Related work" overviews the state of the art in anomaly detection. Section "Experimental settings" describes the experimental configuration, including the description of the dataset and of the evaluated algorithms. Section "Experimental results" discusses the results of the performed experiments. Section "Qualitative analysis of results" discusses qualitatively a few examples of the predictions made by the reviewed methods. Finally, Section "Conclusions" draws the conclusions and illustrates our future work.
Related work
Anomaly detection in temporal data series exploits data collected with a broad spectrum of sensors in diverse fields, such as weather monitoring, natural resources distribution and consumption (e.g., water and natural gas), network traffic surveillance, and electrical load measurement (Firth et al. 2017; A platform for Open 2022; Makonin et al. 2016; Shakibaei 2020). As an example, the work in Makonin et al. (2016) discusses the use of residential home smart meters for data collection and highlights how such series often exhibit anomalous behaviors. Raw data must be pre-processed to get ready for further analysis. Besides the usual operations of data cleaning and validation, a prominent task is data annotation, which associates data points or intervals with the specifications of significant events, such as change points and anomalies. For example, Rimor (Rashid et al. 2018) is a time-series data annotator supporting the labelling of data with anomaly tags, which can be used as ground truth for training and evaluating predictive models. AD can be conducted in both univariate (Braei and Wagner 2020) and multivariate time series (Su et al. 2019; Li et al. 2018; Blázquez-García et al. 2021). In the case of multivariate time series, exploiting variable correlation may be necessary for reducing the number of parameters needed to model the problem (Pena and Poncela 2006). Examples of multivariate time series dimensionality reduction techniques are principal components analysis (Cook et al. 2019; Pena and Poncela 2006), canonical correlation analysis (Box and Tiao 1977), and factor modelling (Pena and Box 1987). AD approaches can be classified into two main families (Cook et al. 2019): non-regressive and regressive. Non-regressive approaches rely on the fundamental statistical quantities computed on the time series (e.g., mean and variance) and combine them with fixed thresholds, but their effectiveness is limited (Cook et al. 2019). The authors of Kao and Jiang (2019) proposed a statistical AD framework using the Dickey-Fuller test, the Fourier transform, and the Pearson correlation coefficient to analyze periodic time series. Performance evaluation on five NAB datasets (Ahmad et al. 2017) showed that the proposed approach performs well on the NAB Jumps periodic data set and outperforms the models it was compared to. Other types of non-regressive techniques are ML methods for time series analysis. In Oehmcke et al.
(2015) the Local Outlier Factor (LOF) method was employed to identify anomalous events in the marine domain and attained 83.4% precision. The Isolation Forest (ISOF) algorithm has been applied to streaming data in Ding and Fei (2013), achieving an AUC score of 0.98 on one of the test datasets. In Zhang et al. (2008) the One-Class Support Vector Machine (OC-SVM) has been implemented for the identification of network anomalies and, for the test set, the identified outliers perfectly match the human visual detection result. Regressive approaches compute a model of the time series generation process. In the case of AD, an autoregression model is used to forecast the variable of interest from its past values. Autoregressive models include methods based on Autoregressive Moving Average (ARMA) (Pincombe 2005; Kadri et al. 2016; Kozitsin et al. 2021) and on Neural Networks, such as Autoencoders (AE) (Yin et al. 2020; Li et al. 2020) and Recurrent Neural Networks (RNNs) (Canizo et al. 2019; Malhotra et al. 2015). Forecasting-based AD approaches are divided into single-step and multi-step methods depending on the number of predicted points. The former strategy is preferable for short-term forecasting (i.e., minutes, hours, and days) and the latter for long-term data series analysis. In the electric load analysis domain, the work in Masum et al. (2018) studies the problem of time series forecasting for electric load measurements and shows that Long Short-Term Memory (LSTM), a deep learning model, outperforms AutoRegressive Integrated Moving Average (ARIMA), a statistical model, on three data sets obtained from the Open Power System Data on electric load in Great Britain, Poland, and Italy (A platform for Open 2022). Zhang et al. (2019) shows the importance of a Fast Fourier Transform (FFT) based periodicity pre-processor to extract the period in smart grid time series. Pereira et al. (2018) proposes the use of Variational Autoencoders (VAE) for the unsupervised anomaly detection of solar energy generation time series, and the results show that the trained model is able to detect anomalous patterns by using the probabilistic reconstruction metrics as anomaly scores. Himeur et al. (2021) surveys several Artificial Intelligence methods for anomaly detection in buildings' energy consumption, identifying several factors (e.g., occupancy and outdoor temperatures) that influence time series behavior. In the specific field of periodic data series analysis, Zhang et al. (2020) employs a periodicity pre-processor to find the time series period and segment the data into windows. Then it exploits a combination of an RNN and a CNN to detect anomalies, achieving an \(\hbox {F}_{1}\) score near 0.9 on all the test datasets. Zhang et al. (2019) also uses a periodicity pre-processor, based on the Fourier transform, and maps multiple periods onto a single cycle to identify deviations across subsequent periods. Pereira et al. (2018) uses Bi-LSTM to detect anomalies and proposes the use of attention maps to explain the results. Capozzoli et al. (2018) encodes periodic time series using letters as a data size reduction technique. The classification process led to robust results with a global accuracy that ranged between 80% and 90%. These works show the advantages of pre-processing to exploit the data periodicity and of dimensionality reduction techniques and discuss results interpretability.
The proliferation of time series analysis methods and of AD-specific approaches has spawned a stream of research focused on comparing the performance of alternative techniques. For example, the work in Masum et al. (2018) compares the multi-step forecasting performance of ARIMA and LSTM-based RNN models and shows that the LSTM model outperforms the ARIMA model for multi-step electric load forecasting. Our preliminary work (Zangrando et al. 2022) compares CNN-powered and RNN-powered AD methods with One-Class Support Vector Machines and Isolation Forest techniques on one quasi-periodic data set, using standard metrics (precision, recall, \(\hbox {F}_{1}\) score). In this paper we deepen the analysis, assessing performances under multiple metrics, investigating the impact of the training sub-sequence duration and of the analysis window size, and contrasting the generalization capacity of the reviewed approaches.
Experimental settings
The experiments exploit a fridge energy consumption data set collected using smart plugs. The energy consumption data have been collected in Greek residential households using the BlitzWolf BW-SHP2 smart plugs, which allow exporting the time series through an API. The data collection system, the assessed algorithms and the evaluation framework were all implemented in Python. The time series in the data set record the active power consumption of three fridges for over 2 months, with 1-minute data resolution. The time series have been divided into sub-sequences for training, validation, and testing of the methods. Table 1 summarizes the data split.
Table 1 The dataset collection period and the train-val and test split
When working in normal conditions, the energy consumption curve of a fridge displays a cyclic behavior alternating between a high consumption state (ON) and a low consumption state (OFF). Figure 1 shows an example of the consumption data of one appliance.
Example of the fridge energy consumption data series. The time series is formed by subsequent ON-OFF cycles and is quasi-periodical
Data set analysis
Periodicity analysis
Normal fridge consumption shows a cyclic behavior. Periodicity analysis aims at detecting the mean period corresponding to an ON-OFF cycle and possibly other longer patterns (e.g., seasonal effects). It is a preliminary step before the application of AD and requires a non-anomalous sub-series, which can be created by manually removing anomalies from the training sub-sequence. The Fast Fourier Transform (FFT) is applied on the anomaly-free sub-sequence to map the data into the frequency domain and the periodicity is defined as the inverse of the frequency corresponding to the highest power in the FFT, as proposed in Kao and Jiang (2019); a minimal sketch of this estimation is given at the end of this subsection. Table 2 summarizes the periodicity, expressed in minutes, of the three data sets. The periods range from 45 minutes to 1 h 40 minutes. No seasonal effect is found because the train set refers to only one month. Figure 2 shows the power spectrum computed for one of the three appliances.
Table 2 The periods determined for the energy consumption time series, expressed in minutes
The power spectrum computed by the periodicity pre-processor (right) on the fridge energy consumption time series (left). The period detected for an ON-OFF cycle is about 80 minutes for the analyzed data set
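The following minimal sketch, using NumPy only, illustrates this FFT-based period estimation on an anomaly-free, evenly sampled consumption series; variable names (e.g., power, sampling_period_min) are illustrative and not taken from the paper's code.

```python
import numpy as np

def estimate_period_minutes(power: np.ndarray, sampling_period_min: float = 1.0) -> float:
    """Estimate the dominant period (in minutes) of an anomaly-free series via the FFT.

    The period is taken as the inverse of the frequency with the highest
    spectral power, excluding the zero-frequency (DC) component.
    """
    x = np.asarray(power, dtype=float)
    x = x - x.mean()                          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x)) ** 2    # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=sampling_period_min)  # cycles per minute
    k = 1 + np.argmax(spectrum[1:])           # skip the zero-frequency bin
    return 1.0 / freqs[k]                     # period in minutes

# Example usage on a synthetic quasi-periodic signal with an 80-minute cycle
t = np.arange(0, 7 * 24 * 60)                 # one week at 1-minute resolution
synthetic = 50 + 40 * (np.sin(2 * np.pi * t / 80) > 0) + np.random.normal(0, 2, t.size)
print(round(estimate_period_minutes(synthetic)))  # ~80
```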
Ground truth annotation
For training and testing purposes, the energy consumption time series have been annotated with ground truth (GT) metadata to specify the points that deviate from normality. Three independent annotators have labeled the data points with a Boolean tag (normal/anomalous) and with a categorical label denoting the type of the anomaly, using the interface shown in Figure 3.
The interface of the GT anomaly annotator at work on the fridge time series. The user can specify the anomalies and add meta-data to them. The user has annotated the currently selected GT anomaly, shown in red, with the Continuous ON state label
Anomaly classes and their distribution
The anomalies have been distinguished into the following categories:
Continuous OFF state, when the appliance is in the low consumption state for a long time,
Continuous ON state, when the appliance is in the consumption state for an abnormally long time,
Spike, when the appliance has an abnormal consumption peak possibly preceded by a ramp and followed by a decay period,
Spike + Continuous, when the appliance has a consumption peak followed by a prolonged ON state,
Other, when the anomaly does not follow a well-defined pattern.
Figure 4 shows the distribution of the anomaly categories in the data set of the three fridges. The plots highlight the different anomalous behavior of the appliances. Fridge2 is mainly subject to continuous ON cycles. Fridge1 shows a similar pattern, but the prolonged ON states are preceded by an abrupt increase in the consumption. Fridge3 is subject to a more detectable anomalous behavior because almost 95% of the anomalies are of spike type, which are easier to detect also visually.
The anomaly type distribution on the three fridge energy consumption data series
GT anomaly duration distribution
Figure 5 shows the GT anomaly duration distribution on the data series of the three fridges. The distributions of Fridge1 and Fridge2 are centered close to the time series period, which suggests the presence of anomalies shorter than an ON-OFF cycle. The distribution of Fridge3 is centered around values higher than the mean ON-OFF cycle duration, which is typical of the transient behavior caused by high consumption spikes.
The anomaly duration distribution on the fridge energy consumption data sets. The distributions of Fridge1 and Fridge2 are centered close to the time series period, which suggests the presence of anomalies shorter than an ON-OFF cycle, whereas the distribution of Fridge3 is centered around values higher than the mean ON-OFF cycle duration
Compared algorithms
Algorithm list and definitions
The algorithm selection considered the most common methods used in the reviewed studies and their nature (statistical, regressive, neural) so as to achieve a balanced representation of the different approaches. Basic Statistics is an extension of the method presented in Kao and Jiang (2019) for periodic series. The first step analyzes the anomaly-free training data series to determine the periodicity. Then, the anomaly-free train set is divided into non-overlapping windows of the same size as the period and the Pearson product-moment correlation coefficient is computed on all the pairs of contiguous windows to check whether the time series is periodic within the two windows. If it is periodic, the ratio $R_{std} = \frac{|Std_{current} - Std_{previous}|}{Std_{previous}}$ is computed. An anomaly occurs if $R_{std}$ exceeds a threshold $\tau$, defined as follows. $R_{std}$ is calculated for each window pair in the train set and the maximum value ($R_{max}$) allowed in a non-anomalous time series is found.
Then the threshold \(\tau\) is determined on the validation set by performing a grid search. Given a set of possible thresholds \(\tau _\alpha = R_{max}(1+\alpha )\), with \(\alpha\) ranging from 0 to 10 with step 0.1, the threshold \(\tau\) is defined as the value corresponding to the best \(F_1\) score obtained by applying the anomaly definition rule on the validation set. Finally, the same rule is applied to the test set using the computed threshold value.
AutoRegressive (AR) (Hyndman and Athanasopoulos 2021) is an autoregression model exploiting past data to predict current data. The prediction model is defined as:
$$\begin{aligned} y_t = c + \sum _{i=1}^{p} \phi _i y_{t-i} + \varepsilon _t \end{aligned}$$
where \(c, \phi _i\) are the model parameters and \(\varepsilon _t\) is a white noise term. Anomalies are computed from the prediction error by thresholding (a toy sketch of this prediction-error scoring is given after the training procedure description below).
AutoRegressive Integrated Moving Average (ARIMA) (Hyndman and Athanasopoulos 2021; Masum et al. 2018) is a model exploiting past data, differencing of the original time series and a linear combination of white noise terms. A model ARIMA(p, d, q) is defined as:
$$\begin{aligned} y^\prime _t=c + \sum _{i=1}^{p} \phi _i y_{t-i}^{\prime } + \sum _{j=1}^{q} \theta _j \varepsilon _{t-j} + \varepsilon _t \end{aligned}$$
where \(y^\prime _t\) is the differenced time series, \(\varepsilon _t\) is a white noise term and \(c, \phi _i, \theta _j\) are the model parameters. Anomalous points are defined as in AR.
Local Outlier Factor (LOF) (Breunig et al. 2000) is a density-based algorithm that identifies local outliers by comparing the local density of a point with that of its nearest neighbors.
One-Class SVM (OC SVM) (Schölkopf et al. 1999) is the use of support vector machines (SVM) for novelty detection.
Isolation Forest (ISOF) (Liu et al. 2008) is an ensemble method that creates different binary trees for isolating anomalous data points.
Gated Recurrent Unit (GRU) (Chung et al. 2014) is a class of Recurrent Neural Networks (RNNs) that exploit an update gate and a reset gate to decide what information should be passed to the output.
Gated Recurrent Unit multisteps (GRU-MS) is based on GRU and is used to predict multiple consecutive data points in the future.
Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber 1997) is another class of RNNs exploiting a cell with an input gate, an output gate and a forget gate. Both GRU and LSTM are designed to take advantage of the past context of the data and to avoid the gradient vanishing problem of RNNs.
Long Short-Term Memory multisteps (LSTM-MS) is based on LSTM and is used to forecast several consecutive data points.
GRU-Autoencoder (GRU-AE) (Zhang et al. 2019) is a hybrid model using an autoencoder and a GRU network.
LSTM-Autoencoder (LSTM-AE) (Cho et al. 2014) is another hybrid model coupling an autoencoder and an LSTM network.
Training procedure and parameter settings
The hyperparameters of the ISOF, OC SVM, LOF, and ARIMA models are set with Bayesian search employing the hold-out set method. For each configuration, the chosen hyperparameters are used to fit the model and the performances are evaluated on the validation set. LOF, OC SVM and ISOF are assessed using the maximum \(\hbox {F}_{1}\) score, whereas the ARIMA models are assessed using the mean squared error (MSE) of the predictions. The hyperparameters yielding the maximum \(\hbox {F}_{1}\) or the lowest MSE are selected. ARIMA is trained on anomaly-free data to learn normal patterns as done in Yaacob et al. (2010).
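As an illustration of the prediction-error criterion used by the AR-style detectors above, the following self-contained sketch fits an AR(p) model with ordinary least squares on an anomaly-free training series and flags test points whose absolute prediction error exceeds a threshold chosen from a validation quantile. It is a toy version under stated assumptions, not the paper's implementation (which tunes hyperparameters with Bayesian search).

```python
import numpy as np

def fit_ar(train: np.ndarray, p: int) -> np.ndarray:
    """Least-squares fit of y_t = c + sum_i phi_i * y_{t-i}; returns [c, phi_1..phi_p]."""
    X = np.column_stack([np.ones(len(train) - p)] +
                        [train[p - i:len(train) - i] for i in range(1, p + 1)])
    y = train[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def ar_errors(series: np.ndarray, coef: np.ndarray, p: int) -> np.ndarray:
    """Absolute one-step-ahead prediction errors (the first p points get error 0)."""
    X = np.column_stack([np.ones(len(series) - p)] +
                        [series[p - i:len(series) - i] for i in range(1, p + 1)])
    pred = X @ coef
    return np.concatenate([np.zeros(p), np.abs(series[p:] - pred)])

# Hypothetical usage: the threshold is a high quantile of the errors on anomaly-free validation data
# coef = fit_ar(train, p=80)                 # p close to one ON-OFF period, illustrative choice
# tau = np.quantile(ar_errors(val, coef, 80), 0.99)
# anomalous = ar_errors(test, coef, 80) > tau
```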
ISOF, LOF and OC SVM work on spatial data and thus the univariate time series is projected onto a space \({\mathbb {R}}^n\) with \(n \ge 1\) (Braei and Wagner 2020; Oehmcke et al. 2015). A window of size n is used to extract from the time series \(N-n+1\) vectors of n consecutive points, where N is the length of the time series. Then, the spatial algorithms are trained on the projected vectors. At test time, the test set is projected onto \({\mathbb {R}}^n\) and the score of each projected vector is computed. The anomaly score of a point in the time series is defined as the average of all the anomaly scores of the vectors that contain the point. For all the neural models, training is performed on anomaly-free data. Table 3 summarizes the relevant features and parameters of the compared methods; a minimal sketch of this window-based projection and scoring is given after the list of anomaly definition strategies below.
Table 3 Relevant configuration parameters of the compared methods
Anomaly definition, GT matching, and performance metrics
Anomaly definition strategies. An anomaly definition strategy specifies how the output of the anomaly detector and the data points of the time series are compared in order to identify whether a point is anomalous. AD algorithms adopt different strategies to identify abnormal points:
Confidence: an anomaly score is directly provided as output by the model.
Absolute and Squared Error (Munir et al. 2018): the anomaly score is defined as the absolute or squared error between the input and the predicted/reconstructed value.
Likelihood (Malhotra et al. 2015): each point in the time series is predicted/reconstructed l times and associated with multiple error values. The probability distribution of the errors made by predicting on normal data is used to compute the likelihood of normal behavior on the test data, which is used to derive an anomaly score.
Mahalanobis (Malhotra et al. 2016): each point in the time series is predicted/reconstructed l times. For each point, the anomaly score is calculated as the square of the Mahalanobis distance between the error vector and the Gaussian distribution fitted from the error vectors computed during validation.
Windows strategy (Keras 2022): a score vector of dimension l is associated with each point. Each element \(s_i\) of the score vector is the mean absolute or mean squared error of the i-th predicted/reconstructed window that contains the point.
A threshold \(\tau\) is then applied to the calculated score(s) for classifying the point as normal or anomalous. Table 4 shows the anomaly definition strategies of the compared methods.
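To make the window-based projection, per-point score averaging and thresholding concrete, here is a minimal scikit-learn sketch built around Isolation Forest (the paper's best-performing method). The window size, the estimator settings and the 99th-percentile threshold are illustrative assumptions, not the values used in the paper, which selects the threshold by maximizing \(\hbox {F}_{1}\) on the validation set.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def to_windows(series: np.ndarray, n: int) -> np.ndarray:
    """Project a univariate series onto R^n: N-n+1 overlapping windows of n consecutive points."""
    return np.lib.stride_tricks.sliding_window_view(series, n)

def point_scores(series: np.ndarray, model, n: int) -> np.ndarray:
    """Score each point as the average anomaly score of all the windows that contain it."""
    win_scores = -model.score_samples(to_windows(series, n))  # higher = more anomalous
    sums = np.zeros(len(series))
    counts = np.zeros(len(series))
    for i, s in enumerate(win_scores):
        sums[i:i + n] += s
        counts[i:i + n] += 1
    return sums / counts

# Hypothetical usage (train/val/test are 1-minute power readings, n = 2 x period as an example)
# n = 2 * 80
# model = IsolationForest(n_estimators=100, random_state=0).fit(to_windows(train, n))
# tau = np.quantile(point_scores(val, model, n), 0.99)   # illustrative threshold choice
# anomalous = point_scores(test, model, n) > tau
```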
Anomaly detection criteria and thresholds. The criteria are the ones adopted in order to identify an anomaly. They are strongly related to the nature of the algorithm. The anomaly identification criteria used by the compared methods are classified into:
Prediction error: prediction models identify anomalies based on the difference between the predicted value and the observed one. Anomalies are identified based on the residuals between the input and the generated data: the higher the difference, the higher the likelihood of an anomaly.
Reconstruction error: this criterion applies to all the models that aim at generating an output as close as possible to the input, such as the autoencoder-based models. As for the prediction models, the larger the residual, the higher the probability of an anomaly.
Dissimilarity: dissimilarity models classify anomalous points by comparing them with the features or with the distribution of normal points or by matching them with the clusters computed from the normal time series.
Table 4 summarizes the detection criteria used by the different algorithms.
Table 4 Anomaly detection criteria and definition strategies adopted for each algorithm
GT matching
To evaluate the predictions as true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN), a point-to-point matching strategy has been adopted: each anomalous point is compared only to the corresponding one in the input data series using the GT label.
Performance metrics
The evaluation adopts the most widely used machine learning metrics, precision, recall, and \(\hbox {F}_{1}\) score, defined as follows:
$$\begin{aligned} precision = \frac{TP}{TP + FP} \text { , } recall = \frac{TP}{TP + FN} \text { , } F_1 score = 2 * \frac{precision * recall}{precision + recall} \end{aligned}$$
A minimal sketch of this point-to-point computation is given after the overall comparison (Figure 6) below.
Experimental results
In this section we summarize the responses to the four questions introduced in the Introduction. For space reasons we condense the results of the 144 (12 methods \(\times\) 3 training periods \(\times\) 4 window sizes) experiments on 3 data sets and discuss only the essential findings. The complete list of results is published at the address: https://github.com/herrera-sergio/AD-periodic-TS.
Q1: comparative performances
Figure 6 shows the comparison of the methods over all the data sets and across all the training duration values and sizes of the sliding window. The ISOF method consistently achieves the best \(\hbox {F}_{1}\) score, followed by OC SVM and LOF. The AE and MS neural methods have comparable performances. The multi-step approaches exhibit a more consistent behavior, yielding smaller values of the standard deviation, and the GRU-AE method performs slightly worse than the other approaches. The neural methods that predict only one point in the future (LSTM and GRU) have low performance and a rather inconsistent behavior. This is expected due to the high sampling frequency, which makes one-step prediction ineffective for detecting anomalies. Of the remaining non-neural methods, ARIMA and Basic Statistics are positioned at the low end of the performance range. The top result of all the experiments is attained by ISOF on the Fridge3 time series, trained with a sub-sequence of length equal to one month and with a window size of 2 \(\times\) period: Precision = 0.947, Recall = 0.965, \(\hbox {F}_{1}\) score = 0.956. A special case is that of AR. The training of the method converges only for the shortest duration of the training sub-sequence (a half period). However, the trained model delivers on average a good \(\hbox {F}_{1}\) score. It can be observed that AR grossly fails in the accuracy of the predicted values, but nonetheless the error of the points that belong to a normal sub-sequence is very different from the error of the points that lie within an anomalous sub-sequence, which results in good AD performances.
Comparison of the performances of all the algorithms on all the appliances and across all the training duration periods and window sizes. The methods are ordered in descending order of the median values of the \(\hbox {F}_{1}\) score
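The precision, recall and \(\hbox {F}_{1}\) values reported in this section follow the point-to-point matching described above; a minimal sketch of that computation on Boolean prediction and ground-truth arrays is shown here (array and function names are illustrative, not from the paper's code).

```python
import numpy as np

def point_to_point_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Precision, recall and F1 from Boolean per-point predictions and GT labels."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: point_to_point_metrics(scores > tau, gt_labels)
```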
Figure 7 shows the performance breakdown by appliance. As expected, all methods but ARIMA and Basic Statistics perform better on the Fridge3 data set, which contains more recognizable anomalies, mostly of a single type (\(\approx 95\%\) of type spike). On the Fridge1 and Fridge2 data sets the performances follow the same ranking as in Fig. 6, with the same top-4 methods (ISOF, AR, OC SVM and LOF) and almost equivalent performances of the MS and AE methods. On the Fridge3 data set the methods that predict one step in the future (LSTM and GRU) work better. This analysis highlights that the performances of the models are affected by the considered appliance. Indeed, in Fridge1 the performances are more subject to variations, while in Fridge3 they are more consistent. Moreover, ARIMA and Basic Statistics show low performances independently of the complexity of the dataset, which suggests their inadequacy for this kind of problem. The results are in line with those of the work of Kharitonov et al. (2022), in which the authors compare the performances of alternative techniques to detect failures using manufacturing machine logs and observe that k-nearest neighbors (KNN) and LOF perform better, while autoencoders could not be considered for deployment in a real-case scenario. Similarly, Elmrabit et al. (2020) found that classical machine learning techniques outperformed deep learning for the AD task in cybersecurity datasets.
Breakdown of the performance of all the algorithms by appliance. The methods are ordered by descending median value of the \(\hbox {F}_{1}\) score
Q2: training sub-sequence duration
Figure 8 shows the variation of the \(\hbox {F}_{1}\) metrics for the 10 methods that could be trained with all the three sub-sequences (2 weeks, 3 weeks, one month). The results show that the 2-week training period is sufficient for most of the methods. Only the multisteps (MS) methods attain a very slight average performance improvement if the training period length extends to 1 month. The results on the time series of Fridge1 and Fridge2 show a similar trend. All the detailed results can be found in the mentioned project repository.
Variation of the \(\hbox {F}_{1}\) score with the duration of the training sub-sequence. The AR and ARIMA methods did not complete the training with all the periods
Q3: window length
Variation of the \(\hbox {F}_{1}\) score with the size (in periods) of the sliding window. The AR and ARIMA methods did not complete the training with all the periods
Figure 9 shows the variation of the \(\hbox {F}_{1}\) metrics with the sliding window size (half a period, one period, two and three periods), limited to the 9 methods that could be trained completely. The results show a difference in the pattern between neural and non-neural methods. With ISOF and OC SVM the \(\hbox {F}_{1}\) score decreases when the window size increases. With a value greater than half a period the methods progressively lose effectiveness: the variance increases and the \(\hbox {F}_{1}\) score decreases. This is likely the effect of the worse trade-off between the noise and the context knowledge enclosed in the window. The AE methods deliver the best \(\hbox {F}_{1}\) score when the window size equals twice the duration of the period. A similar trend is also displayed by the MS methods, with LSTM-MS showing a slight monotonic increase up to three periods. The one-step neural methods GRU and LSTM are rather insensitive to the window size, but their performance is at the lower end of the range.
The LOF approach exhibits the same trend as the AE and MS neural methods. The value at the (2 \(\times\) period) point of the neural methods shows that such a duration gives sufficient context for encoding the periodic features of the time series well and that going beyond that size is either counterproductive or yields a modest benefit. In the AE methods, the negative effect of the window size extension may also be due to the dimensionality reduction to a latent space operated by the neural architecture, which may become less effective when the dimension of the original space gets too large. The results on the time series of Fridge2 and Fridge3 show a similar trend. All the detailed results can be found in the mentioned project repository.
Q4: generalization
The generalization experiments assess the top-5 methods (ISOF, OC SVM, LOF, LSTM-AE and GRU-AE) on a dataset different from the one on which the methods have been originally trained. Each method is tested in two variants: the original version trained on the first appliance and a version in which the threshold value is fine-tuned on the validation data series of the target appliance (a small sketch of this fine-tuning step is given at the end of the qualitative analysis below). Figure 10 contrasts the \(\hbox {F}_{1}\) scores obtained by the baseline version of the algorithm, i.e., the one trained and tested on the same dataset, the \(\hbox {F}_{1}\) scores achieved by fine-tuning the threshold on the validation set of the target appliance, and the \(\hbox {F}_{1}\) scores obtained without any fine-tuning. The top performing method (ISOF) is also the one that generalizes best, even without fine-tuning the threshold. In general, ISOF and OC SVM are less dependent on the training set with respect to the neural models, which show a noticeable performance decay when tested on a different appliance. The degradation is more noticeable when the test appliance is Fridge3, which has almost all anomalies of type spike, which are absent in Fridge1 and Fridge2.
Comparison of the generalization performance of the top-5 methods. The orange bar represents the baseline \(\hbox {F}_{1}\) score (i.e., training and testing done on the same dataset), the blue bar denotes the \(\hbox {F}_{1}\) score achieved by fine tuning the threshold on the validation set of the target appliance, and the green bar shows the performances obtained using the trained algorithm without fine tuning
Qualitative analysis of results
To get a qualitative appreciation of the different behavior of the best models, Fig. 11 directly compares the anomalies detected by ISOF, OC SVM and LSTM-AE with the GT anomalies. The detected anomalies are highlighted with a color that depends on the method and the GT anomalies are circled in red. The plots in the left column show a situation in which all the three methods are able to detect more or less the same anomalous data points. The detected points match well the GT annotations. The plots in the right column show how the methods react to a change of the duration of the ON-OFF cycle (an acceleration in the displayed example, which may be caused by a different load of the fridge or by a change in the set point of the thermostat). Only the ISOF method is robust to such an occurrence. The other methods instead signal many normal points as anomalous, because they consider the entire cycle variation as an anomaly. Given that the time series of the appliances are quasi-periodic, as shown in the power spectrum of Fig. 2, the robustness with respect to small variations of the ON-OFF cycle is a very relevant benefit of the ISOF method.
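As referenced in the Q4 discussion, the threshold fine-tuning variant re-selects only the decision threshold on the target appliance's labelled validation series, keeping the trained scorer fixed. A minimal sketch of one way to do this (grid search over candidate thresholds maximizing \(\hbox {F}_{1}\)) follows; the candidate grid and helper names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fine_tune_threshold(val_scores: np.ndarray, val_gt: np.ndarray, n_candidates: int = 200) -> float:
    """Pick the threshold that maximizes F1 on the target appliance's validation series."""
    val_gt = np.asarray(val_gt, bool)
    best_tau, best_f1 = float(val_scores.max()), -1.0
    for tau in np.linspace(val_scores.min(), val_scores.max(), n_candidates):
        pred = val_scores > tau
        tp = np.sum(pred & val_gt)
        fp = np.sum(pred & ~val_gt)
        fn = np.sum(~pred & val_gt)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_tau, best_f1 = tau, f1
    return best_tau

# Hypothetical usage: scores come from a model trained on a different appliance
# tau = fine_tune_threshold(point_scores(val_B, model_A, n), gt_val_B)
# anomalous_B = point_scores(test_B, model_A, n) > tau
```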
Qualitative analysis of the predictions of three methods on Fridge1: ISOF, LSTM-AE, OC SVM. ISOF (top) is more robust to the variations of the duration of the cycles, while the others show a weakness in the identification of the anomalous points; in fact, LSTM-AE (middle) and OC SVM (bottom) label numerous normal points as anomalous
Conclusions
In this paper we have discussed the results of the experimental comparison of 12 AD methods on three quasi-periodic data series collected with smart plugs connected to three distinct fridges. The comparison has first assessed the prediction performances, measured with the \(\hbox {F}_{1}\) score metrics, which confirmed that the non-neural machine learning methods ISOF, OC SVM and LOF attain the best results, followed by the autoencoder-based and multi-step neural methods (GRU-AE, GRU-MS, LSTM-AE, LSTM-MS). In particular, the ISOF method trained with a sub-sequence of length equal to one month and with a window size of 2 \(\times\) period attained a very good result on a fridge data series containing mostly spike anomalies (Precision = 0.947, Recall = 0.965, \(\hbox {F}_{1}\) score = 0.956). Next we evaluated the impact of the duration of the sub-sequence used for training the algorithms, which shows that the 2-week training period is sufficient for most of the methods and that the AR and ARIMA algorithms did not complete the training within reasonable time with time series of longer duration. The impact of the sliding window size was also investigated. Non-neural machine learning algorithms require a shorter window (half of the period is enough), whereas neural models deliver the best performance with a larger window size (two periods in most cases). Finally, the generalization ability of the top performing methods has been assessed too. The best method (ISOF) is also the one that preserves its performances intact when applied to a different appliance, even without fine-tuning the threshold on the target appliance. Future work will further pursue the investigation of AD algorithms on quasi-periodic data series, focusing also on their runtime performance on hardware with memory and processing constraints. The objective is designing a timely, accurate and efficient system for dispatching mobile phone alerts about the potential malfunctioning of home appliances to real-world users.
All the material relative to this article is publicly available in the following repository: https://github.com/herrera-sergio/AD-periodic-TS. The datasets used for the study are private and permission for publication was not granted; they will be included in the repository if permission is granted in the future.
AE: Autoencoders; AR: Autoregressive; ARIMA: Autoregressive integrated moving average; ARMA: Autoregressive moving average; Bi-LSTM: Bidirectional long short-term memory; ECG: Electrocardiography; FFT: Fast Fourier transform; FN: False negative; FP: False positive; GRU: Gated recurrent unit; GRU-AE: Gated recurrent unit autoencoder; GRU-MS: Gated recurrent unit multisteps; GT: Ground truth; ISOF: Isolation forest; KNN: K-nearest neighbors; LOF: Local outlier factor; LSTM: Long short-term memory; LSTM-AE: Long short-term memory autoencoder; LSTM-MS: Long short-term memory multisteps; MAE: Mean absolute error; MS: Multisteps; MSE: Mean squared error; NILM: Non-intrusive load monitoring; NN: Neural network; OC SVM: One-class support vector machine; RNN: Recurrent neural network; SE: Squared error; SVM: Support vector machine; TN: True negative; TP: True positive; VAE: Variational autoencoders
A platform for Open Data of the European power system. https://open-power-system-data.org/.
Accessed 3 June 2022 Ahmad S, Lavin A, Purdy S, Agha Z (2017) Unsupervised real-time anomaly detection for streaming data. Neurocomputing 262:134–147 Amasyali K, El-Gohary NM (2018) A review of data-driven building energy consumption prediction studies. Renew Sustain Energy Rev 81:1192–1205 An interdisciplinary approach on efficient virtual microgrid to virtual microgrid energy balancing incorporating data preprocessing techniques. Computing. 2021;p. 1–42 Azizi E, Beheshti MTH, Bolouki S (2021) Appliance-level anomaly detection in nonintrusive load monitoring via power consumption-based feature analysis. IEEE Trans Consumer Electron 67(4):363–371. https://doi.org/10.1109/TCE.2021.3129356 Blázquez-García A, Conde A, Mori U, Lozano JA (2021) A review on outlier/anomaly detection in time series data. ACM Comput Surveys (CSUR) 54(3):1–33 Box GE, Tiao GC (1977) A canonical analysis of multiple time series. Biometrika 64(2):355–365 Braei M, Wagner S (2020) Anomaly detection in univariate time-series: a survey on the state-of-the-art. arXiv preprint arXiv:2004.00433 Breunig MM, Kriegel HP, Ng RT, Sander J (2000) LOF: Identifying Density-Based Local Outliers. In: Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. SIGMOD '00. New York, NY, USA: Association for Computing Machinery; p. 93-104. Available from: https://doi.org/10.1145/342009.335388 Canizo M, Triguero I, Conde A, Onieva E (2019) Multi-head CNN-RNN for multi-time series anomaly detection: an industrial case study. Neurocomputing 363:246–260 Capozzoli A, Piscitelli MS, Brandi S, Grassi D, Chicco G (2018) Automated load pattern learning and anomaly detection for enhancing energy management in smart buildings. Energy 157:336–352 Chauhan S, Vig L (2015) Anomaly detection in ECG time signals via deep long short-term memory networks. In: 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA). IEEE; 2015. p. 1–7 Cheng JC, Chen W, Chen K, Wang Q (2020) Data-driven predictive maintenance planning framework for MEP components based on BIM and IoT using machine learning algorithms. Autom Constr 112:103087 Cho K, Van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, et al (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 Chung J, Gulcehre C, Cho K, Bengio Y (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 Cook AA, Mısırlı G, Fan Z (2019) Anomaly detection for IoT time-series data: a survey. IEEE Internet Things J 7(7):6481–6494 Ding Z, Fei M (2013) An anomaly detection approach based on isolation forest algorithm for streaming data using sliding window. IFAC Proc 46(20):12–17 Elmrabit N, Zhou F, Li F, Zhou H (2020) Evaluation of Machine Learning Algorithms for Anomaly Detection. In: 2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security); p. 1–8 Fan C, Xiao F, Zhao Y, Wang J (2018) Analytical investigation of autoencoder-based methods for unsupervised anomaly detection in building energy data. Appl Energy 211:1123–1135 Firth S, Kane T, Dimitriou V, Hassan T, Fouchal F, Coleman M, et al (2017) REFIT Smart Home dataset.
Available from: https://repository.lboro.ac.uk/articles/dataset/REFIT_Smart_Home_dataset/2070091 Himeur Y, Ghanem K, Alsalemi A, Bensaali F, Amira A (2021) Artificial intelligence based anomaly detection of energy consumption in buildings: a review, current trends and new perspectives. Appl Energy 287:116601 Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780 Hyndman RJ, Athanasopoulos G (2021) Forecasting: principles and practice, 3rd edition. OTexts Kadri F, Harrou F, Chaabane S, Sun Y, Tahon C (2016) Seasonal ARMA-based SPC charts for anomaly detection: application to emergency department systems. Neurocomputing 173:2102–2114 Kao JB, Jiang JR (2019) Anomaly detection for univariate time series with statistics and deep learning. In: 2019 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE). IEEE; p. 404–407 Keras (2022) Keras documentation: Timeseries Anomaly detection using an autoencoder;. https://keras.io/examples/timeseries/timeseries_anomaly_detection/. Accessed 3 June 2022 Kharitonov A, Nahhas A, Pohl M, Turowski K (2022) Comparative analysis of machine learning models for anomaly detection in manufacturing. Proc Comput Sci 200:1288–1297 Koukaras P, Bezas N, Gkaidatzis P, Ioannidis D, Tzovaras D, Tjortjis C (2021) Introducing a novel approach in one-step ahead energy load forecasting. Sustain Comput Inf Syst 32:100616 Kozitsin V, Katser I, Lakontsev D (2021) Online forecasting and anomaly detection based on the ARIMA model. Appl Sci 11(7):3194 Li D, Chen D, Goh J, Ng Sk (2018) Anomaly detection with generative adversarial networks for multivariate time series. arXiv preprint arXiv:1809.04758 Li L, Yan J, Wang H, Jin Y (2020) Anomaly detection of time series with smoothness-inducing sequential variational auto-encoder. IEEE Trans Neural Netw Learning Syst 32(3):1177–1191 Liu FT, Ting KM, Zhou ZH (2008) Isolation Forest. In: 2008 Eighth IEEE International Conference on Data Mining; p. 413–422 Liu F, Zhou X, Cao J, Wang Z, Wang T, Wang H, et al (2020) Anomaly detection in quasi-periodic time series based on automatic data segmentation and attentional LSTM-CNN. IEEE Transactions on Knowledge and Data Engineering. 2020 Makonin S, Ellert B, Bajić IV, Popowich F (2016) Electricity, water, and natural gas consumption of a residential house in Canada from 2012 to 2014. Sci Data 3(1):1–12 Malhotra P, Vig L, Shroff G, Agarwal P, et al (2015) Long short term memory networks for anomaly detection in time series. In: Proceedings. vol. 89; p. 89–94 Malhotra P, Ramakrishnan A, Anand G, Vig L, Agarwal P, Shroff G (2016) LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148 Masum S, Liu Y, Chiverton J (2018) Multi-step time series forecasting of electric load using machine learning models. In: International conference on artificial intelligence and soft computing. Springer; p. 148–159 Mishra M, Nayak J, Naik B, Abraham A (2020) Deep learning in electrical utility industry: a comprehensive review of a decade of research. Eng Appl Artif Intell 96:104000 Munir M, Siddiqui SA, Dengel A, Ahmed S (2018) DeepAnT: a deep learning approach for unsupervised anomaly detection in time series. IEEE Access 7:1991–2005 Muniz Do Nascimento W, Gomes-Jr L (2022) Enabling low-cost automatic water leakage detection: a semi-supervised, autoML-based approach. Urban Water J 1–11 Oehmcke S, Zielinski O, Kramer O (2015) Event Detection in Marine Time Series Data. 
In: Hölldobler S, Peñaloza R, Rudolph S (eds) KI 2015: Advances in Artificial Intelligence. Springer International Publishing, Cham, pp 279–286 Pena D, Box GE (1987) Identifying a simplifying structure in time series. J Am Stat Assoc 82(399):836–843 Pena D, Poncela P (2006) Dimension reduction in multivariate time series. In: Advances in distribution theory, order statistics, and inference. Springer; p. 433–458 Pereira J, Silveira M (2018) Unsupervised anomaly detection in energy time series data using variational recurrent autoencoders with attention. In: 2018 17th IEEE international conference on machine learning and applications (ICMLA). IEEE, p. 1275–1282 Pincombe B (2005) Anomaly detection in time series of graphs using ARMA processes. Asor Bull 24(4):2 Rashid H, Batra N, Singh P (2018) Rimor: Towards identifying anomalous appliances in buildings. In: Proceedings of the 5th Conference on Systems for Built Environments; p. 33–42 Sanz B, Santos I, Ugarte-Pedrero X, Laorden C, Nieves J, Bringas PG (2014) Anomaly detection using string analysis for android malware detection. In: International Joint Conference SOCO'13-CISIS'13-ICEUTE'13. Springer; 2014. p. 469–478 Schlegl T, Seeböck P, Waldstein SM, Langs G, Schmidt-Erfurth U (2019) f-AnoGAN: fast unsupervised anomaly detection with generative adversarial networks. Med Image Anal 54:30–44 Schölkopf B, Williamson RC, Smola A, Shawe-Taylor J, Platt J (1999) Support Vector Method for Novelty Detection. In: Solla S, Leen T, Müller K, editors. Advances in Neural Information Processing Systems. vol. 12. MIT Press; Available from: https://proceedings.neurips.cc/paper/1999/file/8725fb777f25776ffa9076e44fcfd776-Paper.pdf Seyoum S, Alfonso L, Van Andel SJ, Koole W, Groenewegen A, Van De Giesen N (2017) A Shazam-like household water leakage detection method. Proc Eng 186:452–459 Shah AS, Nasir H, Fayaz M, Lajis A, Shah A (2019) A review on energy consumption optimization techniques in IoT based smart building environments. Information 10(3):108 Shaikh PH, Nor NBM, Nallagownden P, Elamvazuthi I, Ibrahim T (2014) A review on optimized control systems for building energy and comfort management of smart sustainable buildings. Renew Sustain Energy Rev 34:409–429 Shakibaei P (2020) Data-driven anomaly detection from residential smart meter data Su Y, Zhao Y, Niu C, Liu R, Sun W, Pei D (2019) Robust anomaly detection for multivariate time series through stochastic recurrent neural network. In: Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining; p. 2828–2837 Yaacob AH, Tan IKT, Chien SF, Tan HK (2010) ARIMA Based Network Anomaly Detection. In: 2010 Second International Conference on Communication Software and Networks; p. 205–209 Yin C, Zhang S, Wang J, Xiong NN (2020) Anomaly detection based on convolutional recurrent autoencoder for IoT time series. IEEE Trans Syst Man Cybern Syst 52(1):112–122 Zangrando N, Herrera S, Koukaras P, Dimara A, Fraternali P, Krinidis S, et al (2022) Anomaly Detection in Small-Scale Industrial and Household Appliances. In: Maglogiannis I, Iliadis L, Macintyre J, Cortez P, editors. Artificial Intelligence Applications and Innovations.
AIAI 2022 IFIP WG 12.5 International Workshops—MHDW 2022, 5G-PINE 2022, AIBMG 2022, ML@HC 2022, and AIBEI 2022, Hersonissos, Crete, Greece, June 17-20, 2022, Proceedings. vol. 652 of IFIP Advances in Information and Communication Technology. Springer; p. 229–240. Available from: https://doi.org/10.1007/978-3-031-08341-9_19 Zhang R, Zhang S, Lan Y, Jiang J (2008) Network anomaly detection using one class support vector machine. In: Proceedings of the International MultiConference of Engineers and Computer Scientists. vol. 1. Citeseer Zhang C, Patras P, Haddadi H (2019) Deep learning in mobile and wireless networking: a survey. IEEE Commun Surveys Tutorials 21(3):2224–2287 Zhang L, Shen X, Zhang F, Ren M, Ge B, Li B (2019) Anomaly detection for power grid based on time series model. In: 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). IEEE; p. 188–192 Zhang S, Chen X, Chen J, Jiang Q, Huang H (2020) Anomaly detection of periodic multivariate time series under high acquisition frequency scene in IoT. In: 2020 International Conference on Data Mining Workshops (ICDMW). IEEE; p. 543–552
This work has been supported by the European Union's Horizon 2020 project PRECEPT, under Grant agreement No. 958284.
About this supplement
This article has been published as part of Energy Informatics Volume 5 Supplement 4, 2022: Proceedings of the Energy Informatics.Academy Conference 2022 (EI.A 2022). The full contents of the supplement are available online at https://energyinformatics.springeropen.com/articles/supplements/volume-5-supplement-4. This paper is part of the funded project PRECEPT (No. 958284) by the funding agency European Union's Horizon 2020 Framework.
Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133, Milan, Italy
Niccolò Zangrando, Piero Fraternali, Marco Petri, Nicolò Oreste Pinciroli Vago & Sergio Luis Herrera González
NZ analyzed the dataset and prepared the split of the data set for training/testing; led the implementation of the algorithms and the evaluation of the models. PF designed the research and the experimentation procedure; analyzed the results and made a major contribution to the writing of the manuscript. MP implemented the regressive algorithms, performed the training of the algorithms and the evaluation. NOPV implemented the procedure for the identification of the period on the data sets, implemented the statistical algorithm, performed the training of the algorithm and the evaluation. SLHG contributed to the analysis of the data and design of the experiments; collaborated with the training of the algorithms and prepared the first draft of the document. All authors read and approved the final manuscript.
Correspondence to Sergio Luis Herrera González.
Zangrando, N., Fraternali, P., Petri, M. et al. Anomaly detection in quasi-periodic energy consumption data series: a comparison of algorithms. Energy Inform 5 (Suppl 4), 62 (2022). https://doi.org/10.1186/s42162-022-00230-7
Gibbs entropy, Clausius' entropy and irreversibility
I have a bunch of doubts and confusions on the concept of entropy which have been bothering me for a while now. The most important ones are of a more technical nature, arising from my reading of this and Jaynes' paper "Gibbs vs Boltzmann entropies". Although they seem like great texts, they leave me confused when they argue that the Gibbs entropy remains constant and that this is precisely what one needs to prove in the Second Law. I quote, from the first link:
we return to the specific case of a gas of N particles, this time confined to one side of a box containing a removable partition. We suppose that the initial state is such that we can describe it using the canonical probability distribution. From our earlier discussion we can then say that the Gibbs entropy $S_G$ is maximized and equal to the experimental entropy $S_E$. We now suppose that the partition is opened and the atoms occupy the whole box. We wait until the state variables stop changing, so in that sense the system is in equilibrium and a new experimental entropy $S'_E$ can be defined. Also, all the motions of the gas molecules are Hamiltonian, so that the Gibbs entropy $S'_G$ has not changed: $S'_G = S_G$. The probability distribution of the N particles is no longer the canonical one, however, because of the (very subtle!) correlations it contains reflecting the fact that the molecules were originally on one side of the partition. This means that the Gibbs entropy $S'_G$ is now in general less than the maximum attainable for the new values of the state variables, which is in turn equal to the new experimental entropy. So $S_E = S_G = S'_G \leq S'_E$.
Well, I can say they totally lost me here. After the expansion, the system reaches an equilibrium. We shouldn't care about how we reached that equilibrium, so I would think that maximizing $S_G= -K_B \int \rho \log \rho \, d\mu$ (with $\rho$ the joint probability distribution of the position and momentum of the $N$ particles, and with the restriction that the average energy is, say, $U$) should give again the probability distribution $\rho$ in the new equilibrium. So, I definitely don't understand at all why they say that "The probability distribution of the N particles is no longer the canonical one," neither do I understand the statement "because of the (very subtle!) correlations it contains reflecting the fact that the molecules were originally on one side of the partition," since I believe it is not important at all how we reached the equilibrium! If this were the case, why could we prove that in the first case (when all the molecules are on one side of the box) the Gibbs entropy coincided with the "experimental" one? (I, by the way, don't know exactly what they mean by "experimental" entropy; do they mean Clausius'?) Didn't Jaynes prove that the Gibbs entropy coincides with Clausius'? (I got that from his paper, at least.) But how can the Gibbs entropy remain constant if the entropy "MUST" increase? Jaynes, in his paper, writes things like $(S_G)_{2}-(S_G)_{1},$ so that changes in the Gibbs entropy must be something meaningful, despite it being so clear that the Hamiltonian evolution leaves $\rho$ the same. Well, I guess it's really hard to explain accurately the nature of my confusions, but hopefully someone around here has struggled with similar issues and can give an enlightening clarification.
EXPANDING THE ORIGINAL QUESTION: Let me follow the notation from Jaynes' paper, which I linked above.
If I let the gas go through a free adiabatic expansion, since the evolution is Hamiltonian, it is clear that $\frac{dW_N}{dt}$ obeys the Liouville equation, but since $\{ e^{-\beta H}, H \}=0,$ it is clear that $W_N$ remains constant and thus so does the Gibbs entropy $S_G.$ However, at the end of section IV in Jaynes' paper, he states: "If the time-developed distribution function $W_N(t')$…" And then I don't know what's going on anymore! Does $W_N$ change in time or not? And, if in thermal equilibrium we do not use the canonical ensemble, which ensemble are we using instead? What is the distribution function? What is the mathematical expression of the new macroscopic restrictions we should add when maximizing the entropy functional to derive that new probability distribution?
thermodynamics statistical-mechanics entropy
Qwertuy
I wrote an answer to a slightly different question here. It discusses some of the history of how entropy came about and provides some useful references on the topic (though I do not think it answers your question... but it might be helpful reference-wise). – honeste_vivere Feb 8 '16 at 14:33
great answer and thanks for the link to it. +1 on both – Wolpertinger Feb 8 '16 at 16:10
$W$ does change in time, since the expansion means the Hamiltonian changes. What does not change is information entropy (the Gibbs entropy). Check my answer here: physics.stackexchange.com/questions/256302/… – Ján Lalinský May 17 '16 at 21:09
"We shouldn't care about how we reached that equilibrium"
In fact the entire point of the example is that we do. I shall try to explain why. Jaynes and Gull both work in the framework of Bayesian inference (I can recommend the introductory text: http://www.amazon.co.uk/Data-Analysis-A-Bayesian-Tutorial/dp/0198568320; the title may seem unrelated to thermodynamics, but it is actually at the heart of science itself). One important concept they use is the Principle of Maximum Entropy. It states essentially what you said in your question: that maximizing $S_G=-K_B\int\rho \log\rho\,\mathrm d\mu$ (with $\rho$ the joint probability distribution of the position and momentum of the N particles and with the restriction that the average energy is, say, U) should give again the probability distribution $\rho$ in the new equilibrium. This yields the canonical ensemble. Now the assumption behind this is that there is no prior information that we have about the system. But in the case of the box we DO have prior information about the system, namely that the particles used to be on the right of the box. So the particles are in a different ensemble. This ensemble could probably be calculated, but it would be complicated and is not necessary for the sake of his argument. About your questions regarding the "experimental entropy": what they mean by that is the entropy that is defined by $$ \Delta S_E = \int_{\rm reversible} \frac{\delta Q}{T}. $$ So yes, I guess that is what people also call "Clausius' entropy". What Jaynes proved is that $S_G$ and $S_E$ (choosing the right constant offset for $S_E$) coincide for the canonical ensemble. So in the argument above we have noted that the ensemble changes from canonical to something else. The new ensemble will have the same $S_G$ (since it was shown to be constant under the equations of motion) but $S_E$ will have increased.
This is then a restatement of the second law of thermodynamics (for systems starting in the canonical ensemble; I have struggled to find a proof for systems starting in other ensembles, and would be happy if someone could link me to one). So that should also answer your question about how $S_G$ staying constant is consistent with the second law. – Wolpertinger
You say that the statistical ensemble we use here is different, because we have prior information about the system. But do we? In many introductory textbooks, we try to calculate the speed of a single particle after a few collisions. We can show that after a very short period of time (say 10 collisions) we completely lose the information about the speed or position of the particle. So is it reasonable to assume that we can track these "subtle" correlations? Would it really make a measurable difference in the entropy? – Dimitri Feb 8 '16 at 14:46
I am not 100% sure which calculation you are talking about. In the example used in the paper they are explicitly talking about a quantum system. And the whole point of the paper is that it does make a difference. For the relation to classical systems the comment about diffusion in this paper (bayes.wustl.edu/etj/articles/cmystery.pdf) might help. – Wolpertinger Feb 8 '16 at 15:29
I was thinking of a classical calculation of the variation in angle $\delta \theta$ in the speed of a particle after a collision. So maybe my misunderstanding comes from the quantum nature of the system. But still, I find it hard to believe that the fact that the particles were on the same side at time $t_0$ can affect the thermodynamics a long time after that. Or maybe it comes from these Bayesian statistics you were referring to? – Dimitri Feb 8 '16 at 15:48
I share Dimitri's concern. Furthermore, I feel there is a contradiction in Jaynes' paper. Surely that's my lack of understanding, but I'll tell you what I feel by expanding my original question. Thanks! – Qwertuy Feb 8 '16 at 15:48
The article you linked to is very interesting. The canonical ensemble point of view doesn't take into account the time-reversal symmetry breaking in the information we have about the system: we know its past, but not its future. It kind of makes it understandable why the canonical description fails to account for the experimentally measured entropy. Thanks! – Dimitri Feb 8 '16 at 16:34
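The maximum-entropy statement at the heart of this exchange is easy to check numerically for a toy discrete system: maximizing $-\sum_i p_i\log p_i$ subject to normalization and a fixed mean energy does reproduce Boltzmann weights $p_i\propto e^{-\beta E_i}$. The Python sketch below uses arbitrary illustrative energy levels and target mean energy, not anything taken from the thread:

import numpy as np
from scipy.optimize import minimize, brentq

E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # toy energy levels
U = 1.2                                   # prescribed mean energy

# Direct constrained maximization of the Gibbs/Shannon entropy.
def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ E - U}]
res = minimize(neg_entropy, np.full(E.size, 1.0 / E.size),
               bounds=[(0.0, 1.0)] * E.size, constraints=cons, method="SLSQP")

# Canonical distribution with beta chosen so that <E> = U.
mean_energy = lambda b: np.exp(-b * E) @ E / np.exp(-b * E).sum()
beta = brentq(lambda b: mean_energy(b) - U, 1e-6, 50.0)
p_canonical = np.exp(-beta * E) / np.exp(-beta * E).sum()

print(np.round(res.x, 4))
print(np.round(p_canonical, 4))   # agrees with the constrained maximum

Whether that canonical form is the right description after the partition is removed is, of course, exactly the point under dispute above: the answer's claim is that the prior information (particles initially on one side) adds further constraints beyond the mean energy.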
Nerd sniping Humor, Physics No Responses » I just came across this XKCD comic. Though I can happily report that so far, I managed to avoid getting hit by a truck, it is a life situation in which I found myself quite a number of times in my life. In fact, ever since I've seen this comic an hour or so ago, I've been wondering about the resistor network. Thankfully, in the era of the Internet and Google, puzzles like this won't keep you awake at night; well-reasoned solutions are readily available. Anyhow, just in case anyone wonders, the answer is 4/π − 1/2 ohms. Imaging extended sources with the solar gravitational lens Astronomy, Personal, Physics, Space No Responses » Yesterday, we posted our latest paper on arXiv. Again, it is a paper about the solar gravitational lens. This time around, our focus was on imaging an extended object, which of course can be trivially modeled as a multitude of point sources. However, it is a multitude of point sources at a finite distance from the Sun. This adds a twist. Previously, we modeled light from sources located at infinity: Incident light was in the form of plane waves. But when the point source is at a finite distance, light from it comes in the form of spherical waves. Now it is true that at a very large distance from the source, considering only a narrow beam of light, we can approximate those spherical waves as plane waves (paraxial approximation). But it still leaves us with the altered geometry. But this is where a second observation becomes significant: As we can intuit, and as it is made evident through the use of the eikonal approximation, most of the time we can restrict our focus onto a single ray of light. A ray that, when deflected by the Sun, defines a plane. And the investigation can proceed in this plane. The image above depicts two such planes, corresponding to the red and the green ray of light. These rays do meet, however, at the axis of symmetry of the problem, which we call the optical axis. However, in the vicinity of this axis the symmetry of the problem is recovered, and the result no longer depends on the azimuthal angle that defines the plane in question. To make a long story short, this allows us to reuse our previous results, by introducing the additional angle β, which determines, among other things, the additional distance (compared to parallel rays of light coming from infinity) that these light rays travel before meeting at the optical axis. This is what our latest paper describes, in full detail. Is everything in physics knowable? Astronomy, Physics No Responses » Here is a thought that has been bothering me for some time. We live in a universe that is subject to accelerating expansion. Galaxies that are not bound gravitationally to our Local Group will ultimately vanish from sight, accelerating away until the combination of distance and increasing redshift will make their light undetectable by any imaginable instrument. Similarly, accelerating expansion means that there will be a time in the very distant future when the cosmic microwave background radiation itself will become completely undetectable by any conceivable technological means. In this very distant future, the Local Group of galaxies will have merged already into a giant elliptical galaxy. Much of this future galaxy will be dark, as most stars would have run out of fuel already. But there will still be light. Stars would still occasionally form. Some dwarf stars will continue to shine for trillions of years, using their available fuel at a very slow rate. 
Which means that civilizations might still emerge, even in this unimaginably distant future. And when they do, what will they see? They will see themselves as living in an "island universe" in an otherwise empty, static cosmos. In short, precisely the kind of cosmos envisioned by many astronomers in the early 1920s, when it was still popular to think of the Milky Way as just such an island universe, not yet recognizing that many of the "spiral nebulae" seen through telescopes are in fact distant galaxies just as large, if not larger, than the Milky Way. But these future civilizations will see no such nebulae. There will be no galaxies beyond their "island universe". No microwave background either. In fact, no sign whatsoever that their universe is evolving, changing with time. So what would a scientifically advanced future civilization conclude? Surely they would still discover general relativity. But would they believe its predictions of an expanding cosmos, despite the complete lack of evidence? Or would they see that prediction as a failure of the theory, which must be remedied? In short, how would they ever come into possession of the knowledge that their universe was once young, dense, and full of galaxies, not to mention background radiation? My guess is that they won't. They will have no observational evidence, and their theories will reflect what they actually do see (a static, unchanging island universe floating in infinite, empty space). Which raises the rather unnerving, unpleasant question: To what extent exist already features in our universe that are similarly unknowable, as they can no longer be detected by any conceivable instrumentation? Is it, in fact, possible to fully understand the physics of the universe, or are we already doomed to never being able to develop a full picture? I find this question surprisingly unnerving and depressing. Internet, Personal, Physics No Responses » My research is unsupported. That is to say, with the exception of a few conference invitations when my travel costs were covered, I never received a penny for my research on the Pioneer Anomaly and my other research efforts. Which is fine, I do it for fun after all. Still, in this day and age of crowdfunding, I couldn't say no to the possibility that others, who find my efforts valuable, might choose to contribute. Hence my launching of a Patreon page. I hope it is well-received. I have zero experience with crowdfunding, so this really is a first for me. Wish me luck. Some common, naive misunderstandings in cosmology Physics No Responses » I run across this often. Well-meaning folks who read introductory-level texts or saw a few educational videos about physical cosmology, suddenly discovering something seemingly profound. And then, instead of asking themselves why, if it is so easy to stumble upon these results, they haven't been published already by others, they go ahead and make outlandish claims. (Claims that sometimes land in my Inbox, unsolicited.) Let me explain what I am talking about. As it is well known, the rate of expansion of the cosmos is governed by the famous Hubble parameter: \(H\sim 70~{\rm km}/{\rm s}/{\rm Mpc}\). That is to say, two galaxies that are 1 megaparsec (Mpc, about 3 million light years) apart will be flying away from each other at a rate of 70 kilometers a second. 
It is possible to convert megaparsecs (a unit of length) into kilometers (another unit of length), so that the lengths cancel out in the definition of \(H\), and we are left with \(H\sim 2.2\times 10^{-18}~{\rm s}^{-1}\), which is one divided by about 14 billion years. In other words, the Hubble parameter is just the inverse of the age of the universe. (It would be exactly the inverse of the age of the universe if the rate of cosmic expansion was constant. It isn't, but the fact that the expansion was slowing down for the first 9 billion years or so and has been accelerating since kind of averages things out.) And this, then, leads to the following naive arithmetic. First, given the age of the universe and the speed of light, we can find out the "radius" of the observable universe: $$a=\dfrac{c}{H},$$ or about 14 billion light years. Inverting this equation, we also get \(H=c/a\). But the expansion of the cosmos is governed by another equation, the first so-called Friedmann equation, which says that $$H^2=\dfrac{8\pi G\rho}{3}.$$ Here, \(\rho\) is the density of the universe. The mass within the visible universe, then, is calculated as usual, just using the volume of a sphere of radius \(a\): $$M=\dfrac{4\pi a^3}{3}\rho.$$ Putting this expression and the expression for \(H\) back into the Friedmann equation, we get the following: $$a=\dfrac{2GM}{c^2}.$$ But this is just the Schwarzschild radius associated with the mass of the visible universe! Surely, we just discovered something profound here! Perhaps the universe is a black hole! Well… not exactly. The fact that we got the Schwarzschild radius is no coincidence. The Friedmann equations are, after all, just Einstein's field equations in disguise, i.e., the exact same equations that yield the formula for the Schwarzschild radius. Still, the two solutions are qualitatively different. The universe cannot be the interior of a black hole's event horizon. A black hole is characterized by an unavoidable future singularity, whereas our expanding universe is characterized by a past singularity. At best, the universe may be a time-reversed black hole, i.e., a "white hole", but even that is dubious. The Schwarzschild solution, after all, is a vacuum solution of Einstein's field equations, whereas the Friedmann equations describe a matter-filled universe. Nor is there a physical event horizon: the "visible universe" is an observer-dependent concept, and two observers in relative motion, or even two observers some distance apart, will not see the same visible universe. Nonetheless, these ideas, memes perhaps, show up regularly in manuscripts submitted to journals of dubious quality, in self-published books, or on the alternative manuscript archive viXra. And there are further variations on the theme. For instance, the so-called Planck power, divided by the Hubble parameter, yields \(2Mc^2\), i.e., twice the mass-energy in the observable universe. This coincidence is especially puzzling to those who work it out numerically, and thus remain oblivious to the fact that the Planck power is one of those Planck units that does not actually contain the Planck constant in its definition, only \(c\) and \(G\). People have also been fooling around with various factors of \(2\), \(\tfrac{1}{2}\) or \(\ln 2\), often based on dodgy information content arguments, coming up with numerical ratios that supposedly replicate the matter, dark matter, and dark energy content.
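For anyone who wants to see the coincidence emerge numerically, here is a quick check in Python using round numbers (H ≈ 70 km/s/Mpc and the constants from scipy); the point is only that the Schwarzschild radius of the enclosed mass reproduces the Hubble radius by construction:

import numpy as np
from scipy.constants import c, G, parsec

Mpc = 1e6 * parsec
H = 70e3 / Mpc                              # ~2.3e-18 1/s
age_Gyr = 1 / H / (365.25 * 24 * 3600e9)    # ~14 billion years
a = c / H                                   # Hubble radius
rho = 3 * H**2 / (8 * np.pi * G)            # density from the first Friedmann equation
M = 4 * np.pi * a**3 / 3 * rho              # mass within radius a
r_s = 2 * G * M / c**2                      # Schwarzschild radius of that mass

print(f"1/H = {age_Gyr:.1f} Gyr, a = {a:.3e} m, r_s = {r_s:.3e} m")

The printout shows r_s equal to a, which is just the algebra above restated; it is not evidence that the universe is a black hole.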
Aviation, Health, History, Physics, Politics No Responses » Today, I answered a question on Quora about the nature of \(c\), the speed of light, as it appears in the one equation everyone knows, \(E=mc^2.\) I explained that it is best viewed as a conversion factor between our units of length and time. These units are accidents of history. There is nothing fundamental in Nature about one ten-millionth of the distance from the poles to the equator of the Earth (the original definition of the meter) or about one 86,400th of the length of the Earth's mean solar day. These units are what they are, in part, because we learned to measure length and time long before we learned that they are aspects of the same thing, spacetime. And nothing stops us from using units such as light-seconds and seconds to measure space and time; in such units, the value of the speed of light would be just 1, and consequently, it could be dropped from equations altogether. This is precisely what theoretical physicists often do. But then… I commented that something very similar takes place in aviation, where different units are used to measure horizontal distance (nautical miles, nmi) and altitude (feet, ft). So if you were to calculate the kinetic energy of an airplane (measuring its speed in nmi/s) and its potential energy (measuring the altitude, as well as the gravitational acceleration, in ft), you would need the ft/nmi conversion factor of 6076.12, squared, to convert between the two resulting units of energy. As I was writing this answer, though, I stumbled upon a blog entry that discussed the crazy, mixed-up units of measure still in use worldwide in aviation. Furlongs per fortnight may pretty much be the only unit that is not used, as just about every other unit of measure pops up, confusing poor pilots everywhere: meters, feet, kilometers, nautical miles, statute miles, kilograms, pounds, millibars, hectopascals, inches of mercury… you name it, it's there. Part of the reason, of course, is the fact that America, alone among industrialized nations, managed to stick to its archaic system of measurements. Which is another historical accident, really. A lot had to do with the timing: metric transition was supposed to take place in the 1970s, governed by a presidential executive order signed by Gerald Ford. But the American economy was in a downturn, many Americans felt the nation under siege, the customary units worked well, and there was a conservative-populist pushback against the metric system… so by 1982, Ronald Reagan disbanded the Metric Board and the transition to metric was officially over. (Or not. The metric system continues to gain ground, whether it is used to measure bullets or Aspirin, soft drinks or street drugs.) Yet another example similar to the metric system is the historical accident that created the employer-funded healthcare system in the United States that Americans continue to cling to, even as most (all?) other advanced industrial nations transitioned to something more modern, some variant of a single-payer universal healthcare system. It happened in the 1920s, when a Texas hospital managed to strike a deal with public school teachers in Dallas: for 50 cents a month, the hospital picked up the tab for their hospital visits. This arrangement became very popular during the Great Depression, when hospitals lost patients who could not afford their hospital care anymore. The idea came to be known as Blue Cross. And that's how the modern American healthcare system was born.
As I was reading this chain of Web articles, taking me on a tour from Einstein's \(E=mc^2\) to employer-funded healthcare in America, I was reminded of a 40-year old British TV series, Connections, created by science historian James Burke. Burke found similar, often uncanny connections between seemingly unrelated topics in history, particularly the history of science and technology. Another Perimeter visit Personal, Physics No Responses » Just got back from The Perimeter Institute, where I spent three very short days. I had good discussions with John Moffat. I again met Barak Shoshany, whom I first encountered on Quora. I attended two very interesting and informative seminar lectures by Emil Mottola on quantum anomalies and the conformal anomaly. I also gave a brief talk about our research with Slava Turyshev on the Solar Gravitational Lens. I was asked to give an informal talk with no slides. It was a good challenge. I believe I was successful. My talk seemed well received. I was honored to have Neil Turok in the audience, who showed keen interest and asked several insightful questions. Congratulations, Donna Strickland Canada, Physics No Responses » I just watched a news conference held by the University of Waterloo, on account of Donna Strickland being awarded the Nobel prize in physics. This is terrific news for Canada, for the U. of Waterloo, and last but most certainly not least, for women in physics. Heartfelt congratulations! Atiyah, Riemann and the fine structure constant Mathematics, Physics No Responses » Michael Atiyah, 89, is one of the greatest living mathematicians. Which is why the world pays attention when he claims to have solved what is perhaps the greatest outstanding problem in mathematics, the Riemann hypothesis. Here is a simple sum: \(1+\frac{1}{2^2}+\frac{1}{3^2}+…\). It is actually convergent: The result is \(\pi^2/6\). Other, similar sums also converge, so long as the exponent is greater than 1. In fact, we can define a function: $$\begin{align*}\zeta(x)=\sum\limits_{i=1}^\infty\frac{1}{i^x}.\end{align*}$$ Where things get really interesting is when we extend the definition of this \(\zeta(x)\) to the entire complex plane. As it turns out, its analytic continuation is defined almost everywhere. And, it has a few zeros, i.e., values of \(x\) for which \(\zeta(x)=0\). The so-called trivial zeros of \(\zeta(x)\) are the negative even integers: \(x=-2,-4,-6,…\). But the function also has infinitely many nontrivial zeros, where \(x\) is complex. And here is the thing: The real part of all known nontrivial zeros happens to be \(\frac{1}{2}\), the first one being at \(x=\frac{1}{2}+14.1347251417347i\). This, then, is the Riemann hypothesis: Namely that if \(x\) is a non-trivial zero of \(\zeta(x)\), then \(\Re(x)=\frac{1}{2}\). This hypothesis baffled mathematicians for the past 130 years, and now Atiyah claims to have solved it, accidentally (!), in a mere five pages. Unfortunately, verifying his proof is above my pay grade, as it references other concepts that I would have to learn first. But it is understandable why the mathematical community is skeptical (to say the least). A slide from Atiyah's talk on September 24, 2018. What is not above my pay grade is analyzing Atiyah's other claim: a purported mathematical definition of the fine structure constant \(\alpha\). 
The modern definition of \(\alpha\) relates this number to the electron charge \(e\): \(\alpha=e^2/4\pi\epsilon_0\hbar c\), where \(\epsilon_0\) is the electric permittivity of the vacuum, \(\hbar\) is the reduced Planck constant and \(c\) is the speed of light. Back in the days of Arthur Eddington, it seemed that \(\alpha\sim 1/136\), which led Eddington himself onto a futile quest of numerology, trying to concoct a reason why \(136\) is a special number. Today, we know the value of \(\alpha\) a little better: \(\alpha^{-1}\simeq 137.0359992\). Atiyah produced a long and somewhat rambling paper that fundamentally boils down to two equations. First, he defines a new mathematical constant, denoted by the Cyrillic letter \(\unicode{x427}\) (Che), which is related to the fine structure constant by the equation $$\begin{align*}\alpha^{-1}=\frac{\pi\unicode{x427}}{\gamma},\tag{1.1*}\end{align*}$$ where \(\gamma=0.577…\) is the Euler–Mascheroni constant. Second, he offers a definition for \(\unicode{x427}\): $$\begin{align*}\unicode{x427}=\frac{1}{2}\sum\limits_{j=1}^\infty 2^{-j}\left(1-\int_{1/j}^j\log_2 x~dx\right).\tag{7.1*}\end{align*}$$ (The equation numbers are Atiyah's; I used a star to signify that I slightly simplified them.) Atiyah claims that this sum is difficult to calculate and then goes into a long-winded and not very well explained derivation. But the sum is not difficult to calculate. In fact, I can calculate it with ease as the definite integral under the summation sign is trivial: $$\begin{align*}\int_{1/j}^j\log_2 x~dx=\frac{(j^2+1)\log j-j^2+1}{j\log 2}.\end{align*}$$ After this, the sum rapidly converges, as this little bit of Maxima code demonstrates (NB: for \(j=1\) the integral is trivial as the integration limits collapse):
(%i1) assume(j>1);
(%o1) [j > 1]
(%i2) S:1/2*2^(-j)*(1-integrate(log(x)/log(2),x,1/j,j));
(%o2) 2^(-j-1)*(1-((log(j)+1)/j+j*log(j)-j)/log(2))
(%i3) float(sum(S,j,1,50));
(%o3) 0.02944508691740671
(%i4) float(sum(S,j,1,100));
(%i6) float(sum(S,j,1,100)*%pi/%gamma);
(%o6) 0.1602598029967022
Unfortunately, this does not look like \(\alpha^{-1}=137.0359992\) at all. Not even remotely. So we are all left to guess, sadly, what Atiyah was thinking when he offered this proposal. We must also remember that \(\alpha\) is a so-called "running" constant, as its value depends on the energy of the interaction, though presumably, the constant in question here is \(\alpha\) in the infrared limit, i.e., at zero energy.
Sterile neutrinos? Not so fast…
I am reading some breathless reactions to a preprint posted a few days ago by the MiniBooNE experiment. The experiment is designed to detect neutrinos, in particular neutrino oscillations (the change of one neutrino flavor into another). The headlines are screaming. Evidence found of a New Fundamental Particle, says one. Strange New Particle Could Prove Existence of Dark Matter, says another. Or how about, A Major Physics Experiment Just Detected A Particle That Shouldn't Exist? The particle in question is the so-called sterile neutrino. It is a neat concept, one I happen to quite like. It represents an elegant resolution to the puzzle of neutrino handedness. This refers to the chirality of neutrinos, essentially the direction in which they spin compared to their direction of motion. We only ever see "left-handed" neutrinos. But neutrinos have rest mass. So they move slower than light.
That means that if you run fast enough and outrun a left-handed neutrino, so that relative to you it is moving backwards (but still spins in the same direction as before), when you look back, you'll see a right-handed neutrino. This implies that right-handed neutrinos should be seen just as often as left-handed neutrinos. But they aren't. How come? Sterile neutrinos offer a simple answer: We don't see right-handed neutrinos because they don't interact (they are sterile). That is to say, when a neutrino interacts (emits or absorbs a Z-boson, or emits or absorbs a W-boson while changing into a charged lepton), it has to be a left-handed neutrino in the interaction's center-of-mass frame. If this view is true and such sterile neutrinos exist, even though they cannot be detected directly, their existence would skew the number of neutrino oscillation events. As to what neutrino oscillations are: neutrinos are massive. But unlike other elementary particles, neutrinos do not have a well-defined mass associated with their flavor (electron, muon, or tau neutrino). When a neutrino has a well-defined flavor (is in a flavor eigenstate) it has no well-defined mass and vice versa. This means that if we detect neutrinos in a mass eigenstate, their flavor can appear to change (oscillate) between one state or another; e.g., a muon neutrino may appear at the detector as an electron neutrino. These flavor oscillations are rare, but they can be detected, and that's what the MiniBooNE experiment is looking for. And that is indeed what MiniBooNE found: an excess of events that is consistent with neutrino oscillations. MiniBooNE detects electron neutrinos. These can come from all kinds of (background) sources. But one particular source is an intense beam of muon neutrinos produced at Fermilab. Because of neutrino oscillations, some of the neutrinos in this beam will be detected as electron neutrinos, yielding an excess of electron neutrino events above background. And that's exactly what MiniBooNE sees, with very high confidence: 4.8σ. That's almost the generally accepted detection threshold for a new particle. But this value of 4.8σ is not about a new particle. It is the significance associated with excess electron neutrino detection events overall: an excess that is expected from neutrino oscillations. So what's the big deal, then? Why the screaming headlines? As far as I can tell, it all boils down to this sentence in the paper: "Although the data are fit with a standard oscillation model, other models may provide better fits to the data." What this somewhat cryptic sentence means is best illustrated by a figure from the paper: This figure shows the excess events (above background) detected by MiniBooNE, but also the expected number of excess events from neutrino oscillations. Notice how only the first two red data points fall significantly above the expected number. (In case you are wondering, POT means Protons On Target, that is to say, the number of protons hitting a beryllium target at Fermilab, producing the desired beam of muon neutrinos.) Yes, these two data points are intriguing. Yes, they may indicate the existence of new physics beyond two-neutrino oscillations. In particular, they may indicate the existence of another oscillation mode, muon neutrinos oscillating into sterile neutrinos that, in turn, oscillate into electron neutrinos, yielding this excess. Mind you, if this is a sign of sterile neutrinos, these sterile neutrinos are unlikely dark matter candidates; their mass would be too low. 
Or these two data points are mere statistical flukes. After all, as the paper says, "the best oscillation fit to the excess has a probability of 20.1%". That is far from improbable. Sure, the fact that it is only 20.1% can be interpreted as a sign of some tension between the Standard Model and this experiment. But it is certainly not a discovery of new physics, and absolutely not a confirmation of a specific model of new physics, such as sterile neutrinos. And indeed, the paper makes no such claim. The word "sterile" appears only four times in the paper, in a single sentence in the introduction: "[…] more exotic models are typically used to explain these anomalies, including, for example, 3+N neutrino oscillation models involving three active neutrinos and N additional sterile neutrinos [6-14], resonant neutrino oscillations [15], Lorentz violation [16], sterile neutrino decay [17], sterile neutrino non-standard interactions [18], and sterile neutrino extra dimensions [19]." So yes, there is an intriguing sign of an anomaly. Yes, it may point the way towards new physics. It might even be new physics involving sterile neutrinos. But no, this is not a discovery. At best, it's an intriguing hint; quite possibly, just a statistical fluke. So why the screaming headlines, then? I wish I knew. Comoving distance Astronomy, Physics 7 Responses » There is an excellent diagram accompanying an answer on StackExchange, and I've been meaning to copy it here, because I keep losing the address. The diagram summarizes many measures of cosmic expansion in a nice, compact, but not necessarily easy-to-understand form: So let me explain how to read this diagram. First of all, time is going from bottom to top. The thick horizontal black line represents the moment of now. Imagine this line moving upwards as time progresses. The thick vertical black line is here. So the intersection of the two thick black lines in the middle is the here-and-now. Distances are measured in terms of the comoving distance, which is basically telling you how far a distant object would be now, if you had a long measuring tape to measure its present-day location. The area shaded red (marked "past light cone") is all the events that happened in the universe that we could see, up to the moment of now. The boundary of this area is everything in this universe from which light is reaching us right now. So just for fun, let us pick an object at a comoving distance of 30 gigalightyears (Gly). Look at the dotted vertical line corresponding to 30 Gly, halfway between the 20 and 40 marks (either side, doesn't matter.) It intersects the boundary of the past light cone when the universe was roughly 700 million years old. Good, there were already young galaxies back then. If we were observing such a galaxy today, we'd be seeing it as it appeared when the universe was 700 million years old. Its light would have spent 13.1 billion years traveling before reaching our instruments. Again look at the dotted vertical line at 30 Gly and extend it all the way to the "now" line. What does this tell you about this object? You can read the object's redshift (z) off the diagram: its light is shifted down in frequency by a factor of about 9. You can also read the object's recession velocity, which is just a little over two times the vacuum speed of light. Yes… faster than light. This recession velocity is based on the rate of change of the scale factor, essentially the Hubble parameter times the comoving distance. 
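As a rough cross-check of the numbers quoted for the 30 Gly object, the comoving distance and the present-day recession velocity at z ≈ 9 can be computed by integrating the Friedmann equation for a flat ΛCDM model. The parameter values below (H0 = 68 km/s/Mpc, Ωm = 0.31) are round assumptions, not necessarily those behind the diagram:

import numpy as np
from scipy.integrate import quad

H0 = 68.0                      # km/s/Mpc (assumed)
Om, OL = 0.31, 0.69            # flat LambdaCDM (assumed)
c = 299792.458                 # km/s
Mpc_to_Gly = 3.2616e-3

E = lambda z: np.sqrt(Om * (1 + z)**3 + OL)

z = 9.0
I, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)   # dimensionless integral of dz/E(z)
D_C = c / H0 * I                              # comoving distance in Mpc
v_rec = H0 * D_C / c                          # recession velocity today, in units of c

print(f"comoving distance ~ {D_C * Mpc_to_Gly:.1f} Gly")   # close to 30 Gly
print(f"recession velocity ~ {v_rec:.2f} c")               # a bit over 2c

This reproduces the two readings discussed above: a comoving distance of roughly 30 Gly and a recession velocity slightly above twice the speed of light.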
The Doppler velocity that one would deduce from the object's redshift yields a value less than the vacuum speed of light. (Curved spacetime is tricky; distances and speeds can be defined in various ways.) Another thing about this diagram is that in addition to the past, it also sketches the future, taking into account the apparent accelerating expansion of the universe. Notice the light red shaded area marked "event horizon". This area contains everything that we will be able to see at our present location, throughout the entire history of the universe, all the way to the infinite future. Things (events) outside this area will never be seen by us, will never influence us. Note how the dotted line at 30 Gly intersects this boundary when the universe is about 5 billion years old. Yes, this means that we will only ever see the first less than 5 billion years of existence of a galaxy at a comoving distance of 30 Gly. Over time, light from this galaxy will be redshifted ever more, until it eventually appears to "freeze" and disappears from sight, never appearing to become older than 5 billion years. Notice also how the dashed curves marking constant values of redshift bend inward, closer and closer to the "here" location as we approach the infinite future. This is a direct result of accelerating expansion: Things nearer and nearer to us will be caught up in the expansion, accelerating away from our location. Eventually this will stop, of course; cosmic acceleration will not rip apart structures that are gravitationally bound. But we will end up living in a true "island universe" in which nothing is seen at all beyond the largest gravitationally bound structure, the local group of galaxies. Fortunately that won't happen anytime soon; we have many tens of billions of years until then. Lastly, the particle horizon (blue lines) essentially marks the size of the visible part of the universe at any given time. Notice how the width of the interval marked by the intersection of the now line and the blue lines is identical to the width of the past light cone at the bottom of this diagram. Notice also how the blue lines correspond to infinite redshift. As I said, this diagram is not an easy read but it is well worth studying. Hawking again Stephen Hawking passed away over a month ago, but I just came across this beautiful tribute from cartoonist Sean Delonas. It was completely unexpected (I was flipping through the pages of a magazine) and, I admit, it had quite an impact on me. Not the words, inspirational though they may be… the image. The empty wheelchair, the frail human silhouette walking away in the distance. Modified gravity and NGC1052-DF2 The recent discovery of a galaxy, NGC1052-DF2, with no or almost no dark matter made headlines worldwide. Nature 555, 629–632 (29 March 2018) Somewhat paradoxically, it has been proclaimed by some as evidence that the dark matter paradigm prevails over theories of modified gravity. And, as usual, many of the arguments were framed in the context of dark matter vs. MOND, as if MOND was a suitable representative of all modified gravity theories. One example is a recent Quora question, Can we say now that all MOND theories is proven false, and there is really dark matter after all? I offered the following in response: First of all, allow me to challenge the way the question is phrased: "all MOND theories"… Please don't. MOND (MOdified Newtonian Dynamics) is not a theory. 
It is an ad hoc, phenomenological replacement of the Newtonian acceleration law with a simplistic formula that violates even basic conservation laws. The formula fits spiral galaxy rotation curves reasonably well, consistent with the empirical Tully—Fisher law that relates galaxy masses and rotational velocities, but it fails for just about everything else, including low density globular clusters, dwarf galaxies, clusters of galaxies, not to mention cosmological observations. MOND was given a reprieve in the form of Jacob Bekenstein's TeVeS (Tensor—Vector—Scalar gravity), which is an impressive theoretical exercise to create a proper classical field theory that reproduces the MOND acceleration law in the weak field, low velocity limit. However, TeVeS suffers from the same issues MOND does when confronted with data beyond galaxy rotation curves. Moreover, the recent gravitational wave event, GW170817, accompanied by the gamma ray burst GRB170817 from the same astrophysical event, demonstrating that the propagation speed of gravitational and electromagnetic waves is essentially identical, puts all bimetric theories (of which TeVeS is an example) in jeopardy. But that's okay. News reports suggesting the death of modified gravity are somewhat premature. While MOND has often been used as a straw man by opponents of modified gravity, there are plenty of alternatives, many of them much better equipped than MOND to deal with diverse astrophysical phenomena. For instance, f(R) gravity, entropic gravity, Horava—Lifshitz gravity, galileon theory, DGP (Dvali—Gabadadze—Porrati) gravity… The list goes on and on. And yes, it also includes John Moffat's STVG (Scalar—Tensor—Vector Gravity — not to be confused with TeVeS, the two are very different animals) theory, better known as MOG, a theory to which I also contributed. As to NGC1052-DF2, for MOG that's actually an easy one. When you plug in the values for the MOG approximate solution that we first published about a decade ago, you get an effective dynamical mass that is less than twice the visible (baryonic) mass of this galaxy, which is entirely consistent with its observed velocity dispersion. In fact, I'd go so far as to boldly suggest that NGC1052-DF2 is a bigger challenge for the dark matter paradigm than it is for some theories of modified gravity (MOG included). Why? Because there is no known mechanism that would separate dark matter from stellar mass. Compare this to the infamous Bullet Cluster: a pair of galaxy clusters that have undergone a collision. According to the explanation offered within the context of the dark matter paradigm (NB: Moffat and Brownstein showed, over a decade ago, that the Bullet Cluster can also be explained without dark matter, using MOG), their dark matter halos just flew through each other without interaction (other than gravity), as did the stars (stars are so tiny compared to the distance between them that the likelihood of stellar collisions is extremely remote, so stars also behave like a pressureless medium, like dark matter). Interstellar/intergalactic clouds of gas, however, did collide, heating up to millions of degrees (producing bright X-rays) and losing much of their momentum. So you end up with a cloud of gas (but few stars and little dark matter) in the middle, and dark matter plus stars (but little gas) on the sides. This separation process works because stars and dark matter behave like a pressureless medium, whereas gas does not.
But in the case of NGC1052-DF2, some mechanism must have separated stars from dark matter, so we end up with a galaxy (one that actually looks nice, with no signs of recent disruption). I do not believe that there is currently a generally accepted, viable candidate mechanism that could accomplish this. RIP Stephen Hawking Stephen Hawking died earlier today. Hawking was diagnosed with ALS in the year I was born, in 1963. Defying his doctor's predictions, he refused to die after a few years. Instead, he carried on for another astonishing 55 years, living a full life. Public perception notwithstanding, he might not have been the greatest living physicist, but he was certainly a great physicist. The fact that he was able to accomplish so much despite his debilitating illness made him an extraordinary human being, a true inspiration. Here is a short segment, courtesy of CTV Kitchener, filmed earlier today at the Perimeter Institute. My friend and colleague John Moffat, who met Hawking many times, is among those being interviewed: The Solar Gravitational Telescope There is a very interesting concept in the works at NASA, to which I had a chance to contribute a bit: the Solar Gravitational Telescope. The idea, explained in this brand new NASA video, is to use the bending of light by the Sun to form an image of distant objects. The resolving power of such a telescope would be phenomenal. In principle, it is possible to use it to form a megapixel-resolution image of an exoplanet as far as 100 light years from the Earth. The technical difficulties are, however, challenging. For starters, a probe would need to be placed at least 550 astronomical units (about four times the distance to Voyager 1) from the Sun, precisely located to be on the opposite side of the Sun relative to the exoplanet. The probe would then have to mimic the combined motion of our Sun (dragged about by the gravitational pull of planets in the solar system) and the exoplanet (orbiting its own sun). Light from the Sun will need to be carefully blocked to ensure that we capture light from the exoplanet with as little noise as possible. And each time the probe takes a picture of the ring of light (the Einstein ring) around the Sun, it will be the combined light of many adjacent pixels on the exoplanet. The probe will have to traverse a region that is roughly a kilometer across, taking pictures one pixel at a time, which will need to be deconvoluted. The fact that the exoplanet itself is not constant in appearance (it will go through phases of illumination, it may have changing cloud cover, perhaps even changes in vegetation) further complicates matters. Still… it can be done, and it can be accomplished using technology we already have. By its very nature, it would be a very long duration mission. If such a probe was launched today, it would take 25-30 years for it to reach the place where light rays passing on both sides of the Sun first meet and thus the focal line begins. It will probably take another few years to collect enough data for successful deconvolution and image reconstruction. Where will I be 30-35 years from now? An old man (or a dead man). And of course no probe will be launched today; even under optimal circumstances, I'd say we're at least a decade away from launch. In other words, I have no chance of seeing that high-resolution exoplanet image unless I live to see (at least) my 100th birthday. Still, it is fun to dream, and fun to participate in such things.
Though now I'd better pay attention to other things as well, including things that, well, help my bank account, because this sure as heck doesn't. And the answer is: Momentum conservation I was surprised by the number of people who found my little exercise about kinetic energy interesting. However, I was disappointed by the fact that only one person (an astrophysicist by trade) got it right. It really isn't a very difficult problem! You just have to remember that in addition to energy, momentum is also conserved. In other words, when a train accelerates, it is pushing against something… the Earth, that is. So ever so slightly, the Earth accelerates backwards. The change in velocity may be tiny, but the change in energy is not necessarily so. It all depends on your reference frame. So let's do the math, starting with a train of mass \(m\) that accelerates from \(v_1\) to \(v_2\). (Yes, I am doing the math formally; we can plug in the actual numbers in the end.) Momentum is of course velocity times mass. Momentum conservation means that the Earth's speed will change as \[\Delta v = -\frac{m}{M}(v_2-v_1),\] where \(M\) is the Earth's mass. If the initial speed of the Earth is \(v_0\), the change in its kinetic energy will be given by \[\frac{1}{2}M\left[(v_0+\Delta v)^2-v_0^2\right]=\frac{1}{2}M(2v_0\Delta v+\Delta v^2).\] If \(v_0=0\), this becomes \[\frac{1}{2}M\Delta v^2=\frac{m^2}{2M}(v_2-v_1)^2,\] which is very tiny if \(m\ll M\). However, if \(|v_0|>0\) and comparable in magnitude to \(v_2-v_1\) (or at least, \(|v_0|\gg|\Delta v|\)), we get \[\frac{1}{2}M(2v_0\Delta v+\Delta v^2)=-mv_0(v_2-v_1)+\frac{m^2}{2M}(v_2-v_1)^2\simeq -mv_0(v_2-v_1).\] Note that the actual mass of the Earth doesn't even matter; we just used the fact that it's much larger than the mass of the train. So let's plug in the numbers from the exercise: \(m=10000~{\rm kg}\), \(v_0=-10~{\rm m}/{\rm s}\) (negative, because relative to the moving train, the Earth is moving backwards), \(v_2-v_1=10~{\rm m}/{\rm s}\), thus \(-mv_0(v_2-v_1)=1000~{\rm kJ}\). So the missing energy is found as the change in the Earth's kinetic energy in the reference frame of the second moving train. Note that in the reference frame of someone standing on the Earth, the change in the Earth's kinetic energy is imperceptibly tiny; all the \(1500~{\rm kJ}\) go into accelerating the train. But in the reference frame of the observer moving on the second train on the parallel tracks, only \(500~{\rm kJ}\) goes into the kinetic energy of the first train, whereas \(1000~{\rm kJ}\) is added to the Earth's kinetic energy. But in both cases, the total change in kinetic energy, \(1500~{\rm kJ}\), is the same and consistent with the readings of the electricity power meter. Then again… maybe the symbolic calculation is too abstract. We could have done it with numbers all along. When a \(10000~{\rm kg}\) train's speed goes from \(10~{\rm m}/{\rm s}\) to \(20~{\rm m}/{\rm s}\), it means that the \(6\times 10^{24}~{\rm kg}\) Earth's speed (in the opposite direction) will change by \(10000\times 10/(6\times 10^{24})=1.67\times 10^{-20}~{\rm m}/{\rm s}\). In the reference frame in which the Earth is at rest, the change in kinetic energy is \(\tfrac{1}{2}\times (6\times 10^{24})\times (1.67\times 10^{-20})^2=8.33\times 10^{-16}~{\rm J}\).
However, in the reference frame in which the Earth is already moving at \(10~{\rm m}/{\rm s}\), the change in kinetic energy is \(\tfrac{1}{2}\times (6\times 10^{24})\times (10+1.67\times 10^{-20})^2-\tfrac{1}{2}\times (6\times 10^{24})\times 10^2\)\({}=\tfrac{1}{2}\times (6\times 10^{24})\times[2\times 10\times 1.67\times 10^{-20}+(1.67\times 10^{-20})^2] \)\({}\simeq 1000~{\rm kJ}\). Enough blogging about personal stuff like our cats. Here is a neat little physics puzzle instead. Solving this question requires nothing more than elementary high school physics (assuming you were taught physics in high school; if not, shame on the educational system where you grew up). No tricks, no gimmicks, no relativity theory, no quantum mechanics, just a straightforward application of what you were taught about Newtonian physics. We have two parallel rail tracks. There is no friction, no air resistance, no dissipative forces. On the first track, let's call it A, there is a train. It weighs 10,000 kilograms. It is accelerated by an electric motor from 0 to 10 meters per second. Its kinetic energy, when it is moving at \(v=10~{\rm m/s}\), is of course \(K=\tfrac{1}{2}mv^2=500~{\rm kJ}\). Next, we accelerate it from 10 to 20 meters per second. At \(v=20~{\rm m/s}\), its kinetic energy is \(K=2000~{\rm kJ}\), so an additional \(1500~{\rm kJ}\) was required to achieve this change in speed. All this is dutifully recorded by a power meter that measures the train's electricity consumption. So far, so good. But now let's look at the B track, where there is a train moving at the constant speed of \(10~{\rm m/s}\). When the A train is moving at the same speed, the two trains are motionless relative to each other; from B's perspective, the kinetic energy of A is zero. And when A accelerates to \(20~{\rm m/s}\) relative to the ground, its speed relative to B will be \(10~{\rm m/s}\); so from B's perspective, the change in kinetic energy is \(500~{\rm kJ}\). But the power meter is not lying. It shows that the A train used \(1500~{\rm kJ}\) of electrical energy. Question: Where did the missing \(1000~{\rm kJ}\) go? First one with the correct answer gets a virtual cookie. GW170817 is a big freaking deal Today, a "multi-messenger" observation of a gravitational wave event was announced. This is a big freaking deal. This is a Really Big Freaking Deal. For the very first time, ever, we observed an event, the merger of two neutron stars, simultaneously using both gravitational waves and electromagnetic waves, the latter including light, radio waves, UV, X-rays, gamma rays. From http://iopscience.iop.org/article/10.3847/2041-8213/aa91c9 The significance of this observation must not be underestimated. For the first time, we have direct validation of a LIGO gravitational wave observation. It demonstrates that our interpretation of LIGO data is actually correct, as is our understanding of neutron star mergers; one of the most important astrophysical processes, as it is one of the sources of isotopes heavier than iron in the universe. Think about it… every time you hold, say, a piece of gold in your hands, you are holding something that was forged in an astrophysical event like this one billions of years ago. Gravitational wave astronomy has arrived So here it is: another gravitational wave event detection by the LIGO observatories. But this time, there is a twist: a third detector, the less sensitive European VIRGO observatory, also saw this event. This is amazing. 
Among other things, having three observatories see the same event is sufficient to triangulate the sky position of the event with much greater precision than before. With additional detectors coming online in the future, the era of gravitational wave astronomy has truly arrived. Video about the Solar Gravitational Telescope Astronomy, Personal, Physics, Space 1 Response » There is a brand new video on YouTube today, explaining the Solar Gravitational Telescope concept: It really is very well done. Based in part on our paper with Slava Turyshev, it coherently explains how this concept would work and what the challenges are. Thank you, Jimiticus. But the biggest challenge… this would be truly a generational effort. I am 54 this year. Assuming the project is greenlighted today and the spacecraft is ready for launch in ten years' time… the earliest for useful data to be collected would be more than 40 years from now, when, unless I am exceptionally lucky with my health, I am either long dead already, or senile in my mid-90s.
Cavity nonlinear optics due to collective atomic motion S. Gupta, K. L. Moore, K. W. Murch, and D. M. Stamper-Kurn, Cavity nonlinear optics at low photon numbers from collective atomic motion, quant-ph/0706.1052 (June 2007) Nonlinear optical phenomena typically occur at high optical intensities, as conventional materials mediate only weak coupling between photons. Under the condition of strong coupling, atomic saturation leads to a host of nonlinear effects. In our experiment, collective atomic motion mediates the coupling between photons. The long-lived motional coherence of ultracold gases permits nonlinear optical phenomena to occur when the average number of photons in the cavity is far less than one. When atoms are located at the gradient of the probe standing wave field, forces from the probe potential cause a spatial shift of the atoms, changing the coupling, and thereby the total index of refraction. This is analogous to the Kerr effect in solids. This Kerr effect leads to asymmetric and bistable spectra of the cavity, and is equivalent to the system of a damped, driven, nonlinear oscillator. The interaction of the atoms with the probe light depends on the position of the atoms, since they experience different intensities of the probe depending on where in the standing wave they are located. If the atoms are distributed evenly throughout the standing wave, the average interaction is simply half of the maximum. Consider, for example, a sample of atoms harmonically trapped at a location at the inflection point of the probe intensity. Here, atoms experience a force, due to the gradient of the AC Stark shift potential, which is proportional to the intensity of the probe light in the cavity. The effect of this force is to cause a spatial shift in the position of the trap minimum, and thereby a change in the interaction of the atoms with the probe. We probe the atom-cavity system with probe light far detuned from the atomic resonance; in this limit, each atom can be considered as a small piece of refractive medium, changing the index of refraction inside the cavity. Here, the presence of N atoms shifts the resonance of the atom-cavity system by an amount $$ \Delta_N = N g^2/\Delta_a$$ Accounting for how the probe changes the spatial position of the atoms, and thereby the coupling \(g\), \(\Delta_N\) becomes partly proportional to the intensity of the probe: $$ \Delta_N =\frac{ N g^2}{\Delta_a} (1 + \epsilon \bar{n}) $$ Here \(\epsilon\) depends on the parameters of the system, and \(\bar{n}\), the average number of photons in the cavity, characterizes the intracavity intensity. This indicates that the atom-cavity line center depends on the probe intensity. We study this effect by sweeping the probe frequency across the cavity resonance. For probe light which is red-detuned from the atomic resonance, the resulting position shift moves atoms to regions of higher coupling, and shifts the cavity resonance further to the left (i.e. larger \(|\Delta_N|\)). When the probe light is swept across the resonance, an asymmetric transmission is observed. Here, as the probe is swept from left to right, the increasing intensity causes the cavity line to shift increasingly to the left, resulting in the observed asymmetric spectra. The cavity lineshape is described by a Lorentzian whose line center depends on the intracavity intensity.
For a probe of constant drive strength, with \(\bar{n}_{max}\) the maximum intracavity intensity, the resulting intensity is $$ \bar{n} = \frac{\bar{n}_{max}}{1+ (\frac{\Delta_{pc} - \Delta_N (1+\epsilon \bar{n})}{\kappa})^2} $$ where \(\Delta_{pc}\) is the detuning of the probe frequency from the bare cavity resonance. The intracavity intensity \(\bar{n}\) is now described by a cubic equation, which in some cases has three real solutions, corresponding to a situation where, for a constant drive, there are three possible intracavity intensities. In practice, although there are three possible real solutions for the intracavity intensity, only two of these solutions are stable. Tuning the parameters of our system to this region of bistability, we observe starkly different transmissions depending on whether the sweep of the probe was from right to left or from left to right. In contrast to previous work in cavity QED systems which observed absorptive bistability [1-3], the dispersive bistability observed here results from motion of the atomic medium, and not from an atomic saturation based nonlinearity. [1] G. Rempe et al., Phys. Rev. Lett. 67, 1727 (1991). [2] J. Gripp et al., Phys. Rev. A 54, R3746 (1996). [3] J. A. Sauer et al., Phys. Rev. A 69, 051804(R) (2004). Source: http://dx.doi.org/10.1103/PhysRevLett.99.213601
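To make the bistability condition concrete, here is a minimal numerical sketch (not from the paper) that rewrites the steady-state relation above as a cubic in \(\bar{n}\) and solves it with NumPy; the values chosen for \(\kappa\), \(\Delta_N\), \(\epsilon\), \(\bar{n}_{max}\) and the scan over \(\Delta_{pc}\) are hypothetical and picked only so that a bistable window appears.

```python
import numpy as np

def intracavity_intensity(delta_pc, delta_N, eps, kappa, n_max):
    """Real, non-negative solutions of
    n = n_max / (1 + ((delta_pc - delta_N*(1 + eps*n)) / kappa)**2),
    rewritten as the cubic a^2 n^3 - 2ab n^2 + (kappa^2 + b^2) n - n_max kappa^2 = 0
    with a = delta_N*eps and b = delta_pc - delta_N."""
    a = delta_N * eps
    b = delta_pc - delta_N
    roots = np.roots([a**2, -2.0 * a * b, kappa**2 + b**2, -n_max * kappa**2])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[real >= 0.0])

# Hypothetical parameters, expressed in units of the cavity half-linewidth kappa.
kappa, delta_N, eps, n_max = 1.0, -3.0, 0.8, 2.0

for delta_pc in np.linspace(-9.0, -1.0, 9):
    sols = intracavity_intensity(delta_pc, delta_N, eps, kappa, n_max)
    label = "  <- three steady states (bistable)" if len(sols) == 3 else ""
    print(f"delta_pc = {delta_pc:+.1f}: n_bar = {np.round(sols, 3)}{label}")
```

Scanning \(\Delta_{pc}\) in one direction or the other and keeping the previously occupied stable branch reproduces the hysteresis described in the text.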
Informal Caregiving, Employment Status and Work Hours of the 50+ Population in Europe Nicola Ciccarelli & Arthur Van Soest De Economist volume 166, pages 363–396 (2018) Using panel data on the age group 50–70 in 15 European countries, we analyze the effects of providing informal care to parents, parents-in-law, stepparents, and grandparents on employment status and work hours. We account for fixed individual effects and test for endogeneity of caregiving using moments exploiting standard instruments (e.g., parental death) as well as higher-order moment conditions (Lewbel instruments). Specification tests suggest that informal care provision and daily caregiving can be treated as exogenous variables. We find a significant and negative effect of daily caregiving on employment status and work hours. This effect is particularly strong for women. On the other hand, providing care at a weekly (or less than weekly) frequency does not significantly affect paid work. We do not find evidence of heterogeneous effects of caregiving on paid work across European regions. Informal caregiving refers to unpaid care provided by family members and friends to individuals who are temporarily or permanently unable to function independently. Such care is currently the most common source of long-term care (see Costa-Font et al. 2016 and references therein). The ageing of industrialized countries' populations, and notably the growing number of the very old, is increasing the need for informal caregiving and, more generally, the need for long-term care services (Costa-Font et al. 2015). Informal caregiving may affect the employment status and work hours of caregivers, since caregiving is a time- and energy-consuming activity that may be hard to combine with work duties. From a policy point of view, it is important to understand whether caregiving indeed has a negative impact on employment status or the number of hours of paid work. For example, policies that reduce formal care opportunities or increase the costs of formal care will probably lead to more informal care, and it is important to know whether this has negative side effects on labour supply. In this study we estimate the causal effects of caregiving on employment and work hours using static and dynamic panel data models. Since the majority of informal caregivers provide help to their elderly parents (see, e.g., Plaisier et al. 2015, pp. 267–274),Footnote 1 our study focuses on the effects of informal care provision for parents, parents-in-law, stepparents, and grandparents. Our data come from SHARE, the Survey of Health, Ageing and Retirement in Europe, providing longitudinal information at the individual level on individuals of age 50 and over in a large set of European countries. SHARE contains rich data on participation in (and frequency of) informal care, and on employment and work hours.Footnote 2 The simultaneous nature of decisions on caregiving and paid work activities makes the identification of causal effects challenging. Many older studies do not have the ambition to estimate causal effects. A substantial number of recent studies, however, aim at identifying the causal effect of caregiving on employment and work hours using a variety of models and identification strategies, e.g. panel data models with fixed effects, and cross-sectional (or panel data) instrumental variable estimators treating caregiving as endogenous.
Past work has used parental health, parental death, and distance between parents and children as instrumental variables for caregiving by the respondent (see, e.g., Bolin et al. 2008; Van Houtven et al. 2013).Footnote 3 In addition to these instrumental variables, we also use instruments that rely on higher-order moment conditions, following the methodology of Lewbel (2012). Since in this way we use two very different sources of (plausibly exogenous) variation in caregiving status, the results of endogeneity tests reported in this study are likely to be more informative than those of endogeneity tests based upon essentially only one source reported in many previous studies. Our main findings are the following. Controlling for individual time-invariant characteristics (which may affect caregiving and employment status or work hours), we find that intensive caregiving significantly reduces the probability of being employed and the number of hours of paid work. On the other hand, providing care at a weekly (or less than weekly) frequency does not significantly affect paid work. These results are confirmed in case we also control for state-dependence in employment status (or state-dependence in work hours). Moreover, the effects of intensive caregiving on employment status and work hours are stronger for females than for males. Furthermore, interacting caregiving status with dummies for geographic region of residence,Footnote 4 we find that the effect of caregiving on paid work is homogeneous across European regions. Finally, we do not find evidence that informal care provision and daily caregiving would be endogenous with respect to employment or work hours. The remainder of the paper is organized as follows. In Sect. 2 we briefly discuss existing studies which analyze the effects of caregiving on paid work. In Sect. 3 we describe the SHARE data and in Sect. 4 we discuss the methodology. In Sect. 5 we discuss the evidence of the effects of caregiving on employment status and work hours. Section 6 concludes. Table 1 summarizes some recent studies on the effects of caregiving on paid work. In a nutshell, the majority of studies have found that a high frequency of caregiving implies a negative effect on paid work (employment or work hours), though there is no consensus on the size of the effect. On the contrary, low frequency caregiving (e.g., a few hours per month) does not seem to affect paid work. Moreover, there is no evidence of a significant effect of caregiving on wages. Table 1 Selected studies on the effects of informal caregiving on paid work As reported in Carmichael and Charles (1998), the effect of caregiving to elderly parents on labor supply will be the net impact of two offsetting forces. On the one hand, we expect a substitution response: since time is scarce, informal care responsibilities will tend to increase the caregiver's shadow wage rate, reducing the probability of doing paid work. On the other hand, there could be a counteracting income effect: since caring is an expensive activity for the caregiver (see Carmichael and Charles 1998 and references therein), and since the care-receiver may not reimburse the caregiver for these costs, the expenditures associated with caring give a motive to increase earnings. Informal caregiving will reduce employment and work hours if the substitution effect dominates the income effect. Table 1 shows that different methodologies are used to estimate the effect of caregiving on paid work. 
Studies based on cross-sectional data typically use instrumental variables. Parental health is often used to construct instruments for caregiving, with the argument that parental health has no effect on paid work other than through caregiving (see, e.g., Bolin et al. 2008). Some recent studies also argue that instrumental variables are unnecessary since caregiving can be considered as exogenous (Lilly et al. 2010; Jacobs et al. 2014). Longitudinal studies often look at transition probabilities (e.g., Berecki-Gisolf et al. 2008), investigating whether individuals doing informal care and paid work at time t have a higher probability to leave the labor market before time \(t+1\) compared to individuals who are working but do not give care in period t. Other longitudinal studies use panel data models with individual fixed effects, thus controlling for all time-invariant confounding factors. See, e.g., Heitmueller (2007), who also compares cross-sectional IV estimates with fixed effects (non-IV) panel data estimates. The most advanced studies combine instrumental variables and panel data models; see, e.g., Ciani (2012) or Van Houtven et al. (2013). The sample period and the nature of the sample vary widely across studies. Many studies use data for one particular country (US, UK, Australia, Canada). Some only look at women who are traditionally less attached to the labor market than men and more often participate in informal care. Summarizing the results, we can say that low-frequency caregiving often has a small and insignificant effect on paid employment. On the other hand, intensive caregiving (defined in various ways) often has a stronger effect on employment or hours of paid work than low-frequency or no caregiving. The largest effect on employment is found by Crespo and Mira (2014). They find that, for southern European women that provide daily caregiving because of parental disability, daily caregiving implies a 45–65% decrease of the probability of being employed. The largest effect on hours of paid work is found by Van Houtven et al. (2013), who find with US data that intensive caregiving reduces the working week by an average of three hours. The results also vary with methodology; Ciani (2012) shows that the impact of informal care provision on employment strongly depends on the chosen method of estimation, from about 0-percentage-points (fixed effects instrumental variables estimation) to minus 24-percentage-points (pooled instrumental variables estimation). The Survey of Health, Ageing and Retirement in Europe (SHARE) is a European longitudinal dataset containing information on individuals of age 50 and older and their spouses. SHARE is modeled after the US Health and Retirement Study. It is currently composed of six waves with data ranging from 2004–2005 to 2015. Wave 3 is a life history survey that did not collect the information we need and cannot be used for the current analysis. In this study, four waves of the SHARE dataset are used for a longitudinal analysis: wave 1 collected in 2004–2005, wave 2 in 2006–2007, wave 4 in 2011–2012, and wave 5 in 2013.Footnote 5 The countries included for each wave are listed in Table 9 in the Appendix. Since we use panel data techniques, we focus on countries that are present in at least two waves.Footnote 6 Furthermore, we do not use data for Israel since interview years for Israel differ from those of the remaining countries. 
Data for (respondents living in) ten countries are included for each of the four waves: Austria, Belgium, Denmark, France, Germany, Italy, Netherlands, Spain, Sweden, and Switzerland. Additionally, we use data for Greece (waves 1,2), Czech Republic (waves 2,4,5), Poland (waves 2,4), Slovenia (waves 4,5), and Estonia (waves 4,5). In our analysis, we focus on the age group 50–70. SHARE has some spouses younger than 50 years old, but since this group is not representative for all those younger than 50, we discard these observations in the analysis. Since the retirement age has been increasing across Europe in recent years and more and more people retire after age 65, we chose the upper threshold of 70 years of age. SHARE is multidisciplinary, providing information on all relevant domains of the lives of the 50+ population. The most relevant information for our purposes is on labor market position and retirement, social support (including informal care), health, demographics, and family background (e.g., number of living parents). Employment Status and Weekly Work Hours As dependent variables, we created an employment (employee or self-employed) dummy on the basis of a survey question on occupational status, and the variable hours of paid work using a survey question on usual hours of paid work including unpaid or paid overtime.Footnote 7 We set hours of paid work equal to zero for individuals that are not currently employed. See Table 2 for the wordings of the questions and descriptive statistics by country.Footnote 8 Employment rates vary substantially across countries, from around 50% for Switzerland and the Scandinavian countries to less than 30% for men and even less for women in Italy, Poland and Slovenia. This partially reflects differences in (early) retirement arrangements (e.g., see Schils 2008). The standard amount of weekly work hours in Europe is around 40; including overtime, the sample average of weekly hours (conditional on being employed) is 41.19 for males and 33.62 for females. The variation across countries is much larger for women than for men. The Netherlands has a particularly low sample mean for women, reflecting the fact that the large majority of Dutch women work part-time. Table 2 Descriptive statistics on variables for paid work Variables for Informal Care Provision We use two variables for caregiving: (a) a dummy variable for informal care provision, taking the value 1 if the respondent helps parents, parents-in-law, stepparents or grandparents (henceforth "parents") that live in another household, and 0 otherwise; (b) a dummy variable for daily caregiving, taking the value 1 if the respondent provides informal care at daily or almost daily frequency for a "parent" that lives in another household, and 0 otherwise.Footnote 9 We only use data on individuals that are family respondents, since questions on informal care provision are asked to non-family-respondents in waves 1 and 2 only. The question related to informal care provision for individuals that live outside the household has changed somewhat over time. 
In waves 1 and 2, there is a detailed breakdown of whether the help was given for (a) personal care (e.g., dressing), (b) practical household help (e.g., shopping), (c) help with paperwork (e.g., filling out forms).Footnote 10 In waves 4 and 5, respondents are simply asked whether they have given personal care or practical household help to someone outside the household, and there is no follow-up question regarding the type of care provided by the respondent.Footnote 11 Therefore, in order to construct a consistent measure of informal care provision, we define informal caregiving as help for personal care or practical household help. If the respondent reported that (s)he provided help, a follow-up question asked who was the recipient, a relative (e.g., a child, a parent), a neighbor, or a friend. This exercise was repeated for up to three different recipients of informal care. We generated a dummy variable equal to 1 if the respondent provided personal care or practical household help for (at least) one "parent" living in another household, 0 otherwise, denoted as "informal caregiving" from now on. Summary statistics on informal caregiving for male and female respondents (in the age group 50–70) by country are presented in the left-hand panel of Table 3. In the complete sample, 11.7% of all males and 13.5% of all females provide informal care to a "parent" living in another household. In all countries, females are more likely to provide informal care than males. The prevalence of informal caregiving is lowest in Greece and Poland and highest in Denmark. Table 3 Descriptive statistics on variables for caregiving For each extra-residential recipient of informal caregiving, the respondent was asked whether care to this recipient was given (a) daily or almost every day, (b) almost every week, (c) almost every month, (d) less often (than at monthly frequency). We define a dummy for daily or almost daily caregiving as 1 if daily or almost daily care was given to at least one "parent" that lives in another household, 0 otherwise (henceforth "daily caregiving").Footnote 12 The frequencies of (almost) daily care provision by gender and country are given in the right-hand panel of Table 3. Only a minority of informal care providers give informal care almost daily, particularly among males and in the Scandinavian countries. Relatively high rates of almost daily informal care are found in Italy and the Czech Republic. Overall, the participation rates in (almost) daily informal care are 2.2% for men and 3.7% for women aged 50–70. Independent variables (used as controls in the regression models) include the standard demographics age, age squared, marital status, number of children, and household size.Footnote 13 See Table 10 in the Appendix for descriptive statistics. Almost 64% of all respondents are married. The average number of living children is 2.06, but the average household size of 2.12 shows that only a small minority of them live in the same household as their parent. Informal care decisions can depend on household composition since they are often negotiated in a family context (Heitmueller 2007, p. 538). We use static and dynamic linear panel data models with fixed individual specific effects.Footnote 14 The static model can be written as $$\begin{aligned} y_{it} = x_{it}' \beta + \alpha _i + \epsilon _{it}, (i=1,..., N; t=1,..., T). 
\end{aligned}$$ Here \(i =1, \ldots ,N\) and \(t = 1,\ldots ,T\) denote the individual and the wave, respectively; \(y_{it}\) is the employment dummy or the number of work hours. \(\beta \) is a K-dimensional vector of unknown parameters; the K-dimensional vector \( x_{it}\) contains the explanatory variables (including wave fixed effects). \(\alpha _i\) varies across individuals and is fixed over time for the same individual. To eliminate unobserved time-invariant individual heterogeneity, the model is estimated in first differences,Footnote 15 defining \(\Delta y_{it} = y_{it} - y_{i,t-1}\), etc.: $$\begin{aligned} \Delta y_{it} = \Delta x_{it}' \beta + \Delta \epsilon _{it}, (i=1,..., N; t=2,\ldots ,T). \end{aligned}$$ The static model with strictly exogenous explanatory variables assumes that \(E[\Delta x_{it} \Delta \epsilon _{it}] = 0\) for \(t = 2, \ldots , T\). This model can be estimated using ordinary least squares (OLS) on the equation in first differences (2) (henceforth the FD estimator). If, however, \(\Delta y_{it}\) has a causal effect on \(\Delta x_{it}\) (reverse causality), then the FD estimator is inconsistent. Moreover, if (time-varying) omitted variables are correlated with both \(\Delta x_{it}\) and \( \Delta y_{it}\), then the FD estimator is also inconsistent. To account for this potential problem, the static model with potentially endogenous \(\Delta x_{it}\) can be estimated using a first difference (generalized) instrumental variables estimator (FDIV), provided strictly exogenous instruments \(z_{it}\) (or \(\Delta z_{it}\)) are available. We use several instruments that have been exploited in the existing literature; they will be discussed below in detail. These instruments rely on the moment condition \(E[\Delta z_{it} \Delta \epsilon _{it}] = 0\). Moreover, we use instruments that rely on higher-order moment conditions, following Lewbel (2012). We use robust standard errors, clustered at the individual level. In addition, we use dynamic linear panel data models with fixed individual specific effects. The dynamic panel data model can be expressed in first differences as $$\begin{aligned} \Delta y_{it} = \gamma \Delta y_{i(t-1)} + \Delta x_{it}' \beta + \Delta \epsilon _{it}, (i=1,..., N; t=3,\ldots ,T), \end{aligned}$$ where \(\Delta y_{i(t-1)}\) is the state dependence variable (in first differences) and \(\gamma \) is the state dependence parameter. The dynamic panel data models are estimated with the Generalized Method of Moments (GMM), where the moments depend on the assumptions on \(\epsilon _{it}\) (and \(x_{it}\)). The assumption that \(\epsilon _{it}\) is independent of everything before t implies that \(y_{is}\), with \((s = 1,\dots ,t-2)\), is independent of \(\Delta \epsilon _{it}\). This leads to moments with instruments for the state dependence variable (Arellano and Bond 1991).Footnote 16 Under the weaker assumption that \(\epsilon _{it}\) is independent of everything in time period \(t-2\) or earlier, higher order lags such as \(y_{is}\), with \(s \le t-3\), need to be used as instruments. Moreover, strictly exogenous instruments \(z_{it}\) (or \(\Delta z_{it}\)) can be used for endogenous variables contained in \( \Delta x_{it}\). Since the residuals may be heteroscedastic or arbitrarily correlated over time, we use cluster-robust standard errors, clustered at the individual level. 
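Before turning to the dynamic specification, a purely illustrative sketch of the first-difference (FD) estimator in (2) with standard errors clustered at the individual level is given below. It is not the authors' code; the column names (pid, wave, employed, daily_care, and so on) are hypothetical stand-ins for the SHARE variables described above.

```python
import pandas as pd
import statsmodels.api as sm

def first_difference_ols(df, dep, regressors, id_col="pid", time_col="wave"):
    """OLS on within-person first differences (eq. 2), with cluster-robust
    standard errors at the individual level. Wave dummies are omitted here
    for brevity, and consecutive waves are assumed."""
    df = df.sort_values([id_col, time_col])
    cols = [dep] + list(regressors)
    diffs = df.groupby(id_col)[cols].diff().add_prefix("d_")
    data = pd.concat([df[[id_col]], diffs], axis=1).dropna()

    y = data["d_" + dep]
    X = sm.add_constant(data[["d_" + r for r in regressors]])
    return sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": data[id_col]})

# Hypothetical usage mirroring the employment equation:
# fd_res = first_difference_ols(panel, dep="employed",
#                               regressors=["daily_care", "married",
#                                           "n_children", "hh_size"])
# print(fd_res.summary())
```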
The estimator for the dynamic panel data model in differences (3) is denoted as the AB (Arellano Bond) estimator.Footnote 17 Finally, since the estimation of dynamic panel data models relies on the assumptions on \(\epsilon _{it}\) (see above), we test for serial correlation of the error term in levels (\(\epsilon _{it}\)), using the test proposed in Section 10.6.3 of Wooldridge (2002).Footnote 18 Instrumental Variables for Informal Caregiving and for Daily Caregiving To allow (and test) for endogeneity of the caregiving variables, we use the standard moment condition \(E[\Delta z_{it} \Delta \epsilon _{it}]=0\), as well as higher-order moment conditions using so-called Lewbel instruments (see Lewbel 2012) exploiting heteroskedasticity. Parental Survival Status and Health, and Distance from Parental Residence We first describe the instrumental variables that are based on the moment condition \(E[\Delta z_{it} \Delta \epsilon _{it}]= 0 \). Following the literature, we constructed several instrumental variables for (daily or almost daily) caregiving. The most common instruments used in the literature are based upon health and survival status of parents or health of household members (see, e.g., Bolin et al. 2008; Ciani 2012; Heitmueller 2007; Van Houtven et al. 2013). In our set of instrumental variables (\(z_{it}\)), we therefore include: (1) a dummy variable equal to 1 if the respondent's mother is dead in wave t, 0 otherwise; (2) a dummy equal to 1 if the respondent's father is dead in wave t, 0 otherwise; (3) a dummy equal to 1 if the respondent's mother (is alive and) has poor or very poor health in wave t, 0 otherwise; (4) a dummy equal to 1 if the respondent's father (is alive and) has poor or very poor health in wave t, 0 otherwise.Footnote 19 Moreover, following Bolin et al. (2008), we include an instrument based upon the (geographical) distance between respondent and potential care recipient: (5) a dummy equal to 1 if the respondent's mother (is alive and) lives less than 1 kilometer away from the respondent in wave t, 0 otherwise. We also constructed an instrumental variable that is based on distance between the father's residence and the respondent's residence; however, this was not added since it proved to be a very weak instrument.Footnote 20 Existing studies conclude that these instrumental variables are likely to be valid. Coe and van Houtven (2009) find that parental health does not have a direct effect on the respondent's health or depressive symptoms, implying that parental death is unlikely to directly affect the respondent's work behavior via the bereavement effect. It is likely that the findings reported in Coe and van Houtven (2009) also apply in our study, since we use a panel data framework similar to that of Coe and van Houtven (2009). In a cross-sectional framework, it is often argued that parental health may affect the work behavior of adult children via the transmission of health-related genetic characteristics (e.g., see the discussion in Van Houtven et al. 2013, p. 243). Since we account for fixed effects, however, this problem does not occur: genetic characteristics are time-invariant and will be filtered out. Distance from parental residence is related to the time cost of providing informal care. With smaller distance between the informal caregiver and the care-receiver, time costs for the former will decrease, which in turn may increase the amount of informal care provided.
Since we control for individual fixed effects that capture time-invariant preferences for work and time-invariant preferences related to distance from the parental residence, the instrument "distance from the mother's residence" is likely to affect the respondent's employment status only through its effect on caregiving. Heteroscedasticity-based Instrumental Variables The second source of identification exploits higher-order moment conditions. The first aim of adding this second source of identification is to increase the (first-stage) strength of the instrument set. Moreover, having two very different types of instruments helps to increase the reliability of instrumental variables estimates; see Murray (2006), specifically, the section "Use Alternative Instruments". As shown in Lewbel (2012), variables Z that are correlated with the heteroscedasticity of the first-stage equation and that are uncorrelated with the product of the error terms of the first-stage and second-stage equations can be exploited as instruments, by interacting \(Z-\bar{Z}\) with the residuals from the first-stage equation explaining the endogenous regressor. The first condition can be easily tested through a Breusch-Pagan test for heteroskedasticity. Z can contain control variables, or variables that are excluded from the regression model, or both (see page 70 of Lewbel 2012). For Z we use (1) the respondent's age, (2) the respondent's household size, and (3) a time dummy for the second wave, all of which are significantly correlated with the heteroscedasticity of the error term of the first-stage equation at the 5% significance level. In our context of estimation in first differences, the heteroscedasticity-based instruments are constructed using the following procedure: (1) we estimate the first-stage equation \(\Delta Caregiving_{it} = \Delta Controls_{it}' \beta + \Delta \epsilon _{2it}\) by OLS, where \(\Delta \epsilon _{2it}\) is the first-stage error term in first differences, and where \(Controls_{it}\) include both control variables and wave fixed effects; (2) we save the first-stage residuals \(\widehat{\Delta \epsilon _{2it}}\); (3) we construct \( [\Delta Z_{it} - E(\Delta Z_{it})]\), where \(\Delta Z_{it}\) is a vector of random variables in first differences that is (assumed to be) uncorrelated with the product of the error terms \((\Delta \epsilon _{1it} \Delta \epsilon _{2it})\); (4) we generate the heteroscedasticity-based instruments, \( [\Delta Z_{it} - E(\Delta Z_{it})] \widehat{\Delta \epsilon _{2it}}\). We study the effects of informal caregiving (at any frequency) and daily caregiving on participation in paid work (an employment dummy) and paid work hours. In order to analyze the sensitivity of the estimates to the model assumptions, we present the main results for a number of (static and dynamic) panel data models. We first analyze the effects of (informal and daily) caregiving on employment and on work hours using the complete sample. Finally, we also investigate whether the impact of informal caregiving and daily caregiving on paid work variables differs by gender or across European regions. The Effects of Informal Caregiving and Daily Caregiving on Employment We first present estimates using techniques that do not exploit instrumental variables for informal caregiving and daily caregiving. Subsequently, we discuss estimates using static and dynamic panel data models, with the instruments for caregiving presented in Sect. 4.1 and using lags as instruments for the lagged dependent variable.
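For concreteness, the four-step construction of the heteroscedasticity-based (Lewbel) instruments described in Sect. 4.1 could be sketched as follows. This is an illustrative implementation only, assuming the data are already in first differences and using hypothetical column names (d_caregiving, d_age, d_hh_size, wave2); it is not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm

def lewbel_instruments(df, endog="d_caregiving",
                       controls=("d_age", "d_hh_size", "wave2"),
                       z_cols=("d_age", "d_hh_size", "wave2")):
    """Heteroscedasticity-based instruments (Lewbel 2012) for a first-differenced
    endogenous regressor, following the four steps in the text:
    (1) OLS of the differenced caregiving variable on the differenced controls;
    (2) save the first-stage residuals;
    (3) demean the chosen Z variables;
    (4) interact the demeaned Z with the residuals."""
    X = sm.add_constant(df[list(controls)])
    resid = sm.OLS(df[endog], X).fit().resid                    # steps (1)-(2)
    z_centered = df[list(z_cols)] - df[list(z_cols)].mean()     # step (3)
    return z_centered.mul(resid, axis=0).add_prefix("lewbel_")  # step (4)

# Hypothetical usage: append these columns to the standard instrument set
# (parental death/health and distance dummies) before FDIV or GMM estimation.
# instruments = pd.concat([standard_ivs, lewbel_instruments(panel_fd)], axis=1)
```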
Table 4 The effects of caregiving on employment—OLS, FD, (Static) FDIV, and AB (Arellano Bond) estimates OLS and First Difference Estimates Pooled OLS and FD (first difference) estimates for the effects of the caregiving variables on employment are reported in Columns 1–4 of Table 4.Footnote 21 Using the pooled OLS estimator, we find that the coefficient on informal caregiving is positive and significant at the 5% level, suggesting that people who are active on the labor market also tend to give informal care, but not necessarily reflecting any causal relationship (see Column 1 of Table 4). Controlling for individual fixed effects with the FD estimator reverses the sign of the coefficient on informal caregiving, and makes the coefficient statistically insignificant at the 5% level (see Column 3 of Table 4). We find that providing care at an (almost) daily frequency negatively affects the employment probability, both in the case that we do not control for individual fixed effects (see Column 2), and in the case that we control for individual fixed effects (see Column 4). In the latter case, the coefficient on daily caregiving is significant at the 5% level, suggesting that it is hard to combine daily caregiving and work. The estimate reported in Column 4 implies that providing care at an (almost) daily frequency reduces the probability of being employed by 7.6% (2.7-percentage-points) on average. By and large, these results are in line with earlier results. For example, Lilly et al. (2007) found that "only those heavily involved in caregiving are significantly more likely to withdraw from the labor market than non-caregivers." These estimates may still not reflect causal effects, e.g., due to reverse causality which would make the caregiving variable endogenous. We test whether the caregiving variables are exogenous by comparing FD estimates with first difference instrumental variables (FDIV) estimates, using the instrumental variables presented in Sect. 4.1. First Difference IV Estimates We present (second-stage) FDIV estimates of static panel data models in Columns 5–6 of Table 4. As shown in Column 5 of Table 4, using the set of eight instrumental variables described in Sect. 4.1, we find that the null hypothesis of exogeneity of informal caregiving is not rejected at the 5% significance level (Hausman test's p-value = 0.337). Moreover, the instrumental variables are jointly strong (F-statistic = 28.68),Footnote 22 and the over-identifying restrictions are not rejected at the 5% significance level (Hansen test's p-value = 0.219). The latter result indicates that the instrumental variables are likely to be exogenous. Finally, in line with the estimate from the first difference regression (see Column 3), we find that the second-stage coefficient on informal caregiving is insignificant (see Column 5). Based on the results from first difference and first difference IV estimations, we can conclude that low-intensity informal caregiving does not exert a negative effect on employment. If the instrumented regressor is daily caregiving, we also find that exogeneity is not rejected at the 5% significance level (Hausman test's p-value = 0.721); see Column 6 of Table 4. This suggests that daily caregiving is exogenous with respect to the error term in the employment equation.
Moreover, the instrumental variables are jointly strong to predict daily caregiving in the first-stage equation (F-statistic = 11.05),Footnote 23 and the instruments are likely to be exogenous since the over-identifying restrictions test does not reject at the 5% significance level (Hansen test's p-value = 0.114). In line with the estimate from the first difference regression (see Column 4 of Table 4), we find that the effect of daily caregiving on employment is negative. Moreover, the magnitude of the (second-stage) coefficient on daily caregiving is almost identical to the magnitude of the corresponding coefficient obtained with the first difference regression (see Column 4 and Column 6 of Table 4). These results suggest that the substitution effect arising from daily informal care provision dominates the income effect, i.e., daily caregiving exerts a negative effect on employment.Footnote 24 To conclude, in line with some studies reported in Sect. 2, we find that the caregiving variables are exogenous with respect to the error term on employment in the case that we control for individual fixed effects, wave fixed effects, and control variables. Additionally, considering the results of first difference and first difference IV estimations, we find evidence of a disemployment effect of daily caregiving, while providing informal care at any frequency does not imply a disemployment effect. These results are in line with the existing literature using panel data for other countries, such as Heitmueller (2007). He uses British data and finds an insignificant effect of caregiving but a significant effect of intensive caregiving, both in cross-section IV models and in standard fixed effects panel data models. Arellano Bond Estimates We present (second-stage) Arellano Bond estimates of dynamic panel data models in the last two columns of Table 4.Footnote 25 Since we control for lagged dependent variable (see Eq. (3)), the sample size shrinks from around 27,860 observations (static model) to 7609 observations in the case of dynamic models. Additionally, due to the inclusion of the lagged dependent variable in the regression model, respondents included in the sample for dynamic models are approximately 2 years older than respondents that are included in the sample for static models. Since the Hausman tests suggest that caregiving variables can be treated as exogenous variables also in the dynamic models,Footnote 26 we present the AB estimates that do not use instruments for the caregiving variables, using \(employment_{i(t-2)}\) and \(employment_{i(t-3)}\) as instruments for the state dependence variable in differences (\(\Delta employment_{i(t-1)}\)). Since these instruments rely on the assumption on serial correlation of the error term in levels (\(\epsilon _{it}\)), we first test for serial correlation in the error term. We do not reject the hypothesis that the error term in levels is not serially correlated at the 5% significance level (see the last row of Table 4), implying that the second and third lags of employment can be used as instruments for the state dependence variable (\(\Delta y_{i(t-1)}\)). Using the above-mentioned instruments for the state dependence variable (\(\Delta y_{i(t-1)}\)), we find that the lagged dependent variable is always highly significant (see Columns 7–8). 
Moreover, the state dependence parameter is close to 0.5, indicating that being employed in the previous wave increases the probability of being employed in the current wave by approximately 50-percentage-points. We find that, in line with the (static) FD estimate, the coefficient on informal caregiving is insignificant at the 5% level (see Column 3 and Column 7, respectively). This confirms our previous finding that informal caregiving as such does not exert a negative effect on employment. On the other hand, the coefficient on daily caregiving is significant at the 5% level, and daily caregiving leads to a 6.5-percentage-points decrease (22.0% decrease)Footnote 27 of the probability of being employed (see Column 8). In line with results from the static (FD) model, we conclude that daily caregiving exerts a strong disemployment effect. The estimated coefficient on daily caregiving in the dynamic panel data model is much larger than the corresponding coefficient in the static (FD) model (see Column 4 and Column 8 of Table 4, respectively). For this reason, we also estimated the effect of daily caregiving on employment using the smaller sample of 7609 observations—i.e., the sample for the dynamic model—and using the (static) FD estimator; in this case, the coefficient on daily caregiving is \(-0.056\) (\(p=0.003\)). The latter result implies that the contrasting evidence obtained with the static FD model and the dynamic panel data model can be ascribed to two main factors: (a) the inclusion of the state dependence variable in the regression model; (b) the presence of older individuals in the sample for the dynamic panel data model. Indeed, we may expect a stronger disemployment effect of daily caregiving for older individuals than for the overall sample: compared to younger individuals, older individuals may have a higher preference for dropping out of the labor market (or for retirement) when they are confronted with the need of providing daily informal care. To summarize, we find that the assumptions of the dynamic model are supported by the data. Furthermore, the lagged dependent variable exerts a strong effect, implying that employment is strongly persistent. Omitting the state dependence variable from the regression model thus might lead to misleading conclusions. Based on these previous arguments, we prefer the dynamic panel data models to the static panel data models, as the dynamic models capture the apparent state dependence in employment. Combining Daily and Non-daily Caregiving In the previous analysis, we separately analyzed the effect of the dummy variables for informal care provision and daily care provision on employment.Footnote 28 This choice was made for the following reason. 
We first tried to use both a dummy variable for informal care provision at weekly and less than weekly frequency (henceforth, non-daily caregiving), and a dummy variable for daily caregiving, but the instrumental variables were jointly weak for the (joint) instrumentation of these dummy variables.Footnote 29 Since the variables for informal care provision used in the previous analysis appear to be exogenous with respect to the error term on employment (see discussion above), we present extensions of the models treating caregiving as exogenous in which non-daily caregiving and daily caregiving can affect employment differently, using the pooled OLS, (static) first difference and AB estimators;Footnote 30 the regression results reported below account for the time-varying control variables reported in Sect. 3 and wave fixed effects. The estimation results are presented in Table 11 in the Appendix. Pooled OLS estimates provide correlational evidence for the relationship between caregiving and employment status. Using the (static) first difference and AB estimators, we find that non-daily caregiving is completely insignificant (see Columns 2–3 of Table 11), i.e., low-frequency informal care does not increase the likelihood of withdrawal from the labor market. On the contrary, daily caregiving is statistically significant at the 5% level using both the first difference and AB estimators (see Columns 2–3 of Table 11). Moreover, using the same panel data specifications, we find that the estimated coefficient on daily caregiving is approximately the same as in the case where the control group is composed of both non-caregivers and non-daily caregivers (see Columns 2 and 3 of Table 11 in the Appendix, and see Columns 4 and 8 of Table 4, respectively). To conclude, we find that the estimates for the effect of daily caregiving on employment status are not impacted by the choice of the control group. This implies that providing daily care to an elderly "parent" in the current wave has the same (negative) effect on employment for individuals that were non-caregivers or provided care at low frequency in the previous wave. The Effects of Informal Caregiving and Daily Caregiving on Work Hours In this section, we report estimates for the effect of informal caregiving and daily caregiving on work hours. We first present pooled OLS and first difference estimates, not using instruments for informal and daily caregiving. Next, we discuss first difference instrumental variables (FDIV) and Arellano Bond (AB) estimates, using instruments for the caregiving variables and/or for the lagged dependent variable. Pooled OLS and First Difference Estimates Pooled OLS and FD estimates for the effect of caregiving on work hours are reported in Columns 1–4 of Table 5.Footnote 31 The coefficient on informal caregiving switches from positive and statistically significant in the case of (pooled) OLS estimation to negative and insignificant in the case of first difference estimation; see Columns 1 and 3, respectively. This is similar to what we found for the employment dummy. The OLS estimate of the effect of daily caregiving on work hours is significant and negative (see Column 2 of Table 5). In line with the results obtained for the effect of daily caregiving on employment (see Sect. 5.1), the FD estimates show that variation in daily caregiving across waves implies a significant decrease of work hours (see Column 4). 
The size of the effect is substantial: Daily caregiving implies a 11.8% decrease of work hours.Footnote 32 Table 5 The effects of caregiving on paid work hours—OLS, FD, (Static) FDIV, and AB (Arellano Bond) estimates Since we cannot exclude the possibility that the caregiving variables are correlated with the error term on work hours, we consider first difference instrumental variables (FDIV) estimates, and compare FD and FDIV estimates in order to test for exogeneity of the caregiving variables. The instrumental variables were already described in Sect. 4.1. The (second-stage) FDIV estimates of static panel data models are presented in Columns 5–6 of Table 5. The instruments are jointly strong (F-statistic=28.91). As shown in Column 1 of Table 17 (electronic supplementary materials), the first-stage coefficients on the variables representing parental death, parental health and distance from the maternal residence are as expected. The null hypothesis of exogeneity of informal caregiving is not rejected at the 5% significance level (Hausman test's p-value = 0.669). Moreover, the over-identifying restrictions test does not reject (Hansen test's p-value = 0.426), suggesting that the instruments are valid. In line with the result from the first difference regression reported in Column 3, we find that the second-stage coefficient on informal caregiving is insignificant and close to zero,Footnote 33 indicating that informal care provision does not significantly reduce the number of paid work hours. Column 6 shows that the instrumental variables for daily caregiving are jointly strong (F-statistic=10.979).Footnote 34 They also seem to be exogenous, since the null hypothesis of the Hansen test is not rejected at the 5% significance level (Hansen test's p-value = 0.344). Moreover, the null hypothesis of exogeneity of daily caregiving is not rejected at the 5% significance level (Hausman test's p-value = 0.546). The second-stage coefficient on daily caregiving is negative, confirming that daily caregiving tends to reduce the number of work hours. Due to the large standard error associated to the coefficient on daily caregiving, we find that daily caregiving is not significant at the 5% level. However, considering the results from the FD and FDIV estimations, and in particular considering the fact that we do not reject the null hypothesis of exogeneity of daily caregiving, our results lead to the conclusion that providing informal care on a daily basis reduces the number of paid work hours. Second-stage estimates for dynamic panel data models are reported in the last two columns of Table 5.Footnote 35 In unreported regressions, we found that exogeneity of the caregiving variables is not rejected in the dynamic models.Footnote 36 We therefore do not use instruments for the caregiving variables (but we use \(work hours_{i(t-2)}\) and \(work hours_{i(t-3)}\) as instruments for the lagged dependent variable (\(\Delta work hours_{i(t-1)}\))). We do not reject the assumption that the error terms in levels are serially uncorrelated at the 1% significance level (see the last row of Table 5). This implies that the second and third lags of work hours can be used as instruments in the equation in first differences. Using these instruments for the lagged dependent variable, we find that the state dependence parameter is positive and highly significant (see Columns 7–8). 
Its point estimate is around 0.48, indicating that working an additional hour in the previous wave increases the predicted number of paid work hours in the current wave by 0.48 (hours). Informal caregiving does not have a significant impact on work hours at the 5% significance level (see Column 7), confirming the result of the static FD model (see Column 3). Moreover, in line with the result from the static (FD) model, we find that daily caregiving significantly reduces the number of paid work hours: caring for a "parent" at daily frequency leads to a 27.7% decrease of work hours.Footnote 37 The estimated effect of daily caregiving on work hours from the dynamic panel data model is much stronger than in the static FD model; see Column 4 and Column 8 of Table 5, respectively. We therefore also estimated the effect of daily caregiving on work hours using the static FD estimator and the smaller sample of 7522 observations—i.e., the sample for the dynamic model. From the latter regression we obtained that caring for a "parent" at daily frequency leads to a (statistically significant) 23.0% decrease of work hours.Footnote 38 Since the latter estimate lies between the estimate from the static FD model (see Column 4) and the estimate from the dynamic panel data model (see Column 8), the contrasting evidence obtained with the static FD model and the dynamic panel data model might be explained by two factors: (a) the inclusion of the lagged dependent variable in the regression model; (b) the presence of older individuals in the sample for the dynamic panel data model (see discussion in Sect. 5.1). The age of individuals included in the sample may affect the relationship between daily caregiving and work hours. Indeed, compared to younger individuals, older individuals may have a higher preference for working fewer hours (or to drop out of the labor market) in the case that they also need to provide informal care on a daily basis. Since the main assumption for dynamic panel data models—i.e., absence of serial correlation of the error term in levels - is not rejected at the 1% significance level, and since the state dependence parameter is strongly significant, we prefer dynamic panel data models to static panel data models. As an extension, we discuss estimates for models allowing for separate effects of non-daily caregiving (at weekly or less than weekly frequency) and daily caregiving on work hours. This specification implies that the control group used in the following analysis includes only non-caregivers. We report pooled OLS, first difference, and AB estimation results in Table 12 in the Appendix. Controlling for individual fixed effects through the first difference and AB estimators, we find that the coefficient on non-daily caregiving is completely insignificant (see Columns 2–3 of Table 12). This implies that, from a statistical point of view, providing informal care at weekly (or less than weekly) frequency does not imply a decrease of work hours. On the contrary, the coefficient on daily caregiving is statistically significant at the 5% level using both the first difference and AB estimators (Columns 2–3 of Table 12). Moreover, using the first difference and AB estimators, we find that the estimated effect of daily caregiving on work hours is very similar in the case that the control group includes only non-caregivers (see Columns 2–3 of Table 12), or both non-caregivers and non-daily caregivers (Columns 4 and 8 of Table 5). 
Heterogeneity by Gender Since females are less attached to the labor market than males (see Table 2), and since females are more likely to provide daily care (and more generally informal care) to "parents" than males (see Table 3), we investigated whether the effect of daily caregiving on employment and work hours differs across genders; estimates by gender for the effect of (any type of) informal caregiving on employment and work hours are not reported (but are available from the authors), since in line with the results reported in Tables 4 and 5, the coefficient on informal caregiving is completely insignificant for both genders. Static FD estimates for the effect of daily caregiving on employment and work hours are reported in Panel A of Table 6. As shown in Columns 1–2 of Panel A, daily caregiving does not significantly affect employment or work hours for males, but significantly affects both the probability of being employed and work hours for females (see Columns 3–4 of Panel A). For females, daily caregiving implies a 10.5% decrease of the probability of being employed and a 13.1% decrease of work hours.Footnote 39 Both effects are larger than the corresponding effects obtained for the full sample (see Sects. 5.1–5.2). A potential explanation is that labor supply of European females in the age group 50–70 is much more flexible than that of males, for example due to substitution between paid work and housework (see Hank and Juerges 2007, Table 1). Women may drop out of the labor market (or they may work fewer hours) when they have to provide daily care for a "parent", since they may not be able to combine daily caregiving, household chores and work duties. Table 6 The effects of daily caregiving on paid work by gender—FD and AB (two step GMM) estimates In Panel B of Table 6, we report (second-stage) Arellano Bond estimates for the effect of daily caregiving on employment and work hours by gender (again using the second and third lags of the dependent variable—\(y_{i(t-s)}\) with \(s=2,3\)—as instruments for \(\Delta y_{i(t-1)}\)). In line with the results from static panel data models, daily caregiving does not significantly affect employment or work hours for males, but it is significant for females. For females, caregiving implies a 30.6% decrease of the employment probability and a 31.8% decrease of work hours (Panel B).Footnote 40 As before, these effects are stronger than the corresponding effects obtained for the full sample (see Sects. 5.1–5.2). In line with the results reported in Sects. 5.1–5.2, the effect of daily caregiving on employment and work hours is stronger in the dynamic models than for the static models. Table 7 The effects of caregiving on paid work by geographical region—FD estimates Heterogeneity Across Geographical Regions Since European regions differ widely in terms of labor market attachment (see Table 2), formal care arrangements, and rates of informal (or daily) care provision (Table 3), it seems useful to investigate whether the effects of informal care on paid work differ across geographical regions. We generated dummies for Western Europe, Southern Europe and Eastern Europe; the baseline is Northern Europe.Footnote 41 We interacted these variables with the aforementioned informal care dummies, and, using the static FD estimator, we regressed the paid work variables on informal caregiving (or daily caregiving) and interactions with geographical region dummies. The results of the (static) FD estimator are reported in Table 7. 
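A minimal sketch of this interaction specification, again with hypothetical column names (d_employed, d_daily_care, region, pid) and with the other controls and wave dummies omitted for brevity, might look as follows; it is illustrative only and not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm

def fd_region_interactions(data, dep="d_employed", care="d_daily_care",
                           region_col="region", baseline="North", id_col="pid"):
    """First-differenced outcome regressed on differenced caregiving and its
    interactions with region dummies (baseline: Northern Europe)."""
    X = pd.DataFrame({care: data[care]})
    for region in sorted(data[region_col].unique()):
        if region == baseline:
            continue
        dummy = (data[region_col] == region).astype(float)
        X[f"{care}_x_{region}"] = data[care] * dummy  # interaction term
    X = sm.add_constant(X)
    return sm.OLS(data[dep], X).fit(cov_type="cluster",
                                    cov_kwds={"groups": data[id_col]})

# The coefficients on the interaction terms test whether the caregiving effect
# deviates from the Northern-European baseline (cf. Table 7).
```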
Even though there are large differences in terms of labor market attachment and rates of informal (or daily) care provision across geographical regions, the effects of informal caregiving and daily caregiving on paid work variables do not statistically significantly differ across geographical regions. Table 8 The effects of caregiving on paid work by geographical region—AB estimates We also replicated the previous analysis using the Arellano Bond estimator, see Table 8.Footnote 42 The interaction terms described above were statistically insignificant, implying that the effects of informal caregiving and daily caregiving on employment and work hours are not statistically different across European regions. We have analyzed the causal effects of informal caregiving and daily caregiving to elderly parents (biological parents, parents-in-law, stepparents) and grandparents on employment and work hours among the 50+ population in 15 European countries, using longitudinal data that cover the time period 2004–2013. We have focused on static and dynamic panel data models that allow for potential endogeneity of the main explanatory variable of interest, exploiting heteroscedasticity-based instruments and instrumental variables that have been introduced in the existing literature. From a methodological point of view, a main finding is that the results are sensitive to the chosen specification. In particular, panel data models that allow for fixed effects lead to different results than models that do not account for correlation between individual effects and the informal care variable that is the main explanatory variable of interest. But even when fixed effects are incorporated, the size and significance level of the effects of (daily) informal caregiving depend on whether a static or dynamic model is used and on whether an immediate reverse causal effect of paid work on informal care is allowed for. This can explain a large part of the variation in existing findings in the extensive empirical literature on the topic. It emphasizes the importance of selecting models that are supported by the data. Using models that pass tests for misspecification, we find that informal caregiving at low intensity does not significantly affect the probability of being employed or hours of paid work. On the other hand, we find negative effects of daily or almost daily caregiving on both the employment probability and weekly hours of paid work. Moreover, these negative effects of daily caregiving are much stronger for females than for males. Using our preferred models for employment and work hours, in which the caregiving variables appear to be exogenous, we find that daily caregiving decreases the probability of being employed by 6.5-percentage-points (22.0%) and reduces hours of paid work by almost 28%. These effects are substantial and have important implications, even though the share of individuals who provide daily caregiving is limited - around 2% of all observations on males and 4% of all observations on females. In spite of the many differences across European regions, we find no evidence of heterogeneity of the effects of informal or daily caregiving across the European regions that we consider (Western, Eastern, Southern or Northern Europe). The policy implications of negative effects of caregiving have been extensively discussed in the existing literature. See, e.g., OECD (2013), Heitmueller (2007) and Colombo et al. (2011). 
Due to population ageing and the increasing costs of formal long-term care provision, many governments try to shift part of the responsibility for long-term care of the elderly to children or other family members, increasing the demand for informal care provision. Negative spill-over effects on (for example) labor supply of care providers should be taken into account when evaluating this type of policy. We find no evidence that low-intensity informal care would have such a negative effect. If, however, formal care possibilities would become so scarce or expensive that daily or almost daily informal care needs to be provided, then the large negative effects on employment and hours of paid work that we find would be a source of concern. This conclusion was also drawn by Heitmueller (2007) for his analysis of UK data. We find essentially the same result for a large set of European countries, with very different institutional arrangements for formal care of the elderly and with very different labor market institutions. Moreover, we find that daily caregiving in particular has negative effects for women, hampering policies that are aimed at increasing the labor force participation of women in the age group 50 and older. Based on these results, specific policies that reduce the need for daily caregiving to elderly parents deserve serious consideration. One possibility is to create opportunities for substitution, e.g. vouchers or cash benefits that can be used to outsource part of the caregiving duties to specialized personnel. Alternatively, counseling or training sessions for intensive caregivers might be useful. Whether the latter policies are really effective is an open research question. Moreover, we have only considered the potentially negative effects on labor supply and other (negative or positive) side-effects should be considered also when evaluating this type of policies. The existing literature shows that other negative effects may well exist, see for example the evidence of negative effects on (mental) health in Coe and van Houtven (2009) for the US. Whether other effects of (daily) caregiving play a similar role in Europe remains to be investigated. Moreover, most elderly care-receivers receive help from their adult children (Coe and van Houtven 2009). It can be seen as a limitation that SHARE is restricted to 50+ individuals, but the age group 50–70 is the group for which providing informal care to elderly parents is most prevalent. This is not the complete list of instrumental variables that were used in previous studies; see Sect. 2. We distinguish Northern Europe, Western Europe, Eastern Europe, and Southern Europe. Data for wave 6 of SHARE are not used in the analysis, since they were not available at the start of the project. For more information on the data collection, see http://www.share-project.org/data-documentation/waves-overview.html. This excludes Ireland, Luxembourg, Hungary and Portugal. We do not consider wage rates since they can only be computed for the first two waves of SHARE; see Flores and Kalwij (2013) (page 7). For simplicity and ease of comprehension, Table 2 reports summary statistics on the number of work hours conditional on being employed in the current wave, rather than summary statistics for the actual variable for work hours that includes zeros. Informal care given to other sick or disabled individuals is not considered in this study, since data on this are available only for waves 1 and 2. 
Data on co-residential care to "parents" is not considered, since they do not contain the same information on frequency and type of care provided. In waves 1–2, respondents are asked the following question: "In the last twelve months, have you personally given any kind of help listed on this card to a family member from outside the household, a friend or neighbour?" If the answer is positive, then respondents are asked: "Which types of help have you given to this person in the last twelve months? (a) Personal care, e.g. dressing, (b) practical household help, e.g. shopping, (c) help with paperwork, e.g. filling out forms". In waves 4 and 5, respondents are asked the following question: "In the last twelve months, have you personally given personal care or practical household help to a family member living outside your household, a friend or neighbour?" We also experimented with different measures of intensive caregiving, but found that the only distinction that mattered was (almost) daily frequency or less often than daily frequency. We do not control for household income since this is endogenous to employment status. Differently from Ciani (2012) and Heitmueller (2007) but similarly to Bolin et al. (2008), we do not control for work disability, since in SHARE, work disability questions are only asked to non-workers. For static panel data models, we also report results from pooled OLS estimations (with wave fixed effects) in order to provide correlational evidence. Since the dynamic panel data models are estimated in first differences (see below), for ease of comparability we also estimate the static panel data model (1) in first differences rather than using the within group transformation. Following Arellano and Bond (1991), missing values for instruments \(y_{is}\) are replaced by zeros. We considered adding moments in levels ("System GMM"; see Blundell and Bond 1998) but this either gave virtually identical results or model specifications that were rejected by standard specification tests. Moreover, the fact that the estimated coefficient on the state dependence variable is always much lower than 1, suggests that moments in levels are not necessary. System GMM estimates are available from the authors. This requires the following steps: (1) Estimate the dynamic panel data model in first differences (see eq. (3)), using \(y_{i(t-2)}\) and \(y_{i(t-3)}\) as instruments for \(\Delta y_{i(t-1)}\). (2) Generate the residuals (\(\widehat{\Delta \epsilon _{it}}\)) from step 1, and compute the lagged residuals (\(\widehat{\Delta \epsilon _{i(t-1)}}\)). (3) Perform OLS of \(\widehat{\Delta \epsilon _{it}}\) on \(\widehat{\Delta \epsilon _{i(t-1)}} \). If the error term in levels is not serially correlated, then the coefficient should be equal to \(-0.5\). Values larger than \(-0.5\) indicate that the error term is positively serially correlated. Values of \(\widehat{\rho }\) smaller than \(-0.5\) indicates that the error term is negatively serially correlated. (4) Test whether \(\widehat{\rho }\) is equal to \(-0.5\) using a Wald test. Column 3 of Table 4 shows that the respondent's father is dead for 89.7% of observations (24,988 out of 27,867 observations), and we find that the respondent's mother is dead for 71.2% of observations (19,847 out of 27,867 observations). The F-statistic for "distance from the father's residence" is 3.87 when the instrumented variable is "informal caregiving", and 0.00 when the instrumented variable is "daily caregiving". 
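A minimal sketch of why the benchmark value in step (3) of the serial-correlation check above is \(-0.5\), assuming (as the test does under the null) that the level errors \(\epsilon_{it}\) are serially uncorrelated with common variance \(\sigma^{2}\): then \(\mathrm{Cov}(\Delta \epsilon_{it}, \Delta \epsilon_{i(t-1)}) = \mathrm{Cov}(\epsilon_{it}-\epsilon_{i(t-1)},\ \epsilon_{i(t-1)}-\epsilon_{i(t-2)}) = -\mathrm{Var}(\epsilon_{i(t-1)}) = -\sigma^{2}\), while \(\mathrm{Var}(\Delta \epsilon_{i(t-1)}) = 2\sigma^{2}\), so the population coefficient in the regression of \(\widehat{\Delta \epsilon_{it}}\) on \(\widehat{\Delta \epsilon_{i(t-1)}}\) is \(-\sigma^{2}/(2\sigma^{2}) = -0.5\). Positive serial correlation in the level errors raises this ratio above \(-0.5\) and negative serial correlation lowers it, consistent with the interpretation given in the footnote.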
Complete regression results for OLS and FD estimations are reported in Table 13 in the electronic supplementary materials. Furthermore, the first-stage coefficients on the instrumental variables "mother is dead", "father is dead", "mother's bad health", "father's bad health" and "distance from mother's residence" are as expected (see Column 1 of Table 14 (electronic supplementary materials) for the first-stage estimates). See Column 3 of Table 14 (electronic supplementary materials for first-stage estimates. The second-stage coefficient on daily caregiving from the FDIV estimation is statistically insignificant, but it must be stressed that the standard errors from FDIV estimations are at least 4 times larger than the standard errors from FD estimations; see Columns 3–6 of Table 4. First-stage estimates for dynamic models are reported in Column 1 and Column 3 of Table 15 (electronic supplementary materials). In Column 1 of Table 15 (electronic supplementary materials), informal caregiving is included in the set of independent variables. In Column 3 of Table 15 (electronic supplementary materials), daily caregiving is included in the set of independent variables. In the case of dynamic panel data models, we tested the exogeneity of the caregiving variables as follows. We used the eight instrumental variables specified in Sect. 4.1 for the caregiving variables, and \(y_{i(t-2)}\) and \(y_{i(t-3)}\) as the instruments for the state dependence variable in differences (\(\Delta y_{i(t-1)}\)). We then tested the exogeneity of each caregiving variable using the Hausman test exclusively for the caregiving variable. The null hypothesis of exogeneity of the caregiving variable is not rejected at the 5% level, both for informal caregiving and for daily caregiving. These estimates are not reported and are available from the authors. I.e., \(-\,0.065/0.295= -\,0.220\). See Sect. 3 for the definition of the dummy variables for informal care provision and daily care provision. These estimates are not reported and are available from the authors. In the case of the AB estimator, we use the second and third lag of employment status as the instrumental variables for lagged employment status. Complete regression results for OLS and FD estimations are reported in Table 16 (electronic supplementary materials). I.e., \(-1.55/13.19=-\,0.118\). The average number of paid work hours is 13.19. Thus, the estimated (second-stage) coefficient on informal caregiving (0.205) implies that informal caregiving has a very small (and insignificant) effect on the number of paid work hours. See Column 3 of Table 17 (electronic supplementary materials) for the first-stage estimates. Corresponding first-stage estimates are reported in Columns 1 (using any caregiving) and 3 (using daily caregiving) of Table 18 (electronic supplementary materials). See Footnote 26. The average of work hours for the dynamic model is 10.452. Thus, daily caregiving implies a 27.7% decrease of work hours (i.e., -2.902/10.452). I.e., \(-\,2.400/10.452 = -\,0.2296\). I.e., \(-\,0.034/0.324=-\,0.105\); \(-\,1.410/10.804= -\,0.131\). I.e., \(-\,0.081/0.265=-\,0.306\); \(-\,2.640/8.296= -\,0.318\). Western Europe: Austria, Belgium, France, Germany, Netherlands, Switzerland. Southern Europe: Greece, Italy, Spain. Eastern Europe: Czech Republic, Estonia, Poland, Slovenia. Northern Europe: Denmark, Sweden. First-stage estimates are not reported and are available from the authors. Arellano, M., & Bond, S. (1991). 
Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. The Review of Economic Studies, 58(2), 277–297. Berecki-Gisolf, J., Lucke, J., Hockey, R., & Dobson, A. (2008). Transitions into informal caregiving and out of paid employment of women in their 50s. Social Science and Medicine, 67(1), 122–127. Blundell, R., & Bond, S. (1998). Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics, 87, 115–143. Bolin, K., Lindgren, B., & Lundborg, P. (2008). Your next of kin or your own career? Caring and working among the 50+ of Europe. Journal of Health Economics, 27, 718–738. Carmichael, F., & Charles, S. (1998). The labour market costs of community care. Journal of Health Economics, 17(6), 747–765. Carmichael, F., Charles, S., & Hulme, C. (2010). Who will care? Employment participation and willingness to supply informal care. Journal of Health Economics, 29(1), 182–190. Ciani, E. (2012). Informal adult care and caregivers' employment in Europe. Labour Economics, 19, 155–164. Coe, N., & van Houtven, C. (2009). Caring for mom and neglecting yourself? The health effects of caring for an elderly parent. Health Economics, 18, 991–1010. Colombo, F., Llena-Nozal, A., Mercier, J., & Tjadens, F. (2011). Help wanted. Providing and paying for long-term care. Paris: OECD Publishing. Costa-Font, J., Courbage, C., & Swartz, K. (2015). Financing long-term care: Ex Ante, Ex Post or Both? Health Economics, 24(S1), 45–57. Costa-Font, J., Karlsson, M., & Øien, H. (2016). Careful in the crisis? Determinants of older people's informal care receipt in crisis-struck European countries. Health Economics, 25(S2), 25–42. Crespo, L., & Mira, P. (2014). Caregiving to elderly parents and employment status of European mature women. Review of Economics and Statistics, 96(4), 693–709. Flores, M., & Kalwij, A. (2013). What do wages add to the health-employment nexus? Evidence from older European workers, Netspar discussion paper 03/2013-005. Hank, K., & Juerges, H. (2007). Gender and the division of household labor in older couples: A European perspective. Journal of Family Issues, 28(3), 399–421. Heitmueller, A. (2007). The chicken or the egg? Endogeneity in labour market participation of informal carers in England. Journal of Health Economics, 26, 536–559. Jacobs, J., Laporte, A., Van Houtven, C., & Coyte, P. (2014). Caregiving intensity and retirement status in Canada. Social Science & Medicine, 102, 74–82. Leigh, A. (2010). Informal care and labor market participation. Labour Economics, 17, 140–149. Lewbel, A. (2012). Using heteroskedasticity to identify and estimate mismeasured and endogenous regressor models. Journal of Business and Economic Statistics, 30(1), 67–80. Lilly, M., Laporte, A., & Coyte, P. (2007). Labor market work and home care's unpaid caregivers: A systematic review of labor force participation rates, predictors of labor market withdrawal, and hours of work. The Milbank Quarterly, 85(4), 641–690. Lilly, M., Laporte, A., & Coyte, P. (2010). Do they care too much to work? The influence of caregiving intensity on the labor force participation of unpaid caregivers in Canada. Journal of Health Economics, 29(6), 895–903. Michaud, P.-C., Heitmueller, A., & Nazarov, Z. (2010). A dynamic analysis of informal care and employment in England. Labour Economics, 17, 455–465. Murray, M. (2006). Avoiding invalid instruments and coping with weak instruments. Journal of Economic Perspectives, 20(4), 111–132. 
Nguyen, H., & Connelly, L. (2014). The effect of unpaid caregiving intensity on labor force participation: Results from a multinomial endogenous treatment model. Social Science & Medicine, 100, 115–122. OECD. (2013). "Informal Carers" In Health at a Glance 2013: OECD Indicators. Paris: OECD Publishing. Plaisier, I., van Groenou, M., & Keuzenkamp, S. (2015). Combining work and informal care: the importance of caring organisations. Human Resource Management Journal, 25(2), 267–280. Schils, T. (2008). Early retirement in Germany, the Netherlands, and the United Kingdom: A longitudinal analysis of individual factors and institutional regimes. European Sociological Review, 24(3), 315–329. Van Houtven, C., Coe, N., & Skira, M. (2013). The effect of informal care on work and wages. Journal of Health Economics, 32(1), 240–252. Wooldridge, J. (2002). Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press. CentER and Department of Econometrics and Operations Research, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands Nicola Ciccarelli & Arthur Van Soest Nicola Ciccarelli Arthur Van Soest Correspondence to Arthur Van Soest. This research was partly funded by the Dutch Ministry of Social Affairs and Employment. We thank Eric Bonsang, Emanuele Ciani, Bertrand Melenberg, Jan C. van Ours, Ben A. Vollaard, Anne van Putten, Pierre Koning, and participants of a workshop in Le Mans, a seminar in Maastricht, and the ESPE conference in Glasgow for useful comments. This paper uses SHARE data and could not have been written without the financial support for SHARE by the European Commission and others (see www.share-project.org). The usual disclaimers apply. Below is the link to the electronic supplementary material. Supplementary material 1 (pdf 270 KB) Table 9 Countries included in the longitudinal dataset Table 10 Descriptive statistics for control variables Table 11 Regressions of employment status on both non-daily and daily caregiving Table 12 Regressions of work hours on both non-daily and daily caregiving Ciccarelli, N., Van Soest, A. Informal Caregiving, Employment Status and Work Hours of the 50+ Population in Europe. De Economist 166, 363–396 (2018). https://doi.org/10.1007/s10645-018-9323-1 Issue Date: September 2018 Informal care
Elliptic and Parabolic Boundary Value Problems in Weighted Function Spaces Felix Hummel ORCID: orcid.org/0000-0002-2374-70301 & Nick Lindemulder2,3 Potential Analysis volume 57, pages 601–669 (2022)Cite this article In this paper we study elliptic and parabolic boundary value problems with inhomogeneous boundary conditions in weighted function spaces of Sobolev, Bessel potential, Besov and Triebel-Lizorkin type. As one of the main results, we solve the problem of weighted Lq-maximal regularity in weighted Besov and Triebel-Lizorkin spaces for the parabolic case, where the spatial weight is a power weight in the Muckenhoupt \(A_{\infty }\)-class. In the Besov space case we have the restriction that the microscopic parameter equals to q. Going beyond the Ap-range, where p is the integrability parameter of the Besov or Triebel-Lizorkin space under consideration, yields extra flexibility in the sharp regularity of the boundary inhomogeneities. This extra flexibility allows us to treat rougher boundary data and provides a quantitative smoothing effect on the interior of the domain. The main ingredient is an analysis of anisotropic Poisson operators. Download to read the full article text Alòs, E., Bonaccorsi, S.: Stability for stochastic partial differential equations with D,irichlet white-noise boundary conditions. Infin. Dimens. Anal. Quantum Probab. Relat Top. 5(4), 465–481 (2002) Amann, H.: Linear and quasilinear parabolic problems. Vol. I, volume 89 of Monographs in Mathematics. Birkhäuser Boston, Inc., Boston. Abstract linear theory (1995) Amann, H. : Linear and quasilinear parabolic problemsVol. II, volume 106 of Monographs in Mathematics. Birkhäuser/Springer, Cham. Function spaces (2019) Amann, H.: Linear and quasilinear parabolic problems. Vol. II, volume 106 of Monographs in Mathematics. Birkhäuser/Springer, Cham. Function spaces (2019) Arendt, W., Duelli, M.: Maximal lp-regularity for parabolic and elliptic equations on the line. J. Evol. Equ. 6(4), 773–790 (2006) Boutet de Monvel, L.: Comportement d'un opérateur pseudo-différentiel sur une variété à bord. II. Pseudo-noyaux de Poisson. J. Analyse Math. 17, 255–304 (1966) Boutet de Monvel, L.: Boundary problems for pseudo-differential operators. Acta Math. 126(1-2), 11–51 (1971) Brewster, K., Mitrea, M.: Boundary value problems in weighted S,obolev spaces on Lipschitz manifolds. Mem. Differ. Equ. Math. Phys. 60, 15–55 (2013) Brzeźniak, Z., Goldys, B., Peszat, S., Russo, F. : Second order PDEs with Dirichlet white noise boundary conditions. J. Evol. Equ. 15(1), 1–26 (2015) Bui, H. -Q.: Weighted Besov and Triebel spaces: interpolation by the real method. Hiroshima. Math. J. 12(3), 581–605 (1982) Bui, H. -Q.: Remark on the characterization of weighted Besov spaces via temperatures. Hiroshima. Math. J. 24(3), 647–655 (1994) Bui, H. -Q., Paluszyński, M., Taibleson, M. H.: A maximal function characterization of weighted Besov-Lipschitz and Triebel-Lizorkin spaces. Studia Math. 119(3), 219–246 (1996) Chill, R., Fiorenza, A.: Singular integral operators with operator-valued kernels, and extrapolation of maximal regularity into rearrangement invariant Banach function spaces. J. Evol. Equ. 14(4-5), 795–828 (2014) Chill, R., Król, S.: Real interpolation with weighted rearrangement invariant Banach function spaces. J. Evol. Equ. 17(1), 173–195 (2017) Cioica-Licht, P. A., Kim, K. -H., Lee, K.: On the regularity of the stochastic heat equation on polygonal domains in \(\mathbb {R}^{2}\). J. Differ. Equ. 267(11), 6447–6479 (2019) Cioica-Licht, P. 
A., Kim, K. -H., Lee, K., Lindner, F.: An Lp-estimate for the stochastic heat equation on an angular domain in \(\mathbb {R}^{2}\). Stoch. Partial Differ. Equ. Anal. Comput. 6(1), 45–72 (2018) Clément, P., Prüss, J.: An operator-valued transference principle and maximal regularity on vector-valued Lp-spaces. In: Evolution equations and their applications in physical and life sciences (Bad Herrenalb, 1998), volume 215 of Lecture Notes in Pure and Appl. Math., pp. 67–87. Dekker, New York (2001) Clément, P., Simonett, G.: Maximal regularity in continuous interpolation spaces and quasilinear parabolic equations. J. Evol. Equ. 1(1), 39–67 (2001) John, B.: Conway. Functions of One Complex Variable, Volume 11 of Graduate Texts in Mathematics, 2nd edn. Springer, New York (1978) Denk, R., Hieber, M., Prüss, J.: \(\mathcal {R}\)-boundedness, Fourier multipliers and problems of elliptic and parabolic type. Mem. Amer. Math. Soc., 166(788), viii+ 114 (2003) Denk, R., Hieber, M., Prüss, J.: Optimal Lp-lq-estimates for parabolic boundary value problems with inhomogeneous data. Math. Z. 257(1), 193–224 (2007) Denk, R., Kaip, M.: General parabolic mixed order systems in lp and applications, volume 239 of Operator Theory Advances and Applications. Birkhäuser/springer, Cham (2013) Denk, R., Seger, T.: Inhomogeneous Boundary Value Problems in Spaces of Higher Regularity. In: Recent Developments of Mathematical Fluid Mechanics, Adv. Math. Fluid Mech., pp 157–173. Birkhäuser/Springer, Basel (2016) Dong, H., Gallarati, C.: Higher-order elliptic and parabolic equations with VMO assumptions and general boundary conditions. J. Funct. Anal. 274(7), 1993–2038 (2018) Dong, H., Gallarati, C.: Higher-order parabolic equations with vmo assumptions and general boundary conditions with variable leading coefficients. International Mathematics Research Notices, pp. rny084 (2018) Dong, H., Kim, D.: Elliptic and parabolic equations with measurable coefficients in weighted Sobolev spaces. Adv. Math. 274, 681–735 (2015) Dore, G.: \(H^{\infty }\) functional calculus in real interpolation spaces. Studia Math. 137(2), 161–167 (1999) Dore, G.: Maximal regularity in lp spaces for an abstract Cauchy problem. Adv. Differ. Equ. 5(1-3), 293–322 (2000) Dore, G., Venni, A.: On the closedness of the sum of two closed operators. Math. Z. 196(2), 189–201 (1987) equations, L.C. Evans.: Partial Differential Volume 19 of Graduate Studies in Mathematics, 2nd edn. American Mathematical Society, Providence (2010) Fabbri, G., Goldys, B.: An LQ problem for the heat equation on the halfline with Dirichlet boundary control and noise. SIAM J. Control Optim. 48 (3), 1473–1488 (2009) Fackler, S., Hytönen, T. P., Lindemulder, N.: Weighted estimates for operator-valued Fourier multipliers Collect. Math. 71(3), 511–548 (2020) Farwig, R., Sohr, H.: Weighted Lq-theory for the Stokes resolvent in exterior domains. J. Math. Soc. Japan 49(2), 251–288 (1997) Giga, Y: Analyticity of the semigroup generated by the Stokes operator in lr spaces. Math. Z. 178(3), 297–329 (1981) Grafakos, L.: Modern Fourier Analysis, Volume 250 of Graduate Texts in Mathematics, 2nd edn. Springer, New York (2009) Grubb, G.: Singular Green operators and their spectral asymptotics. Duke Math. J. 51(3), 477–528 (1984) Grubb, G.: Pseudo-differential boundary problems in lp spaces. Comm. Partial Differ. Equ. 15(3), 289–340 (1990) Grubb, G.: Functional calculus of pseudodifferential boundary problems, volume 65 of Progress in Mathematics, 2nd edn. 
Birkhäuser Boston, Inc., Boston (1996) Grubb, G.: Modern Fourier analysis, volume 250 of Graduate Texts in Mathematics. Springer, New York (2009) Grubb, G., Kokholm, N. J.: A global calculus of parameter-dependent pseudodifferential boundary problems in lp Sobolev spaces. Acta Math. 171(2), 165–229 (1993) Haroske, D. D., Piotrowska, I.: Atomic decompositions of function spaces with M,uckenhoupt weights, and some relation to fractal analysis. Math Nachr. 281(10), 1476–1494 (2008) Haroske, D. D., Skrzypczak, L.: Entropy and approximation numbers of embeddings of function spaces with Muckenhoupt weights. I. Rev. Mat. Complut. 21 (1), 135–177 (2008) Haroske, D. D., Skrzypczak, L.: Entropy and approximation numbers of embeddings of function spaces with Muckenhoupt weights, II. General weights. Ann. Acad. Sci. Fenn Math. 36(1), 111–138 (2011) Haroske, D. D., Skrzypczak, L.: Entropy numbers of embeddings of function spaces with Muckenhoupt weights, III. Some limiting cases. J. Funct Spaces Appl. 9(2), 129–178 (2011) Hummel, F.: Boundary value problems of elliptic and parabolic type with boundary data of negative regularity J. Evol. Equ. https://doi.org/10.1007/s00028-020-00664-0(2021) Hytönen, T. P., van Neerven, J. M. A. M., Veraar, M. C., Weis, L.: Analysis in Banach spaces. Vol. I. Martingales and Littlewood-Paley theory, volume 63 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3 Folge. Springer (2016) Hytönen, T. P., van Neerven, J. M. A. M., Veraar, M. C., Weis, L.: Analysis in Banach spaces. Vol. II. Probabilistic Methods and Operator Theory., volume 67 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3 Folge. Springer (2017) Johnsen, J.: Elliptic boundary problems and the Boutet de Monvel calculus in Besov and Triebel-Lizorkin spaces. Math. Scand. 79(1), 25–85 (1996) Johnsen, J., Sickel, W.: On the trace problem for Lizorkin-Triebel spaces with mixed norms. Math. Nachr. 281(5), 669–696 (2008) Kalton, N. J., Kunstmann, P. C., Weis, L.: Perturbation and interpolation theorems for the \(h^{\infty }\)-calculus with applications to differential operators. Math. Ann. 336(4), 747–801 (2006) Kalton, N. J., Weis, L: The \(h^{\infty }\)-calculus and sums of closed operators. Math. Ann. 321(2), 319–345 (2001) Kim, K. -H.: Lq(lp)-theory of parabolic PDEs with variable coefficients. Bull. Korean Math. Soc. 45 (1), 169–190 (2008) Köhne, M., Prüss, J., Wilke, M.: On quasilinear parabolic evolution equations in weighted lp-spaces. J. Evol. Equ. 10(2), 443–463 (2010) Krylov, N. V.: Weighted Sobolev spaces and Laplace's equation and the heat equations in a half space. Comm. Partial Differ. Equ. 24(9-10), 1611–1653 (1999) Krylov, N. V.: The heat equation in lq((0,t),lp)-spaces with weights. SIAM J. Math. Anal. 32(5), 1117–1141 (2001) Kunstmann, P. C., Weis, L.: Maximal Lp-Regularity for Parabolic Equations, Fourier Multiplier Theorems and \(H^{\infty }\)-Functional Calculus. In: Functional Analytic Methods for Evolution Equations, Volume 1855 of Lecture Notes in Math., pp 65–311. Springer, Berlin (2004) LeCrone, J., Pruess, J., Wilke, M.: On quasilinear parabolic evolution equations in weighted lp-spaces II. J. Evol. Equ. 14(3), 509–533 (2014) Lindemulder, N.: Parabolic Initial-Boundary Value Problems with 1Nhomoegeneous Data: A Weighted Maximal Regularity Approach. 
Master's thesis, Utrecht University (2014) Lindemulder, N.: Second Order Operators Subject to Dirichlet Boundary Conditions in Weighted Triebel-Lizorkin Spaces: Parabolic problems (2018) Lindemulder, N.: Maximal Regularity with Weights for Parabolic Problems with Inhomogeneous Boundary Conditions. Journal of Evolution Equations (2019) Lindemulder, N.: An intersection representation for a class of anisotropic vector-valued function spaces. J. Approx. Theory 264(61), 105519 (2021) Lindemulder, N.: Second Order Operators Subject to Dirichlet Boundary Conditions in Weighted Besov and Triebel-Lizorkin Spaces: Elliptic Problems in preparation (2021) Lindemulder, N., Meyries, M., Veraar, M.C.: Complex interpolation with Dirichlet boundary conditions on the half line. To appear in Mathematische Nachrichten, https://arxiv.org/abs/1705.11054 (2017) Lindemulder, N., Veraar, M. C.: The heat equation with rough boundary conditions and holomorphic functional calculus. J. Differ. Equ. 269(7), 5832–5899 (2020) Lindemulder, N., Veraar, M.C.: Parabolic Second Order Problems with Multiplicative Dirichlet Boundary Noise. In preparation (2021) Maz'ya, V., Shaposhnikova, T.: Higher regularity in the layer potential theory for L,ipschitz domains. Ind. Univ. Math J. 54(1), 99–142 (2005) Meyries, M.: Maximal regularity in weighted spaces, nonlinear boundary conditions, And Global Attractors. PhD thesis, Karlsruhe Institute of Technology (2010) Meyries, M., Schnaubelt, R.: Maximal regularity with temporal weights for parabolic problems with inhomogeneous boundary conditions. Math. Nachr. 285(8-9), 1032–1051 (2012) Meyries, M., Veraar, M. C.: Sharp embedding results for spaces of smooth functions with power weights. Studia Math. 208(3), 257–293 (2012) Meyries, M., Veraar, M.C.: Characterization of a class of embeddings for function spaces with M,uckenhoupt weights. Arch Math. (Basel) 103(5), 435–449 (2014) Meyries, M., Veraar, M. C.: Traces and embeddings of anisotropic function spaces. Math. Ann. 360(3-4), 571–606 (2014) Meyries, M., Veraar, M.C.: Pointwise multiplication on vector-valued function spaces with power weights. J. Fourier Anal. Appl. 21(1), 95–136 (2015) Mielke, A.: ÜBer maximale lp-Regularität für Differentialgleichungen in Banach- und Hilbert-Räumen. Math. Ann. 277(1), 121–133 (1987) Mitrea, M., Taylor, M.: The Poisson problem in weighted Sobolev spaces on Lipschitz domains. Ind. Univ. Math. J. 55(3), 1063–1089 (2006) Prüss, J., Simonett, G.: Maximal regularity for evolution equations in weighted Lp,-spaces. Arch Math. (Basel) 82(5), 415–431 (2004) Prüss, J., Simonett, G. : Moving interfaces and Quasilinear parabolic evolution equations, volume 105 of Monographs in Mathematics. Birkhäuser/springer, Cham (2016) Prüss, J., Simonett, G., Wilke, M.: Critical spaces for quasilinear parabolic evolution equations and applications. J. Differ. Equ. 264(3), 2028–2074 (2018) Prüss, J., Wilke, M.: Addendum to the paper "On quasilinear parabolic evolution equations in weighted lp-spaces II". J. Evol. Equ. 17(4), 1381–1388 (2017) Prüss, J., Wilke, M.: On critical spaces for the Navier-Stokes equations. J. Math. Fluid Mech. 20(2), 733–755 (2018) Rempel, S., Schulze, B.-W.: Index Theory of Elliptic Boundary Problems. Akademie, Berlin (1982) Roitberg, Y.: Elliptic boundary value problems in the spaces of distributions, volume 384 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht. 
Translated from the Russian by Peter Malyshev and Dmitry Malyshev (1996) Runst, T., Sickel, W.: Sobolev Spaces of Fractional Order, Nemytskij Operators, and Nonlinear Partial Differential Equations, Volume 3 of De Gruyter Series in Nonlinear Analysis and Applications. Walter de Gruyter & Co., Berlin (1996) Rychkov, V. S.: On restrictions and extensions of the Besov and Triebel-Lizorkin spaces with respect to Lipschitz domains. J. London Math. Soc. (2) 60(1), 237–257 (1999) Scharf, B.: Atomic representations in function spaces and applications to pointwise multipliers and diffeomorphisms, a new approach. Math. Nachr. 286(2-3), 283–305 (2013) Schrohe, E.: A Short Introduction to Boutet De Monvel's Calculus. In: Approaches to Singular Analysis (Berlin, 1999), Volume 125 of Oper. Theory Adv. Appl., pp 85–116. Basel, Birkhäuser (2001) Seeley, R. T.: Extension of \(c^{\infty }\) functions defined in a half space. Proc. Amer. Math. Soc. 15, 625–626 (1964) Sickel, W., Skrzypczak, L., Vybíral, J.: Complex interpolation of weighted Besov and Lizorkin-Triebel spaces. Acta Math. Sin. (Engl. Ser.) 30(8), 1297–1323 (2014) Sowers, R. B.: Multidimensional reaction-diffusion equations with white noise boundary perturbations. Ann. Probab. 22(4), 2071–2121 (1994) Višik, M. I., Èskin, G. I.: Elliptic convolution equations in a bounded region and their applications. Uspehi Mat. Nauk. 22(1 (133)), 15–76 (1967) MathSciNet Google Scholar Weis, L.W.: Operator-valued Fourier multiplier theorems and maximal Lp. Math. Ann. 319(4), 735–758 (2001) Wloka, J.: Partial Differential Equations. Cambridge University Press, Cambridge. Translated from the German by C. B. Thomas and M. J. Thomas (1987) The authors would like to thank Mark Veraar for pointing out the Phragmen-Lindelöf Theorem (see [19, Corollary 6.4.4]) for the proof of Lemma 2.1. They would also like to thank Robert Denk for useful discussions on the Boutet de Monvel calculus. Open Access funding enabled and organized by Projekt DEAL. The first author thanks the Studienstiftung des deutschen Volkes for the scholarship during his doctorate and the EU for the partial support within the TiPES project funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 820970. This is TiPES contribution #102. The second author was supported by the Vidi subsidy 639.032.427 of the Netherlands Organisation for Scientific Research (NWO) until January 2019. Department of Mathematics, Technical University of Munich, Boltzmannstraße 3, 85748, Garching bei München, Germany Felix Hummel Institute of Analysis, Karlsruhe Institute of Technology, Englerstraße 2, 76131, Karlsruhe, Germany Nick Lindemulder Delft Institute of Applied Mathematics, Delft University of Technology, P.O. Box 5031, 2600, GA, Delft, The Netherlands Correspondence to Felix Hummel. Appendix A: A Weighted Version of a Theorem due to Clément and Prüss The following theorem is a weighted version of a result from [17] (see [46, Theorem 5.3.15]). For its statement we need some notation that we first introduce. Let X be a Banach space. We write \(\widehat {C^{\infty }_{c}}(\mathbb {R}^{n};X) := {\mathscr{F}}^{-1}C^{\infty }_{c}(\mathbb {R}^{n};X)\) and \(\widehat {L^{1}}(\mathbb {R}^{n};X) := {\mathscr{F}}^{-1}L^{1}(\mathbb {R}^{n};X)\). Then $$ L_{1,\text{loc}}(\mathbb{R}^{n};\mathcal{B}(X)) \times \widehat{C^{\infty}_{c}}(\mathbb{R}^{n};X) \longrightarrow \widehat{L^{1}}(\mathbb{R}^{n};X), (m,f) \mapsto \mathcal{F}^{-1}[m\hat{f}] =: T_{m}f. 
$$ For \(p \in (1,\infty )\) and \(w \in A_{p}(\mathbb {R}^{n})\) we define \(\mathfrak {M}L_{p}(\mathbb {R}^{n},w;X)\) as the space of all \(m \in L_{1,\text {loc}}(\mathbb {R}^{n};{\mathscr{B}}(X))\) for which Tm extends to a bounded linear operator on \(L_{p}(\mathbb {R}^{n},w;X)\), equipped with the norm $$ \|{m}\|_{\mathfrak{M}L_{p}(\mathbb{R}^{n},w;X)} := \|{T_{m}}\|_{\mathcal{B}(L_{p}(\mathbb{R}^{n},w;X))}. $$ Theorem A.1 Let X be a Banach space, \(p \in (1,\infty )\) and \(w \in A_{p}(\mathbb {R}^{n})\). For all \(m \in \mathfrak {M}L_{p}(\mathbb {R}^{n},w;X)\) it holds that $$ \{ m(\xi) : \xi\ \text{is a Lebesgue point of}\ m\} $$ is R-bounded with $$ \begin{array}{@{}rcl@{}} {m}_{L_{\infty}(\mathbb{R}^{n};\mathcal{B}(X))} &\leq \mathcal{R}_{p}\left( \{ m(\xi) : \xi \ \text{is a Lebesgue point of}\ m \}\right) \lesssim_{p,w} \|{m}\|_{\mathfrak{M}L_{p}(\mathbb{R}^{n},w;X)}. \end{array} $$ This can be shown as in [46, Theorem 5.3.15]. Let us comment on some modifications that have to be made for the second estimate. Modifying the Hölder argument given there according to Eq. 2-2, the implicit constant Cp,w of interest can be estimated by $$ C_{p,w} \leq \liminf_{\epsilon \to 0}\epsilon^{d}\|{\phi(\epsilon \cdot )}\|_{L_{p}(\mathbb{R}^{n},w)} \|{\psi(\epsilon \cdot )}\|_{L_{p^{\prime}}(\mathbb{R}^{n},w^{\prime}_{p})}, $$ where \(\phi ,\psi \in \mathcal {S}(\mathbb {R}^{n})\) are such that \(\hat {\phi }\), \(\check {\psi }\) are compactly supported with the property that \({\int \limits } \hat {\phi } \check {\psi } d\xi = 1\). By a change of variable, $$ \epsilon^{d}\|{\phi(\epsilon \cdot )}\|_{L_{p}(\mathbb{R}^{n},w)} \|{\psi(\epsilon \cdot )}\|_{L_{p^{\prime}}(\mathbb{R}^{n},w^{\prime}_{p})} = \|{\phi}\|_{L_{p}(\mathbb{R}^{n},w(\epsilon \cdot ))}\|{\psi}\|_{L_{p^{\prime}}(\mathbb{R}^{n},w^{\prime}_{p}(\epsilon \cdot ))}. $$ Since \(\mathcal {S}(\mathbb {R}^{n}) \hookrightarrow L_{p}(\mathbb {R}^{n},w)\) with norm estimate only depending on n, p and \([w]_{A_{p}}\) (as a consequence of [69, Lemma 4.5]) and since the Ap-characteristic is invariant under scaling, the desired result follows. □ Appendix B: Pointwise Multiplication Lemma B.1 Let \({\mathscr{O}}\) be either \(\mathbb {R}^{d}_{+}\) or a \(C^{\infty }\)-domain in \(\mathbb {R}^{d}\) with a compact boundary \(\partial {\mathscr{O}}\), let X be a Banach space, \(U \in \{\mathbb {R}^{d},{\mathscr{O}}\}\) and let either \(p \in [1,\infty )\), \(q \in [1,\infty ]\), \(\gamma \in (-1,\infty )\) and \({\mathscr{A}} \in \{B,F\}\); or X be reflexive, \(p,q \in (1,\infty )\), \(\gamma \in (-\infty ,p-1)\) and \({\mathscr{A}} \in \{{\mathscr{B}},\mathcal {F}\}\). Let \(s_{0},s_{1} \in \mathbb {R}\) and \(\sigma \in \mathbb {R}\) satisfy \(\sigma > \sigma _{s_{0},s_{1},p,\gamma }\). Then for all \(m \in B^{\sigma }_{\infty ,1}(U;{\mathscr{B}}(X))\) and \(f \in {\mathscr{A}}^{s_{0} \vee s_{1}}_{p,q}(U,w^{\partial {\mathscr{O}}}_{\gamma };X)\) there is the estimate $$ \begin{array}{@{}rcl@{}} \|{mf}\|_{\mathscr{A}^{s_{1}}_{p,q}(U,w^{\partial\mathscr{O}}_{\gamma};X)} &\lesssim& \|{m}\|_{L_{\infty}(U;\mathcal{B}(X))+B^{-(s_{0}-s_{1})_{+}}_{\infty,1}(U;\mathcal{B}(X))} \|{f}\|_{\mathscr{A}^{s_{0} \vee s_{1}}_{p,q}(U,w^{\partial\mathscr{O}}_{\gamma};X)} \\&&+ \|{m}\|_{B^{\sigma}_{\infty,1}(U;\mathcal{B}(X))} \|{f}\|_{\mathscr{A}^{s_{0}}_{p,q}(U,w^{\partial\mathscr{O}}_{\gamma};X)}. \end{array} $$ (B-1) The proof of [62, Lemma 3.1] carries over verbatim to the X-valued setting. 
□ Remark B.2 In connection to the above lemma, note that $$ L_{\infty}(U;\mathcal{B}(X)) + B^{-(s_{0}-s_{1})_{+}}_{\infty,1}(U;\mathcal{B}(X)) = \left\{\begin{array}{ll} L_{\infty}(U;\mathcal{B}(X)),& s_{1} \geq s_{0},\\ B^{s_{1}-s_{0}}_{\infty,1}(U;\mathcal{B}(X)), & s_{1} < s_{0}, \end{array}\right. $$ as a consequence of \(B^{0}_{\infty ,1} \hookrightarrow L_{\infty } \hookrightarrow B^{0}_{\infty ,\infty }\) and \(B^{s}_{\infty ,\infty } \hookrightarrow B^{s-\epsilon }_{\infty ,1}\), \(s \in \mathbb {R}\), 𝜖 > 0. Furthermore, $$ B^{\sigma}_{\infty,1}(U;\mathcal{B}(X)) \hookrightarrow L_{\infty}(U;\mathcal{B}(X)) + B^{-(s_{0}-s_{1})_{+}}_{\infty,1}(U;\mathcal{B}(X)) $$ as σ > s1 − s0 ≥−(s0 − s1)+. Lemma B.1 has a version for more general weights: \(A_{\infty }\)-weights in case (i) and \([A_{\infty }]'_{p}\)-weights in case (ii). The condition \(\sigma >\sigma _{s_{0},s_{1},p,\gamma }\) then has to be replaced by $$ \sigma > \max\left\{ \left( \frac{1}{\rho_{w,p}}-1\right)_{+}-s_{0},-\left( \frac{1}{\rho_{w^{\prime}_{p},p^{\prime}}}-1\right)_{+}+s_{1} ,s_{1}-s_{0}\right\}, $$ where \(\rho _{w,p} := \sup \{ r \in (0,1) : w \in A_{p/r}\}\) with the convention that \(\sup \emptyset = \infty \) and \(\frac {1}{\infty }=0\). Definition B.4 Let \((S,{\mathscr{A}},\mu )\) be a measure space and X a Banach space. Then we define the space \(\mathcal {R}L_{\infty }(S;{\mathscr{B}}(X))\) as the space of all strongly measurable functions \(f\colon S\to {\mathscr{B}}(X)\) such that $$ \|f\|_{\mathcal{R}L_{\infty}(S;\mathcal{B}(X))}:=\inf_{g}\mathcal{R}\{g(\omega):\omega\in S\}<\infty $$ where the infimum is taken over all strongly measurable \(g\colon S\to {\mathscr{B}}(X)\) such that f = g almost everywhere. Let \({\mathscr{O}}\) be either \(\mathbb {R}^{d}_{+}\) or a \(C^{\infty }\)-domain in \(\mathbb {R}^{d}\) with a compact boundary \(\partial {\mathscr{O}}\), let X be a UMD Banach space, \(U \in \{\mathbb {R}^{d},{\mathscr{O}}\}\), \(p\in (1,\infty )\) and \(w\in A_{p}(\mathbb {R}^{n})\). Let further \(s_{0},s_{1} \in \mathbb {R}\) and \(\sigma \in \mathbb {R}\) satisfy \(\sigma > \max \limits \{-s_{0},s_{1},s_{1}-s_{0}\}\). Then for all \(m \in B^{\sigma }_{\infty ,1}(U;{\mathscr{B}}(X))\) and \(f \in H^{s_{1}\vee s_{0}}_{p}(U,w;X)\) there is the estimate $$ \|{mf}\|_{H^{s_{1}}_{p}(U,w;X)} \lesssim \|{m}\|_{\mathcal{R}L_{\infty}(U;\mathcal{B}(X))} \|{f}\|_{H^{s_{1}}_{p}(U,w;X)} + \|{m}\|_{B^{\sigma}_{\infty,1}(U;\mathcal{B}(X))} \|{f}\|_{H^{s_{0}}_{p}(U,w;X)}. $$ It suffices to consider the case \(U=\mathbb {R}^{d}\). We use paraproducts as in [82, Section 4.4] and [72, Section 4.2]. By [72, Lemma 4.4], the paraproduct π1 : (m,f)↦π1(m,f) gives rise to bounded bilinear mapping $$ {\Pi}_{1}: \mathcal{R}L_{\infty}(\mathbb{R}^{d};\mathcal{B}(X)) \times {H^{s}_{p}}(\mathbb{R}^{d},w;X) \longrightarrow {H^{s}_{p}}(\mathbb{R}^{d},w;X). $$ By a slight modification of [72, Lemma 4.6] (see [62, Lemma 3.1]), for i ∈{2, 3}, the paraproduct πi : (m,f)↦πi(m,f) gives rise to bounded bilinear mapping $$ {\Pi}_{i}: B^{\sigma}_{\infty,1}(\mathbb{R}^{d};\mathcal{B}(X)) \times F^{s_{0}}_{p,\infty}(\mathbb{R}^{d},w;X) \longrightarrow F^{s_{1}}_{p,1}(\mathbb{R}^{d},w;X) $$ and thus a bounded bilinear mapping $$ {\Pi}_{i}: B^{\sigma}_{\infty,1}(\mathbb{R}^{d};\mathcal{B}(X)) \times H^{s_{0}}_{p}(\mathbb{R}^{d},w;X) \longrightarrow H^{s_{1}}_{p}(\mathbb{R}^{d},w;X). 
$$ Proposition B.6 Under the conditions of Lemma B.1 with s0 ≥ s1, we have the continuous bilinear mapping $$ B^{\sigma}_{\infty,1}(U;\mathcal{B}(X)) \times \mathscr{A}^{s_{0}}_{p,q}(U,w^{\partial\mathscr{O}}_{\gamma};X) \longrightarrow \mathscr{A}^{s_{1}}_{p,q}(U,w^{\partial\mathscr{O}}_{\gamma},X), (m,f) \mapsto mf. $$ $$ B^{\sigma}_{\infty,1}(U;\mathcal{B}(X)) \times H^{s_{0}}_{p}(U,w;X) \longrightarrow H^{s_{1}}_{p}(U,w,X), (m,f) \mapsto mf. $$ Equation B-5 is a direct consequence of Lemma B.1. The case s = 0 in Eq. B-6 follows from [72, Proposition 3.8]. The case s0 > s1 in Eq. B-6 follows from the Ap-version of Eq. B-5 (see Remark B.3) as \(\sigma > \sigma _{s_{0}-\epsilon ,s_{1}+\epsilon ,p,w}\) for sufficiently small 𝜖 > 0. □ Appendix C: Comments on the Localization and Perturbation Procedure The localization and perturbation arguments are quite technical but standard, let us just say the following. The localization in Theorem 5.3 can be carried out as in [67, Sections 2.3 & 2.4] and [60, Appendix B], where we need to use some of the pointwise estimates from Appendix B as well some of the localization and rectification results for weighted Besov and Triebel-Lizorkin spaces from [62, Section 4] (which extend to the vector-valued situation) in order to perform all the arguments. Furthermore, the localization in Theorem 6.2 can be carried out as in [62, Theorem 9.2]. The results in [62, Section 4] are a generalization of results on the invariance of Besov- and Triebel-Lizorkin spaces under diffeomorphic transformations such as [84, Theorem 4.16] to the weighted anisotropic mixed-norm setting. They lead to the conditions (SO) in Section 5 and (SO)s in Section 6. We would also like to mention [24] and [25], where the authors treat maximal Lq-Lp-regularity for parabolic boundary value problems on the half-space in which the elliptic operators have top order coefficients in the VMO class in both time and space variables. In their proofs, they do not use localization for the results on VMO coefficients, but they extend some techniques by Krylov as well as Dong and Kim. While the geometric steps of the localization procedure in our setting are the same as in the standard Lp-setting, there are some differences in what kind of perturbation results we need. The main difference lies in the treatment of the top order perturbation of the differential operator on the domain. More precisely, the following lemma is a useful tool in the localization procedure for our setting. Lemma C.1 Let E be a Banach space and \(A\colon E\supset D(A)\to E\) a closed linear operator. Suppose that there is a constant C > 0 such that for all λ > 0 and all u ∈ D(A) it holds that $$ \begin{array}{@{}rcl@{}} \| u\|_{D(A)} + \lambda \|u\|_{E}\leq C \|(\lambda+A) u\|_{E}. \end{array} $$ (C-1) Let \(\||{ \cdot }\||\colon E\to [0,\infty )\) a mapping and 𝜃 ∈ (0, 1) such that $$ \begin{array}{@{}rcl@{}} \||{u}\|| \leq \|u\|_{E}^{1-\theta}\|u\|^{\theta}_{D(A)} \end{array} $$ holds for all u ∈ D(A). Let further P : D(A) → E and suppose that there are constants \(\delta , C^{\prime }\in (0,\infty )\) such that $$ \begin{array}{@{}rcl@{}} \|P(u)\|_{E}\leq \delta\|u\|_{D(A)}+C^{\prime}\||{u}\|| \end{array} $$ for all u ∈ D(A). Then there is \(\lambda _{0}\in (0,\infty )\) only depending on \(\delta ,C^{\prime }\) and 𝜃 such that for all λ ≥ λ0 and all u ∈ D(A) we have the estimate $$ \begin{array}{@{}rcl@{}} \|P(u)\|_{E}\leq 2\delta C\|(\lambda+A)u\|_{E}. 
\end{array} $$ For u ∈ D(A) we have that $$ \begin{array}{@{}rcl@{}} \|P(u)\|_{E}&\leq& \delta\|u\|_{D(A)}+C^{\prime}\||{u}\|| \leq \delta\|u\|_{D(A)}+C^{\prime} \|u\|_{E}^{1-\theta}\|u\|^{\theta}_{D(A)} \\ &\leq& 2\delta\|u\|_{D(A)} +\delta C_{\delta} \|u\|_{E} \end{array} $$ with \(C_{\delta }:=(\frac {\delta }{C^{\prime }\theta })^{\theta /(1-\theta )}(1-\theta )\). Here, we used Young's inequality with the Peter-Paul trick. Using Eq. C-1 with λ ≥ Cδ/2, we can further estimate $$ \begin{array}{@{}rcl@{}} \|P(u)\|_{E}&\leq 2\delta C \| (\lambda+A)u\|_{E} \end{array} $$ so that λ0 = Cδ/2 is the asserted parameter. □ If one wants to apply Lemma C.1 for a localization procedure, then one can treat the top order perturbation as follows: Suppose that the differential operator has the form \(1+{\sum }_{|\alpha |=2m} (a_{\alpha }+p_{\alpha }(x))D^{\alpha }\) with pα being small in a certain norm. The mapping P in Lemma C.1 can be chosen to be \(P(u)(x)={\sum }_{|\alpha |=2m}p_{\alpha }(x)D^{\alpha } u(x)\) and A can be chosen to be the realization of \(1+{\sum }_{|\alpha |=2m}a_{\alpha }D^{\alpha }\) in \(\mathbb {E}\) with vanishing boundary conditions. Now one can use Lemma B.1 in combination with Remark B.2 in the Besov-Triebel-Lizorkin case and Lemma B.5 in the Bessel potential case to obtain an estimate of the form Eq. C-3. In order to do this for example in the parabolic case, one chooses 𝜖 ∈ (0, 2m) such that σ > σs,p,γ + 𝜖 ≥ σs−𝜖,𝜖,p,γ. Then one chooses s0 = s − 𝜖 and s1 = s. These choices lead to the estimate $$ \|p_{\alpha}D^{\alpha}u\|_{\mathbb{E}}\lesssim \|p_{\alpha}\|_{L_{\infty}} \|u\|_{\mathbb{E}^{2m}}+\|p_{\alpha}\|_{BUC^{\sigma}}\|u\|_{\mathbb{E}^{2m-\epsilon}}\quad(|\alpha|=2m) $$ in the Besov and Triebel-Lizorkin cases and $$ \|p_{\alpha}D^{\alpha}u\|_{\mathbb{E}}\lesssim \|p_{\alpha}\|_{\mathcal{R}L_{\infty}} \|u\|_{\mathbb{E}^{2m}}+\|p_{\alpha}\|_{BUC^{\sigma}}\|u\|_{\mathbb{E}^{2m-\epsilon}}\quad(|\alpha|=2m) $$ in the Bessel potential case. It holds that \(\|\cdot \|_{D(A)}\eqsim \|\cdot \|_{\mathbb {E}^{2m}}\) on \(D(A)\subset \mathbb {E}^{2m}\). Hence, if \(\theta =1-\frac {\epsilon }{2m}\) and \(E=\mathbb {E}\), then these estimates would correspond to Eq. C-3 in Lemma C.1 where \(\cdot =M\|\cdot \|_{\mathbb {E}^{2m-\epsilon }}\) for a suitable constant M > 0 such that Eq. C-2 holds. Note that Eq. C-1 follows from the sectoriality of A. Therefore, if pα is small in \(L_{\infty }\) or \(\mathcal {R}L_{\infty }\)-norm, respectively, then Lemma C.1 shows that P is just a small perturbation of a suitable shift of the operator A. Hummel, F., Lindemulder, N. Elliptic and Parabolic Boundary Value Problems in Weighted Function Spaces. Potential Anal 57, 601–669 (2022). https://doi.org/10.1007/s11118-021-09929-w Issue Date: December 2022 Anistropic Bessel potential Boundary value problem Lopatinskii-Shapiro Maximal regularity Mixed-norm Poisson operator Sobolev Triebel-Lizorkin Vector-valued Mathematics Subject Classification (2010) Primary: 35K52 46E35; Secondary: 46E40 47G30
Is the pattern you found persuasive?
Gee Law
Hero image: Kolmogorov complexity (Wikipedia)

This entry is a summary of a previous Zhihu article by me (《一种讨论"逻辑简单"的框架》, roughly "A framework for discussing 'logical simplicity'"; sorry again if you don't read Chinese). This entry starts from a frequent type of question posed in elementary maths exams, 'complete a sequence by finding a pattern'. The questions are mostly okay, and such quizzes are generally good. However, from time to time, one might find puzzles whose patterns are hard to come up with, and when we are told the answer, the patterns fail to be persuasive, making people think the designated pattern is just an artefact. Some ridicule these problems by solving all of them with polynomial interpolation. An (at least somewhat) unbiased framework for determining the persuasiveness of a pattern is sorely needed, and that is the central topic of this entry. For Chinese speakers, you might also want to check this V2EX post.

Contents:
Background: finding a pattern
Framework: Ockham's razor and Kolmogorov complexity
Examples and discussion
Real-world considerations
Philosophical implications

Most of us have the experience of being given a (finite) sequence of numbers with a blank for a missing term, and being asked to find a pattern from the existing terms and fill the blank with the value that would have been there according to the rule. Examples are:

Sequence                 | One possible pattern                               | Possible missing term
1, 4, 7, _, 13, 16       | Arithmetic progression with common difference 3    | 10
1, 2, _, 8, 16           | Geometric progression with common ratio 2          | 4
1, 1, 2, 3, 5, _, 13, 21 | The next term is the sum of the two previous terms | 8

But what if you are given this sequence: 1, 1, 2, 5, 14, _, 132, 429? The answer is 42 and the sequence is supposed to be the Catalan numbers. The pattern is obscure for those who never knew the sequence. What about 38, 28, 30, _, 142, 288, 518, then? The answer is 62, and the general term is \(a_n = 3n^3 - 12n^2 + 5n + 42\), which was randomly made up by me.

The problems of finding a pattern and completing the missing term are mostly encountered in the lower grades of primary school. These problems, if well designed, are good for developing the ability to discover patterns, or 'non-exhaustive inductive reasoning' (不完全归纳). However, sometimes the pattern is hard to discover, not persuasive, or not suitable for someone without certain knowledge. To make things worse, sometimes there are multiple 'possible' rules that produce the given terms while giving different missing terms. By putting 'possible' in quotation marks, I am using it in the day-to-day sense, meaning that there are multiple similarly complicated (or similarly simple) rules.

Some people mock this kind of problem by using polynomial interpolation for all such problems, often intentionally producing 'ridiculous' results to exaggerate the point. For example, one may fill the blank in 1, 2, 3, 4, _, 6, 7 with 19260817, because the pattern is

\[a_n = n + \frac{19260817 - 5}{4!\,2!}(n-1)(n-2)(n-3)(n-4)(n-6)(n-7).\]

Oh, did I mention it was not necessary that we use polynomials? Even non-elementary functions are okay! Mathematically speaking, there is no 'one single pattern' that governs the finite observation.
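As a quick check of the interpolation trick, here is a throwaway snippet of mine (not part of the original argument), written in the JavaScript encoding that this entry uses later:

// The interpolating 'pattern' for 1, 2, 3, 4, _, 6, 7 with a(5) = 19260817.
// The product vanishes at n = 1, 2, 3, 4, 6, 7, so a(n) = n there; 4! * 2! = 48.
const a = n =>
    n + (19260817 - 5) * (n - 1) * (n - 2) * (n - 3) * (n - 4) * (n - 6) * (n - 7) / (24 * 2);
for (let n = 1; n <= 7; n++) console.log(n, a(n));
// 1 1, 2 2, 3 3, 4 4, 5 19260817, 6 6, 7 7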
In the real world, we would want a persuasive, presumably simple, pattern. Plus, according to the previous entry, any sequence of integers has an elementary general term, which allows us to 'fill infinitely many missing terms with "simple" rules' if we equate elementariness with simplicity. After all, which pattern is the most persuasive (in the day-to-day sense) varies from one person to another. Perceptual thinking fails to make people agree on the matter. It is thus desirable to define a baseline with which to formally compare the persuasiveness of rules. We should then be able to define the standard answer to a find-the-pattern problem to be the most persuasive one.

Intuition favours simpler patterns over more complicated ones and takes the simplest rule to be the most persuasive. We also have the well-known Ockham's razor:

Entia non sunt multiplicanda praeter necessitatem.
Entities must not be multiplied beyond necessity.
(John Punch)

If we agree that simpler means more persuasive, the remaining task is to compare simplicity, which we owe to Kolmogorov complexity, a concept that naturally extends to model the complexity of a pattern.

Prerequisite. Let \(\mathcal{T}\) be the set of Turing machines. For \(T\in\mathcal{T}\), we denote the output of \(T\) given input \(n\) (if it halts) by \(T(n)\) (abusing the notation for the sequence computed by it), and define \(T(n)=\perp\) for those \(n\) on which \(T\) does not halt. We assume without proof that there exists an injective function \(r:\mathcal{T}\to\mathbb{N}\) together with a UTM \(U\in\mathcal{T}\) and a computable bijection \(u:\mathbb{N}^2\to\mathbb{N}\) such that \(U(u(r(T),x))=T(x)\) for all \(T\in\mathcal{T}\) and \(x\in\mathbb{N}\). We also assume that \(r(\mathcal{T})\) is a decidable language.

Remark. Traditionally the input, the output and the encoding are (binary) strings. However, strings can be mapped to natural numbers with 'order' preserved (shorter strings are smaller, and ties are broken by lexicographical order). For ease of language, we'll use numbers directly.

Remark. Think of \(r\) as a programming language, and \(U\) as an interpreter of this language. For example, you may think of \(r\) as JavaScript, and \(U\) as ChakraCore, Node.js or V8. Or you may think of \(r\) as the binary executable format, and \(U\) as the CPU. When given the (source or machine) code \(r(T)\) and the input \(x\), the interpreter simulates (executes, or simply 'runs') \(T\) with input \(x\).

Definition. A partial sequence is a function \(p:\mathbb{N}\to\mathbb{N}\cup\{\perp\}\). We write \(p_n\) for \(p(n)\). The bottom \(\perp\) is used to mark missing terms. A sequence is a function \(s:\mathbb{N}\to\mathbb{N}\).

Definition. A Turing machine \(T\in\mathcal{T}\) is said to be a solution to a partial sequence \(p\), if \(T\) halts on all inputs, and \(T(n)=p_n\) for all \(n\in\mathbb{N}\) with \(p_n\neq\perp\).

Definition. For a Turing machine \(T\in\mathcal{T}\), the natural number \(r(T)\) is said to be the complexity of \(T\).

Remark. For source code, a shorter piece of code is simpler. Ties are broken by lexicographical order.

Definition. For a partial sequence, if it has a solution, the solution with the lowest complexity is said to be its pattern. The computable sequence produced by its pattern is said to be its completion.
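To see these definitions in action with a concrete encoding (JavaScript, as in the examples further below; the two functions here are my own illustration, not the author's), take the partial sequence 1, 4, 7, _, 13, 16 with indices starting at 0:

// Two solutions: both halt everywhere and reproduce every given term.
const short = n => 3 * n + 1;
const long  = n => [1, 4, 7, 42, 13, 16][n] ?? 3 * n + 1;
// They disagree on the missing index 3 (10 versus 42), so different solutions
// can yield different completions; the shorter source has the lower complexity,
// so `short` is the pattern and its completion fills the blank with 10.

This is exactly why the pattern, rather than an arbitrary solution, is singled out.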
Definition. A partial sequence \(p\) is said to be finite if there exists \(N\) such that for all \(n>N\), \(p_n=\perp\). In other words, it is finite if and only if only finitely many terms are given. For a sequence \(s\), its leading terms up to index \(k\) form a finite partial sequence (called its \(k\)-th leading part)

\[p^{(k)}_n=\begin{cases}s_n,& n\leq k;\\ \perp,& n>k.\end{cases}\]

From these definitions we reach the following propositions:

Computability. A partial sequence \(p\) has a solution if and only if there exists a computable function \(f\) such that \(f(n)=p_n\) for all \(n\in\mathbb{N}\) with \(p_n\neq\perp\). In other words, a partial sequence has a solution if and only if it 'comes from' a computable sequence by removing some terms.

Proof. If \(p\) has a solution \(T\in\mathcal{T}\), then by definition \(T(n)=p_n\) for all \(n\in\mathbb{N}\) with \(p_n\neq\perp\). Conversely, if \(p\) coincides with the function computed by some \(T\in\mathcal{T}\) (halting on all inputs) everywhere \(p_n\neq\perp\), then by definition \(T\) is a solution to \(p\).

Corollary. A finite partial sequence always has a solution.

Lock-down of pattern/completion. For a sequence \(s\), the following are equivalent:
1. \(s\) is computable.
2. (Lock-down of pattern) There exists \(K\) such that for all \(k>K\), the pattern of its \(k\)-th leading part is the same as that of its \(K\)-th one.
3. (Lock-down of completion) There exists \(k\) such that the completion of its \(k\)-th leading part is \(s\) itself.

Proof. We prove by chasing a cyclic chain of implications.

(1 ⟹ 2) If \(s\) is computable, let \(T\in\mathcal{T}\) be the machine with the lowest complexity among those that compute \(s\), and consider machines whose complexity is less than \(r(T)\). There are only finitely many such machines, specifically at most \(r(T)\) of them. For each such machine \(T'\), by the choice of \(T\), \(T'\) does not compute \(s\); therefore either it doesn't halt for some natural number, or there exists \(K_{T'}\) such that \(s(K_{T'})\neq T'(K_{T'})\). Let \(K\) be a natural number greater than all such \(K_{T'}\); then for all \(k\geq K\), \(T\) is a solution to the \(k\)-th leading part of \(s\), and by the choice of \(T\) and \(K\), it is the one with the lowest complexity, hence the pattern.

(2 ⟹ 3) Let \(T\) be the common pattern of the \(k\)-th leading parts of \(s\) for all \(k\geq K\); then it is also a common solution. For \(n\in\mathbb{N}\), since \(T\) is a solution to the \((n+K)\)-th leading part, we have \(s_n=T(n)\). Therefore the completion of the \(K\)-th leading part of \(s\) is the sequence itself.

(3 ⟹ 1) As \(s\) is the completion of some partial sequence, it obviously is computable.

Uncomputability. There does not exist an algorithm that computes (the description of) the pattern of all finite partial sequences.

Proof (sketch). If the problem were computable, we would be able to decide the language of compressible strings (see Kolmogorov complexity). Consider the following program:

Input: \(s\).
For all \(s'\) shorter than \(s\):
  Find the description \(r(T)\) of the pattern \(T\) of the partial sequence given by \(p_{s'}=s\) (all other terms are \(\perp\)).
  If \((r(T),s')\) is shorter than \(s\), output \(\mathrm{COMPRESSIBLE}\) and terminate.
If the loop finishes without terminating, output \(\mathrm{INCOMPRESSIBLE}\).

The pattern \(T\) is the 'shortest' machine such that \(T(s')=s\), from which the correctness of the program follows. Note that traditionally, the definition of Kolmogorov complexity does not require that \(T\) halt on all inputs. Nevertheless, tweaking the definition keeps compressibility undecidable.

In simple words, for a partial sequence, a solution is a program that produces the correct given terms. The pattern is the shortest program among these. If there is a tie, break it by lexicographical order.

Let's make this less abstract. Suppose we use JavaScript (ES2015) as our programming language. We define a 'source code' as a JavaScript expression that evaluates to a function f and define f(n) as the output on input n. Consider the sequence \(s_n=n\). If we are given only the initial term \(p^{(0)}_0=0\), what is the completion of \(p^{(0)}\)? The pattern is

function(){return 0}

so the completion of \(p^{(0)}\) is \(s^{(0)}_n=0\). What if we are given the first two terms, i.e., \(p^{(1)}\)? What's its completion? The pattern is

function($){return $}

and its completion is \(s\) itself.

Let's consider another example: 1, 2, 3, 4, _, 6, 7. Clearly the completion is \(s\). Let's look at one of its solutions:

function (n)
{
    if (n == 5)
        return 19260817;
    return n;
}

That solution is obviously more complicated (longer) than the pattern, hence not considered the pattern (in the day-to-day sense).

For any given partial sequence, its completion depends on the encoding \(r\), or the programming language selected. For example, given 2, 3, 5, 7, 11, the completion might not be the sequence of primes if the programming language is not efficient in expressing such concepts. (Aside: JavaScript can express the concept with short code thanks to its regular expression engine.) For real-world cases, the 'programming language' also varies from person to person. Perhaps Mathematica can be used to model mathematicians? But clearly I shouldn't be modelled so.
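To make the dependence on the encoding concrete, here is a small brute-force sketch of mine (not from the original entry): restrict the 'programming language' to arithmetic expressions in n over a fixed alphabet, enumerate candidates from shortest to longest, and return the first one that reproduces all given terms. For such a restricted, always-terminating language the search is computable, unlike the general pattern discussed above.

// Toy 'pattern' search: shortest arithmetic expression in n that fits the given terms.
const ALPHABET = ['n', '+', '-', '*', '(', ')', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9'];

function* candidates(maxLen) {
    let level = [''];
    for (let len = 1; len <= maxLen; len++) {
        const next = [];
        for (const s of level) for (const c of ALPHABET) next.push(s + c);
        level = next;
        yield* level;                       // length first, then alphabet order
    }
}

function toyPattern(terms, maxLen = 5) {    // terms: { index: givenValue }
    for (const src of candidates(maxLen)) {
        let f;
        try { f = new Function('n', 'return (' + src + ');'); } catch (e) { continue; }
        let ok = true;
        for (const [i, v] of Object.entries(terms)) {
            let out;
            try { out = f(Number(i)); } catch (e) { ok = false; break; }
            if (out !== v) { ok = false; break; }
        }
        if (ok) return src;                 // first hit = lowest 'complexity' in this toy encoding
    }
    return null;                            // nothing of length <= maxLen fits
}

// 1, 4, 7, _, 13, 16 again (the search is exponential in maxLen, so keep examples tiny):
console.log(toyPattern({ 0: 1, 1: 4, 2: 7, 4: 13, 5: 16 }));   // a shortest fit, e.g. "n*3+1"

Swap the alphabet (say, add a primality test as a single built-in symbol) and the reported 'pattern' for 2, 3, 5, 7, 11 changes accordingly, which is the point about encodings made above.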
For any programming language, as long as it is Turing complete, the most succinct rule that explains the sequence will eventually be the same, given enough terms. Applying the analogy to human, this implies that: Having observed abundant phenomena, people vastly different yet all knowledgeable eventually agree on the law that governs the nature. Computer ScienceMaths Copyright © 1995-2020 by Gee Law, all rights reserved.
CommonCrawl
Plane Filling Curves: the Lebesgue Curve

It is not very difficult to show that Hilbert's and Peano's plane filling curves are nowhere differentiable. Judging by their polygonal approximations that turn incessantly, one may well expect the non-differentiability of the limit. However, the visual clues may be deceptive. The applet below illustrates the generation of the Lebesgue plane filling curve, which is differentiable almost everywhere. Who could perceive this by watching? (To see the progress of polygonal approximations keep clicking in the applet area.)

Geometrically, the approximations join the centers of the subsquares in the order "top-down, left-right": as is common, a curve obtained at one step is squeezed into a smaller square on the next. The four diminished copies of the curve are joined in the same order of construction.

Analytically, the curve is related to the Cantor set \(C\) and the "stair-case" function. Recollect that the numbers in the Cantor set admit a ternary expansion without the digit \(1\). Every number \(c\in C\) can be expressed as $c = 0.(2t_{1})(2t_{2})(2t_{3})\ldots,$ where \(t_{j}, j = 1, 2, \ldots\) are binary digits, either \(0\) or \(1\). The Cantor set can be mapped on the unit square \([0,1]\times [0,1]\) surjectively (i.e. "onto") by means of $0.(2t_{1})(2t_{2})(2t_{3})\ldots \mapsto (0.t_{1}t_{3}\ldots, 0.t_{2}t_{4}\ldots).$ This mapping \(f: C \rightarrow [0,1]\times [0,1]\) is obviously, although perhaps surprisingly, surjective because both coordinates of any point \((x,y)\) in the unit square admit a binary expansion; and the two may be interlaced into a single member of \(C\) in a reverse of the definition of \(f\).

In 1904, H. Lebesgue extended the mapping \(f\) by linear interpolation from \(C\) to \([0,1]\) in the manner of construction of the Cantor stair-case function. Let \((a,b)\) be one of the intervals removed in the construction of the Cantor set. Then for \(t\in (a,b)\) define $f(t) = \frac{1}{b-a}(f(b)(t-a) + f(a)(b-t)).$ By the construction, \(f\), being linear, is differentiable on every such interval \((a,b)\) and, since the measure ("length") of the Cantor set is \(0\), it is differentiable "almost everywhere" on the unit interval \([0,1]\). This is a nice and unexpected result whose veracity is pretty obvious analytically. It may be worth spending some time to make sure that the analytic and geometric aspects of the construction do match each other.

H. Sagan, Space-Filling Curves, Springer-Verlag, 1994

Plane Filling Curves
Plane Filling Curves: Hilbert's and Moore's
Plane Filling Curves: Peano's and Wunderlich's
Plane Filling Curves: all possible Peano curve
Plane Filling Curves: the Lebesgue Curve
Following the Hilbert Curve
Plane Filling Curves: One of Sierpinski's Curves
A Plane Filling Curve for the Year 2017

Copyright © 1996-2018 Alexander Bogomolny
CommonCrawl
Matrices for developers January 23, 2017 — 45 minutes read math matrix WARNING: Long article, big images, heavy GIFs. A few weeks ago I was on an android-user-group channel, when someone posted a question about Android's Matrix.postScale(sx, sy, px, py) method and how it works because it was "hard to grasp". Coincidence: in the beginning of 2016, I finished a freelance project on an Android application where I had to implement an exciting feature: Climbing away - App screenshot 1 Android app screenshots The user, after buying and downloading a digital topography of a crag, had to be able to view the crag which was composed of: a picture of the cliff, a SVG file containing an overlay of the climbing routes. The user had to have the ability to pan and zoom at will and have the routes layer "follow" the picture. Technical challenge What is a matrix? Transformation matrices Transforming points More math stuff The identity matrix Combining transformations Types of transformations 3x3 transformation matrices Matrices wrap-up Combination use-case: pinch-zoom Combination use-case: rotate image In order to have the overlay of routes follow the user's actions, I found I had to get my hands dirty by overloading an Android ImageView, draw onto the Canvas and deal with finger gestures. As a good engineer: I searched on Stack Overflow :sweat_smile: And I discovered I'd need the android.graphics.Matrix class for 2D transformations. The problem with this class, is that it might seem obvious what it does, but if you have no mathematical background, it's quite mysterious. boolean postScale (float sx, float sy, float px, float py) Postconcats the matrix with the specified scale. M' = S(sx, sy, px, py) * M Yeah, cool, so it scales something with some parameters and it does it with some kind of multiplication. Nah, I don't get it: What does it do exactly? Scales a matrix? What's that supposed to mean, I want to scale the canvas… What should I use, preScale or postScale? Do I try both while I get the input parameters from my gesture detection code and enter an infinite loop of trial and error guesstimates? (No. Fucking. Way.) So at this very moment of the development process I realized I needed to re-learn basic math skills about matrices that I had forgotten many years ago, after finishing my first two years of uni :scream: WWW to the rescue! While searching around I've found a number of good resources and was able to learn some math again, and it felt great. It also helped me solve my 2D transformations problems by applying my understandings as code in Java and Android. So, given the discussion I've had on the channel I've mentioned above, it seems I was not the only one struggling with matrices, trying to make sense of it and using these skills with Android's Matrix class and methods, so I thought I'd write an article. The first part, this one, is about matrices. The second part, "2D Transformations with Android and Java", is about how to apply what you know about matrices in code, with Java and on Android. The first resource you might encounter when trying to understand 2D transformations are articles about "Transformation matrix" and "Affine transformations" on Wikipedia: https://en.wikipedia.org/wiki/Transformation_matrix https://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations https://en.wikipedia.org/wiki/Affine_transformation I don't know you, but with this material, I almost got everything — wait… NOPE! I didn't get anything at all. 
Luckily, on Khan Academy you will find a very well taught algebra course about matrices. If you have this kind of problem, I encourage you to take the time needed to follow this course until you reach that "AHA" moment. It's just a few hours of investment (it's free) and you won't regret it. Why? Because matrices are good at representing data, and operations on matrices can help you solve problems on this data. For instance, remember having to solve systems of linear equations at school? The most common ways (at least the two I've studied) to solve a system like that is with the elimination of variables method or the row reduction method. But you can also use matrices for that, which leads to interesting algorithms. Matrices are used heavily in every branch of science, and they can also be used for linear transformation to describe the position of points in space, and this is the use case we will study in this article. Simply put, a matrix is a 2D array. In fact, talking about a $m \times n$ matrix relates to an array of length $m$ in which each item is also an array but this time of length $n$. Usually, $m$ represents a rows' number and $n$ a columns' number. Each element in the matrix is called an entry. A matrix is represented by a bold capital letter, and each entry is represented by the same letter, but in lowercase and suffixed with its row number and column number, in this order. For example: $$ \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \vdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} $$ Now what can we do with it? We can define an algebra for instance: like addition, subtraction and multiplication operations, for fun and profit. :nerd: Addition and subtraction of matrices is done by adding or subtracting the corresponding entries of the operand matrices: $$ \mathbf{A} + \mathbf{B} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \vdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} + \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1n}\\ b_{21} & b_{22} & \vdots & b_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{pmatrix} = \begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1n}+b_{1n}\\ a_{21}+b_{21} & a_{22}+b_{22} & \vdots & a_{2n}+b_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1}+b_{m1} & a_{m2}+b_{m2} & \cdots & a_{mn}+b_{mn} \end{pmatrix} $$ $$ \mathbf{A} - \mathbf{B} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \vdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} - \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1n}\\ b_{21} & b_{22} & \vdots & b_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{pmatrix} = \begin{pmatrix} a_{11}-b_{11} & a_{12}-b_{12} & \cdots & a_{1n}-b_{1n}\\ a_{21}-b_{21} & a_{22}-b_{22} & \vdots & a_{2n}-b_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1}-b_{m1} & a_{m2}-b_{m2} & \cdots & a_{mn}-b_{mn} \end{pmatrix} $$ Corollary to this definition we can deduce that in order to be defined, a matrix addition or subtraction must be performed against two matrices of same dimensions $m \times n$, otherwise the "corresponding entries" bit would have no sense: Grab a pen and paper and try to add a $3 \times 2$ matrix to a $2 \times 3$ matrix. What will you do with the last row of the first matrix? Same question with the last column of the second matrix? 
If you don't know, then you've reached the same conclusion as the mathematicians that defined matrices additions and subtractions, pretty much :innocent: $$ \begin{aligned} \text{Addition}\\ \mathbf{A} + \mathbf{B} &= \begin{pmatrix} 4 & -8 & 7\\ 0 & 2 & -1\\ 15 & 4 & 9 \end{pmatrix} + \begin{pmatrix} -5 & 2 & 3\\ 4 & -1 & 6\\ 0 & 12 & 3 \end{pmatrix}\\\\ &= \begin{pmatrix} 4+\left(-5\right) & \left(-8\right)+2 & 7+3\\ 0+4 & 2+\left(-1\right) & \left(-1\right)+6\\ 15+0 & 4+12 & 9+3 \end{pmatrix}\\\\ \mathbf{A} + \mathbf{B} &= \begin{pmatrix} -1 & -6 & 10\\ 4 & 1 & 5\\ 15 & 16 & 12 \end{pmatrix} \end{aligned} $$ $$ \begin{aligned} \text{Subtraction}\\ \mathbf{A} - \mathbf{B} &= \begin{pmatrix} 4 & -8 & 7\\ 0 & 2 & -1\\ 15 & 4 & 9 \end{pmatrix} - \begin{pmatrix} -5 & 2 & 3\\ 4 & -1 & 6\\ 0 & 12 & 3 \end{pmatrix}\\\\ &= \begin{pmatrix} 4-\left(-5\right) & \left(-8\right)-2 & 7-3\\ 0-4 & 2-\left(-1\right) & \left(-1\right)-6\\ 15-0 & 4-12 & 9-3 \end{pmatrix}\\\\ \mathbf{A} + \mathbf{B} &= \begin{pmatrix} 9 & -10 & 4\\ -4 & 3 & -7\\ 15 & -8 & 6 \end{pmatrix} \end{aligned} $$ Throughout all my math schooling I've been said things like "you can't add apples to oranges, it makes no sense", in order to express the importance of units. Well it turns out that multiplying apples and oranges is allowed. And it can be applied to matrices: we can only add matrices to matrices, but we can multiply matrices by numbers and by other matrices. In the first case though, the number is not just a number (semantically). You don't multiply a matrix by a number, you multiply a matrix by a scalar. In order to multiply a matrix by a scalar, we have to multiply each entry in the matrix by the scalar, which will give us another matrix as a result. $$ k . \mathbf{A} = k . \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \vdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} = \begin{pmatrix} k.a_{11} & k.a_{12} & \cdots & k.a_{1n}\\ k.a_{21} & k.a_{22} & \vdots & k.a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ k.a_{m1} & k.a_{m2} & \cdots & k.a_{mn} \end{pmatrix} $$ And a little example: $$ 4 . \begin{pmatrix} 0 & 3 & 12\\ 7 & -5 & 1\\ -8 & 2 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 12 & 48\\ 28 & -20 & 4\\ -32 & 8 & 0 \end{pmatrix} $$ The second type of multiplication operation is the multiplication of matrices by matrices. This operation is a little bit more complicated than addition/subtraction because in order to multiply a matrix by a matrix we don't simply multiply the corresponding entries. I'll just quote wikipedia on that one: if $\mathbf{A}$ is an $m \times n$ matrix and $\mathbf{B}$ is an $n \times p$ matrix, their matrix product $\mathbf{AB}$ is an $m \times p$ matrix, in which the $n$ entries across a row of $\mathbf{A}$ are multiplied with the $n$ entries down a columns of $\mathbf{B}$ and summed to produce an entry of $\mathbf{AB}$ :expressionless: This hurts my brain, let's break it down: if $\mathbf{A}$ is an $m \times n$ matrix and $\mathbf{B}$ is an $n \times p$ matrix, their matrix product $\mathbf{AB}$ is an $m \times p$ matrix We can write this in a more graphical way: $ {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{A} \\ ^{\scriptsize m \times n}\end{matrix} } \times {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{B} \\ ^{\scriptsize n \times p}\end{matrix} } = {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{AB} \\ ^{\scriptsize m \times p}\end{matrix} } $. 
See this simple matrix $ {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{A} \\ ^{\scriptsize 2 \times 3}\end{matrix} } = \begin{pmatrix}a_{11} & a_{12} & a_{13}\\a_{21} & a_{22} & a_{23}\end{pmatrix} $ and this other matrix $ {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{B} \\ ^{\scriptsize 3 \times 1}\end{matrix} } = \begin{pmatrix}b_{11}\\b_{21}\\b_{31}\end{pmatrix} $. We have $m=2$, $n=3$ and $p=1$ so the multiplication will give $ {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{AB} \\ ^{\scriptsize 2 \times 1}\end{matrix} } = \begin{pmatrix}ab_{11}\\ab_{21}\end{pmatrix} $. Let's decompose the second part now: "the $n$ entries across a row of $\mathbf{A}$" means that each row in $\mathbf{A}$ is an array of $n=3$ entries: if we take the first row we get $a_{11}$, $a_{12}$ and $a_{13}$. "the $n$ entries down a columns of $\mathbf{B}$" means that each column of $\mathbf{B}$ is also an array of $n=3$ entries: in the first column we get $b_{11}$, $b_{21}$ and $b_{31}$. "are multiplied with" means that each entry in $\mathbf{A}$'s row must be multiplied with its corresponding (first with first, second with second, etc.) entry in $\mathbf{B}$'s column: $a_{11} \times b_{11}$, $a_{12} \times b_{21}$ and $a_{13} \times b_{31}$ "And summed to produce an entry of $\mathbf{AB}$" means that we must add the products of these corresponding rows and columns entries in order to get the entry of the new matrix at this row number and column number: in our case we took the products of the entries in the first row in the first matrix with the entries in the first column in the second matrix, so this will give us the entry in the first row and first column of the new matrix: $a_{11} \times b_{11} + a_{12} \times b_{21} + a_{13} \times b_{31}$ To plagiate wikipedia, here is the formula: $$ \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \vdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \text{, } \mathbf{B} = \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1p}\\ b_{21} & b_{22} & \vdots & b_{2p}\\ \vdots & \vdots & \ddots & \vdots\\ b_{n1} & b_{n2} & \cdots & b_{np} \end{pmatrix} $$ $$ \mathbf{AB} = \begin{pmatrix} ab_{11} & ab_{12} & \cdots & ab_{1p}\\ ab_{21} & ab_{22} & \vdots & ab_{2p}\\ \vdots & \vdots & \ddots & \vdots\\ ab_{m1} & ab_{m2} & \cdots & ab_{mp} \end{pmatrix} \text{where } ab_{ij}=\sum_{k=1}^{m}a_{ik}b_{kj} $$ Ok I realize I don't have any better way to explain this so here is a visual representation of the matrix multiplication process and an example: $$ \mathbf{A} = \begin{pmatrix} 4 & 3\\ 0 & -5\\ 2 & 1\\ -6 & 8 \end{pmatrix} \text{, } \mathbf{B} = \begin{pmatrix} 7 & 1 & 3\\ -2 & 4 & 1 \end{pmatrix} $$ $$ \begin{aligned} \mathbf{AB} &= \begin{pmatrix} 4\times7+3\times\left(-2\right) & 4\times1+3\times4 & 4\times3+3\times1\\ 0\times7+\left(-5\right)\times\left(-2\right) & 0\times1+\left(-5\right)\times4 & 0\times3+\left(-5\right)\times1\\ 2\times7+1\times\left(-2\right) & 2\times1+1\times4 & 2\times3+1\times1\\ \left(-6\right)\times7+8\times\left(-2\right) & \left(-6\right)\times1+8\times4 & \left(-6\right)\times3+8\times1 \end{pmatrix}\\\\ &= \begin{pmatrix} 28-6 & 4+12 & 12+3\\ 0+10 & 0-20 & 0-5\\ 14-2 & 2+4 & 6+1\\ -42-16 & -6+32 & -18+8 \end{pmatrix}\\\\ \mathbf{AB} &= \begin{pmatrix} 22 & 16 & 15\\ 10 & -20 & -5\\ 12 & 6 & 7\\ -58 & 26 & -10 \end{pmatrix} \end{aligned} $$ In order for matrix multiplication to be defined, the number of columns in the first matrix must be 
equal to the number of rows in the second matrix. Otherwise you can't multiply, period. More details here and here if you are interested. Now that we know what is a matrix and how we can multiply matrices, we can see why it is interesting for 2D transformations. As I've said previously, matrices can be used to represent systems of linear equations. Suppose I give you this system: $$ \begin{aligned} 2x+y &= 5\\ -x+2y &= 0 \end{aligned} $$ Now that you are familiar with matrix multiplications, maybe you can see this coming, but we can definitely express this system of equations as the following matrix multiplication: $$ \begin{pmatrix} 2 & 1\\ -1 & 2 \end{pmatrix} . \begin{pmatrix} x\\y \end{pmatrix} = \begin{pmatrix} 5\\0 \end{pmatrix} $$ If we go a little further, we can see something else based on the matrices $\begin{pmatrix}x\\y\end{pmatrix}$ and $\begin{pmatrix}5\\0\end{pmatrix}$. We can see that they can be used to reprensent points in the Cartesian plane, right? A point can be represented by a vector originating from origin, and a vector is just a $2 \times 1$ matrix. What we have here, is a matrix multiplication that represents the transformation of a point into another point. We don't know what the first point's coordinates are yet, and it doesn't matter. What I wanted to show is that, given a position vector, we are able to transform it into another via a matrix multiplication operation. Given a point $P$, whose coordinates are represented by the position vector, $\begin{pmatrix}x\\y\end{pmatrix}$, we can obtain a new point $P^{\prime}$ whose coordinates are represented by the position vector $\begin{pmatrix}x^{\prime}\\y^{\prime}\end{pmatrix}$ by multiplying it by a matrix. One important thing is that this transformation matrix has to have specific dimensions, in order to fulfill the rule of matrix multiplication: because $\begin{pmatrix}x\\y\end{pmatrix}$ is a $2 \times 1$ matrix, and $\begin{pmatrix}x^{\prime}\\y^{\prime}\end{pmatrix}$ is also a $2 \times 1$ matrix, then the transformation matrix has to be a $2 \times 2$ matrix in order to have: $$ \mathbf{A} . \begin{pmatrix} x\\y \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} . \begin{pmatrix} x\\y \end{pmatrix} = \begin{pmatrix} x^{\prime}\\y^{\prime} \end{pmatrix} $$ Note: The order here is important as we will see later, but you can already see that switching $\mathbf{A}$ and $\begin{pmatrix}x\\y\end{pmatrix}$ would lead to an $undefined$ result (if you don't get it, re-read the part on matrix multiplication and their dimensions). Notice that the nature of the transformation represented by our matrix above and in the link is not clear, and I didn't say what kind of transformation it is, on purpose. The transformation matrix was picked at random, and yet we see how interesting and useful it is for 2D manipulation of graphics. Another great thing about transformation matrices, is that they can be used to transform a whole bunch of points at the same time. For now, I suppose all you know is the type of transformations you want to apply: rotation, scale or translation and some parameters. So how do you go from scale by a factor of 2 and rotate 90 degrees clockwise to a transformation matrix? Well the answer is: More specifically I encourage you to read this course on Matrices as transformations (which is full of fancy plots and animations) and particularly its last part: Representing two dimensional linear transforms with matrices. 
Come back here once you've read it, or it's goind to hurt :sweat_smile: Ok I suppose you've read the course above, but just in case, here is a reminder a position vector $\begin{pmatrix}x\\y\end{pmatrix}$ can be broken down as $\begin{pmatrix}x\\y\end{pmatrix} = x.\begin{pmatrix}\color{Green} 1\\ \color{Green} 0\end{pmatrix} + y.\begin{pmatrix}\color{Red} 0\\ \color{Red} 1\end{pmatrix}$. [Show explanation] If you decompose $\begin{pmatrix}x\\y\end{pmatrix}$ into a matrix addition operation, you find $\begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}x\\0\end{pmatrix} + \begin{pmatrix}0\\y\end{pmatrix}$. And if you decompose a little bit more you can express each operand of this addition as the multiplication of a scalar and a matrix: $\begin{pmatrix}x\\0\end{pmatrix} = x.\begin{pmatrix}1\\0\end{pmatrix}$ $\begin{pmatrix}0\\y\end{pmatrix} = y.\begin{pmatrix}0\\1\end{pmatrix}$ Now look at the the matrices $\begin{pmatrix}1\\0\end{pmatrix}$ and $\begin{pmatrix}0\\1\end{pmatrix}$, they are the Cartesian unit vectors. So $\begin{pmatrix} x\\y \end{pmatrix} = x.\begin{pmatrix}\color{Green} 1\\ \color{Green} 0\end{pmatrix} + y.\begin{pmatrix}\color{Red} 0\\ \color{Red} 1\end{pmatrix}$ is just another way to write that the position vector $\begin{pmatrix}x\\y\end{pmatrix}$ represents a point given by a transformation of the unit vectors of our Cartesian plane. $\begin{pmatrix}\color{Green} a\\ \color{Green} c\end{pmatrix}$ and $\begin{pmatrix}\color{Red} b\\ \color{Red} d\end{pmatrix}$ are the position vectors where $\begin{pmatrix} \color{Green} 0\\ \color{Green} 1\end{pmatrix}$ and $\begin{pmatrix} \color{Red} 1\\ \color{Red} 0\end{pmatrix}$ will land respectively after the transformation matrix $\mathbf{A} = \begin{pmatrix} \color{Green} a & \color{Red} b\\ \color{Green} c & \color{Red} d \end{pmatrix}$ has been applied. Let's start again from our unit vectors $\begin{pmatrix} \color{Green} 1\\ \color{Green} 0\end{pmatrix}$ and $\begin{pmatrix} \color{Red} 0\\ \color{Red} 1\end{pmatrix}$. We know that $\begin{pmatrix} x\\y \end{pmatrix} = x.\begin{pmatrix} \color{Green} 1\\ \color{Green} 0\end{pmatrix} + y.\begin{pmatrix} \color{Red} 0\\ \color{Red} 1\end{pmatrix}$, so now imagine we apply a transformation to our plane. Our unit vectors will be transformed too, right? If we assume that $\begin{pmatrix} \color{Green} 1\\ \color{Green} 0 \end{pmatrix}$ "lands on" $\begin{pmatrix} \color{Green} a\\ \color{Green} c \end{pmatrix}$ and that $\begin{pmatrix} \color{Red} 0\\ \color{Red} 1 \end{pmatrix}$ "lands on" $\begin{pmatrix} \color{Red} b\\ \color{Red} d \end{pmatrix}$, then we have our position vector $\begin{pmatrix} x\\y \end{pmatrix}$ landing on $x.\begin{pmatrix} \color{Green} a\\ \color{Green} c \end{pmatrix} + y.\begin{pmatrix} \color{Red} b\\ \color{Red} d \end{pmatrix} = \begin{pmatrix}\color{Green} a.x + \color{Red} b.y\\ \color{Green} c.x + \color{Red} d.y \end{pmatrix}$. given the previous transformation, $\begin{pmatrix} x\\ y \end{pmatrix}$ will land on $\begin{pmatrix} \color{Green} a.x + \color{Red} b.y\\ \color{Green} c.x + \color{Red} d.y \end{pmatrix}$. If you don't understand this conclusion, read again, read the course, take your time. Now remember, our goal is to determine what $ \mathbf{A} $ is, because we know the transformation we want to apply but we're searching for the matrix we should apply to our position vector(s) in order to transform our graphics. 
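If it helps to see this in code before the worked example, here is a minimal sketch in plain JavaScript (the apply2x2 helper and the example matrix are mine, not part of any library) that applies a $2 \times 2$ matrix to a position vector and shows that the columns of the matrix are exactly where the unit vectors land:

// Apply a 2x2 matrix [[a, b], [c, d]] to a position vector [x, y]:
// x' = a.x + b.y and y' = c.x + d.y
function apply2x2(m, v) {
  return [
    m[0][0] * v[0] + m[0][1] * v[1],
    m[1][0] * v[0] + m[1][1] * v[1]
  ];
}

const A = [[1, 2], [3, 4]]; // an arbitrary transformation matrix

console.log(apply2x2(A, [1, 0])); // [1, 3]  -> the first column of A
console.log(apply2x2(A, [0, 1])); // [2, 4]  -> the second column of A
console.log(apply2x2(A, [2, 1])); // [4, 10] =  2.(first column) + 1.(second column)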
Let's take the example of the transformation of a series of points: we know where the position vectors will land, but we're looking for $ \mathbf{A} $. We have our cartesian plane with a triangle formed by the three points $P_{(2,1)}$, $Q_{(-2,0)}$, $R_{(0,2)}$, and another triangle which represents a transformed version of the first one: $P^{\prime}_{(5, 0)}$ and $Q^{\prime}_{(-4, 2)}$ and $R^{\prime}_{(2,4)}$. Cartesian plane containing two triangles Example transformation of a triangle We just need two points for this example, let's take $P$ and $Q$. We know that: $$ \begin{pmatrix} 2\\ 1 \end{pmatrix} \text{ lands on } \begin{pmatrix} 5\\ 0 \end{pmatrix} $$ $$ \begin{pmatrix} -2\\ 0 \end{pmatrix} \text{ lands on } \begin{pmatrix} -4\\ 2 \end{pmatrix} $$ Which means: $$ \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} 2\\ 1 \end{pmatrix} \text{ lands on } \begin{pmatrix} a.x+b.y\\ c.x+d.y \end{pmatrix} = \begin{pmatrix} 5\\ 0 \end{pmatrix} $$ $$ \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} -2\\ 0 \end{pmatrix} \text{ lands on } \begin{pmatrix} a.x+b.y\\ c.x+d.y \end{pmatrix} = \begin{pmatrix} -4\\ 2 \end{pmatrix} $$ From which we can deduce: $$ \begin{pmatrix} 2.a+1.b\\ 2.c+1.d \end{pmatrix} = \begin{pmatrix} 5\\ 0 \end{pmatrix} $$ $$ \begin{pmatrix} -2.a+0.b\\ -2.c+0.d \end{pmatrix} = \begin{pmatrix} -4\\ 2 \end{pmatrix} $$ The right side gives us $ a=2 $ and $ c = -1 $, with which we can deduce $ b=1 $ and $ d=2 $ from the left side. And this, is our transformation matrix: $$ \mathbf{A} = \begin{pmatrix} \color{Green} 2 & \color{Red} 1\\ \color{Green} -\color{Green} 1 & \color{Red} 2 \end{pmatrix} $$ Try that same exercise with $P$ and $R$, or with $Q$ and $R$ and you should end up to the same result. We don't know how to define a transformation matrix yet, but we know its form. So what do we do next? Remember the last section where we've seen that a position vector $\begin{pmatrix} x\\ y \end{pmatrix}$ can be broken down as $\begin{pmatrix} x\\y \end{pmatrix} = x.\begin{pmatrix} \color{Green} 1\\ \color{Green} 0 \end{pmatrix} + y.\begin{pmatrix} \color{Red} 0\\ \color{Red} 1 \end{pmatrix} $ ? That's a pretty good starting point, we just laid out our "base" matrix: $$ \begin{pmatrix} \color{Green} 1 & \color{Red} 0\\ \color{Green} 0 & \color{Red} 1 \end{pmatrix} $$ This matrix represents the base state of your plane, the matrix applied to your plane when you have just loaded your image for example (granted your image is the same size as its receiving container view). In other words, this is the matrix that, applied to any position vector will return that same position vector. This matrix is called the identity matrix. [More on the identity matrix] One more thing before we get concrete: We want our user to be able to combine/chain transformations (like zooming and panning at the same time for instance). In order to chain multiple transformations we need to understand the properties of matrix multiplication, and more specifically the non-commutative and associative properties of matrix multiplication: Matrix multiplication is associative $\left(\mathbf{A}.\mathbf{B}\right).\mathbf{C} = \mathbf{A}.\left(\mathbf{B}.\mathbf{C}\right)$ Just trust me already! If you don't, I'm not going to write it here because it takes a lot of screen width (I've tried and it didn't render very well), so check out this video. 
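If you would rather convince yourself numerically than watch the video, here is a small spot-check in plain JavaScript (the mul2x2 helper and the three matrices are mine, chosen arbitrarily); it checks one example, it is not a proof:

// Multiply two 2x2 matrices: entry (i, j) of A.B is the sum over k of A[i][k] * B[k][j]
function mul2x2(a, b) {
  return [
    [a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
    [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]
  ];
}

const A = [[1, 2], [3, 4]];
const B = [[0, 1], [1, 0]];
const C = [[2, 0], [0, 3]];

console.log(mul2x2(mul2x2(A, B), C)); // (A.B).C -> [[4, 3], [8, 9]]
console.log(mul2x2(A, mul2x2(B, C))); // A.(B.C) -> [[4, 3], [8, 9]], the same matrix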
Matrix multiplication is non-commutative $\mathbf{A}.\mathbf{B} \neq \mathbf{B}.\mathbf{A}$ In order to affirm this we just have to prove commutativity wrong, which is easy! Imagine $\mathbf{A}$ is a $5 \times 2$ matrix, and $\mathbf{B}$ is a $2 \times 3$ matrix: $ {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{A} \\ ^{\scriptsize 5 \times 2}\end{matrix} } \times {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{B} \\ ^{\scriptsize 2 \times 3}\end{matrix} } = {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{AB} \\ ^{\scriptsize 5 \times 3}\end{matrix} } $ $ {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{B} \\ ^{\scriptsize 2 \times 3}\end{matrix} } \times {\tiny\begin{matrix}^{\scriptsize }\\ \normalsize \mathbf{A} \\ ^{\scriptsize 5 \times 2}\end{matrix} } = undefined $ And that's it. But we can also see commutativity does not hold even for matrices of same dimensions: $$ \begin{aligned} \mathbf{A}.\mathbf{B} &= \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} . \begin{pmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{pmatrix}\\\\ &= \begin{pmatrix} a_{11}.b_{11}+a_{12}.b_{21} & a_{11}.b_{12}+a_{12}.b_{22}\\ a_{21}.b_{11}+a_{22}.b_{21} & a_{11}.b_{22}+a_{22}.b_{22} \end{pmatrix} \end{aligned} $$ $$ \begin{aligned} \mathbf{B}.\mathbf{A} &= \begin{pmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{pmatrix} . \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix}\\\\ &= \begin{pmatrix} b_{11}.a_{11}+b_{12}.a_{21} & b_{11}.a_{12}+b_{12}.a_{22}\\ b_{21}.a_{11}+b_{22}.a_{21} & b_{21}.a_{12}+b_{22}.a_{22} \end{pmatrix} \end{aligned} $$ Grab a pen and paper and try it for yourself with the following matrices $\mathbf{A}=\begin{pmatrix}1 & 2\\-3 & -4\end{pmatrix}$ and $\mathbf{B}=\begin{pmatrix}-2 & 0\\0 & -3\end{pmatrix}$. Back to our transformations. Imagine we want to apply transformation $ \mathbf{B} $, then transformation $ \mathbf{A} $ to our position vector $ \vec{v} $. We have $ \vec{v^{\prime}} = \mathbf{B} . \vec{v} $ and $ \vec{v^{\prime\prime}} = \mathbf{A} . \vec{v^{\prime}} $, which leads us to: $$ \vec{v^{\prime\prime}} = \mathbf{A} . \left( \mathbf{B} . \vec{v} \right) $$ We know that matrix multiplication is associative, which gives us: $$ \vec{v^{\prime\prime}} = \mathbf{A} . \left( \mathbf{B} . \vec{v} \right) \Leftrightarrow \vec{v^{\prime\prime}} = \left( \mathbf{A} . \mathbf{B} \right) . \vec{v} $$ In conclusion, in order to apply multiple transformations at once, we can multiply all our transformation matrices and apply the resulting transformation matrix to our vector(s). We also know that matrix multiplication is not commutative, so the order in which we multiply our transformation matrices ($ \mathbf{A} . \mathbf{B} $ or $ \mathbf{B} . \mathbf{A} $) will have an impact on our final matrix and will lead to different results, different transformations. There are several types of 2D transformations we are able to define using $2 \times 2$ dimensions matrices, and you've had a preview of most of them in this course on matrices as transformations. For the rest of this section imagine we have the point $ P_{\left(x, y\right)} $, which represents any point of an object on the plane, and we want to find the matrix to transform it into $ P^{\prime}_{\left(x^{\prime}, y^{\prime}\right)}$ such that $$ \begin{pmatrix} x^{\prime}\\y^{\prime} \end{pmatrix} = \mathbf{A} . \begin{pmatrix} x\\y \end{pmatrix} = \begin{pmatrix} a & b\\c & d \end{pmatrix} . 
\begin{pmatrix} x\\y \end{pmatrix} $$ Scaling (like zooming in by a factor of 2 for instance) might seem straightforward to represent, right? "Simply multiply the coordinates by the scaling factor and you're done." But the pitfall here is that you might want to have different horizontal and vertical scaling factors for your transformation, I mean it's possible! So we must differentiate between $ s_{x} $ and $ s_{y} $ which represent the horizontal and vertical scaling factors, respectively. The two equations this gives us are: $$ \begin{aligned} x' &= s_{x} . x \\ y' &= s_{y} . y \end{aligned} $$ Knowing that: $$ \begin{pmatrix} x^{\prime}\\y^{\prime} \end{pmatrix} = \begin{pmatrix} a & b\\c & d \end{pmatrix} . \begin{pmatrix} x\\y \end{pmatrix} $$ We can find $a$, $b$, $c$ and $d$: $$ \begin{aligned} s_{x} . x &= a . x + b . y\\\\ \Rightarrow a &= s_{x} \text{ and }\\ b &= 0 \end{aligned} $$ $$ \begin{aligned} s_{y} . y &= c . x + d . y\\\\ \Rightarrow c &= s_{y} \text{ and }\\ d &= 0 \end{aligned} $$ In conclusion, the $2 \times 2$ scaling matrix for the factors $\left(s_{x}, s_{y}\right)$ is: $$ \begin{pmatrix} a & b\\c & d \end{pmatrix} = \begin{pmatrix} s_{x} & 0\\0 & s_{y} \end{pmatrix} $$ Which makes sense, right? I mean, scaling by a factor of $1$ both on the $x$ and $y$ axises will give: $$ \begin{pmatrix} s_{x} & 0\\0 & s_{y} \end{pmatrix} = \begin{pmatrix} 1 & 0\\0 & 1 \end{pmatrix} $$ Which is… the identity matrix! So nothing moves, basically. There are 2 types of reflexions we can think about right ahead: reflexion around an axis, or around a point. To keep things simple we'll focus on reflexions around the $x$ and $y$ axises (reflexion around the origin is the equivalent of applying a reflexion on the $x$ and $y$ axises successively). Reflexion around the $x$ axis gives us: $$ \begin{aligned} x^{\prime} &= x\\ x &= a . x + b . y\\\\ \Rightarrow a &= 1 \text{ and }\\ b &= 0 \end{aligned} $$ $$ \begin{aligned} y^{\prime} &= -y\\ -y &= c . x + d . y\\\\ \Rightarrow c &= 0 \text{ and }\\ d &= -1 \end{aligned} $$ Funny, reflecting around the $x$ axis is the same transformation as scaling $x$ by a factor of $-1$ and $y$ by a factor of $1$: $$ \begin{pmatrix} a & b\\c & d \end{pmatrix} = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} $$ And reflexion around the $y$ axis gives us: $$ \begin{aligned} x^{\prime} &= -x\\ -x &= a . x + b . y\\\\ \Rightarrow a &= -1 \text{ and }\\ b &= 0 \end{aligned} $$ $$ \begin{aligned} y^{\prime} &= y\\ y &= c . x + d . y\\\\ \Rightarrow c &= 0 \text{ and }\\ d &= 1 \end{aligned} $$ The transformation matrix to reflect around the $y$ axis is: $$ \begin{pmatrix} a & b\\c & d \end{pmatrix} = \begin{pmatrix} -1 & 0\\ 0 & 1 \end{pmatrix} $$ Now it gets a little bit trickier. In most examples I've found, shearing is explained by saying the coordinates are changed by adding a constant that measures the degree of shearing. For instance, a shear along the $x$ axis is often represented showing a rectangle with a vertex at $\left(0, 1\right)$ is transformed into a parallelogram with a vertex at $\left(1, 1\right)$. Cartesian plane shearing $\underline{\text{Shearing along } x \text{ axis by a constant } k_{x}=1}$ In this article, I want to explain it using the shearing angle, the angle through which the axis is sheared. Let's call it $\alpha$ (alpha). 
$\underline{\text{Shearing along } x \text{ axis by an angle } \alpha}$ If we look at the plane above, we can see that the new abscissa $x^{\prime}$ is equal to $x$ plus/minus the opposite side of the triangle formed by the $y$ axis, the sheared version of the $y$ axis and the segment between the top left vertex of the rectangle and the top left vertex of the parallelogram. In other words, $x^{\prime}$ is equal to $x$ plus/minus the opposite side of the green triangle, see: Shearing by negative $\alpha=-30^{\circ}$ when $y\left(P\right)>0$ Shearing by positive $\alpha=30^{\circ}$ when $y\left(P\right)>0$ Shearing by negative $\alpha=-30^{\circ}$ when $y\left(P\right)<0$ Shearing by positive $\alpha=30^{\circ}$ when $y\left(P\right)<0$ $\underline{\text{Triangles formed by shearing along } x \text{ axis by an angle } \alpha}$ Remember your trigonometry class? In a right-angled triangle: the hypotenuse is the longest side the opposite side is the one at the opposite of a given angle the adjacent side is the next to a given angle $PP^{\prime}$ is the opposite side, we need to find its length ($k$), in order to calculate $x^{\prime}$ from $x$ the adjacent side is $P$'s ordinate: $y$ we don't know the hypotenuse's length From our trigonometry class, we know that: $$ \begin{aligned} \cos \left( \alpha \right) &= \frac{adjacent}{hypotenuse}\\\\ \sin \left( \alpha \right) &= \frac{opposite}{hypotenuse}\\\\ \tan \left( \alpha \right) &= \frac{opposite}{adjacent} \end{aligned} $$ We know $\alpha$, but we don't know the length of the hypotenuse, so we can't use the cosine function. On the other hand, we know the adjacent side's length: it's $y$, so we can use the tangent function to find the opposite side's length: $$ \begin{aligned} \tan \left( \alpha \right) &= \frac{opposite}{adjacent}\\\\ opposite &= adjacent \times \tan \left( \alpha \right) \end{aligned} $$ We can start solving our system of equations in order to find our matrix with the following: $$ x^{\prime} = x + k = x + y . \tan \left( \alpha \right) $$ $$ y^{\prime} = y $$ However, we can see that when $\alpha > 0$, $\tan \left( \alpha \right) < 0$ and when $\alpha < 0$, $\tan \left( \alpha \right) > 0$. This multiplied by $y$ which can itself be positive or negative will give very different results for $x^{\prime} = x + k = x + y . \tan \left( \alpha \right)$. So don't forget that $\alpha > 0$ is counterclockwise rotation/shearing angle, while $\alpha < 0$ is clockwise rotation/shearing angle. $$ \begin{aligned} x^{\prime} &= x + y . \tan \left( \alpha \right) \\ x + y . \tan \left( \alpha \right) &= a . x + b . y\\\\ \Rightarrow a &= 1 \text{ and }\\ b &= \tan \left( \alpha \right) \end{aligned} $$ The transformation matrix to shear along the $x$ direction is: $$ \begin{aligned} \begin{pmatrix} a & b\\c & d \end{pmatrix} = \begin{pmatrix} 1 & \tan \alpha \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & k_{x}\\ 0 & 1 \end{pmatrix}\\\\ \text{where } k_{x} \text{ is the shearing constant} \end{aligned} $$ Similarly, the transformation matrix to shear along the $y$ direction is: $$ \begin{aligned} \begin{pmatrix} a & b\\c & d \end{pmatrix} = \begin{pmatrix} 1 & 0\\ \tan \beta & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0\\ k_{y} & 1 \end{pmatrix}\\\\ \text{where } k_{y} \text{ is the shearing constant} \end{aligned} $$ Rotations are yet a little bit more complex. Let's take a closer look at it with an example of rotating (around the origin) from a angle $ \theta $ (theta). 
Cartesian plane rotation $\underline{\text{Rotate by an angle } \theta}$ Notice how the coordinates of $P$ and $P^{\prime}$ are the same in their respective planes: $P$ and $P^{\prime}$ have the same set of coordinates $ \left( x, y\right) $ in each planes. But $P^{\prime}$ has new coordinates $ \left( x^{\prime}, y^{\prime}\right) $ in the first plane, the one that has not been rotated. We can now define the relationship between the coordinates $ \left(x, y\right) $ and the new coordinates $ \left(x^{\prime}, y^{\prime}\right) $, right? This is where trigonometry helps again. While searching for the demonstration of this, I stumbled upon this geometry based explanation by Nick Berry and this video. To be honest, I'm not 100% comfortable with this solution, which means I didn't fully understand it. And also after re-reading what I've written, Hadrien (one of the reviewers) and I have found my explanation to be a bit awkward. So I'm leaving it here in case you're interested, but I suggest you don't bother unless you're very curious and don't mind a little confusion. [Show fuzzy explanation] Trigonometry triangles based on $\theta$ $\underline{\text{Unit vectors rotation by } \theta}$ On this plane we see that $x$ (the blue line) can be expressed as the addition of the adjacent side of the green triangle plus the opposite side of the red triangle. And $y$ as the subtraction of the opposite side of the green triangle from the adjacent side of the red triangle. We know that: $$ \begin{aligned} \cos \left( \theta \right) &= \frac{adjacent}{hypotenuse} \Rightarrow adjacent = hypotenuse \times \cos \left( \theta \right)\\\\ \sin \left( \theta \right) &= \frac{opposite}{hypotenuse} \Rightarrow opposite = hypotenuse \times \sin \left( \theta \right) \end{aligned} $$ So we can express our relationship as follows: $$ \begin{aligned} x & = \color{Green}a\color{Green}d\color{Green}j\color{Green}a\color{Green}c\color{Green}e\color{Green}n\color{Green}t + \color{Red}o\color{Red}p\color{Red}p\color{Red}o\color{Red}s\color{Red}i\color{Red}t\color{Red}e\\ & = \color{Green}h\color{Green}y\color{Green}p\color{Green}o\color{Green}t\color{Green}e\color{Green}n\color{Green}u\color{Green}s\color{Green}e . \cos \left( \theta \right) + \color{Red}h\color{Red}y\color{Red}p\color{Red}o\color{Red}t\color{Red}e\color{Red}n\color{Red}u\color{Red}s\color{Red}e . \sin \left( \theta \right)\\ & = x^{\prime} . \cos \left( \theta \right) + y^{\prime} . \sin \left( \theta \right) \end{aligned} $$ $$ \begin{aligned} y & = \color{Red}a\color{Red}d\color{Red}j\color{Red}a\color{Red}c\color{Red}e\color{Red}n\color{Red}t - \color{Green}o\color{Green}p\color{Green}p\color{Green}o\color{Green}s\color{Green}i\color{Green}t\color{Green}e\\ & = \color{Red}h\color{Red}y\color{Red}p\color{Red}o\color{Red}t\color{Red}e\color{Red}n\color{Red}u\color{Red}s\color{Red}e . \cos \left( \theta \right) - \color{Green}h\color{Green}y\color{Green}p\color{Green}o\color{Green}t\color{Green}e\color{Green}n\color{Green}u\color{Green}s\color{Green}e . \sin \left( \theta \right)\\ & = y^{\prime} . \cos \left( \theta \right) - x^{\prime} . \sin \left( \theta \right)\\ & = -x^{\prime} . \sin \left( \theta \right) + y^{\prime} . \cos \left( \theta \right) \end{aligned} $$ In the end what we really have here is a system of equations that we can represent as a $2 \times 2$ matrix: $$ \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} \cos \theta & \sin \theta\\ -\sin \theta & \cos \theta \end{pmatrix} . 
\begin{pmatrix} x^{\prime}\\ y^{\prime} \end{pmatrix} $$ But this is not exactly what we are looking for, right? This defines the relationship to convert from the new coordinates in the original plane $ \left(x^{\prime}, y^{\prime}\right) $ what are the coordinates $ \left(x, y\right) $ in the rotated plane. Whereas what we want to define is how to convert from the rotated plane (the coordinates that we know) to the original plane. In order to do what we want, we need to take the same matrix, but define a rotation of $ - \theta $. $$ \begin{aligned} \cos \left( -\theta \right) &= cos \left( \theta \right)\\ \sin \left( -\theta \right) &= - sin \left( \theta \right) \end{aligned} $$ Which gives us our desired rotation matrix: $$ \begin{pmatrix} a & b\\c & d \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta\\ \sin \theta & \cos \theta \end{pmatrix} $$ Now for the simple demonstration I'm going to go with the *"This position vector lands on this position vector"* route. Suppose you are zooming on the unit vectors like so: Unit vectors under rotation by $\underline{\theta}$ Trigonometry triangles based on $\underline{\theta}$ Based on the rules of trigonometry we've already seen, we have: $$ \begin{pmatrix} 0\\ 1 \end{pmatrix} \text{ lands on } \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix} $$ $$ \begin{pmatrix} 1\\ 0 \end{pmatrix} \text{ lands on } \begin{pmatrix} - \sin \theta \\ \cos \theta \end{pmatrix} $$ $$ \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} 1\\ 0 \end{pmatrix} \text{ lands on } \begin{pmatrix} a.x+b.y\\ c.x+d.y \end{pmatrix} = \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix} $$ $$ \begin{pmatrix} x\\ y \end{pmatrix} = \begin{pmatrix} 0\\ 1 \end{pmatrix} \text{ lands on } \begin{pmatrix} a.x+b.y\\ c.x+d.y \end{pmatrix} = \begin{pmatrix} - \sin \theta \\ \cos \theta \end{pmatrix} $$ $$ \begin{pmatrix} 1.a+0.b\\ 1.c+0.d \end{pmatrix} = \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix} $$ $$ \begin{pmatrix} 0.a+1.b\\ 0.c+1.d \end{pmatrix} = \begin{pmatrix} - \sin \theta \\ \cos \theta \end{pmatrix} $$ Easy to deduce $ a = \cos \left( \theta \right) $, $ b = - \sin \left( \theta \right) $, $ c = \sin \left( \theta \right) $ and $ d = \cos \left( \theta \right) $. Congratulations! You know of to define scaling, reflexion, shearing and rotation transformation matrices. So what is missing? If you're still with me at this point, maybe you're wondering why any of this is useful. If it's the case, you missed the point of this article, which is to understand affine transformations in order to apply them in code :mortar_board: . This is useful because at this point you know what a transformation matrix looks like, and you know how to compute one given a few position vectors, and it is also a great accomplishment by itself. But here's the thing: $2 \times 2$ matrices are limited in the number of operations we can perform. With a $2 \times 2$ matrix, the only transformations we can do are the ones we've seen in the previous section: So what are we missing? Answer: translations! And this is unfortunate, as translations are really useful, like when the user pans and the image has to behave accordingly (aka. follow the finger). 
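Before moving on to translations, here is a short sketch in plain JavaScript (the helper names are mine) that builds the $2 \times 2$ rotation matrix we just derived and applies it to a point; it is only meant to show the formulas in action:

// Rotation by theta (in radians, counterclockwise) around the origin
function rotation2x2(theta) {
  return [
    [Math.cos(theta), -Math.sin(theta)],
    [Math.sin(theta),  Math.cos(theta)]
  ];
}

// x' = a.x + b.y and y' = c.x + d.y, as before
function apply2x2(m, v) {
  return [
    m[0][0] * v[0] + m[0][1] * v[1],
    m[1][0] * v[0] + m[1][1] * v[1]
  ];
}

// Rotating (1, 0) by 90 degrees counterclockwise should land on (0, 1)
console.log(apply2x2(rotation2x2(Math.PI / 2), [1, 0]));
// [6.123233995736766e-17, 1] -- that is (0, 1), up to floating point error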
Translations are defined by the addition of two matrices : $$ \begin{pmatrix} x'\\ y' \end{pmatrix} = \begin{pmatrix} x\\ y \end{pmatrix} + \begin{pmatrix} t_{x}\\ t_{y} \end{pmatrix} $$ But we want our user to be able to combine/chain transformations (like zooming on a specific point which is not the origin), so we need to find a way to express translations as matrices multiplications too. Here comes the world of Homogeneous coordinates… No, you don't have to read it, and no I don't totally get it either… The gist of it is: the Cartesian plane you're used to, is really just one of many planes that exist in the 3D space, and is at $ z = 1 $ for any point $ \left(x, y, z\right)$ in the 3D space, the line in the projecting space that is going through this point and the origin is also passing through any point that is obtained by scaling $x$, $y$ and $z$ by the same factor the coordinates of any of these points on the line is $ \left(\frac{x}{z}, \frac{y}{z}, z\right)$. Homogeneous coordinates Homogeneous coordinates graphics I've collected a list of blog posts, articles and videos links at the end of this post if you're interested. Without further dig in, this is helping, because it says that we can now represent any point in our Cartesian plane ($ z = 1 $) not only as a $2 \times 1$ matrix, but also as a $3 \times 1$ matrix: $$ \begin{pmatrix} x\\ y \end{pmatrix} \Leftrightarrow \begin{pmatrix} x\\ y\\ 1 \end{pmatrix} $$ Which means we have to redefine all our previous transformation matrices, because the product of a $3 \times 1$ matrix (position vector) by a $2 \times 2$ matrix (transformation) is undefined. Don't rage quit! It's straightforward: $\mathbf{z^{\prime}=z}$! We have to find the transformation matrix $ \mathbf{A} = \begin{pmatrix} a & b & c\\ d & e & f\\ g & h & i \end{pmatrix} $ If, like in the previous section, we imagine that we have the point $ P_{\left(x, y, z\right)} $, which represents any point of an object on the cartesian plane, then we want to find the matrix to transform it into $ P^{\prime}_{\left(x^{\prime}, y^{\prime}, z^{\prime}\right)}$ such that $$ \begin{pmatrix} x^{\prime}\\y^{\prime}\\z^{\prime} \end{pmatrix} = \mathbf{A} . \begin{pmatrix} x\\y\\z \end{pmatrix} = \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} . \begin{pmatrix} x\\y\\z \end{pmatrix} $$ We are looking for $\mathbf{A}$ such that: $$ \begin{pmatrix} x^{\prime}\\y^{\prime}\\z^{\prime} \end{pmatrix} = \begin{pmatrix} s_{x}.x\\s_{y}.y\\z \end{pmatrix} = \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} . \begin{pmatrix} x\\y\\z \end{pmatrix} $$ We can solve the following system of equation in order to find $\mathbf{A}$: $$ \begin{aligned} x^{\prime} &= s_{x} . x\\ s_{x} . x &= a . x + b . y + c . z\\\\ \Rightarrow a &= s_{x} \text{ and }\\ b &= 0 \text{ and }\\ c &= 0 \end{aligned} $$ $$ \begin{aligned} y^{\prime} &= s_{y} . y\\ s_{y} . y &= d . x + e . y + f + z\\\\ \Rightarrow d &= s_{y} \text{ and }\\ e &= 0 \text{ and }\\ f &= 0 \end{aligned} $$ $$ \begin{aligned} z^{\prime} &= z\\ \Rightarrow z &= g . x + h . 
y + i + z\\ \Rightarrow g &= 0 \text{ and }\\ h &= 0 \text{ and }\\ i &= 1 \end{aligned} $$ The 3x3 scaling matrix for the factors $ \left(s_{x}, s_{y}\right) $ is: $$ \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} = \begin{pmatrix} s_{x} & 0 &0\\0 & s_{y} & 0\\0 & 0 & 1\end{pmatrix} $$ For a reflexion around the $x$ axis we are looking for $\mathbf{A}$ such that: $$ \begin{pmatrix} x^{\prime}\\y^{\prime}\\z^{\prime} \end{pmatrix} = \begin{pmatrix} x\\-y\\z \end{pmatrix} = \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} . \begin{pmatrix} x\\y\\z \end{pmatrix} $$ $$ \begin{aligned} x^{\prime} &= x\\ x &= a . x + b . y + c . z\\\\ \Rightarrow a &= 1 \text{ and }\\ b &= 0 \text{ and }\\ c &= 0 \end{aligned} $$ $$ \begin{aligned} y^{\prime} &= -y\\ -y &= d . x + e . y + f . z\\\\ \Rightarrow d &= 0 \text{ and }\\ e &= -1 \text{ and }\\ f &= 0 \end{aligned} $$ $$ \begin{aligned} z^{\prime} &= z\\ z &= g . x + h . y + i . z\\\\ \Rightarrow g &= 0 \text{ and }\\ h &= 0 \text{ and }\\ i &= 1 \end{aligned} $$ The transformation matrix to reflect around the $x$ axis is: $$ \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} = \begin{pmatrix} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 1 \end{pmatrix} $$ For the reflexion around the $y$ axis we are looking for $\mathbf{A}$ such that: $$ \begin{aligned} x^{\prime} &= -x\\ -x &= a . x + b . y + c . z\\\\ \Rightarrow a &= -1 \text{ and }\\ b &= 0 \text{ and }\\ c &= 0 \end{aligned} $$ $$ \begin{aligned} y^{\prime} &= y\\ y &= d . x + e . y + f . z\\\\ \Rightarrow d &= 0 \text{ and }\\ e &= 1 \text{ and }\\ f &= 0 \end{aligned} $$ $$ \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} = \begin{pmatrix} -1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} $$ Well, I'm a bit lazy here :hugging: You see the pattern, right? Third line always the same, third column always the same. $$ \begin{aligned} \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} &= \begin{pmatrix} 1 & \tan \alpha & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}\\\\ &= \begin{pmatrix} 1 & k_{x} & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}\\\\ & \text{where } k \text{ is the shearing constant} \end{aligned} $$ $$ \begin{aligned} \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} &= \begin{pmatrix} 1 & 0 & 0\\ \tan \beta & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}\\\\ &= \begin{pmatrix} 1 & 0 & 0\\ k_{y} & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}\\\\ & \text{where } k \text{ is the shearing constant} \end{aligned} $$ Same pattern, basically we just take the $2 \times 2$ rotation matrix and add one row and one column whose entries are $0$, $0$ and $1$. $$ \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta & 0\\ \sin \theta & \cos \theta & 0\\ 0 & 0 & 1 \end{pmatrix} $$ But you can do the math, if you want :stuck_out_tongue_winking_eye: And now it gets interesting, because we can define translations as $3 \times 3$ matrices multiplication! $$ \begin{pmatrix} x^{\prime}\\y^{\prime}\\z^{\prime} \end{pmatrix} = \begin{pmatrix} x+t_{x}\\y+t_{y}\\z \end{pmatrix} = \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} . \begin{pmatrix} x\\y\\z \end{pmatrix} $$ $$ \begin{aligned} x^{\prime} &= x + t_{x} \\ x + t_{x} &= a . x + b . y + c . z\\\\ \Rightarrow a &= 1 \text{ and }\\ b &= 0 \text{ and }\\ c &= t_{x} \end{aligned} $$ $$ \begin{aligned} y^{\prime} &= y + t_{y}\\ y + t_{y} &= d . x + e . y + f . 
z\\\\ \Rightarrow d &= 0 \text{ and }\\ e &= 1 \text{ and }\\ f &= t_{y} \end{aligned} $$ The $3 \times 3$ translation matrix for the translation $ \left(t_{x}, t_{y}\right) $ is: $$ \begin{pmatrix} a & b & c\\d & e & f\\g & h & i\end{pmatrix} = \begin{pmatrix} 1 & 0 & t_{x}\\0 & 1 & t_{y}\\0 & 0 & 1\end{pmatrix} $$ Obviously, you won't have to go into all of these algebra stuff each time you want to know what is the matrix you need to apply in order to do your transformations. You can just use the following: Translation matrix: $\begin{pmatrix}1 & 0 & t_{x}\\0 & 1 & t_{y}\\0 & 0 & 1\end{pmatrix}$ Scaling matrix: $\begin{pmatrix}s_{x} & 0 & 0\\0 & s_{y} & 0\\0 & 0 & 1\end{pmatrix}$ Shear matrix: $\begin{pmatrix}1 & \tan \alpha & 0\\\tan \beta & 1 & 0\\0 & 0 & 1\end{pmatrix} = \begin{pmatrix}1 & k_{x} & 0\\k_{y} & 1 & 0\\0 & 0 & 1\end{pmatrix}$ Rotation matrix: $\begin{pmatrix}\cos \theta & -\sin \theta & 0\\\sin \theta & \cos \theta & 0\\0 & 0 & 1\end{pmatrix}$ That's neat! Now you can define your matrices easily, plus you know how it works. One last thing: all the transformations we've seen are centered around the origin. How do we apply what we know in order to, for instance, zoom on a specific point which is not the origin, or rotate an object in place, around its center? The answer is composition: We must compose our transformations by using several other transformations. Imagine you have a shape, like a square for instance, and you want to zoom in at the center of the square, to mimic a pinch-zoom behaviour :mag: This transformation is composed of the following sequence: move anchor point to origin: $ \left( -t_{x}, -t_{y} \right) $ scale by $ \left( s_{x}, s_{y} \right) $ move back anchor point: $ \left( t_{x}, t_{y} \right) $ Where $t$ is the anchor point of our scaling transformation (the center of the square). Our transformations are defined by the first translation matrix $ \mathbf{C} $, the scaling matrix $ \mathbf{B} $, and the last translation matrix $ \mathbf{A} $. $$ \mathbf{C} = \begin{pmatrix} 1 & 0 & -t_{x} \\ 0 & 1 & -t_{y} \\ 0 & 0 & 1 \end{pmatrix} \text{ , } \mathbf{B} = \begin{pmatrix} s_{x} & 0 & 0 \\ 0 & s_{y} & 0 \\ 0 & 0 & 1 \end{pmatrix} \text{ and } \mathbf{A} = \begin{pmatrix} 1 & 0 & t_{x} \\ 0 & 1 & t_{y} \\ 0 & 0 & 1 \end{pmatrix} $$ Because matrix multiplication is non-commutative, the order matters, so we will apply them in reverse order (hence the reverse naming order). The composition of these transformations gives us the following product: $$ \begin{aligned} \mathbf{A} . \mathbf{B} . \mathbf{C} &= \begin{pmatrix} 1 & 0 & t_{x} \\ 0 & 1 & t_{y} \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} s_{x} & 0 & 0 \\ 0 & s_{y} & 0 \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} 1 & 0 & -t_{x} \\ 0 & 1 & -t_{y} \\ 0 & 0 & 1 \end{pmatrix}\\\\ &= \begin{pmatrix} 1 & 0 & t_{x} \\ 0 & 1 & t_{y} \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} s_{x} & 0 & -s_{x}.t_{x} \\ 0 & s_{y} & -s_{y}.t_{y} \\ 0 & 0 & 1 \end{pmatrix}\\\\ \mathbf{A} . \mathbf{B} . \mathbf{C} &= \begin{pmatrix} s_{x} & 0 & -s_{x}.t_{x} + t_{x} \\ 0 & s_{y} & -s_{y}.t_{y} + t_{y} \\ 0 & 0 & 1 \end{pmatrix} \end{aligned} $$ Suppose we have the following points representing a square: $\begin{pmatrix}x_{1} & x_{2} & x_{3} & x_{4}\\y_{1} & y_{2} & y_{3} & y_{4}\\1 & 1 & 1 & 1\end{pmatrix} = \begin{pmatrix}2 & 4 & 4 & 2\\1 & 1 & 3 & 3\\1 & 1 & 1 & 1\end{pmatrix}$ Pinch-zoom four points demo And we want to apply a 2x zoom focusing on its center. 
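Before grinding through the arithmetic, here is a sketch in plain JavaScript (the mul3x3 and scaleAbout helpers are mine) that builds the same composed matrix numerically; the hand calculation below should agree with it:

// Multiply two 3x3 matrices: entry (i, j) of A.B is the sum over k of A[i][k] * B[k][j]
function mul3x3(a, b) {
  const r = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  for (let i = 0; i < 3; i++)
    for (let j = 0; j < 3; j++)
      for (let k = 0; k < 3; k++)
        r[i][j] += a[i][k] * b[k][j];
  return r;
}

// Scale by (sx, sy) around the anchor point (tx, ty): the product A.B.C from above
function scaleAbout(sx, sy, tx, ty) {
  const A = [[1, 0, tx], [0, 1, ty], [0, 0, 1]];   // move the anchor point back
  const B = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]];   // scale about the origin
  const C = [[1, 0, -tx], [0, 1, -ty], [0, 0, 1]]; // move the anchor point to the origin
  return mul3x3(A, mul3x3(B, C));
}

// 2x zoom around the square's center (3, 2)
console.log(scaleAbout(2, 2, 3, 2)); // [[2, 0, -3], [0, 2, -2], [0, 0, 1]]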
The new coordinates will be: $$ \begin{aligned} \begin{pmatrix} x_{1}^{\prime} & x_{2}^{\prime} & x_{3}^{\prime} & x_{4}^{\prime}\\ y_{1}^{\prime} & y_{2}^{\prime} & y_{3}^{\prime} & y_{4}^{\prime}\\ 1 & 1 & 1 & 1 \end{pmatrix} &= \begin{pmatrix} s_{x} & 0 & -s_{x}.t_{x} + t_{x} \\ 0 & s_{y} & -s_{y}.t_{y} + t_{y} \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} x_{1} & x_{2} & x_{3} & x_{4}\\ y_{1} & y_{2} & y_{3} & y_{4}\\ 1 & 1 & 1 & 1 \end{pmatrix}\\\\ &= \begin{pmatrix} 2 & 0 & -2.3 + 3 \\ 0 & 2 & -2.2 + 2 \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} 2 & 4 & 4 & 2\\ 1 & 1 & 3 & 3\\ 1 & 1 & 1 & 1 \end{pmatrix}\\\\ &= \begin{pmatrix} 2 & 0 & -3 \\ 0 & 2 & -2 \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} 2 & 4 & 4 & 2\\ 1 & 1 & 3 & 3\\ 1 & 1 & 1 & 1 \end{pmatrix}\\\\ \begin{pmatrix} x_{1}^{\prime} & x_{2}^{\prime} & x_{3}^{\prime} & x_{4}^{\prime}\\ y_{1}^{\prime} & y_{2}^{\prime} & y_{3}^{\prime} & y_{4}^{\prime}\\ 1 & 1 & 1 & 1 \end{pmatrix} &= \begin{pmatrix} 1 & 5 & 5 & 1\\ 0 & 0 & 4 & 4\\ 1 & 1 & 1 & 1 \end{pmatrix} \end{aligned} $$ Now imagine you have an image in a view, the origin is not a the center of the view, it is probably at the top-left corner (implementations may vary), but you want to rotate the image at the center of the view :upside_down: rotate by $ \theta $ Where $t$ is the anchor point of our rotation transformation. Our transformations are defined by the first translation matrix $ \mathbf{C} $, the rotation matrix $ \mathbf{B} $, and the last translation matrix $ \mathbf{A} $. $$ \mathbf{C} = \begin{pmatrix} 1 & 0 & -t_{x} \\ 0 & 1 & -t_{y} \\ 0 & 0 & 1 \end{pmatrix} \text{ , } \mathbf{B} = \begin{pmatrix} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \text{ and } \mathbf{A} = \begin{pmatrix} 1 & 0 & t_{x} \\ 0 & 1 & t_{y} \\ 0 & 0 & 1 \end{pmatrix} $$ $$ \begin{aligned} \mathbf{A} . \mathbf{B} . \mathbf{C} &= \begin{pmatrix} 1 & 0 & t_{x} \\ 0 & 1 & t_{y} \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} 1 & 0 & -t_{x} \\ 0 & 1 & -t_{y} \\ 0 & 0 & 1 \end{pmatrix}\\\\ &= \begin{pmatrix} 1 & 0 & t_{x} \\ 0 & 1 & t_{y} \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} \cos \theta & -\sin \theta & -\cos \theta.t_{x} +\sin \theta.t_{y} \\ \sin \theta & \cos \theta & -\sin \theta.t_{x} -\cos \theta.t_{y} \\ 0 & 0 & 1 \end{pmatrix}\\\\ \mathbf{A} . \mathbf{B} . \mathbf{C} &= \begin{pmatrix} \cos \theta & -\sin \theta & -\cos \theta.t_{x} +\sin \theta.t_{y} + t_{x} \\ \sin \theta & \cos \theta & -\sin \theta.t_{x} -\cos \theta.t_{y} + t_{y} \\ 0 & 0 & 1 \end{pmatrix} \end{aligned} $$ Rotate image (four points demo) And we want to apply a rotation of $ \theta = 90^{\circ} $ focusing on its center. $$ \begin{aligned} \begin{pmatrix} x_{1}^{\prime} & x_{2}^{\prime} & x_{3}^{\prime} & x_{4}^{\prime}\\ y_{1}^{\prime} & y_{2}^{\prime} & y_{3}^{\prime} & y_{4}^{\prime}\\ 1 & 1 & 1 & 1 \end{pmatrix} &= \begin{pmatrix} \cos \theta & -\sin \theta & -\cos \theta.t_{x} +\sin \theta.t_{y} + t_{x} \\ \sin \theta & \cos \theta & -\sin \theta.t_{x} -\cos \theta.t_{y} + t_{y} \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} x_{1} & x_{2} & x_{3} & x_{4}\\ y_{1} & y_{2} & y_{3} & y_{4}\\ 1 & 1 & 1 & 1 \end{pmatrix}\\\\ &= \begin{pmatrix} 0 & -1 & -0.3+1.2+3 \\ 1 & 0 & -1.3-0.2+2 \\ 0 & 0 & 1 \end{pmatrix} . \begin{pmatrix} 2 & 4 & 4 & 2\\ 1 & 1 & 3 & 3\\ 1 & 1 & 1 & 1 \end{pmatrix}\\\\ &= \begin{pmatrix} 0 & -1 & 5 \\ 1 & 0 & -1 \\ 0 & 0 & 1 \end{pmatrix} . 
\begin{pmatrix} 2 & 4 & 4 & 2\\ 1 & 1 & 3 & 3\\ 1 & 1 & 1 & 1 \end{pmatrix}\\\\ \begin{pmatrix} x_{1}^{\prime} & x_{2}^{\prime} & x_{3}^{\prime} & x_{4}^{\prime}\\ y_{1}^{\prime} & y_{2}^{\prime} & y_{3}^{\prime} & y_{4}^{\prime}\\ 1 & 1 & 1 & 1 \end{pmatrix} &= \begin{pmatrix} 4 & 4 & 2 & 2\\ 1 & 3 & 3 & 1\\ 1 & 1 & 1 & 1 \end{pmatrix} \end{aligned} $$ I want to extend my warmest thanks to the following people, who helped me during the review process of this article by providing helpful feedback and advice:
Igor Laborie (@ilaborie)
Hadrien Toma (@HadrienToma)
All the Geogebra files I've used to generate the graphics and gifs
Khan Academy algebra course on matrices
A course on "Affine Transformation" at The University of Texas at Austin
A course on "Composing Transformations" at The Ohio State University
A blogpost on "Rotating images" by Nick Berry
A Youtube video course on "The Rotation Matrix" by Michael J. Ruiz
Wikipedia on Homogeneous coordinates
A blogpost on "Explaining Homogeneous Coordinates & Projective Geometry" by Tom Dalling
A blogpost on "Homogeneous Coordinates" by Song Ho Ahn
A Youtube video course on "2D transformations and homogeneous coordinates" by Tarun Gehlot
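As a final numerical check of the two worked examples, here is a small NumPy sketch. It is an illustration added here rather than the article's Android/Java code, and the helper names (`translate`, `scale`, `rotate`) exist only for this example.

```python
# Verify the scaling-about-an-anchor and rotation-about-an-anchor examples.
import numpy as np

# Rectangle corners in homogeneous coordinates, one column per point.
pts = np.array([[2, 4, 4, 2],
                [1, 1, 3, 3],
                [1, 1, 1, 1]], dtype=float)

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

tx, ty = 3, 2   # anchor point t

# Scale by 2 about (3, 2): A.B.C with C = translate(-t), B = scale, A = translate(+t)
M_scale = translate(tx, ty) @ scale(2, 2) @ translate(-tx, -ty)
print(M_scale @ pts)          # columns (1,0), (5,0), (5,4), (1,4)

# Rotate by 90 degrees about (3, 2)
M_rot = translate(tx, ty) @ rotate(np.pi / 2) @ translate(-tx, -ty)
print(np.round(M_rot @ pts))  # columns (4,1), (4,3), (2,3), (2,1)
```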
What can be known about the formulas for energy only from the fact that it is conserved? The question is to figure out how the energy can be derived knowing just one thing: There is a quantity called Energy that is conserved over time. The goal is to get an equation that somehow implies the basic formulas for kinetic energy and potential energy, $\frac{1}{2}mv^2$ and $mgh$. So how can we get this with just a rudimentary knowledge of physics and algebra, and the first principle (stated above)? energy energy-conservation spring potential-energy Greg $\begingroup$ Why do you want to restrict your methods to being based on a rudimentary knowledge of physics and algebra? Crippling your deriving ability in this way isn't what we do here, at least not without a reason. $\endgroup$ – David Z♦ Jun 14 '13 at 8:51 $\begingroup$ I did this because I wanted the derivation to be understood by students of physics at any level, especially the ones just starting to learn physics, who are most likely confused about the definition of kinetic and potential energy. I was looking for the most basic and simple explanation for energy that doesn't depend on a definition of work, the work-energy theorem, or anything other than the statement that energy is a quantity that is conserved. $\endgroup$ – Greg Jun 14 '13 at 13:50 $\begingroup$ OK, well then I'd say (just personally speaking) the question might be a bit better if it were phrased more like "what can be known about the formulas for energy only from the fact that it is conserved by gravity and springs?" or something like that. $\endgroup$ – David Z♦ Jun 14 '13 at 15:35 $\begingroup$ Given the educational goal explained in the comment, you might be interested in this book lightandmatter.com/cp , which is free online and which I'm the author of. It introduces energy before force or work. $\endgroup$ – Ben Crowell Jun 15 '13 at 3:43 Let $E$ denote a quantity that does not change over time (from the first principle). Consider a ball with mass $m$ dropped from a height $h$. As the ball drops, its speed changes due to the gravitational acceleration $g$, reaching a final value $v$ at impact. Thus, we can infer that the quantity $E$ depends on these 4 parameters: $$E(m,H,g,V)$$ where $H$ is a variable height and $V$ is a variable velocity. Now, consider the ball during the instant that it's dropped. It has height $h$ and $V=0$. Then consider the ball right before it hits the ground. It has $H=0$ and velocity $v$. Thus, the velocity $v$ and height $h$ are most likely not multiplied by each other, as it would give a value of $0$ both at the top and at the bottom. So, from the first principle:$$E_i(m,h,g,0)=E_f(m,0,g,v)$$ $$E_i(m,h,g)=E_f(m,g,v)$$ The initial energy, which has no dependence on velocity, and complete dependence on the object's height above the ground, can be called the potential energy. Likewise, the final energy, which is completely dependent on the object's velocity, may be called the kinetic energy. We see that $m$ has units of mass $\left(M\right)$, $h$ has units of length $\left(L\right)$, $g$ has units of length over time squared $\left(\frac{L}{T^2}\right)$, $v$ has units of length over time $\left(\frac{L}{T}\right)$.
So we can use dimensional analysis to figure out how these parameters most likely fit into the equation: $$\alpha m^ah^bg^c=\beta m^dg^ev^f$$ $$M^aL^b \left( \frac{L}{T^2} \right)^c=M^d\left( \frac{L}{T^2} \right)^e \left( \frac{L}{T} \right)^f$$ $$M^aL^{b+c}T^{-2c}=M^dL^{e+f}T^{-2e-f}$$ ( $\alpha$ and $\beta$ are constants of proportionality, and are dimensionless.) We end up with the following system of equations, which has 3 equations and 6 unknowns: $$a=d$$ $$b+c=e+f$$ $$-2c=-2e-f$$ If we let $a=w$, $b=u$, and $c=v$, then we have: $a=w$ $b=u$ $c=v$ $d=w$ $e=v-u$ $f=2u$ So we have the following general equation:$$\alpha m^wh^ug^v=\beta m^wg^{v-u}v^{2u}$$ It isn't clear at this point whether or not the mass $m$ is relevant to the energy; since it appears on both sides of the equation, it's safe to remove it for now. Also, we could remove the constants $\alpha$ and $\beta$, keeping in mind that they're still there. So the reduced form of the equation is: $$h^ug^v=g^{v-u}v^{2u}$$ It's clear that for any $u$ and $v$ that we choose, the equation would satisfy our dimensional analysis. So, the only thing left to do is input values for $u$ and $v$ and see what kind of equations we can come up with. For $u=0$, $v=1$ $$g=g$$ For $u=1$, $v=0$ $$h=g^{-1}v^2$$ $$hg=v^2$$ For $u=1$, $v=1$ $$hg=v^2$$ For $u=1$, $v=2$ $$hg^2=gv^2$$ $$hg=v^2$$ For $u=2$, $v=2$ $$h^2g^2=v^4$$ $$hg=v^2$$ Interestingly, for any $u$ and $v$ that we choose, the equation remains unchanged. (I was shocked when I saw this!) At this point, it isn't clear whether the potential energy, the kinetic energy, or both depend on the gravitational acceleration. At this point I would like to move on to another example where a block of mass $m$ and velocity $v$ is on a frictionless surface and compresses a spring with spring constant $k$ by an amount $x$. We can infer that $E$ depends on these 4 parameters: $$E(m,V,k,X)$$ where $V$ is a variable velocity and $X$ is a variable amount of compression in the spring. Now, consider the block before it begins compressing the spring. It has velocity $v$ and $X=0$. Then consider the block when it fully compresses the spring. It has $v=0$ and compression in the spring is $x$. By the same logic as above, I deduce that the velocity and compressed length cannot be multiplied by each other at any instant in time to get the quantity $E$. We can write the following equation from the first principle: $$E_i(m,v,k,0)=E_f(m,0,k,x)$$ $$E_i(m,v,k)=E_f(m,k,x)$$ The final quantity may be called the spring potential energy. The initial quantity in this equation is equivalent to what we called the kinetic energy in the first example. However, this quantity does not depend on $g$ while the term that we called the kinetic energy in the first example does not depend on $k$. Thus we can infer that the kinetic energy depends on neither $g$ nor $k$. That is: $$E_i(m,v)=E_f(m,k,x)$$ We can use dimensional analysis as we did earlier to figure out the proper exponents. $$\beta m^av^b=\gamma m^ck^dx^e$$ $$M^a \left( \frac{L}{T} \right)^b = M^c \left( \frac{M}{T^2} \right)^d L^e$$ $$M^aL^bT^{-b}=M^{c+d}L^eT^{-2d}$$ $$a=c+d$$ $$b=e$$ $$-b=-2d$$ Let $c=v$ and $d=u$, $a=v+u$ $b=2u$ $d=u$ $e=2u$ So we have:$$m^{v+u}v^{2u}=m^vk^ux^{2u}$$ Let $u=1$, $v=1$ $$m^2v^2=mkx^2$$ $$mv^2=kx^2$$ For other values of $u$ and $v$ we get the same equation. Notice how the mass must always exist on at least one side of the equation. 
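To check the two exponent systems above without trying values by hand, one can give them to a computer algebra system. The following SymPy sketch is my own addition, not part of the original answer; the symbol names mirror the exponents used above.

```python
# Solve the dimensional-analysis systems symbolically.
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f')

# Gravity case: M^a L^(b+c) T^(-2c) = M^d L^(e+f) T^(-2e-f)
grav = sp.solve([sp.Eq(a, d), sp.Eq(b + c, e + f), sp.Eq(-2*c, -2*e - f)],
                [d, e, f], dict=True)
print(grav)    # [{d: a, e: c - b, f: 2*b}]  i.e. e = v - u, f = 2u with b = u, c = v

# Spring case: M^a L^b T^(-b) = M^(c+d) L^e T^(-2d)
spring = sp.solve([sp.Eq(a, c + d), sp.Eq(b, e), sp.Eq(-b, -2*d)],
                  [a, b, e], dict=True)
print(spring)  # [{a: c + d, b: 2*d, e: 2*d}]  matching a = v + u, b = e = 2u
```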
At this point, we know that the kinetic energy depends on the speed of an object, and may or may not depend on the mass. The potential energy depends on the height above the ground, the acceleration due to gravity, and may or may not depend on the mass. The spring potential energy depends on the spring constant and the amount compressed, and may or may not depend on the mass. So if potential energy does not depend on mass, we have: $E_{\text{potential}}= \alpha gh$ $E_{\text{kinetic}}=\beta v^2$ $E_{\text{spring potential}}=\gamma \frac{kx^2}{m}$ If potential energy depends on the first power of mass, we have: $E_{\text{potential}}= \alpha mgh$ $E_{\text{kinetic}}=\beta mv^2$ $E_{\text{spring potential}}=\gamma kx^2$ If potential energy depends on the second power of mass, we have: $E_{\text{potential}}= \alpha m^2gh$ $E_{\text{kinetic}}=\beta m^2v^2$ $E_{\text{spring potential}}=\gamma mkx^2$ We use the one where the potential energy depends on the first power of mass, but it seems to me, based on this preliminary analysis, that any of these 3 sets of equations may be defined as the 'energy'. Of course, we haven't considered the fact that the spring potential energy does not depend at all on the mass of the object compressing it. This would restrict our formulation of the potential, kinetic, and spring potential energies to the second version above. Also, we haven't figured out the values of the constants of proportionality, $\alpha$, $\beta$, and $\gamma$. The best I could do was find the ratio of $\alpha$ to $\beta$, which is 2:1, using kinematics. Still, I think we've gone pretty far, considering that we've only needed to use some intuition and a little algebra ;) In your dimensional analysis, you keep trying values for $u$ and $v$. For example, $u=1$ and $v=1$. Why don't you first simplify the equations on both sides adequately? You'll get to the answer without the effort of trying different values. – fffred Jun 14 '13 at 18:21 This is a nice question, but I think it needs to be refined quite a bit. From the given information, the conserved quantity we end up with could be mass, energy, angular momentum, or electric charge. Also, the result for something like energy can't be uniquely defined based on the given information, since, e.g., a relativistic expression would also be consistent with all the given information. Or we could have $E=bE_o$, where $E$ is the quantity we end up defining, $b$ is a constant, and $E_o$ is the quantity defined in textbooks. Based on these examples, we clearly need some more postulates, maybe something like the following:
1. The conserved quantity has to be additive and has to describe the state of the system (necessary for any conservation law).
2. The domain of applicability is mechanics (otherwise we could be talking about conservation of charge).
3. No new unitful constants will appear (or else we could have possibilities like the relativistic expressions, which contain $c$).
4. Energy is a scalar.
5. Conservation of energy is equally valid in all inertial frames.
6. There is a gravitational field $\textbf{g}$, which is static and uniform and gives the acceleration of free-falling objects.
7. Time-reversal symmetry holds.
From assumption #2, we expect these expressions to involve $m$ and $\textbf{v}$. From assumption #6, the expressions should involve $\textbf{g}$ and the position vector $\textbf{r}$, and from #5 the choice of an origin of coordinates for defining $\textbf{r}$ must be arbitrary. By #1, the only powers of $m$ that can appear in any term are 0 or 1.
By #1, things like the acceleration vector can't appear, since they don't describe the state of a system (e.g., you can impose an initial position and velocity on a baseball, but it doesn't "remember" its initial acceleration). The easiest scalars to form from these ingredients are 1, $m$, $m\textbf{v}\cdot\textbf{v}$, $m\textbf{g}\cdot\textbf{g}$, and $m\textbf{r}\cdot\textbf{r}$. The constant 1 is uninteresting because it doesn't affect the predictions of the conservation law. If mass is a fixed property of our particles, then the same applies to $mg^2$. We can't have $mr^2$, or any expression involving $r^2$, without violating 5, since the choice of origin for $\textbf{r}$ is arbitrary. The only winner so far is $mv^2$, and this makes us suspect that energy should be unchanged under time-reversal (not odd under time-reversal, which would also be consistent with #7). With mixed dot products we can get $m\textbf{v}\cdot\textbf{g}$, $m\textbf{g}\cdot\textbf{r}$, and $m\textbf{r}\cdot\textbf{v}$. By #7, energy can't involve addition of terms that change under time-reversal and terms that don't. Therefore $m\textbf{g}\cdot\textbf{r}$ is the only one of these that is going to be OK. This has the same units as $mv^2$, which is consistent with #3. Putting together the two terms that look interesting so far, we have something of the form $\alpha mv^2+\beta m\textbf{g}\cdot\textbf{r}$. By #6, we need $\alpha/\beta=-1/2$. For consistency with arbitrary historical convention, let's take $\alpha=1/2$ and $\beta=-1$. Having come this far, we can already do a ton of physics. We can successfully predict the motion of projectiles and the behavior of particles in elastic collisions. By transforming from one frame to another (#5) and requiring collisions to conserve energy in both frames, we can prove conservation of momentum. We could also build up other scalars through scalar triple products, but to keep them from vanishing identically they'd need to be of the form $\textbf{v}\cdot(\textbf{g}\times\textbf{r})$ (or similar expressions in which these three variables are permuted, but those are identical to this one up to a sign). But this is odd under time-reversal, so it doesn't work. Examples like $m(\textbf{v}\cdot\textbf{v})(\textbf{v}\cdot\textbf{v})$ violate #3. Ben Crowell
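As a quick symbolic sanity check (my addition, not part of the answer above), the following SymPy sketch verifies that $\frac{1}{2}m\,\textbf{v}\cdot\textbf{v} - m\,\textbf{g}\cdot\textbf{r}$ is indeed constant along free-fall trajectories in a static, uniform field $\textbf{g}$.

```python
# Check that E = (1/2) m v.v - m g.r is conserved for motion with constant
# acceleration g, i.e. r(t) = r0 + v0*t + (1/2)*g*t**2.
import sympy as sp

t, m = sp.symbols('t m', positive=True)
g = sp.Matrix(sp.symbols('g_x g_y g_z'))
r0 = sp.Matrix(sp.symbols('x0 y0 z0'))
v0 = sp.Matrix(sp.symbols('vx0 vy0 vz0'))

r = r0 + v0*t + g*t**2/2      # position under constant acceleration g
v = sp.diff(r, t)             # velocity

E = sp.Rational(1, 2)*m*v.dot(v) - m*g.dot(r)
print(sp.simplify(sp.diff(E, t)))   # prints 0: E is constant along the motion
```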
A Gap Theorem for Ends of Complete Manifolds
Proceedings of the American Mathematical Society 123(1):247-247 DOI:10.1090/S0002-9939-1995-1213856-8
Mingliang Cai, Tobias Holck Colding, Dagang Yang
Volume 123, Number 1, January 1995
MINGLIANG CAI, TOBIAS HOLCK COLDING, AND DAGANG YANG
(Communicated by Peter Li)
Received by the editors April 6, 1993. 1991 Mathematics Subject Classification. Primary 53C20. The third author was partially supported by National Science Foundation grant DMS 90-03524.
Abstract. Let $(M^n, o)$ be a pointed open complete manifold with Ricci curvature bounded from below by $-(n-1)\Lambda^2$ (for $\Lambda \ge 0$) and nonnegative outside the ball $B(o, a)$. It has recently been shown that there is an upper bound for the number of ends of such a manifold which depends only on $\Lambda a$ and the dimension $n$ of the manifold $M^n$. We will give a gap theorem in this paper which shows that there exists an $\epsilon = \epsilon(n) > 0$ such that $M^n$ has at most two ends if $\Lambda a \le \epsilon(n)$. We also give examples to show that, in dimension $n \ge 4$, such manifolds in general do not carry any complete metric with nonnegative Ricci curvature for any $\Lambda a > 0$.
The Cheeger-Gromoll splitting theorem states that in a complete manifold of nonnegative Ricci curvature, a line splits off isometrically, i.e., any nonnegatively Ricci curved $M^n$ is isometric to a Riemannian product $N^k \times \mathbb{R}^{n-k}$, where $N^k$ does not contain a line (cf. [CG]). In particular, such a manifold has at most two ends. Recently, the first-named author and independently Li and Tam have shown that a complete manifold with nonnegative Ricci curvature outside a compact set has at most finitely many ends [C, LT]. At about the same time, Liu has also given a proof of the same theorem with an additional condition that there is a lower bound on sectional curvature [L], which was removed shortly after the appearance of [C]. In this paper, we consider manifolds with nonnegative Ricci curvature outside a compact set and prove the following gap theorem. Theorem. Given $n > 0$, there exists an $\epsilon = \epsilon(n) > 0$ such that for all pointed open complete manifolds $(M^n, o)$ with Ricci curvature bounded from below by $-(n-1)\Lambda^2$ (for $\Lambda \ge 0$) and nonnegative outside the ball $B(o, a)$, if $\Lambda a \le \epsilon(n)$, then $M^n$ has at most two ends. A natural question one would like to ask is whether this theorem can be improved so that $M^n$ must carry a complete metric with nonnegative Ricci curvature. Indeed, it is easy to see by volume comparison that the answer to the above question is affirmative in dimension 2 since the Euler number of such
a 2-dimensional complete manifold is an upper bound for the total curvature integral. However, such a gap theorem is the best one can have in dimensions higher than 3 as illustrated by the following examples. For any $\epsilon > 0$, by gluing two sharp cones together at the singular point, it is easy to construct a complete metric on $\mathbb{R} \times S^{n-2}$, $n \ge 4$, with Ricci curvature bounded from below by $-\epsilon$ and with nonnegative sectional curvature away from a metric ball of radius 1. By applying the metric surgery techniques as in [SY] to the manifold $S^1 \times \mathbb{R} \times S^{n-2}$, one obtains an $n$-dimensional complete manifold $M$ of infinite homotopy type with exactly two ends and with Ricci curvature bounded from below by $-\epsilon$ and with nonnegative Ricci curvature outside a metric ball of radius 1. $M$ certainly cannot carry any complete metric with nonnegative Ricci curvature since the Cheeger-Gromoll splitting theorem implies that a nonnegatively Ricci curved manifold with exactly two ends must split isometrically into the product of $\mathbb{R}$ with a closed manifold and therefore has finite homotopy type. The above examples are not valid in dimension 3 since the kind of metric surgery lemmas are not available. Therefore, the following problem is of particular interest: Does there exist an $\epsilon > 0$ such that if $(M, o)$ is a pointed noncompact complete 3-dimensional manifold with Ricci curvature bounded from below by $-\epsilon$ and nonnegative outside the unit metric ball $B(o, 1)$, then $M$ carries a complete metric with nonnegative Ricci curvature? 2. Proof of the theorem There are various (but equivalent) definitions of an end of a manifold. For the sake of our argument, we use the following (compare with [A]). Definition 2.1. Two rays $\gamma_1$ and $\gamma_2$ starting at the base point $o$ are called cofinal, if for any $r > 0$ and all $t > r$, $\gamma_1(t)$ and $\gamma_2(t)$ lie in the same component of $M - B(o, r)$. An equivalence class of cofinal rays is called an end of $M$. We will denote by $[\gamma]$ the equivalence class of $\gamma$. Notice that the above definition does not depend on the base point $o$ and the particular complete metric on $M$. Thus the number of ends of $M$ is a topological invariant of $M$. The following lemma is a refined version of Proposition 2.2 in [C] and can be proved by the same argument. Lemma 2.2. Let $M$ be as in the theorem. If $[\gamma_1]$ and $[\gamma_2]$ are two different ends of $M$, then for any $t_1, t_2 > 0$, $d(\gamma_1(t_1), \gamma_2(t_2)) \ge t_1 + t_2 - 2a$. In what follows, let $M^n$ be as in the theorem. By scaling, we may assume that $\mathrm{Ric}(M^n) \ge -(n-1)$. Following Abresch and Gromoll in [AG], let $\phi(x)$ be the function defined on $B_{-1}(o, 1) - \{o\}$, the truncated unit ball in the hyperbolic space $H^n$, with the following property: $\Delta \phi = 2(n-1)$, $\phi|_{\partial B_{-1}(o, 1)} = 0$. It is easy to see that $\phi(x) = G(d(o, x))$, where $G$ is the corresponding radial function. Given a continuous function $u: M \to \mathbb{R}$ and $x \in M$, a continuous function $u_x: M \to \mathbb{R}$ is called an upper barrier of $u$ at $x$ if $u_x(x) = u(x)$ and $u \le u_x$. The following lemma is a slight generalization of Theorem 2.1 in [AG]. Lemma 2.3. Let $M^n$ be a complete Riemannian manifold with Ricci curvature bounded from below by $-(n-1)$.
Then there exist an $\epsilon = \epsilon(n) > 0$ and a $\delta = \delta(n) > 0$ such that $u(x) \le 2 - 2\delta - 4\epsilon$ for all $x \in S(o, 1 - \delta)$ if $u: M^n \to \mathbb{R}$ is a continuous function which satisfies the following properties: (1) $u(o) = 0$, (2) $u \ge -2\epsilon$, (3) $\mathrm{dil}(u) \le 2$, (4) $\Delta u \le 2(n-1)$, where $\mathrm{dil}(u) = \sup_{x \ne y} |u(x) - u(y)|/d(x, y)$ and the last inequality is in the barrier sense, that is, for any $x \in M$ and $a > 0$, there is an upper barrier of $u$ at $x$, $u_{x,a}$, such that $u_{x,a}$ is smooth near $x$ and $\Delta u_{x,a}(x) \le 2(n-1) + a$. Proof. Consider $H(r) = 2r + G(r)$. Notice that $G(1) = 0$ and $G'(1) = 0$. Hence $H(1) = 2$ and $H'(r) > 0$ for $r$ close to 1, and therefore there exists a $c$ such that $0 < c < 1$ and $H(c) < 2$. Now choose $\delta = \delta(n)$ and $\epsilon = \epsilon(n)$ such that (5) $0 < \delta < \frac{1}{2}\min\{2 - H(c), 1 - c\}$ and (6) $0 < \epsilon < \frac{1}{2}\min\{G(1-\delta), 2 - H(c) - 2\delta\}$. Consider the function $v(y) = u(y) - G(d(x,y))$ on the annulus $B(x, 1) \setminus B(x, c)$. The well-known Laplacian comparison theorem for distance functions (cf. [EH]) implies that $\Delta v \le 0$ (in the barrier sense). By the maximum principle [EH], $v$ achieves its minimum on the boundary of the annulus. Since $o$ is an interior point of the domain by (5) and $v(o) = u(o) - G(d(o, x)) = -G(1-\delta) < -2\epsilon$ by (6), there exists a point $z$ on the boundary of the domain such that $v(z) < -2\epsilon$. But on $S(x, 1)$, $v = u - G(1) = u \ge -2\epsilon$ by (2). Hence $z \in S(x, c)$. Combining this with (3) and (6), we conclude that $u(x) \le u(z) + 2c = v(z) + H(c) \le 2 - 2\delta - 4\epsilon$. This proves Lemma 2.3. Remark 2.4. For a ray $\gamma$ in $M$, let $b_\gamma$ be the associated Busemann function, i.e., $b_\gamma(x) = \lim_{t \to \infty} (d(\gamma(t), x) - t)$. It is well known (e.g., see [EH]) that, in the barrier sense, $\Delta b_\gamma \le n - 1$. We are now in position to prove the theorem. Proof of the theorem. Let $M^n$ be as in the theorem with $\Lambda = 1$. Let $\epsilon = \epsilon(n)$ be as in Lemma 2.3. We need to show that when $a \le \epsilon$, $M^n$ has at most two ends. Suppose not. Let $[\gamma_1]$, $[\gamma_2]$, and $[\gamma_3]$ be three different ends. Consider $u := b_{\gamma_1} + b_{\gamma_2}$. We claim that $u$ satisfies the conditions in Lemma 2.3. As a matter of fact, (1) and (3) are clear, (4) is by Remark 2.4, and (2) is a consequence of the triangle inequality and Lemma 2.2. From Lemma 2.3, we conclude that (7) $u(\gamma_3(1 - \delta)) \le 2 - 2\delta - 4\epsilon$. On the other hand, it follows from Lemma 2.2 that for any $t > 0$, $u(\gamma_3(t)) \ge 2t - 4a$. Hence $u(\gamma_3(1 - \delta)) \ge 2(1 - \delta) - 4a \ge 2 - 2\delta - 4\epsilon$. This clearly contradicts (7) and hence completes the proof of the theorem.
[A] U. Abresch, Lower curvature bounds, Toponogov's theorem and bounded topology, Ann. Sci. École Norm. Sup. (4) 18 (1985), 651-670.
[AG] U. Abresch and D. Gromoll, On complete manifolds with nonnegative Ricci curvature, J. Amer. Math. Soc. 3 (1990), 355-374.
[C] M. Cai, Ends of Riemannian manifolds with nonnegative Ricci curvature outside a compact set, Bull. Amer. Math. Soc. (N.S.) 24 (1991), 371-377.
[CG] J. Cheeger and D. Gromoll, The splitting theorem for manifolds of nonnegative Ricci curvature, J. Differential Geom. 6 (1971), 119-128.
[EH] J.-H. Eschenburg and E. Heintze, An elementary proof of the Cheeger-Gromoll splitting theorem, Ann. Global Anal. Geom. 2 (1984), 249-260.
[L] Z. Liu, Ball covering on manifolds with nonnegative Ricci curvature near infinity, Proc. Amer. Math. Soc. 115 (1992), 211-219.
[LT] P. Li and F. Tam, Harmonic functions and the structure of complete manifolds, preprint.
[SY] J. P. Sha and D. G. Yang, Positive Ricci curvature on the connected sums of $S^n \times S^m$, J. Differential Geom. 33 (1991), 127-137.
(M. Cai and T. H. Colding) Department of Mathematics, University of Pennsylvania. Current address, M.
Cai: Department of Mathematics and Computer Science, University of Miami, Coral Gables, Florida 33124. E-mail address: mcai@math.miami.edu. Current address, T. H. Colding: Courant Institute, New York University, New York, New York. (D. Yang) Department of Mathematics, Tulane University, New Orleans, Louisiana.
... He proved that the number of ends of such a manifold is finite and can be estimated from above explicitly; see also Li and Tam [7] for an independent proof by a different method. After that, Cai, Colding and Yang [3] gave a gap theorem for this class of manifolds, which states that there exists an $\epsilon(n)$ such that such a manifold has at most two ends if $KR \le \epsilon(n)$. In this paper we will extend the Cai-Colding-Yang gap theorem to smooth metric measure spaces with the Bakry-Émery Ricci tensor. ... Inspired by the gap theorem of manifolds [3] and the number estimate for ends of SMMSs [18], in this paper we first give a gap theorem for ends of a smooth metric measure space when $\mathrm{Ric}_f \ge 0$ and $f$ has some degeneration outside a compact set. ... The proof of our theorems adapts the argument of Cai-Colding-Yang [3] and it relies on Wei-Wylie's weighted Laplacian comparison [16] and geometric inequalities for two different ends (see Lemmas 2.8 and 2.11), which are derived by locally analyzing splitting theorems. We would like to point out that Cai-Colding-Yang's proof depends on a delicate constructional function $G(r)$, which satisfies a certain Laplacian equation with a Dirichlet boundary condition. ...
Gap theorems for ends of smooth metric measure spaces Bobo Hua Jia-Yong Wu In this paper, we establish two gap theorems for ends of smooth metric measure space $(M^n, g,e^{-f}dv)$ with the Bakry-\'Emery Ricci tensor $\mathrm{Ric}_f\ge-(n-1)$ in a geodesic ball $B_o(R)$ with radius $R$ and center $o\in M^n$. When $\mathrm{Ric}_f\ge 0$ and $f$ has some degeneration (including sublinear growth) outside $B_o(R)$, we show that there exists an $\epsilon=\epsilon(n,\sup_{B_o(1)}|f|)$ such that such a manifold has at most two ends if $R\le\epsilon$. When $\mathrm{Ric}_f\ge\frac 12$ and $f(x)\le\frac 14d^2(x,B_o(R))+c$ for some constant $c>0$ outside $B_o(R)$, we can also get the same gap conclusion.
... The proof in the Riemannian case relies on Busemann functions and on the Laplacian comparison theorem requiring finiteness of space dimension. It is proven in [5] that manifolds allowing for some negative Ricci curvature within a small compact set also have at most two ends. ...
Every Salami has two ends Florentin Münch A salami is a connected, locally finite, weighted graph with non-negative Ollivier Ricci curvature and at least two ends of infinite volume. We show that every salami has exactly two ends and no vertices with positive curvature. We moreover show that every salami is recurrent and admits harmonic functions with constant gradient. The proofs are based on extremal Lipschitz extensions, a variational principle and the study of harmonic functions. Assuming a lower bound on the edge weight, we prove that salamis are quasi-isometric to the line, that the space of all harmonic functions has finite dimension, and that the space of subexponentially growing harmonic functions is two-dimensional. Moreover, we give a Cheng-Yau gradient estimate for harmonic functions on balls.
Ricci curvature and ends of Riemannian orbifolds Liang-Khoon Koh We consider Riemannian orbifolds with Ricci curvature nonnegative outside a compact set and prove that the number of ends is finite.
We also show that if that compact set is small then the Riemannian orbifolds have only two ends. A version of the splitting theorem for orbifolds also follows as an easy consequence.
Alexandrov spaces with nonnegative curvature outside a compact set Alexandrov spaces with nonnegative curvature outside a compact set have number of ends uniformly bounded above. If the compact set is small, the spaces have at most two ends.
Curvature and Function Theory on Riemannian Manifolds Peter Li ... this article is to give a rough outline of the history of a specific point of view in this area, namely, the interplay between the geometry -- primarily the curvature -- and the function theory. Throughout this article, unless otherwise stated, we will assume that M ...
An elementary proof of the Cheeger-Gromoll splitting theorem Jost Eschenburg Ernst Heintze We give a short proof of the Cheeger-Gromoll Splitting Theorem which says that a line in a complete manifold of nonnegative Ricci curvature splits off isometrically. Our proof avoids the existence and regularity theory of elliptic PDE's.
Lower curvature bounds, Toponogov's theorem, and bounded topology ANN SCI ECOLE NORM S Uwe Abresch
On complete manifolds with nonnegative Ricci curvature Detlef Gromoll
Ball covering on manifolds with nonnegative Ricci curvature near infinity Zhong-dong Liu Let M be a complete open Riemannian manifold with nonnegative Ricci curvature outside a compact set B. We show that the following ball covering property (see [LT]) is true provided that the sectional curvature has a lower bound: For a fixed $p_0 \in M$, there exist $N > 0$ and $r_0 > 0$ such that for $r \ge r_0$, there exist $p_1, \ldots, p_k \in \partial B_{p_0}(2r)$, $k \le N$, with $\bigcup^k_{j = 1} B_{p_j}(r) \supset \partial B_{p_0}(2r)$. Furthermore $N$ and $r_0$ depend only on the dimension, the lower bound on the sectional curvature, and the radius of the ball at $p_0$ that contains $B$.
Lower curvature bounds, Toponogov's theorem, and bounded topology. II
Positive Ricci curvature on the connected sums of $S^n \times S^m$ J DIFFER GEOM Ji-Ping Sha
Ends of Riemannian manifolds with nonnegative Ricci curvature outside a compact set We consider complete manifolds with Ricci curvature nonnegative outside a compact set and prove that the number of ends of such a manifold is finite and in particular, we give an explicit upper bound for the number.
The splitting theorem for manifolds of nonnegative Ricci curvature Jeff Cheeger
Gauss Elimination Calculator
The Gauss Elimination Calculator solves a system of three linear equations with real coefficients using the Gaussian elimination algorithm. It is an online algebra tool programmed to determine an ordered triple as a solution to a system of three linear equations. Using this calculator, we will be able to understand how to solve a system of linear equations using the Gauss elimination algorithm. It is necessary to follow the next steps: Enter the twelve coefficients of a system of linear equations in the box. These coefficients must be real numbers. Press the "Generate Work" button to make the computation; the Gauss Elimination Calculator will give an ordered triple $(x,y,z)$ as a solution of a system of three linear equations. Input: a system of three linear equations. Output: three real numbers. How to Find Unknown Variables in the Equations by Gauss Elimination? Gauss elimination, or row reduction, is an algorithm for solving a system of linear equations. When it is carried through to reduced row echelon form, the method is also called Gauss-Jordan elimination. It is represented by a sequence of operations performed on the matrix. The method is named after Carl Friedrich Gauss (1777-1855), although it was known to Chinese mathematicians. Solving a system of linear equations by Gauss elimination amounts to performing row operations on its coefficient matrix. For instance, there is the following connection between a system of three linear equations and its coefficient matrix. $$\begin{align} &a_1x+b_1y+c_1z={ d_1}\\ &a_2x+b_2y+c_2z={ d_2}\\ &a_3x+b_3y+c_3z={ d_3}\\ \end{align} \quad\longmapsto \left( \begin{array}{ccc} {a_1} & b_1 &c_1\\ {a_2} &b_2 &c_2\\ {a_3} &b_3 &c_3\\ \end{array} \right)$$ There are three types of elementary row operations: swapping two rows; multiplying a row by a nonzero number; adding a multiple of one row to another row. The method of Gauss elimination consists of two parts. The first part reduces a given system to row echelon form. From the row echelon form, we can conclude whether the system has no solutions, a unique solution, or infinitely many solutions. The second part uses row operations until the solution is found. Row echelon form satisfies the following properties: the leading coefficient of each row must be $1$; all elements in a column below a leading $1$ must be $0$; all rows that contain only zeros are at the bottom of the matrix. For example, the following matrices are in row echelon form $$\left( \begin{array}{cc} 1 & 5 \\ 0 & 1 \\ \end{array} \right), \quad \left( \begin{array}{cccc} 1 & 1 & 0 & 5 \\ 0 & 1 & 3 & 4 \\ 0&0 & 1 & 2 \\ \end{array} \right), \quad \left( \begin{array}{cccc} 1 & 2 & 3 & 4 \\ 0 & 1 & 3 & 4 \\ 0&0 & 1 & 2 \\ 0&0 & 0 &0 \\ \end{array} \right)$$ A matrix is in reduced row echelon form if, furthermore, in every column containing a leading coefficient, all of the other entries in that column are zero. For instance, the matrices shown below are examples of matrices in reduced row echelon form. $$\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right), \quad \left( \begin{array}{cccc} 1 & 0 & 0 & 7 \\ 0 & 1 & 0 & -2 \\ 0&0 & 1 & 2 \\ \end{array} \right), \quad \left( \begin{array}{cccc} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & -2 \\ 0&0 & 1 & 2 \\ 0&0 & 0 &0 \\ \end{array} \right)$$ An augmented matrix is a matrix obtained by appending the columns of two given matrices.
In the case of solving a system, we need to augment the coefficient matrix and the constant matrix. The vertical line indicates the separation between the coefficient matrix and the constant matrix. So, for the the system of three equations $$\begin{align} &a_1x+b_1y+c_1z={ d_1}\\ &a_2x+b_2y+c_2z={ d_2}\\ &a_3x+b_3y+c_3z={ d_3}\\ \end{align}$$ the augmented matrix is $$\left( \begin{array}{ccc|c} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3&b_3 & c_3 & d_3 \\ \end{array} \right)$$ The number of solutions to a system depends only on the rank of the matrix representing the system and the rank of the corresponding augmented matrix. Based on the Kronecker-Capelli Theorem, any system of three linear equations has no solutions if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables, in this case, if the rank is equal to $3$. For example, let us solve the solution of the system using the Gauss elimination $$\begin{align} &4x+5y+3z={ 10}\\ &3x+6y+7z={ 8}\\ &2x+3y+0z={ 8}\\ \end{align}$$ The coefficients and constant terms of the system give the matrices $$\left( \begin{array}{ccc} 4 & 5 &3\\ 3 &6 &7\\ 2 &3&0\\ \end{array} \right),\quad \left( \begin{array}{c} 10 \\ 8 \\ 8 \\ \end{array} \right)$$ The augmented matrix is $$\left( \begin{array}{ccc|c} 4 & 5 &3&10\\ 3 &6 &7&8\\ 2 &3&0&8\\ \end{array} \right)$$ To solve the system, reduce the augmented matrix to reduced row echelon form in the following way. Divide row $1$ by $4$ ($R_1=\frac {R_1}4)$, to get $$\left( \begin{array}{ccc|c} 1 & \frac 54 &\frac 34&\frac{5}2\\ 3 &6 &7&8\\ 2 &3&0&8\\ \end{array} \right)$$ Subtract row $1$ multiplied by $3$ from row $2$ ($R_2=R_2-3R_1$), to get $$\left( \begin{array}{ccc|c} 1 & \frac 54 &\frac 34&\frac{5}2\\ 0 &\frac 94 &\frac{19}4&\frac 12\\ 2 &3&0&8\\ \end{array} \right)$$ Subtract row $1$ multiplied by $2$ from row $3$ ($R_3=R_3-2R_1$), to get $$\left( \begin{array}{ccc|c} 1 & \frac 54 &\frac 34&\frac{5}2\\ 0 &\frac 94 &\frac{19}4&\frac 12\\ 0 &\frac12 &-\frac 32&3\\ \end{array} \right)$$ Multiply row $2$ by $\frac 49$ ($R_2=\frac49 R_2$), to get $$\left( \begin{array}{ccc|c} 1 & \frac 54 &\frac 34&\frac{5}2\\ 0 &1 &\frac{19}9&\frac 29\\\ 0 &\frac12 &-\frac 32&3\\ \end{array} \right)$$ Subtract row $2$ multiplied by $\frac 54$ from row $1$ ($R_1=R_1-\frac54 R_2$), to get $$\left( \begin{array}{ccc|c} 1 & 0 &-\frac {17}9&\frac{20}9\\ 0 &1 &\frac{19}9&\frac 29\\ 0 &\frac12 &-\frac 32&3\\ \end{array} \right)$$ Subtract row $2$ multiplied by $\frac 12$ from row $3$ ($R_3=R_3-\frac12R_2$), to get $$\left( \begin{array}{ccc|c} 1 & 0 &-\frac {17}9&\frac{20}9\\ 0 &1 &\frac{19}9&\frac 29\\ 0 &0&-\frac{23}9&\frac{26}9\\ \end{array} \right)$$ Multiply row $3$ by $-\frac9{23}$ ($R_3=-\frac9{23}R_3$), to get $$\left( \begin{array}{ccc|c} 1 & 0 &-\frac {17}9&\frac{20}9\\ 0 &1 &\frac{19}9&\frac 29\\ 0 &0&1&-\frac{26}{23}\\ \end{array} \right)$$ Add row $3$ multiplied by $\frac{17}9$ to row $1$ ($R_1=R_1+\frac{17}9R_3$), to get $$\left( \begin{array}{ccc|c} 1 & 0 &0&\frac2{23}\\ 0 &1 &\frac{19}9&\frac 29\\ 0 &0&1&-\frac{26}{23}\\ \end{array} \right)$$ Subtract row $3$ multiplied by $\frac {19}9$ from row $2$ ($R_2=R_2-\frac{19}9R_3$), to obtain $$\left( \begin{array}{ccc|c} 1 & 0 &0&\frac2{23}\\ 0 &1 &0&\frac {60}{23}\\ 0 &0&1&-\frac{26}{23}\\ \end{array} \right)$$ So the solution of the system is $(x, y, z) = 
(\frac{2}{23},\frac{60}{23}, -\frac{26}{23})$. The Gauss Elimination work with steps shows the complete step-by-step calculation for finding a solution of a linear system of three equations using the Gauss elimination method. For any other system, just supply twelve real numbers as coefficients of linear equations and click on the Generate Work button. Students can use this Gauss Elimination Calculator to generate the work, verify results obtained by hand, or complete homework problems efficiently. Because row reduction is needed in many applications, this online Gauss Elimination calculator can make such routine calculations easier. Real World Problems Using Gauss Elimination The Gaussian elimination algorithm is useful for determining the rank of a matrix (an important property of each matrix). This method can also help us to find the inverse of a matrix. In geometry, the equation $Ax+By+Cz=D$ defines a plane in the three-dimensional coordinate system. If we consider a system of three variables, we can think about the points of intersection of planes. Hence, we can determine whether planes are parallel, intersect each other or coincide. Gauss Elimination Practice Problems Practice Problem 1: Using the Gauss elimination, solve the system of equations $$\begin{align} &2x+4y-z=-1\\ &x+3y+7z=2\\ &x+2y+z=-5\\ \end{align} $$ Practice Problem 2: A math library wants to purchase $25$ books for $\$2,800$. Three different types of books are available: a geometry book with a price of $\$35$, an algebra book with a price of $\$70$, and a statistics book with a price of $\$140$. How many of each type of book should the library purchase? The Gauss Elimination Calculator, formula, example calculation (work with steps) and practice problems would be very useful for grade school students of K-12 education to understand the concept of solving systems of linear equations. This concept appears in almost all areas of science, so it will be helpful in solving more complex problems.
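For readers who would like to check the worked example or the practice problems programmatically, here is a minimal Python sketch. It is an illustration added here rather than the calculator's own code, and it assumes a unique solution with nonzero pivots, so row swapping (partial pivoting) and the singular cases are left out.

```python
# Gauss-Jordan style elimination with exact rational arithmetic.
from fractions import Fraction

def gauss_solve(A, b):
    n = len(A)
    # Build the augmented matrix.
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        pivot = M[col][col]                      # assumes pivot != 0
        M[col] = [x / pivot for x in M[col]]     # make the leading coefficient 1
        for row in range(n):
            if row != col and M[row][col] != 0:  # clear the rest of the column
                factor = M[row][col]
                M[row] = [x - factor * y for x, y in zip(M[row], M[col])]
    return [M[i][n] for i in range(n)]

A = [[4, 5, 3], [3, 6, 7], [2, 3, 0]]
b = [10, 8, 8]
print(gauss_solve(A, b))   # [Fraction(2, 23), Fraction(60, 23), Fraction(-26, 23)]
```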
Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial Sarah J. Arnup1, Joanne E. McKenzie1, Karla Hemming2, David Pilcher3,4,5 & Andrew B. Forbes1 In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society – Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of the WPC or BPC can increase the required number of clusters. By illustrating how the parameters required for sample size calculations arise from the CRXO design and by providing guidance on both how to choose values for the parameters and perform the sample size calculations, the implementation of the sample size formulae for CRXO trials may improve. Individually randomised trials are considered the 'gold standard' for evaluating medical interventions [1]. However, situations arise where it is necessary, or preferable, to randomise clusters of individuals, such as hospitals or schools, rather than the individual patients or students, to interventions [2, 3]. A cluster randomised trial will generally require a larger sample size compared with an individually randomised trial to estimate the intervention effect to the same precision [4]. In a two-period, two-intervention, cluster randomised crossover (CRXO) design, each cluster receives each of the two interventions in a separate period of time, leading to the formation of two 'cluster-periods'. In a cross-sectional design, each cluster-period consists of different individuals, while in a cohort design, each cluster-period consists of the same individuals.
The order in which the interventions are delivered to each cluster is randomised to control for potential period effects [5, 6]. Like in an individually randomised trial, this adaptation has the benefit of reducing the required number of participants [7]. The key to understanding the CRXO design is to recognise how both the cluster randomisation and crossover aspects of the design lead to variation between individual responses in a trial; and how these aspects of the design give rise to similarities in the responses of groups of individuals. Sample size formulae have been published for the two-period, two-intervention, cross-sectional CRXO design [8,9,10]. These formulae require a-priori specification of two correlations: the similarity between two individuals in the same cluster-period, typically measured by the within-cluster within-period correlation (WPC); and the similarity between two individuals in the same cluster, but in different cluster-periods, typically measured by the within-cluster between-period correlation (BPC). However, there is little guidance on the value of the BPC, or on the sensitivity of the sample size to the chosen values of both correlations [11, 12]. A 2015 systematic review of CRXO trials found that both the cluster randomisation and crossover aspects of the design of the CRXO were appropriately accounted for in only 10% of sample size calculations and 10% of analyses [13]. This suggests that the CRXO design is not well understood. The aims of this tutorial are to illustrate the intuition behind the CRXO design; to provide guidance on how to a-priori specify the WPC and BPC; and perform sample size calculations for two-period, two-intervention, cross-sectional CRXO trials. In the 'Understanding the CRXO design' section, we describe how the cluster randomisation and crossover aspects of the design lead to variation between individual responses in a two-period, two-intervention, cross-sectional CRXO design, using intensive care unit (ICU) length(s) of stay (LOS) as an example. In the 'Performing a sample size calculation' section, we outline how to perform sample size calculations and discuss how to specify values of the WPC and BPC for sample size calculations. In the 'Common mistakes when performing a sample size analysis' section, we outline common mistakes made by trialists when performing sample size calculations for CRXO trials and the likely consequences of those mistakes. We conclude with a general discussion, considering extensions and larger designs. Understanding the CRXO design In this section we illustrate graphically how the cluster randomisation and crossover aspects of the CRXO design lead to variation in the responses of individuals in a CRXO trial, and how these aspects of the design can be used to measure the similarity between individuals using the WPC and BPC. We illustrate the sources of variation and measures of similarity that arise in the two-period, two-intervention, cross-sectional CRXO design by considering a hypothetical CRXO trial conducted in 20 ICUs over a 2-year period. We consider the ICU LOS of all patients admitted to these 20 ICUs, and assume (for ease of exposition) that the number of patients in each ICU is infinitely large (or at least very large). As LOS is non-normally distributed and right skewed, we use the logarithmic transform of ICU LOS throughout our illustration. Each ICU is randomly assigned to administer one of two interventions to all patients admitted during the first year (period 1).
In the subsequent year, each ICU administers the alternate intervention (period 2). All patients admitted to a single ICU over the 2-year period can be thought of as belonging to a cluster. Within each ICU (cluster), the patients admitted during a 1-year period can be thought of as belonging to a separate cluster-period. Therefore, in each ICU (cluster) there are two cluster-periods. The allocation of interventions to patients in the stratified, multicentre, parallel-group, individually randomised trial (IRCT) design, the parallel-group cluster randomised trial (CRCT) design, and the CRXO design are shown in Fig. 1. In each design, each intervention is given for one 12-month period. In the IRCT design half the patients in each centre (ICU) receive each intervention. In the CRCT design, all patients in a single ICU are assigned the same intervention. Schematic illustration of the stratified, multicentre, parallel-group, individually randomised trial (IRCT), parallel-group cluster randomised trial (CRCT), and cluster randomised crossover (CRXO) design with the same total number of participants Variation in the length of stay between patients To illustrate the sources of variation and measures of similarity that arise in the CRXO design, we assume that the true difference between interventions is zero. In the hypothetical situation where we have an infinite number of patients, the overall mean LOS for all patients in the trial will be equal to the true overall mean LOS for all patients who could be admitted to the 20 ICUs. The variation in LOS arises from both patient and ICU factors. In a CRXO design, the ICU (cluster) and the time period of admission (cluster-period) are both factors that could affect the patient's LOS and, therefore, explain some of the variation seen in patient LOS. For example, each ICU may have a different case mix of patients, different operating policies and procedures, and different staff. And within an ICU, changes to staff or policy over time could lead to differences in LOS between time periods. The following sections describe how the ICU and time period of admission can explain part of the variation in the LOS between patients. Variation in the length of stay between ICUs Each ICU has a true mean LOS for the infinite number of patients who could be hypothetically admitted to that ICU. When there is true variability between ICUs, the true mean LOS for each ICU will differ from the mean of all true ICU mean LOS. In the hypothetical situation where we have an infinite number of patients, the overall mean LOS for all patients and the mean of all true ICU mean LOS will be equal to the same true overall mean LOS. Figure 2a, b, e and f show four scenarios that each illustrate variation in the true mean LOS across ICUs (red circles). The true mean LOS in each ICU may be similar and, therefore, close to the true overall mean LOS (black line) (Fig. 2a); or the true mean LOS of each ICU may be more dispersed about the true overall mean (Fig. 2b). The difference in the spread of true ICU mean LOS between Fig. 2a and b indicates greater variability in the true ICU mean LOS across ICUs in Fig. 2b than in Fig. 2a. The same comparison can be made between Fig. 2e and f. Variation in true mean length(s) of stay (LOS) between intensive care units (ICUs) and between periods within ICUs. Low variation in the true mean LOS between ICUs is shown in the left column (a, c, e, g) and high variation in the right column (b, d, f, h). 
Low variation in the true mean LOS between periods within ICUs is shown in the top row (a, c, b, d) and high variation in the bottom row (e, g, f, h). a, b, e, f the true mean LOS for each of the 20 hypothetical ICUs are marked by a red circle, with the difference between the true overall mean LOS and the true mean LOS for each ICU indicated by a dashed red horizontal line. The two true cluster-period mean LOS for each ICU are marked with a green circle to the left and right of the true ICU mean LOS. The difference between the true ICU mean LOS and the true cluster-period mean LOS is indicated by a green horizontal line. The black vertical line indicates true overall mean LOS. c, d, g, h the red vertical line indicates the true ICU mean LOS and the green vertical line indicates the true cluster-period mean LOS for each period in each of two ICUs. For (a) WPC = 0.02, BPC = 0.01; for (b) WPC = 0.06, BPC = 0.05; for (e) WPC = 0.06, BPC = 0.01; for (f) WPC = 0.10, BPC = 0.05. ICU 1 is shown with solid lines and ICU 2 is shown in dashed lines in (h). The yellow (blue) curve indicates a normal distribution of patient LOS within each cluster-period where the cluster was allocated to intervention S (T). For (d) the distribution of patient LOS in each of the four cluster-periods are labelled A to D. WPC: within-cluster within-period correlation (ρ); BPC: within-cluster between-period correlation (η) Variation in the length of stay between time periods in an ICU Within each ICU, there is also a true mean LOS for the infinite number of patients who could be hypothetically admitted in each 1-year period (i.e. each cluster-period). Figure 2a, b, e and f show also that there is variation in the difference between the true cluster-period mean LOS (green circles) and the true ICU mean LOS (red circles). The true cluster-period mean LOS may be similar to the true ICU mean LOS Fig. 2a); or the true mean LOS of each cluster-period may be more dispersed about the true ICU mean (Fig. 2e). The difference in the spread of the true cluster-period mean LOS between Fig. 2a and e indicates greater variability in true cluster-period mean LOS within ICUs in Fig. 2e than in Fig. 2a. The same comparison can be made between Fig. 2b and f. Variation in length of stay between patients in a cluster-period While there is a true mean LOS for all patients admitted in each cluster-period, the individual patients within each cluster-period will show variation in their LOS due to other patient factors (e.g. severity of their condition). Two of the 20 example ICUs are depicted in Figs. 2c, d, g and h. ICU 1 is shown with solid lines and ICU 2 is shown in dashed lines. As previously, the mean LOS in each ICU is marked by a red line, and the mean LOS in each cluster-period is marked by a green line. The distribution of the individual patient LOS within each cluster-period follows a normal distribution, and is shown with four yellow or blue curves. The distribution of the LOS for patients receiving intervention S are coloured yellow, and the distribution of those receiving intervention T are coloured blue. Within each cluster-period, patients have a range of individual LOS centred at the true cluster-period mean LOS (green line). Nonetheless, the patients in each cluster-period are from distinct distributions labelled as A, B, C, and D in Fig. 2h (these labels apply also to Fig. 2c, d and g). 
In each cluster-period, we assume that the variability of the individual patient LOS is the same, and hence the yellow and blue curves have the same shape and are only shifted in location between the four cluster-periods. Summary of the sources of variation in the CRXO design We have illustrated how the cluster randomisation aspect of the CRXO design leads to the formation of clusters of patients defined by ICU, while the crossover aspect of the design leads further to the formation of cluster-periods of patients within each cluster. We have also illustrated how the cluster randomisation and crossover aspects of the CRXO design can lead to three sources (or components) of variation in the responses of patients in a CRXO trial: variation in the mean LOS between ICUs; variation in the mean LOS between cluster-periods; and variation between individual patient LOS within a cluster-period. The within-cluster within-period correlation and the within-cluster between-period correlation In this section we show how the three sources of variation outlined in the preceding section can be used to quantify the similarity in LOS between the groups of patients defined by ICU (cluster) and cluster-period. The within-cluster within-period correlation (WPC) quantifies the similarity of outcomes from patients in the same cluster-period. The within-cluster between-period correlation (BPC) quantifies the similarity of outcomes from patients in the same cluster, but in different periods. Specification of these two correlations is required to perform sample size calculations for a CRXO trial. In the hypothetical circumstance where the LOS of an infinite number of patients admitted to each ICU is measured, we can determine the true WPC and BPC. In practice, the LOS can only be measured on a sample of patients, and the true WPC and BPC will be estimated from this sample of patients, with some amount of random sampling error. We first describe the sources of variation underlying the BPC, and then extend the description to the WPC. Within-cluster between-period correlation (BPC) The BPC measures how much of the total variability in the LOS is due to variability in the ICU mean LOS, or, analogously, how similar patient responses are within the same cluster but in different periods. The formula for the BPC, η, is: $$ \eta =\frac{\sigma_C^2}{\sigma_C^2+{\sigma}_{CP}^2+{\sigma}_I^2}, $$ where \( {\sigma}_C^2 \) is the variance in mean LOS between clusters (ICUs), \( {\sigma}_{CP}^2 \) is the variance in mean LOS between cluster-periods, and \( {\sigma}_I^2 \) is the variance in individual LOS within a cluster-period. The BPC measures the similarity between the LOS of two patients from the same ICU, one patient from the first period (cluster-period C) and one patient from the second period (cluster-period D). The similarity between the LOS of patients in an ICU between cluster-periods arises from the variability in the ICU mean LOS only. We now refer to Fig. 2 to describe how this relationship between similarity and variability arises. As the ICU mean LOS (red lines/red circles) become more dispersed between ICUs, relative to the dispersion (i.e. distance) between cluster-period mean LOS within an ICU (green lines/green circles), the distributions of the patient LOS (yellow/blue curves) in the cluster-periods A and B become more similar to each other, as do the distributions of patient LOS in cluster-periods C and D. For example, in Fig. 
2c there is little variation in the ICU mean LOS around the overall mean LOS (black line) and the distributions of patient LOS in cluster-periods A, B, C and D almost all coincide. As a result, the similarity between the LOS of patients in different cluster-periods within the same ICU (e.g. one patient from cluster-period A and one patient from cluster-period B) is comparable to the similarity between the LOS of patients in different ICUs (e.g. one patient from cluster-period A and one patient from cluster-periods C or D). In contrast, in Fig. 2d, there is more separation between the ICU mean LOS and only the distributions of patient LOS from the same ICUs coincide (i.e. cluster-periods A and B, and cluster-periods C and D, coincide). As a result, the LOS of patients in different cluster-periods within the same ICU (e.g. one patient from cluster-period A and one patient from cluster-period B) are more similar to each other than to the patients in other ICUs (e.g. one patient from cluster-period A and one patient from cluster-periods C or D). Hence, the BPC is larger in Fig. 2d than in Fig. 2c. The same comparison can be made between Fig. 2g and h. The within-cluster within-period correlation (WPC) The WPC measures how much of the total variability in the LOS is due to variability in the ICU mean LOS and the cluster-period mean LOS, or, analogously, how similar patient responses are within a cluster-period. The formula for the WPC, ρ, is: $$ \rho =\frac{\sigma_C^2+{\sigma}_{CP}^2}{\sigma_C^2+{\sigma}_{CP}^2+{\sigma}_I^2}. $$ The WPC measures the similarity in the LOS from two patients in the same cluster-period, e.g. cluster-period C. The similarity between the LOS of patients within a cluster-period arises from the variability in the ICU mean LOS and cluster-period mean LOS. We now refer to Fig. 2 to describe how this relationship between similarity and variability arises. We describe the relationship in two parts: variability in the ICU mean LOS; and variability in the cluster-period mean LOS. As the ICU mean LOS (red circles/red lines) becomes more dispersed, relative to the dispersion (i.e. distance) between the cluster-period mean LOS (green circles/green lines), the distributions of the individual patient LOS (yellow/blue curves) in the four cluster-periods A, B, C and D become more distinct from each other, and hence patients within a cluster-period appear more similar to each other. For example, in Fig. 2c there is little variation in the ICU mean LOS around the overall mean LOS (black line) and the distributions of patient LOS in cluster-periods A, B, C and D almost all coincide. As a result, the similarity between the LOS of two patients in cluster-period A is comparable to the similarity between the LOS of one patient from cluster-period A and one patient from cluster-period B (or C or D). In contrast, in Fig. 2d, there is more separation between the ICU mean LOS and hence more separation of the patient LOS in ICUs 1 and 2. As a result, the LOS of two patients in cluster-period A are more similar to each other than are the LOS of one patient from cluster-period A (cluster 1) and another patient from cluster-period C or D (cluster 2). Hence, the WPC is smaller in Fig. 2c than in Fig. 2d. We note that the same comparison can be made between Fig. 2g and h. 
Likewise, as the cluster-period mean LOS (green circles/green lines) becomes more dispersed, relative to the distance between the ICU mean LOS (red circles/red lines), the distributions of the individual patient LOS (yellow/blue curves) in the four cluster-periods A, B, C and D also become more distinct from each other, and hence patients within a cluster-period become more similar to each other. For example, in Fig. 2d there is little variation in the cluster-period mean LOS around the ICU mean LOS, and thus the distributions of patient LOS in cluster-periods A and B (and equivalently C and D) almost coincide. As a result, the similarity between the LOS of two patients in cluster-period A is comparable to the similarity between the LOS of one patient from cluster-period A and one patient from cluster-period B. In contrast, in Fig. 2h, there is more separation between the cluster-period mean LOS, and hence between the distributions of patient LOS. As a result, the LOS of two patients in cluster-period A are more similar to each other than are the LOS of one patient from cluster-period A and another patient from cluster-period B (and even more similar than the LOS of one patient from cluster-period A and another patient from cluster-period C or D). Hence the WPC is again smaller in Fig. 2d than in Fig. 2h. We note that the same comparison can be made between Fig. 2c and g. Precision of the CRXO design compared to the parallel-group cluster randomised design and parallel-group, individually randomised design In this section, we discuss how the WPC and BPC affect the precision of the estimate of the difference between interventions, and hence the sample size requirement, in a two-period, two-intervention, cross-sectional CRXO trial. We illustrate the two extremes of the CRXO design: when the precision of the CRXO design is equivalent to that of an IRCT design; and when it is equivalent to that of a CRCT design. The allocation of interventions to patients in the IRCT, CRCT, and CRXO designs is shown in Fig. 1. To illustrate the effect of the WPC and BPC on precision (and equivalently the components of variation), we continue to assume that the true difference between interventions is zero. We consider a large sample of patients admitted to one cluster in a CRXO design, such that the sampling error in the estimated mean LOS for patients is assumed negligible. Therefore, in the single cluster shown in Fig. 3, the separation between the distribution of LOS from patients receiving intervention S (yellow curve) and intervention T (blue curve) arises solely from the variation in the mean LOS between cluster-periods (\( {\sigma}_{CP}^2 \)). In this section, we show which partitioning of the total variation in LOS into the components of variation leads to the most precision and to the least precision in the CRXO design. A single cluster in the cluster randomised crossover (CRXO) design where (a) ρ > η, η > 0; (b) η → ρ; (c) η → 0. The green solid vertical lines indicate the difference between the true intensive care unit (ICU) mean length of stay (LOS) and the true cluster-period mean LOS. The yellow (blue) curve indicates a normal distribution of patient LOS within each cluster or cluster-period where the patient or cluster was allocated to intervention S (T). The true difference between intervention S and T is zero. The total variance in LOS remains constant In the CRXO design, the observed mean LOS of patients receiving each intervention can be compared within each cluster because each intervention is delivered in each cluster. As an illustration, in Fig. 
3a, the observed difference in mean LOS between patients receiving each intervention could be due to a difference in true cluster-period mean LOS (green lines) but not due to differences in the true ICU mean LOS because this component of variation is removed when the two interventions are compared within an ICU. As the variation in the true cluster-period mean LOS increases, and hence the separation between the green lines in Fig. 3a increases, the separation between the yellow and blue curves within an ICU increases. Correspondingly, from Eqs. 1 and 2, the difference between the WPC and BPC increases. Consequently, increasing variability in the cluster-period means leads to increasing uncertainty in the observed difference in the mean LOS between patients receiving each intervention. In the CRXO design, precision is maximised when there is no variation in LOS between periods within a cluster. In this scenario the separation between the green lines in Fig. 3a shrinks and the yellow and blue curves coincide, yielding Fig. 3b. The LOS of two patients in the same cluster-period are as similar as the LOS of two patients from the same ICU but in different cluster-periods. Also, from Eqs. 1 and 2, the WPC equals the BPC. Figure 3b now approximates the diagram that one would expect from an IRCT with two ICUs (with the mean LOS for each centre indicated by the green lines) and half the patients within each cluster receiving each intervention. This diagram arises in an IRCT because, for large sample sizes and under the assumption of no true differences between interventions, randomisation ensures that the distributions of LOS in each intervention (yellow and blue curves) are identical. The CRXO design will, therefore, have the same precision as an IRCT design. Conversely, the precision of the CRXO design decreases when the cluster-period variability increases. As the variability between periods within a cluster increases, the separation between the green lines, and correspondingly the yellow and blue curves, in Fig. 3a increases. The increased separation results in greater variability in the comparison of patient LOS in each intervention within each cluster. For a fixed total variability in ICU LOS, as the variability between periods within a cluster increases, the variability between different clusters must decrease. In the limiting case there is no variation at all between clusters (\( {\sigma}_C^2=0 \)), resulting in the BPC equalling zero (Eq. 1). In this case each cluster-period effectively resembles a separate cluster (Fig. 3c). Two patients in different cluster-periods in the same ICU are no more similar than two patients in different ICUs. Therefore, there is no advantage to the crossover component of the CRXO design and the CRXO will have the same precision as a CRCT design. In most situations, the BPC will lie between zero and the WPC. In the following section, 'Performing a sample size calculation', we discuss the effect of the BPC and WPC on the sample size required to be able to detect a specified true intervention effect in a CRXO trial with a given level of power, and provide guidance on how to choose values for the BPC and WPC for a sample size calculation. Performing a sample size calculation The sample size required to detect a specified true difference between interventions with a given level of power decreases as the precision of the estimate of the intervention effect increases. 
In the 'Understanding the CRXO design' section, we considered precision in the CRXO design when the true difference between interventions was assumed to be zero. However, even when the true difference is not zero, the effects of the WPC and BPC on precision described in the previous section continue to hold. The sample size required for a CRXO trial increases as the cluster-period variability increases, or equivalently as the difference between the WPC and BPC increases. As the value of the BPC increases from zero to the WPC, the sample size required for the CRXO design will decrease from that required for a CRCT design towards the sample size for an IRCT. Therefore, using an appropriate specification of the difference between the WPC and the BPC is essential for performing sample size calculations for the CRXO design. We now illustrate how to perform a sample size calculation for a two-period, two-intervention CRXO trial with a continuous and binary outcome using ICU LOS and in-ICU mortality data, respectively, from the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database (APD) [14, 15]. There are 37 tertiary ICUs in Australia and New Zealand, of which 25 to 30 might be expected to participate in a trial. We compare the sample size requirement for number of individuals and number of clusters (ICUs) from the CRXO design with the requirement from the stratified, multicentre, parallel-group, individually randomised design (IRCT) and the parallel-group cluster randomised design (CRCT) conducted over one period. Comparisons of the sample size requirements for these different designs can either be made by fixing the total number of clusters across all designs; or by treating the CRXO design as lasting twice as long, i.e. two periods, instead of one period as in the IRCT and CRCT designs. We take the latter approach here so that the WPC is the same in each period. We include Stata do-files to estimate the required sample size for each trial design, for a chosen set of sample size parameters (see Additional files 1 and 2). 
The sample size formulae for a one-period IRCT design, a one-period CRCT design, and a two-period, two-intervention, cross-sectional CRXO design The sample size formula for the total number of participants required for a normally distributed continuous outcome in a two-period, two-intervention CRXO trial, across all clusters and interventions, assuming a constant number of participants recruited to each cluster-period, is [8]: $$ {N}_{CRXO}=2{\left({z}_{\alpha /2}+{z}_{\beta}\right)}^2\frac{2{\sigma}^2}{{\left({\mu}_A-{\mu}_B\right)}^2}\left(1+\left(m-1\right)\rho -m\eta \right)+4m, $$ and for a one-period, two-intervention CRCT: $$ {N}_{CRCT}=2{\left({z}_{\alpha /2}+{z}_{\beta}\right)}^2\frac{2{\sigma}^2}{{\left({\mu}_A-{\mu}_B\right)}^2}\left(1+\left(m-1\right)\rho \right)+2m, $$ and for a one-period, two-intervention, parallel-group IRCT, stratified by cluster, across all clusters and interventions is [16]: $$ {N}_{IRCT}=2{\left({z}_{\alpha /2}+{z}_{\beta}\right)}^2\frac{2{\sigma}^2}{{\left({\mu}_A-{\mu}_B\right)}^2}\left(1-\rho \right), $$ where \( {z}_{\alpha /2} \) and \( {z}_{\beta} \) are the standard normal values corresponding to the upper tail probabilities of α/2 and β, respectively; α is the two-sided significance level, typically 0.05; 1 − β is the power to detect the specified difference (\( {\mu}_A-{\mu}_B \)); \( {\sigma}^2 \) is the variance of the outcome; \( {\mu}_A \) and \( {\mu}_B \) are the outcome means in each arm; m is the number of participants per cluster-period; ρ is the WPC; and η is the BPC. The formulae presented above include a correction for when the number of clusters is small, as suggested in Eldridge and Kerry (p. 149) [2] and Forbes et al. [9]. This leads to an additional 4m participants in the CRXO design and 2m participants in the CRCT design. No correction is necessary for the IRCT because the number of individual participants will be large in the example settings. For a binary outcome we can replace \( \frac{2{\sigma}^2}{{\left({\mu}_A-{\mu}_B\right)}^2} \) with \( \frac{p_A\left(1-{p}_A\right)+{p}_B\left(1-{p}_B\right)}{{\left({p}_A-{p}_B\right)}^2} \) in the above formulae [12], where \( p_A \) and \( p_B \) are the proportions of the outcome in each arm. For the CRXO design, CRCT design and IRCT design, respectively, the formulae to determine the number of clusters (ICUs) needed to achieve the required number of participants are: \( {n}_{CRXO}=\frac{N_{CRXO}}{2m} \), \( {n}_{CRCT}=\frac{N_{CRCT}}{m} \), and \( {n}_{IRCT}=\frac{N_{IRCT}}{m} \). Australian and New Zealand Intensive Care Society – Adult Patient Database (ANZICS-APD): estimates of the WPC and BPC The ANZICS-APD is one of four clinical quality registries run by the ANZICS Centre for Outcome and Resource Evaluation and collects de-identified information on admissions to adult ICUs in Australia and New Zealand. A range of data is collected during patients' admissions, including ICU LOS and in-ICU mortality. In this section we use the ANZICS-APD data from 34 tertiary ICUs to estimate the correlations required to perform sample size calculations for CRXO trials. We estimate the values of the WPC and the BPC from two 12-month periods of data between 2012 and 2013 (Appendix 1). Continuous outcomes We follow the methods of Turner et al. [5] to estimate the WPC and BPC (Appendix 1). Using the ICU LOS data, the estimated WPC was \( \widehat{\rho}=0.038 \), and the BPC was \( \widehat{\eta}=0.032 \) (Table 1). The overall mean LOS was 5.3 log-hours, with a standard deviation of 1.39 log-hours. 
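With the estimates of the WPC and BPC in hand, the formulae above can be evaluated directly. As a rough illustration only (these lines are not the authors' published do-files in Additional files 1 and 2; they simply re-evaluate the formulae for one set of assumed inputs, taken from the ICU LOS example reported below), the calculation can be sketched in Stata as follows; the rounded z-values 1.96 and 0.84 are used so that the results match Appendix 2.

* Illustrative sketch only; inputs are the assumed values of the ICU LOS example.
scalar zsum   = 1.96 + 0.84        // z_(alpha/2) + z_beta, rounded as in Appendix 2
scalar sigma  = 1.2                // SD of log(LOS), in log-hours
scalar delta  = 0.1                // difference to detect, mu_A - mu_B
scalar m      = 200                // participants per cluster-period
scalar rho    = 0.038              // WPC
scalar eta    = 0.032              // BPC
scalar base   = 2*zsum^2*(2*sigma^2)/(delta^2)
scalar N_crxo = base*(1 + (m-1)*rho - m*eta) + 4*m
scalar N_crct = base*(1 + (m-1)*rho) + 2*m
scalar N_irct = base*(1 - rho)
display "N_CRXO = " ceil(N_crxo) "  (ICUs = " ceil(N_crxo/(2*m)) ")"
display "N_CRCT = " ceil(N_crct) "  (ICUs = " ceil(N_crct/m) ")"
display "N_IRCT = " ceil(N_irct) "  (ICUs = " ceil(N_irct/m) ")"

For a binary outcome, the line defining base would instead use \( \frac{p_A\left(1-{p}_A\right)+{p}_B\left(1-{p}_B\right)}{{\left({p}_A-{p}_B\right)}^2} \), as described above.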
Table 1 Calculation of the within-cluster, within-period correlation (WPC) and within-cluster, between-period correlation (BPC) for intensive care unit (ICU) log-length of stay (LOS) in the Australian and New Zealand Intensive Care Society – Adult Patient Database (ANZICS-APD) Binary outcomes We follow the methods of Donner et al. [12] to estimate the WPC and BPC (Appendix 1). Using the in-ICU mortality data, the estimated WPC was \( \widehat{\rho}=0.010 \), and the BPC was \( \widehat{\eta}=0.007 \). The overall mortality rate was 8.7%. Sample size example for ICU LOS Suppose we wish to design a two-period, two-intervention CRXO trial to have 80% power to detect a true reduction in ICU LOS of 0.1 log-hours (1.1 h) using a two-sided test with a Type-I error rate of 5%. In practice, the choice of reduction in ICU LOS should be the minimally clinically important reduction, determined in consultation with subject matter experts. A reduction of 0.1 log-hours is approximately equivalent to a 10% reduction, and is a reasonable minimally clinically important reduction in ICU LOS. The standard deviation is estimated to be 1.2 log-hours (3.3 h). As an illustration, we assume that in a 12-month period, 200 patients in each ICU will meet the inclusion criteria for the trial. The CRXO trial will, therefore, run for 2 years and include 400 patients per ICU, with 200 patients receiving each intervention in each ICU. For comparison, we consider an IRCT and a CRCT run for a 12-month period, with 100 patients receiving each intervention in each ICU in the IRCT and all 200 patients receiving one intervention in each ICU in the CRCT. Using the estimates that we calculated from the ANZICS-APD data for the WPC and BPC, the total numbers of patients and ICUs for each design are summarised in Table 2 (see Appendix 2 for calculations). Table 2 Number of individuals and number of clusters required for a cluster randomised crossover (CRXO), cluster randomised controlled trial (CRCT) and individually randomised controlled trial (IRCT) trial with ρ = 0.038 for all designs and specified η for the CRXO design Table 3 Number of individuals and number of clusters required for a cluster randomised crossover (CRXO), cluster randomised controlled trial (CRCT) and individually randomised controlled trial (IRCT) trial with ρ = 0.010 for each design and specified η for the CRXO design The total number of participants required for the CRXO design is \( N_{CRXO} \) = 10,564. To include 10,564 participants, we require \( n_{CRXO} \) = 27 ICUs, each recruiting 200 participants in each of the two 12-month periods. If instead we conducted a CRCT over a single 12-month time period, the total number of participants required would be \( N_{CRCT} \) = 39,065. Assuming that 200 patients are eligible in each ICU, we would need \( n_{CRCT} \) = 196 ICUs. The total number of participants required for an IRCT conducted over a 12-month period is \( N_{IRCT} \) = 4345. With 200 patients per ICU (100 patients per intervention), the total number of ICUs required is \( n_{IRCT} \) = 22. In this example, the CRXO design required five more clusters (ICUs) than the IRCT design; however, the CRXO design is run for twice as long. The CRCT design would require 7.3 times as many clusters as the CRXO design. Given that there are only 37 tertiary ICUs in Australia and New Zealand, a CRCT trial would not be feasible. We can examine the sensitivity of the CRXO sample size calculation to a different BPC. If the BPC was η = 0.010 rather than η = 0.032, then the CRXO design requires \( N_{CRXO} \) = 30,433 participants. 
The total number of ICUs required to obtain the required number of participants is \( n_{CRXO} \) = 77. The total number of ICUs required has now increased by 50, and the trial would no longer be feasible in the Australia and New Zealand region within tertiary ICUs only. Note that when the number of patients admitted in each cluster-period is relatively large, we would observe a similar increase in the sample size if we had underestimated the WPC by 0.023, rather than overestimated the BPC by 0.023. Sample size example for in-ICU mortality In a second example, suppose that we wish to design a study to have 80% power to detect a true reduction in in-ICU mortality from 8.7% to 7.2% (absolute difference of 1.5%) using a two-sided test with a Type-I error rate of 5%. From the ANZICS-APD admission data, we estimate that in a 12-month period, 1200 patients will be admitted in each ICU and eligible for inclusion in the trial. The total numbers of patients and ICUs for each design are summarised in Table 3 (see Appendix 2 for calculations). For a CRXO design, using the estimates for the WPC, the BPC, and the cluster-period size we calculated from the ANZICS-APD, the total number of participants required is \( N_{CRXO} \) = 51,581. Since we expect 1200 patients in each ICU for each of the two 12-month periods, the required number of ICUs is \( n_{CRXO} \) = 22. If we had used a CRCT, the required number of participants is \( N_{CRCT} \) = 134,792. Assuming that 1200 patients are admitted over a single 12-month period, we would need \( n_{CRCT} \) = 113 ICUs. The total number of participants required for the IRCT design is \( N_{IRCT} \) = 10,090. For a trial run over 12 months, with 1200 patients per ICU (600 patients per intervention), the total number of ICUs required is \( n_{IRCT} \) = 9. In this example, the CRXO design required 2.4 times as many clusters (ICUs) as the IRCT design, and is run for twice as long. Despite the increase in required clusters, the CRXO is still a feasible design, unlike the CRCT design, which would require 5.1 times as many clusters as the CRXO design. We can examine the sensitivity of the CRXO sample size calculation to a different BPC. If the BPC was η = 0.006, rather than η = 0.007, then the total number of participants required is \( N_{CRXO} \) = 63,811. Since we expect 1200 patients for each cluster-period, we would need to include \( n_{CRXO} \) = 27 ICUs, i.e. 54 cluster-periods. This demonstrates that a small change in the assumed BPC can have a marked impact on the number of required ICUs and patients. Unequal cluster-period sizes We have so far assumed that the cluster-period size is constant. In reality, it is likely that different ICUs will include a differing number of participants [17, 18]. An extension to the sample size formula for this scenario is provided in [9]. When the analysis is based on unweighted cluster-period means, the arithmetic mean in the sample size formula given for the CRXO design can be replaced by the harmonic mean: $$ {m}_h=\frac{n}{{\sum}_{i=1}^n\frac{1}{m_i}}. $$ We assume that the cluster-period size is the same in each period within a cluster. For further extensions, see Forbes et al. [9]. From the ANZICS-APD data, we estimate that the harmonic mean is \( m_h \) = 900. The required number of patients is then \( N_{CRXO} \) = 41,208, and the required number of ICUs is: $$ {n}_{CRXO}=\frac{41208}{2\times 900}=23. $$ Allowing for unequal cluster-period sizes has increased the required number of clusters slightly from 22 to 23. 
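As a minimal sketch of this adjustment (assumed values only: the rounded z-values of Appendix 2, the ANZICS-APD correlation estimates quoted above, and the harmonic mean of 900 taken from the text rather than recomputed from patient-level data), the binary-outcome calculation can be repeated with \( m_h \) in place of the arithmetic mean cluster-period size of 1200:

* Illustrative sketch only: binary-outcome CRXO sample size with the harmonic
* mean cluster-period size m_h = 900 replacing the arithmetic mean of 1200.
scalar zsum   = 1.96 + 0.84        // rounded z-values, as in Appendix 2
scalar pA     = 0.087              // in-ICU mortality under intervention A
scalar pB     = 0.072              // in-ICU mortality under intervention B
scalar m_h    = 900                // harmonic mean, n divided by the sum of 1/m_i
scalar rho    = 0.010              // WPC
scalar eta    = 0.007              // BPC
scalar base   = 2*zsum^2*(pA*(1-pA) + pB*(1-pB))/((pA-pB)^2)
scalar N_crxo = base*(1 + (m_h-1)*rho - m_h*eta) + 4*m_h
display "N_CRXO = " ceil(N_crxo) "  (ICUs = " ceil(N_crxo/(2*m_h)) ")"   // 41208 and 23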
Guidance on how to choose the WPC and the BPC for the sample size calculation As was seen in the 'Understanding the CRXO design' section, the difference between the WPC and BPC is key in determining the sample size for a CRXO design. Approaches for choosing the within-cluster intracluster correlation (ICC) in sample size calculations for parallel-group CRCTs have been discussed [19,20,21,22]. Similar considerations apply when choosing the WPC in a CRXO design. In particular, because the ICC estimates are subject to large uncertainty [23], reviewing multiple relevant estimates of the ICC is recommended. These ICC estimates may be obtained from trial reports, lists published in journal articles or from routinely collected data. Identification of the factors which influence the magnitude of the within-cluster ICC can assist trialists in selecting ICC estimates that are relevant to their planned trial. Typically, the trial outcome itself is less predictive of the value of the ICC than factors such as: the type of outcome variable (i.e. process outcomes that measure adherence to protocol and policy or individually measured outcomes) [19], the prevalence of the outcome [20], the size of the natural cluster of individuals that the randomised clusters are formed from [20], and the characteristics of the individuals and clusters [22]. The duration of time over which the outcome variables were measured may also affect the value of the within-cluster ICC. As the measurements of individuals within a cluster become further apart in time, the similarity between the measurements might be expected to decrease. Using an estimate of the within-cluster ICC that was determined over a different duration of time than the intended period length of the planned trial assumes that there is no variation in the within-cluster ICC over time, and we are unaware of any research investigating whether this is justified. In contrast, we are aware of only two publications reporting estimates of the BPC [24, 25]. Therefore, until reporting of the BPC becomes more common [26], estimates of the BPC are likely to rely on the analysis of routinely collected data, pilot or feasibility study data, or a reasoned best-guess. As for the within-cluster ICC in cluster randomised trials, estimating the BPC from a feasibility study or a single routinely collected data source is likely to be subject to considerable uncertainty [27]. In forming a best guess, it is helpful to recognise that the difference between the WPC and BPC is a measure of changes over time within a cluster's environment that affect the outcomes of each individual in that cluster (e.g. a change in policy in one ICU). Over short time periods or in clusters with stable environments and patient characteristics, it might be reasonable to expect little change over time and, therefore, the BPC will be similar to the WPC. However, if this assumption is untrue and the BPC is less than the WPC, a sample size calculation assuming that the two correlations are equal will lead to an underpowered study. It may be prudent to assume that the BPC is less than the WPC. To this end, suggestions have been made to set the BPC to half the WPC [12], or to 0.8 of the WPC [11]. In the ANZICS-APD, the ratio of the BPC to the WPC is 0.7 for ICU mortality and 0.8 for ICU LOS, which is consistent with the suggestion made by Hooper and Bourke [11]. In the absence of multiple estimates or precise estimates of the ICCs, a conservative approach in selecting the BPC is recommended to avoid an underpowered trial. 
Further, a sensitivity analysis exploring the effect of the choice of ICC on the sample size is recommended. Common mistakes when performing sample size calculations and analyses Many trialists have made strong assumptions about the values of the WPC and the BPC in their sample size and analysis methodology [13]. In this section we illustrate the consequences of using incorrect sample size methodology on the estimated sample size and power. Assume the outcomes are independent In a review of CRXO trials, 34% of sample size calculations made the assumption that the observations were independent [13]. There are two scenarios where this assumption is reasonably appropriate: when the WPC and the BPC are equal and the sample size calculation was stratified by centre; or when the WPC and the BPC are both zero. The first scenario arises when the outcomes of two individuals in the same cluster are equally similar whether the individuals are in the same period or in different periods (i.e. there is no change in the WPC over time within a cluster). In this fortuitous case the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation (apart from a factor of 1 − WPC, which is usually small [16]). The second scenario arises when there is no similarity between the outcomes of any two individuals, which is unlikely. The effect on power of assuming that the outcomes are independent will depend on the cluster-period size and the difference between the WPC and the BPC. The loss of power will increase as the difference between the two ICCs increases and as the cluster-period size increases. We illustrate the potential effect on power and sample size of assuming that the outcomes are independent using a published sample size calculation. Roisin et al. [28] estimated that the seven wards (clusters) participating in their trial required a minimum of 3328 patients to have 80% power to detect a reduction in the proportion of hospital acquisitions of methicillin-resistant Staphylococcus aureus (MRSA) from 3% to 1.5%. From the ANZICS-APD data, we estimate a WPC of 0.010, and a BPC of 0.007 for in-ICU mortality in the ICU setting. As an example only, we assume that the estimates of the correlations for ICU mortality are similar to the correlations for ICU MRSA acquisition. Given that a total of 2505 patients were eligible for inclusion in the study, we determined the average cluster-period size to be 179. From these estimates, we determine that a sample size of 5385 is required to achieve the specified power, which is a 62% increase from the published sample size requirement of 3328. Assume a parallel-group cluster randomised design instead of a cluster randomised crossover design Another common approach when performing sample size calculations for CRXO trials is to use methods designed for parallel-group CRCT trials. Applying CRCT sample size methodology to a CRXO design makes the assumption that: the BPC is zero; and that the WPC calculated over all periods in the trial is the same as the WPC calculated for a single period. Under the assumption that the BPC is zero, the outcomes of individuals within a cluster, but in different periods, are no more similar than outcomes of individuals in different clusters. That is, the individuals in different periods are assumed to be independent. When the BPC is not zero, the CRCT design effect does not account for the gain in precision achieved by the crossover aspect of the CRXO design, leading to a potentially overpowered trial. 
Trials that use CRCT sample size methods become progressively more overpowered as the true BPC becomes larger and the cluster-period sizes increase. We illustrate the potential effect on power and the sample size requirement of using CRCT sample size methodology by means of a published sample size calculation. van Duijn and Bonten [29] estimated that the eight ICUs (clusters) participating in their trial would include 135 patient measurements per cluster-period. Using CRCT sample size methodology, each of the 16 cluster-periods (two periods per ICU) was assumed to be a separate cluster of 135 patients. van Duijn and Bonten [29] assumed a within-cluster ICC of 0.01, and hence they estimated that the trial required 1842 patients to have 80% power to detect a reduction in the proportion of ICU patients with antibiotic-resistant Gram-negative bacteria from 55% to 45%. From the ANZICS-APD data, we estimate a WPC of 0.010, and a BPC of 0.007, as in the example in the previous section. From these estimates, we determine that a sample size of 1623 is required to achieve the specified power, which is 12% less than the sample size required for a CRCT. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design to ensure that an appropriate number of participants are recruited to adequately address the trial's hypotheses. There are simple sample size formulae available for a two-period, two-intervention, cross-sectional CRXO design; however, the implementation of these formulae has been limited [13]. Such limited use of the formulae may be due to a lack of recognition that formulae are available, a lack of availability of estimates of the parameters required within the formulae, or a lack of trialists' understanding of those parameters. We have illustrated how the cluster randomisation and crossover aspects of the CRXO design give rise to similarity in both the responses of individuals within the same cluster and within the same cluster-period; and have described the parameters required to perform sample size calculations for CRXO trials. We have provided guidance on how to choose the parameters required for the sample size calculation and how to perform sample size calculations using those parameters. While our focus has been on the two-intervention, two-period, cross-sectional CRXO design, more complex designs with additional periods and interventions are possible. The sample size and analysis methodology is more complex in these designs. For example, in a design with more than two periods, additional assumptions are required about the similarity between individuals in the same cluster in the same time period, and in periods that are 1, 2, 3, etc. time periods apart. Careful consideration should always be given to whether cluster randomisation is necessary [30], and whether the risk of the intervention effect from one period carrying over to the next period is minimal [6]. In addition to consideration of the sample size methodology, it is also essential to appropriately account for the cluster and the cluster-period in the analysis. Very few published trials do so [13]. Failure to account for the cluster-period in an individual-level analysis leads to inflated Type-I error rates [31]. Methods to analyse CRXO trials have been published by Turner et al. and Forbes et al. [5, 9]. Sample size calculations for CRXO trials must account for both the cluster randomisation and crossover aspects of the design. 
In this tutorial we described how the CRXO design can be understood in terms of components of variation in the individual outcomes, or equivalently, in terms of correlations between the outcomes of individual patients. We illustrated how to perform sample size calculations for continuous and binary outcomes, and provided guidance on selecting estimates of the parameters required for the sample size calculation. ANZICS-APD: Australia and New Zealand Intensive Care Society – Adult Patient Database BPC: Within-cluster between-period correlation CRCT: Cluster randomised controlled trial CRXO: Cluster randomised crossover ICC: Intracluster correlation IRCT: Individually randomised controlled trial LOS: Length of stay WPC: Within-cluster within-period correlation Grimes DA, Schulz KF. An overview of clinical research: the lay of the land. Lancet. 2002;359(9300):57–61. Eldridge S, Kerry S. A practical guide to cluster randomised trials in health services research. Chichester: Wiley; 2012. Ukoumunne OC, Gulliford MC, Chinn S, Sterne JA, Burney PG. Methods for evaluating area-wide and organisation-based interventions in health and health care: a systematic review. Health Technol Assess. 1999;3(5):iii–92. Donner A, Birkett N, Buck C. Randomization by cluster. Sample size requirements and analysis. Am J Epidemiol. 1981;114(6):906–14. Turner RM, White IR, Croudace T. Analysis of cluster randomized cross-over trial data: a comparison of methods. Stat Med. 2007;26(2):274–89. Parienti JJ, Kuss O. Cluster-crossover design: a method for limiting clusters level effect in community-intervention studies. Contemp Clin Trials. 2007;28(3):316–23. Hills M, Armitage P. The two-period cross-over clinical trial. Br J Clin Pharmacol. 1979;8(1):7–20. Giraudeau B, Ravaud P, Donner A. Sample size calculation for cluster randomized cross-over trials. Stat Med. 2008;27(27):5578–85. Forbes AB, Akram M, Pilcher D, Cooper J, Bellomo R. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care. Clin Trials. 2015;12(1):34–44. Rietbergen C, Moerbeek M. The design of cluster randomized crossover trials. J Educ Behav Stat. 2011;36(4):472–90. Hooper R, Bourke L. Cluster randomised trials with repeated cross sections: alternatives to parallel group designs. BMJ. 2015;350:h2925. Donner A, Klar N, Zou G. Methods for the statistical analysis of binary data in split-cluster designs. Biometrics. 2004;60(4):919–25. Arnup SJ, Forbes AB, Kahan BC, Morgan KE, McKenzie JE. Appropriate statistical methods were infrequently used in cluster-randomized crossover trials. J Clin Epidemiol. 2016;74:40–50. Stow PJ, Hart GK, Higlett T, George C, Herkes R, McWilliam D, Bellomo R, Committee ADM. Development and implementation of a high-quality clinical database: the Australian and New Zealand Intensive Care Society Adult Patient Database. J Crit Care. 2006;21(2):133–41. Kaukonen KM, Bailey M, Pilcher D, Cooper DJ, Bellomo R. Systemic inflammatory response syndrome criteria in defining severe sepsis. N Engl J Med. 2015;372(17):1629–38. Vierron E, Giraudeau B. Sample size calculation for multicenter randomized trial: taking the center effect into account. Contemp Clin Trials. 2007;28(4):451–8. Konstantopoulos S. Power analysis in two-level unbalanced designs. J Exp Educ. 2010;78(3):291–317. Kerry SM, Bland JM. Unequal cluster sizes for trials in English and Welsh general practice: implications for sample size calculations. Stat Med. 2001;20(3):377–90. 
Campbell MK, Fayers PM, Grimshaw JM. Determinants of the intracluster correlation coefficient in cluster randomized trials: the case of implementation research. Clin Trials. 2005;2(2):99–107. Gulliford MC, Adams G, Ukoumunne OC, Latinovic R, Chinn S, Campbell MJ. Intraclass correlation coefficient and outcome prevalence are associated in clustered binary data. J Clin Epidemiol. 2005;58(3):246–51. Gulliford MC, Ukoumunne OC, Chinn S. Components of variance and intraclass correlations for the design of community-based surveys and intervention studies: data from the Health Survey for England 1994. Am J Epidemiol. 1999;149(9):876–83. Adams G, Gulliford MC, Ukoumunne OC, Eldridge S, Chinn S, Campbell MJ. Patterns of intra-cluster correlation from primary care research to inform study design and analysis. J Clin Epidemiol. 2004;57(8):785–94. Ukoumunne OC. A comparison of confidence interval methods for the intraclass correlation coefficient in cluster randomized trials. Stat Med. 2002;21(24):3757–74. Martin J, Girling A, Nirantharakumar K, Ryan R, Marshall T, Hemming K. Intra-cluster and inter-period correlation coefficients for cross-sectional cluster randomised controlled trials for type-2 diabetes in UK primary care. Trials. 2016;17:402. Feldman HA, McKinlay SM. Cohort versus cross-sectional design in large field trials: precision, sample size, and a unifying model. Stat Med. 1994;13(1):61–78. Hooper R, Teerenstra S, de Hoop E, Eldridge S. Sample size calculation for stepped wedge and other longitudinal cluster randomised trials. Stat Med. 2016;35(26):4718–28. Eldridge SM, Costelloe CE, Kahan BC, Lancaster GA, Kerry SM. How big should the pilot study for my cluster randomised trial be? Stat Methods Med Res. 2016;25(3):1039–56. Roisin S, Laurent C, Denis O, Dramaix M, Nonhoff C, Hallin M, Byl B, Struelens MJ. Impact of rapid molecular screening at hospital admission on nosocomial transmission of methicillin-resistant staphylococcus aureus: cluster randomised trial. PLoS One. 2014;9(5):e96310. van Duijn PJ, Bonten MJ. Antibiotic rotation strategies to reduce antimicrobial resistance in Gram-negative bacteria in European intensive care units: study protocol for a cluster-randomized crossover controlled trial. Trials. 2014;15:277. Campbell MK, Piaggio G, Elbourne DR, Altman DG, Group C. CONSORT 2010 Statement: extension to cluster randomised trials. BMJ. 2012;345:e5661. Morgan KE, Forbes AB, Keogh RH, Jairath V, Kahan BC. Choosing appropriate analysis methods for cluster randomised cross-over trials with a binary outcome. Stat Med. 2017;36(2):318–33. This research was in part supported by a National Health and Medical Research Council (NHMRC) project grant (1108283). SJA was supported in part by a Monash University Graduate Scholarship and a National Health and Medical Research Council of Australia Centre of Research Excellence grant (1035261) to the Victorian Centre for Biostatistics (ViCBiostat). JEM was supported by a National Health and Medical Research Council (NHMRC) Australian Public Health Fellowship (1072366). School of Public Health and Preventive Medicine, Monash University, The Alfred Centre, Melbourne, VIC, 3004, Australia Sarah J. Arnup, Joanne E. McKenzie & Andrew B. 
Forbes Institute of Applied Health Research, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK Karla Hemming Australian and New Zealand Intensive Care Society Centre for Outcome and Resource Evaluation, Ievers Terrace, Carlton, VIC, 3154, Australia David Pilcher Department of Intensive Care, The Alfred Hospital, Commercial Road, Melbourne, VIC, 3004, Australia Australian and New Zealand Intensive Care Research Centre, School of Public Health and Preventive Medicine, Monash University, The Alfred Centre, Melbourne, VIC, 3004, Australia Sarah J. Arnup Joanne E. McKenzie Andrew B. Forbes SJA led the development of all sections and drafted the manuscript. JEM contributed to the development of all sections and provided critical review of the manuscript. KH contributed to the development of the graphical illustrations and corresponding sections, and provided critical review of the manuscript. DP provided guidance on the ANZICS-APD data and contributed to the development of the sample size examples. ABF conceived of the graphical illustrations, contributed to the development of all sections and provided critical review of the manuscript. All authors read and approved the final manuscript. Correspondence to Andrew B. Forbes. Continuous outcomes sample size Stata do file. Stata do file to perform sample size calculations for continuous outcomes using formulae presented in the 'Performing a sample size calculation' section, for a given set of sample size parameters. (DO 1 kb) Binary outcomes sample size Stata do file. Stata do file to perform sample size calculations for binary outcomes using formulae presented in the 'Performing a sample size calculation' section, for a given set of sample size parameters. (DO 2 kb) Estimates of the WPC and BPC To illustrate the impact of the WPC and BPC on the sample size calculation, we estimate the values of the WPC and BPC by using previously published methods for continuous and binary outcomes [5, 12]. ICU LOS is right-skewed, so we begin by log-transforming this variable, so that the assumptions of the model used to estimate the correlations are more likely to be met. We use LOS to represent log(LOS) throughout. We estimate the values of the WPC and the BPC from the variances estimated by fitting the following model [5]: $$ {Y}_{ijk}=\mu +\pi +{u}_i+{v}_{ij}+{e}_{ijk}, $$ where there are \( i=1,\dots ,n \) ICUs, \( j=1,2 \) 12-month periods and \( k=1,\dots ,{m}_{ij} \) patients in the \( i \)th ICU (cluster) and \( j \)th period; \( {Y}_{ijk} \) is the LOS for the \( k \)th patient in the \( j \)th cluster-period in the \( i \)th ICU (cluster); μ is the overall mean LOS; π is the fixed period effect; \( {u}_i\sim N\left(0,{\sigma}_C^2\right) \) is the difference of each ICU mean LOS from the overall mean LOS; \( {v}_{ij}\sim N\left(0,{\sigma}_{CP}^2\right) \) is the difference of each cluster-period mean LOS from the ICU mean LOS, and \( {e}_{ijk}\sim N\left(0,{\sigma}_I^2\right) \) is the difference of each patient LOS from the cluster-period mean LOS; \( {\sigma}_C^2 \), \( {\sigma}_{CP}^2 \), and \( {\sigma}_I^2 \) are the variances for the ICU (cluster) mean LOS, cluster-period mean LOS and patient LOS within each cluster-period, respectively. Because we are fitting the model to registry data, rather than clinical trial data of the actual treatments to be considered, we estimate the model parameters under the assumption of a null treatment effect, and hence have not included a fixed treatment effect. A fixed treatment effect should be included when estimating the variance components from data from the actual clinical trial. 
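Once the three variance components of this model have been estimated (the Stata command used for the fitting is given below), they can be converted into the WPC and BPC of Eqs. 1 and 2 directly. The following lines are an illustrative sketch only: the variance components shown are assumed values, back-calculated from the reported estimates (\( \widehat{\rho}=0.038 \), \( \widehat{\eta}=0.032 \) and a total standard deviation of 1.39 log-hours), rather than output from the ANZICS-APD itself.

* Illustrative sketch only: converting estimated variance components into the
* WPC (rho) and BPC (eta). The three values below are assumed, back-calculated
* from the reported correlations and total SD; in practice they would be taken
* from the output of the mixed model.
scalar var_c  = 0.062              // sigma^2_C, between-ICU variance
scalar var_cp = 0.012              // sigma^2_CP, between-period-within-ICU variance
scalar var_i  = 1.858              // sigma^2_I, between-patient variance
scalar total  = var_c + var_cp + var_i
scalar wpc    = (var_c + var_cp)/total
scalar bpc    = var_c/total
display "WPC = " %5.3f wpc "   BPC = " %5.3f bpc   // approximately 0.038 and 0.032

After fitting the three-level model, Stata's estat icc postestimation command should report these two correlations directly.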
The model was fitted in Stata 14 with the mixed command using restricted maximum likelihood estimation: mixed log(LOS) periodeffect || cluster: || cluster_period:, reml. We estimate the value of the WPC for in-ICU mortality by fitting the analysis of variance (ANOVA) estimator for the intracluster correlation [12]: $$ \widehat{\rho}=\frac{MSC-MSW}{MSC+\left({m}_0-1\right)MSW}, $$ $$ MSC=\frac{\sum_{j=1}^2{\sum}_{i=1}^n{m}_{ij}{\left({\widehat{P}}_{ij}-{\widehat{P}}_j\right)}^2}{\sum_{j=1}^2\left(n-1\right)}, $$ $$ MSW=\frac{\sum_{j=1}^2{\sum}_{i=1}^n{m}_{ij}{\widehat{P}}_{ij}\left(1-{\widehat{P}}_{ij}\right)}{\sum_{j=1}^2\left({N}_j-n\right)}, $$ $$ {m}_0=\frac{N-{\sum}_{j=1}^2{\sum}_{i=1}^n{m}_{ij}^2/{N}_j}{\sum_{j=1}^2\left(n-1\right)}, $$ where there are \( i=1,\dots ,n \) ICUs and \( j=1,2 \) 12-month periods; \( {m}_{ij} \) is the number of patients in the \( i \)th ICU (cluster) and \( j \)th period; \( {N}_j \) is the total number of patients in each period and N is the total number of patients overall; \( {\widehat{P}}_{ij} \) is the estimated mortality rate in each cluster-period; and \( {\widehat{P}}_j \) is the estimated mortality rate in period \( j \). The BPC is estimated by fitting the Pearson pairwise estimator [12]: $$ \widehat{\eta}=\frac{\sum_{i=1}^n\left({Y}_{1i}-{m}_{1i}{\widehat{P}}_1\right)\left({Y}_{2i}-{m}_{2i}{\widehat{P}}_2\right)}{\sqrt{\left({\sum}_{i=1}^n{m}_{2i}\left({Y}_{1i}-2{Y}_{1i}{\widehat{P}}_1+{m}_{1i}{\widehat{P}}_1^2\right)\right)\left({\sum}_{i=1}^n{m}_{1i}\left({Y}_{2i}-2{Y}_{2i}{\widehat{P}}_2+{m}_{2i}{\widehat{P}}_2^2\right)\right)}}, $$ where \( {Y}_{1i} \) and \( {Y}_{2i} \) are the numbers of deaths in the two adjacent time periods in the \( i \)th ICU. Sample size calculations In this section we provide the details of the sample size calculations presented in the 'Performing a sample size calculation' section, using the estimates for the WPC and BPC that we calculated from the ANZICS-APD data in Appendix 1. Sample size calculation for ICU LOS Total number of participants and ICUs required for the CRXO design $$ {N}_{CRXO}=2{\left({z}_{\alpha /2}+{z}_{\beta}\right)}^2\frac{2{\sigma}^2}{{\left({\mu}_A-{\mu}_B\right)}^2}\left(1+\left(m-1\right)\rho -m\eta \right)+4m, $$ $$ {N}_{CRXO}=2\times {\left(1.96+0.84\right)}^2\frac{2\times {1.2}^2}{{\left(5.3-5.2\right)}^2}\left(1+\left(200-1\right)0.038-200\times 0.032\right)+4\times 200=10564 $$ Since we expect 200 patients in each ICU for each of the two 12-month periods, the number of ICUs needed to achieve the required number of participants is: $$ {n}_{CRXO}=\frac{N_{CRXO}}{2m}=\frac{10564}{2\times 200}=27. $$ If the BPC was η = 0.010 rather than η = 0.032, then: $$ {N}_{CRXO}=2\times {\left(1.96+0.84\right)}^2\frac{2\times {1.2}^2}{{\left(5.3-5.2\right)}^2}\left(1+\left(200-1\right)0.038-200\times 0.010\right)+4\times 200=30433 $$ The total number of ICUs required to obtain the required number of participants is: $$ {n}_{CRXO}=\frac{N_{CRXO}}{2m}=\frac{30433}{2\times 200}=77. $$ Total number of participants and ICUs required for the CRCT design $$ {N}_{CRCT}=2\ {\left(1.96+0.84\right)}^2\frac{2\times {1.2}^2}{{\left(5.3-5.2\right)}^2}\left(1+\left(200-1\right)0.038\right)+2\times 200=39065 $$ Assuming that 200 patients are eligible in each ICU over the 12-month trial period, we would need to include: $$ {n}_{CRCT}=\frac{N_{CRCT}}{m}=\frac{39065}{200}=196\kern0.5em \mathrm{ICUs}. $$ Total number of participants and ICUs required for the IRCT design $$ {N}_{IRCT}=2\ {\left(1.96+0.84\right)}^2\frac{2\times {1.2}^2}{{\left(5.3-5.2\right)}^2}\left(1-0.038\right)=4345. $$ For a trial run over 12 months, with 200 patients per ICU (100 patients per intervention), the total number of ICUs required is: $$ {n}_{IRCT}=\frac{N_{IRCT}}{m}=\frac{4345}{200}=22. 
$$ Sample size calculation for in-ICU mortality $$ {N}_{CRXO}=2\times {\left({z}_{\alpha /2}+{z}_{\beta}\right)}^2\frac{p_A\left(1-{p}_A\right)+{p}_B\left(1-{p}_B\right)}{{\left({p}_A-{p}_B\right)}^2\kern1.25em }\ \left(1+\left(m-1\right)\rho -m\ \eta \right)+4m, $$ $$ {\displaystyle \begin{array}{l}{N}_{CRXO}=2\times {\left(1.96+0.84\right)}^2\frac{0.087\times \left(1-0.087\right)+0.072\times \left(1-0.072\right)}{{\left(0.087-0.072\right)}^2\kern1.25em }\ \left(1+\left(1200-1\right)\right.\\ {}\kern8.5em \left.\times 0.010-1200\times 0.007\right)+4\times 1200\kern0.5em =\kern0.5em 51581\end{array}} $$ The number of ICUs needed to achieve the required number of participants is: $$ {n}_{CRXO}=\frac{N_{CRXO}}{2m}=\frac{51581}{2\times 1200}=22. $$ If the BPC was η = 0.006, rather than η = 0.007, then the total number of participants required is: $$ {\displaystyle \begin{array}{l}{N}_{CRXO}=2\times {\left(1.96+0.84\right)}^2\frac{0.087\times \left(1-0.087\right)+0.072\times \left(1-0.072\right)}{{\left(0.087-0.072\right)}^2\kern1.25em }\ \left(1+\left(1200-1\right)\times 0.010-1200\times 0.006\right)\\ {}\kern11.5em +4\times 1200=63811\end{array}} $$ We would need to include: $$ {n}_{CRXO}=\frac{N_{CRXO}}{2m}=\frac{63811}{2\times 1200}=27\kern0.5em \mathrm{ICUs}. $$ $$ {N}_{CRCT}=2\ {\left({z}_{\alpha /2}+{z}_{\beta}\right)}^2\frac{p_A\left(1-{p}_A\right)+{p}_B\left(1-{p}_B\right)}{{\left({p}_A-{p}_B\right)}^2\kern1.25em }\ \left(1+\left(m-1\right)\rho \right)+2m, $$ $$ {N}_{CRCT}=2\ {\left(1.96+0.84\right)}^2\frac{0.087\times \left(1-0.087\right)+0.072\times \left(1-0.072\right)}{{\left(0.087-0.072\right)}^2\kern1.25em }\ \left(1+\left(1200-1\right)\times 0.010\right)+2\times 1200=134792 $$ We would need \( {n}_{CRCT}=\frac{N_{CRCT}}{m}=\frac{134792}{1200}=113\kern0.5em \mathrm{ICUs}. \) $$ {N}_{IRCT}=2\ {\left({z}_{\alpha /2}+{z}_{\beta}\right)}^2\frac{p_A\left(1-{p}_A\right)+{p}_B\left(1-{p}_B\right)}{{\left({p}_A-{p}_B\right)}^2}\kern1.25em \left(1-\rho \right), $$ $$ {N}_{IRCT}=2\ {\left(1.96+0.84\right)}^2\frac{0.087\times \left(1-0.087\right)+0.072\times \left(1-0.072\right)}{{\left(0.087-0.072\right)}^2}\kern1.25em \left(1-0.010\right)=10090 $$ The total number of ICUs required is: $$ {n}_{IRCT}=\frac{N_{IRCT}}{m}=\frac{10090}{1200}=9. $$ Arnup, S.J., McKenzie, J.E., Hemming, K. et al. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial. Trials 18, 381 (2017). https://doi.org/10.1186/s13063-017-2113-2 Cluster randomised Within-period correlation Between-period correlation Components of variability
Tag Archives: Cosmology and Extragalactic Astrophysics Naturalness in Higgs inflation in a frame independent formalism [CEA] We make use of the frame and gauge independent formalism for scalar and tensor cosmological perturbations developed in Ref. [1] to show that the physical cutoff for 2-to-2 tree level scatterings in Higgs inflation is above the Planck scale M_P throughout inflation. More precisely, we found that in the Jordan frame, the physical cutoff scale is $({\Lambda}/a)_J \gtrsim \sqrt{M_P^2+{\xi}{\phi}^2}$, while in the Einstein frame it is $({\Lambda}/a)_J \gtrsim M_P$, where $\xi$ is the nonminimal coupling and $\phi$ denotes the Higgs vev during inflation. The dimensionless ratio of the physical cutoff to the relevant Planck scale is equal to one in both frames, thus demonstrating the physical equivalence of the two frames. Our analysis implies that Higgs inflation is unitary up to the Planck scale, and hence there is no naturalness problem in Higgs inflation. In this paper we only consider the graviton and scalar interactions. T. Prokopec and J. Weenink Posted in Cosmology and Extragalactic Astrophysics | Tagged Cosmology and Extragalactic Astrophysics, General Relativity and Quantum Cosmology SDSSJ143244.91+301435.3: a link between radio-loud narrow-line Seyfert 1 galaxies and compact steep-spectrum radio sources? [GA] We present SDSSJ143244.91+301435.3, a new case of radio-loud narrow line Seyfert 1 (RL NLS1) with a relatively high radio power (P1.4GHz=2.1×10^25 W Hz^-1) and large radioloudness parameter (R1.4=600+/-100). The radio source is compact with a linear size below ~1.4 kpc but, contrary to most of the RL NLS1 discovered so far with such a high R1.4, its radio spectrum is very steep (alpha=0.93) and not supporting a 'blazar-like' nature. Both the small mass of the central super-massive black-hole and the high accretion rate relative to the Eddington limit estimated for this object (3.2×10^7 Msun and 0.27, respectively, with a formal error of ~0.4 dex on both quantities) are typical of the class of NLS1. Through a modeling of the spectral energy distribution of the source we have found that the galaxy hosting SDSSJ143244.91+301435.3 is undergoing a quite intense star-formation (SFR=50 Msun y^-1) which, however, is expected to contribute only marginally (~1 per cent) to the observed radio emission. The radio properties of SDSSJ143244.91+301435.3 are remarkably similar to those of compact steep spectrum (CSS) radio sources, a class of AGN mostly composed by young radio galaxies. This may suggest a direct link between these two classes of AGN, with the CSS sources possibly representing the misaligned version (the so-called parent population) of RL NLS1 showing blazar characteristics. A. Caccianiga, S. Anton, L. Ballo, et. al. Posted in Galaxy Astrophysics | Tagged Cosmology and Extragalactic Astrophysics, Galaxy Astrophysics The Minimal Volkov – Akulov – Starobinsky Supergravity [CL] We construct a supergravity model whose scalar degrees of freedom arise from a chiral superfield and are solely a scalaron and an axion that is very heavy during the inflationary phase. The model includes a second chiral superfield $X$, which is subject however to the constraint $X^2=0$ so that it describes only a Volkov – Akulov goldstino and an auxiliary field. 
We also construct the dual higher – derivative model, which rests on a chiral scalar curvature superfield ${\cal R}$ subject to the constraint ${\cal R}^2=0$, where the goldstino dual arises from the gauge – invariant gravitino field strength as $\gamma^{mn} {\cal D}_m \psi_n$. The final bosonic action is an $R+R^2$ theory involving an axial vector $A_m$ that only propagates a physical pseudoscalar mode. I. Antoniadis, E. Dudas, S. Ferrara, et. al. Posted in Cross-listed | Tagged Cosmology and Extragalactic Astrophysics, General Relativity and Quantum Cosmology, High Energy Physics - Phenomenology, High Energy Physics - Theory Ionized gas disks in Elliptical and S0 galaxies at $z<1$ [GA] We analyse the extended, ionized-gas emission of 24 early-type galaxies (ETGs) at $0<z<1$ from the ESO Distant Cluster Survey (EDisCS). We discuss different possible sources of ionization and favour star-formation as the main cause of the observed emission. Ten galaxies have disturbed gas kinematics, while 14 have rotating gas disks. In addition, 15 galaxies are in the field, while 9 are in the infall regions of clusters. This implies that, if the gas has an internal origin, this is likely stripped as the galaxies get closer to the cluster centre. If the gas instead comes from an external source, then our results suggest that this is more likely acquired outside the cluster environment, where galaxy-galaxy interactions more commonly take place. We analyse the Tully-Fisher relation of the ETGs with gas disks, and compare them to EDisCS spirals. Taking a matched range of redshifts, $M_{B}<-20$, and excluding galaxies with large velocity uncertainties, we find that, at fixed rotational velocity, ETGs are 1.7 mag fainter in $M_{B}$ than spirals. At fixed stellar mass, we also find that ETGs have systematically lower specific star-formation rates than spirals. This study constitutes an important step forward towards the understanding of the evolution of the complex ISM in ETGs by significantly extending the look-back-time baseline explored so far. Y. Jaffe, A. Aragon-Salamanca, B. Ziegler, et. al. Inequivalence of Coset Constructions for Spacetime Symmetries [CL] Non-linear realizations of spacetime symmetries can be obtained by a generalization of the coset construction valid for internal ones. The physical equivalence of different representations for spacetime symmetries is not obvious, since their relation involves not only a redefinition of the fields but also a field-dependent change of coordinates. A simple and relevant spacetime symmetry is obtained by the contraction of the 4D conformal group that leads to the Galileon group. We analyze two non-linear realizations of this group, focusing in particular on the propagation of signals around non-trivial backgrounds. The aperture of the lightcone is in general different in the two representations and in particular a free (luminal) massless scalar is mapped in a Galileon theory which admits superluminal propagation. We show that in this theory, if we consider backgrounds that vanish at infinity, there is no asymptotic effect: the displacement of the trajectory integrates to zero, as can be expected since the S-matrix is trivial. Regarding local measurements, we show that the puzzle is solved taking into account that a local coupling with fixed sources in one theory is mapped into a non-local coupling and we show that this effect compensates the different lightcone. Therefore the two theories have a different notion of locality. 
The same applies to the different non-linear realizations of the conformal group, and we study the particular case of a cosmologically interesting background: the Galilean Genesis scenarios.
P. Creminelli, M. Serone, G. Trevisan, et al.
Posted in Cross-listed | Tagged Cosmology and Extragalactic Astrophysics, High Energy Physics - Theory

A Technique to Search for High Mass Dark Matter Axions [CL]
Axions are a well motivated dark matter candidate. Microwave cavity experiments have been shown to be sensitive to axions in the mass range 1 $\mu$eV to 40 $\mu$eV, but face challenges searching for axions with larger masses. We propose a technique using a microwave Fabry-Pérot resonator and a series of current-carrying wire planes that can be used to search for dark matter axions with masses above 40 $\mu$eV. This technique retains the advantages of the microwave cavity search technique but allows for large volumes and high $Q$s at higher frequencies.
G. Rybka and A. Wagner
Posted in Cross-listed | Tagged Cosmology and Extragalactic Astrophysics, Instrumentation and Detectors, Instrumentation and Methods for Astrophysics

Multiwavelength investigations of co-evolution of bright cluster galaxies [CEA]
We report a systematic multi-wavelength investigation of environments of the brightest cluster galaxies (BCGs), using the X-ray data from the Chandra archive, and optical images taken with the 34' x 27' field-of-view Subaru Suprime-Cam. Our goal is to help understand the relationship between the BCGs and their host clusters, and between the BCGs and other galaxies, to eventually address the question of the formation and co-evolution of BCGs and their clusters. Our results include: 1) the morphological variety of BCGs, or of the second or third brightest galaxy (BCG2, BCG3), is comparable to that of other bright red sequence galaxies, suggesting that there is a continuous variation of morphology between BCGs, BCG2, and BCG3, rather than a sharp separation between the BCG and the rest of the bright galaxies; 2) the offset of the BCG position relative to the cluster centre is correlated with the degree of concentration of the cluster X-ray morphology (Spearman rho = -0.79), consistent with an interpretation that BCGs tend to be off-centered inside dynamically unsettled clusters; 3) morphologically disturbed clusters tend to harbour the brighter BCGs, implying that "early collapse" may not be the only major mechanism controlling BCG formation and evolution.
Y. Hashimoto, J. Henry and H. Boehringer
Posted in Cosmology and Extragalactic Astrophysics | Tagged Cosmology and Extragalactic Astrophysics

Far-infrared surveys of galaxy evolution [CEA]
Roughly half of the radiation from evolving galaxies in the early universe reaches us in the far-infrared and submillimeter wavelength range. Recent major advances in observing capabilities, in particular the launch of the Herschel Space Observatory in 2009, have dramatically enhanced our ability to use this information in the context of multiwavelength studies of galaxy evolution. Near its peak, three quarters of the cosmic infrared background is now resolved into individually detected sources. The use of far-infrared diagnostics of dust-obscured star formation and of interstellar medium conditions has expanded from rare extreme high-redshift galaxies to more typical main sequence galaxies and hosts of active galactic nuclei, out to z>~2.
These studies shed light on the evolving role of steady equilibrium processes and of brief starbursts, at and since the peak of cosmic star formation and black hole accretion. This review presents a selection of recent far-infrared studies of galaxy evolution, with an emphasis on Herschel results.
D. Lutz
Posted in Cosmology and Extragalactic Astrophysics | Tagged Cosmology and Extragalactic Astrophysics, Galaxy Astrophysics

Gravitational collapse of Bose-Einstein condensate dark matter halos [CL]
We study the mechanisms of the gravitational collapse of Bose-Einstein condensate dark matter halos, described by the zero temperature time-dependent nonlinear Schrödinger equation (the Gross-Pitaevskii equation), with repulsive inter-particle interactions. By using a variational approach, and by choosing an appropriate trial wave function, we reformulate the Gross-Pitaevskii equation with spherical symmetry as Newton's equation of motion for a particle in an effective potential, which is determined by the zero point kinetic energy, the gravitational energy, and the particle interaction energy, respectively. The velocity of the condensate is proportional to the radial distance, with a time dependent proportionality function. The equation of motion of the collapsing dark matter condensate is studied by using both analytical and numerical methods. The collapse of the condensate ends with the formation of a stable configuration, corresponding to the minimum of the effective potential. The radius and the mass of the resulting dark matter object are obtained, as well as the collapse time of the condensate. The numerical values of these global astrophysical quantities, characterizing condensed dark matter systems, strongly depend on the two parameters describing the condensate: the mass of the dark matter particle and the scattering length. The stability of the condensate under small perturbations is also studied, and the oscillation frequency of the halo is obtained. These results show that the gravitational collapse of condensed dark matter halos can lead to the formation of stable astrophysical systems with both galactic and stellar sizes.
T. Harko
Posted in Cross-listed | Tagged Cosmology and Extragalactic Astrophysics, General Relativity and Quantum Cosmology, High Energy Physics - Theory

Bouncing cosmology in modified Gauss-Bonnet gravity [CL]
We explore bounce cosmology in $F(\mathcal{G})$ gravity with the Gauss-Bonnet invariant $\mathcal{G}$. We reconstruct $F(\mathcal{G})$ gravity theory to realize the bouncing behavior in the early universe and examine the stability conditions for its cosmological solutions. It is demonstrated that the bouncing behavior with an exponential as well as a power-law scale factor naturally occurs in modified Gauss-Bonnet gravity. We also derive the $F(\mathcal{G})$ gravity model that produces the ekpyrotic scenario. Furthermore, we construct the bounce with a scale factor composed of a sum of two exponential functions and show that not only the early-time bounce but also the late-time cosmic acceleration can occur in the corresponding modified Gauss-Bonnet gravity. The bounce and late-time solutions in this unified model are also explicitly analyzed.
K. Bamba, A. Makarenko, A. Myagky, et al.
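For readers skimming the Bamba et al. entry above, it may help to recall the standard definitions used in the modified Gauss-Bonnet literature (these are textbook conventions, not equations quoted from the paper itself): the Gauss-Bonnet invariant and a generic $F(\mathcal{G})$ action are

$$\mathcal{G} = R^2 - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}, \qquad S = \int d^4x \, \sqrt{-g} \left[ \frac{R}{2\kappa^2} + F(\mathcal{G}) \right] + S_{\rm matter},$$

so that $F(\mathcal{G}) = 0$ recovers General Relativity. The "reconstruction" mentioned in the abstract amounts to choosing $F$ such that a prescribed bouncing scale factor $a(t)$ solves the resulting Friedmann equations.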
Pixel area variations in sensors: a novel framework for predicting pixel fidelity and distortion in flat field response [IMA]
We describe the drift field in thick depleted silicon sensors as a superposition of a one-dimensional backdrop field and various three-dimensional perturbative contributions that are physically motivated. We compute trajectories for the conversions along the field lines toward the channel and into volumes where conversions are confined by the perturbative fields. We validate this approach by comparing predictions against measured response distributions seen in five types of fixed pattern distortion features. We derive a quantitative connection between "tree ring" flat field distortions and astrometric and shape transfer errors, with connections to measurable wavelength dependence, as ancillary pixel data that may be used in pipeline analysis for catalog population. Such corrections may be tested on DECam data, where correlations between tree ring flat field distortions and astrometric errors, together with their band dependence, are already under study. Dynamic effects, including the brighter-fatter phenomenon for point sources and the flux dependence of flat field fixed pattern features, are approached using perturbations similar in form to those giving rise to the fixed pattern features. These in turn provide drift coefficient predictions that can be validated in a straightforward manner. Once the three parameters of the model are constrained using available data, the model is readily used to provide predictions for arbitrary photo-distributions, with internally consistent wavelength dependence provided for free.
A. Rasmussen
Posted in Instrumentation and Methods for Astrophysics | Tagged Cosmology and Extragalactic Astrophysics, Instrumentation and Methods for Astrophysics

Optical and X-ray Rest-frame Light Curves of the BAT6 sample [HEAP]
We present the rest-frame light curves in the optical and X-ray bands of an unbiased and complete sample of Swift long Gamma-Ray Bursts (GRBs), namely the BAT6 sample. The unbiased BAT6 sample (consisting of 58 events) has the highest level of completeness in redshift ($\sim$ 95%), allowing us to compute the rest-frame X-ray and optical light curves for 55 and 47 objects, respectively. We compute the X-ray and optical luminosities accounting for any possible source of absorption (Galactic and intrinsic) that could affect the observed fluxes in these two bands. We compare the behaviour observed in the X-ray and in the optical bands to assess the relative contribution of the emission during the prompt and afterglow phases. We unarguably demonstrate that the GRB rest-frame optical luminosity distribution is not bimodal, being rather clustered around the mean value Log(L$_{R}$) = 29.9 $\pm$ 0.8 when estimated at a rest-frame time of 12 hr. This is in contrast with what was found in previous works and confirms that the GRB population has an intrinsic unimodal luminosity distribution. For more than 70% of the events the rest-frame light curves in the X-ray and optical bands have a different evolution, indicating distinct emitting regions and/or mechanisms. The X-ray light curves normalised to the GRB isotropic energy (E$_{\rm iso}$) provide evidence for X-ray emission still powered by the prompt emission until late times ($\sim$ hours after the burst event).
On the other hand, the same test performed for the E$_{\rm iso}$-normalised optical light curves shows that the optical emission is a better proxy of the afterglow emission from early to late times.
A. Melandri, S. Covino, D. Rogantini, et al.
Posted in High Energy Astrophysical Phenomena | Tagged Cosmology and Extragalactic Astrophysics, High Energy Astrophysical Phenomena

A compact, metal-rich, kpc-scale outflow in FBQS J0209-0438: Detailed diagnostics from HST/COS extreme UV observations [GA]
We present HST/COS observations of highly ionized absorption lines associated with a radio-loud QSO at $z=1.1319$. The absorption system has multiple velocity components, tracing gas that is largely outflowing from the QSO at velocities of a few 100 km s$^{-1}$. There is an unprecedented range in ionization, with detections of HI, NIII, NIV, NV, OIV, OIV*, OV, OVI, NeVIII, MgX, SV and ArVIII. We estimate the total hydrogen number density from the column density ratio N(OIV*)/N(OIV) to be $\log(n_{\textrm{H}}/\textrm{cm}^{-3})\sim 3$. Assuming photoionization equilibrium, we derive a distance to the absorbing complex of $2.3<R<6.0$ kpc from the centre of the QSO. A range in ionization parameter, covering $\sim 2$ orders of magnitude, suggests absorption path lengths in the range $10^{-4.5}<l_{\textrm{abs}}<1$ pc. In addition, the absorbing gas only partially covers the background emission from the QSO continuum, which suggests clouds with transverse sizes $l_{\textrm{trans}}<10^{-2.5}$ pc. Widely differing absorption path lengths, combined with covering fractions less than unity across all ions, pose a challenge to models involving simple cloud geometries. These issues may be mitigated by the presence of non-equilibrium effects, together with the possibility of multiple gas temperatures. The dynamics and expected lifetimes of the gas clouds suggest that they do not originate from close to the AGN, but are instead formed close to their observed location. Their inferred distance, outflow velocities and gas densities are broadly consistent with scenarios involving gas entrainment or condensations in winds driven by either supernovae or the supermassive black hole accretion disc. In the case of the latter, the present data most likely do not trace the bulk of the outflow by mass, which could instead manifest itself as an accompanying warm absorber, detectable in X-rays.
C. Finn, S. Morris, N. Crighton, et al.

Outflow and hot dust emission in broad absorption line quasars [GA]
We have investigated a sample of 2099 broad absorption line (BAL) quasars with z=1.7-2.2 built from the Sloan Digital Sky Survey Data Release Seven and the Wide-field Infrared Survey. This sample is collected from two BAL quasar samples in the literature, and refined by our new algorithm. Correlations of outflow velocity and strength with the hot dust indicator (beta_NIR) and other quasar physical parameters, such as Eddington ratio, luminosity and UV continuum slope, are explored in order to figure out which parameters drive outflows. Here beta_NIR is the near-infrared continuum slope, a good indicator of the amount of hot dust emission relative to accretion disk emission. We confirm previous findings that outflow properties depend moderately or weakly on Eddington ratio, UV slope and luminosity. For the first time, we report moderate and significant correlations of outflow strength and velocity with beta_NIR in BAL quasars. This is consistent with the behavior of blueshifted broad emission lines in non-BAL quasars.
The statistical analysis and composite spectra study both reveal that outflow strength and velocity are more strongly correlated with beta_NIR than with Eddington ratio, luminosity and UV slope. In particular, the composites show that the entire C IV absorption profile shifts blueward and broadens as beta_NIR increases, while Eddington ratio and UV slope only affect the high and low velocity parts of outflows, respectively. We discuss several potential processes and suggest that the dusty outflow scenario, in which dust is intrinsic to outflows and may contribute to the outflow acceleration, is most likely. The BAL quasar catalog is available from the authors upon request.
S. Zhang, H. Wang, T. Wang, et al.

Kiloparsec-scale outflows are prevalent among luminous AGN: outflows and feedback in the context of the overall AGN population [CEA]
We present integral field unit (IFU) observations covering the [O III]4959,5007 and H-Beta emission lines of sixteen z<0.2 type 2 active galactic nuclei (AGN). Our targets are selected from a well-constrained parent sample of 24,000 AGN so that we can place our observations into the context of the overall AGN population. Our targets are radio-quiet with star formation rates (<~[10-100] Msol/yr) that are consistent with normal star-forming galaxies. We decouple the kinematics of galaxy dynamics and mergers from outflows. We find high-velocity ionised gas (velocity widths of 600-1500 km/s and maximum velocities of <=1700 km/s) with observed spatial extents of >~(6-16) kpc in all targets and observe signatures of spherical outflows and bi-polar superbubbles. We show that our targets are representative of z<0.2, luminous (i.e., L([O III]) > 5×10^41 erg/s) type 2 AGN and that ionised outflows are not only common but also, in >=70% of cases (3 sigma confidence), extended over kiloparsec scales. Our study demonstrates that galaxy-wide energetic outflows are not confined to the most extreme star-forming galaxies or radio-luminous AGN; however, there may be a higher incidence of the most extreme outflow velocities in quasars hosted in ultra-luminous infrared galaxies. Both star formation and AGN activity appear to be energetically viable to drive the outflows, and we find no definitive evidence that favours one process over the other. Although highly uncertain, we derive mass outflow rates (typically ~10x the SFRs), kinetic energies (~0.5-10% of L[AGN]) and momentum rates (typically >~10-20x L[AGN]/c) consistent with theoretical models that predict AGN-driven outflows play a significant role in shaping the evolution of galaxies.
C. Harrison, D. Alexander, J. Mullaney, et al.
Posted in Cosmology and Extragalactic Astrophysics | Tagged Cosmology and Extragalactic Astrophysics, Galaxy Astrophysics, High Energy Astrophysical Phenomena

Clustering of Local Group distances: publication bias or correlated measurements? I. The Large Magellanic Cloud [GA]
The distance to the Large Magellanic Cloud (LMC) represents a key local rung of the extragalactic distance ladder. Yet the galaxy's distance modulus has long been an issue of contention, in particular in view of claims that most newly determined distance moduli cluster tightly, and with a small spread, around the "canonical" distance modulus, (m-M)_0 = 18.50 mag. We compiled 233 separate LMC distance determinations published between 1990 and 2013.
Our analysis of the individual distance moduli, as well as of their two-year means and standard deviations resulting from this largest data set of LMC distance moduli available to date, focuses specifically on Cepheid and RR Lyrae variable-star tracer populations, as well as on distance estimates based on features in the observational Hertzsprung-Russell diagram. We conclude that strong publication bias is unlikely to have been the main driver of the majority of published LMC distance moduli. However, for a given distance tracer, the body of publications leading to the tightly clustered distances is based on highly non-independent tracer samples and analysis methods, hence leading to significant correlations among the LMC distances reported in subsequent articles. Based on a careful, weighted combination, in a statistical sense, of the main stellar population tracers, we recommend that a slightly adjusted canonical distance modulus of (m-M)_0 = 18.49 +- 0.09 mag be used for all practical purposes that require a general distance scale without the need for accuracies of better than a few percent.
R. de Grijs, J. Wicker and G. Bono

Non-Analytic Inflation [CL]
We analyze quantum corrections to the naive $\phi^4$-Inflation. These typically lead to an inflaton potential which is non-analytic in the field. We consider both minimal and non-minimal couplings to gravity. For the latter case we also study unitarity of inflaton-inflaton scattering. Finally we confront these theories with the Planck data and show that quantum departures from the $\phi^4$-Inflaton model are severely constrained.
J. Joergensen, F. Sannino and O. Svendsen
Posted in Cross-listed | Tagged Cosmology and Extragalactic Astrophysics, High Energy Physics - Phenomenology

The Massive and Distant Clusters of WISE Survey: Initial Spectroscopic Confirmation of z ~ 1 Galaxy Clusters Selected from 10,000 Square Degrees [CEA]
We present optical and infrared imaging and optical spectroscopy of galaxy clusters which were identified as part of an all-sky search for high-redshift galaxy clusters, the Massive and Distant Clusters of WISE Survey (MaDCoWS). The initial phase of MaDCoWS combined infrared data from the all-sky data release of the Wide-field Infrared Survey Explorer (WISE) with optical data from the Sloan Digital Sky Survey (SDSS) to select probable z ~ 1 clusters of galaxies over an area of 10,000 deg^2. Our spectroscopy confirms 19 new clusters at 0.7 < z < 1.3, half of which are at z > 1, demonstrating the viability of using WISE to identify high-redshift galaxy clusters. The next phase of MaDCoWS will use the greater depth of the AllWISE data release to identify even higher redshift cluster candidates.
S. Stanford, A. Gonzalez, M. Brodwin, et al.

Cusps and pseudo-cusps in strings with Y-junctions [CL]
We study the occurrence of cuspy events on a light string stretched between two Y-junctions with fixed heavy strings. We first present an analytic study and give a solid criterion to discriminate between cuspy and non-cuspy string configurations. We then describe a numerical code built to test this analysis. Our numerical investigation allows us to look at the correlations between the string network's parameters and the occurrence of cuspy phenomena. We show that the presence of large amplitude waves on the light string leads to cuspy events. We then relate the occurrence of cuspy events to features like the number of vibration modes on the string or the string's root-mean-square velocity.
T. Elghozi, W. Nelson and M. Sakellariadou

The dust budget crisis in high-redshift submillimetre galaxies [CEA]
We apply a chemical evolution model to investigate the sources and evolution of dust in a sample of 26 high-redshift ($z>1$) submillimetre galaxies (SMGs) from the literature, with complete photometry from the ultraviolet to the submillimetre. We show that dust produced only by low-intermediate mass stars falls a factor of 240 short of the observed dust masses of SMGs, the well-known `dust-budget crisis'. Adding an extra source of dust from supernovae can account for the dust mass in 19 per cent of the SMG sample. Even after accounting for dust produced by supernovae, the remaining deficit in the dust mass budget provides support for higher supernova yields, substantial grain growth in the interstellar medium or a top-heavy IMF. Including efficient destruction of dust by supernova shocks increases the tension between our model and observed SMG dust masses. The models which best reproduce the physical properties of SMGs have a rapid build-up of dust from both stellar and interstellar sources and minimal dust destruction. Alternatively, invoking a top-heavy IMF or significant changes in the dust grain properties can solve the dust budget crisis only if dust is produced by both low mass stars and supernovae and is not efficiently destroyed by supernova shocks.
K. Rowlands, H. Gomez, L. Dunne, et al.

Thu, 13 Mar 14

A Dark Matter Progenitor: Light Vector Boson Decay into (Sterile) Neutrinos [CL]
We show that the existence of new, light gauge interactions coupled to Standard Model (SM) neutrinos gives rise to an abundance of sterile neutrinos through the sterile neutrinos' mixing with the SM. Specifically, in the mass range of MeV-GeV and coupling of $g' \sim 10^{-6} – 10^{-2}$, the decay of this new vector boson in the early universe produces a sufficient quantity of sterile neutrinos to account for the observed dark matter abundance. Interestingly, this can be achieved within a natural extension of the SM gauge group, such as a gauged $L_\mu-L_\tau$ number, without any tree-level coupling between the new vector boson and the sterile neutrino states. Such new leptonic interactions might also be at the origin of the well-known discrepancy associated with the anomalous magnetic moment of the muon.
B. Shuve and I. Yavin

Mapping the particle acceleration in the cool core of the galaxy cluster RX J1720.1+2638 [CEA]
We present new deep, high-resolution radio images of the diffuse minihalo in the cool core of the galaxy cluster RX J1720.1+2638. The images have been obtained with the Giant Metrewave Radio Telescope at 317, 617 and 1280 MHz and with the Very Large Array at 1.5, 4.9 and 8.4 GHz, with angular resolutions ranging from 1″ to 10″. This represents the best radio spectral and imaging dataset for any minihalo. Most of the radio flux of the minihalo arises from a bright central component with a maximum radius of ~80 kpc. A fainter tail of emission extends out from the central component to form a spiral-shaped structure with a length of ~230 kpc, seen at frequencies of 1.5 GHz and below. We observe steepening of the total radio spectrum of the minihalo at high frequencies. Furthermore, a spectral index image shows that the spectrum of the diffuse emission steepens with increasing distance along the tail. A striking spatial correlation is observed between the minihalo emission and two cold fronts visible in the Chandra X-ray image of this cool core.
These cold fronts confine the minihalo, as also seen in numerical simulations of minihalo formation by sloshing-induced turbulence. All these observations provide support for the hypothesis that the radio emitting electrons in cluster cool cores are produced by turbulent reacceleration.
S. Giacintucci, M. Markevitch, G. Brunetti, et al.

Einstein gravity of a diffusing fluid [CL]
We discuss Einstein gravity for a fluid consisting of particles interacting with an unidentified environment of some other particles whose dissipative effect is approximated by a diffusion. The environment is described by a time dependent cosmological term which compensates for the lack of energy-momentum conservation of the diffusing fluid. We are interested in a homogeneous flat expanding Universe described by a scale factor $a$. For a fluid of massless particles at finite temperature we obtain explicit solutions of the diffusion equation which are in the form of a modified Jüttner distribution with a time dependent temperature. At later times the evolution of the Universe is described as diffusion at zero temperature with no equilibration. We find solutions of the diffusion at zero temperature which can be treated as a continuation to later times of the finite temperature solutions describing an early stage of the Universe. Conservation of the total energy momentum determines the cosmological term up to a constant. The resulting energy momentum inserted into the Einstein equations gives a modified Friedmann equation. Solutions of the Friedmann equation depend on the initial value of the cosmological term. A large value of the cosmological constant implies an exponential expansion. If the initial conditions allow a power-like solution at large times then it must be of the form $a\simeq \tau$ (no deceleration, where $\tau$ is the cosmic time). The final stage of the Universe evolution is described by a non-relativistic diffusion of cold dust.
Z. Haba

Measuring the power spectrum of dark matter substructure using strong gravitational lensing [CEA]
In recent years, it has become possible to detect individual dark matter subhalos near strong gravitational lenses. Typically, only the most massive subhalos in the strong lensing region may be detected this way. In this work, we show that strong lenses may also be used to constrain the much more numerous population of lower mass subhalos that are too small to be detected individually. In particular, we show that the power spectrum of projected density fluctuations in galaxy halos can be measured using strong gravitational lensing. We develop the mathematical framework of power spectrum estimation, and test our method on mock observations. We use our results to determine the types of observations required to measure the substructure power spectrum with high significance. We predict that deep observations with current facilities (in particular ALMA) can measure this power spectrum, placing strong constraints on the abundance of dark matter subhalos and the underlying particle nature of dark matter.
Y. Hezaveh, N. Dalal, G. Holder, et al.

The Evolution of Galaxy Structure over Cosmic Time [CEA]
I present a comprehensive review of the evolution of galaxy structure in the universe, from the first galaxies we can currently observe at z~6 down to galaxies we see in the local universe. I further address how these changes reveal the galaxy formation processes that structural analyses can probe.
This review is pedagogical and begins with a detailed discussion of the major methods by which galaxies are studied morphologically and structurally. These include the well-established visual method; Sersic fitting to measure galaxy sizes and surface brightness profile shapes; and non-parametric structural methods, including the concentration (C), asymmetry (A), clumpiness (S) (CAS) method, as well as newer structural indices. Included is a discussion of how these structural indices measure fundamental properties of galaxies such as their scale, star formation rate, and ongoing merger activity. Extensive observational results are shown demonstrating how broad galaxy morphologies and structures change with time up to z~3, from small, compact and peculiar systems in the distant universe to the formation of the Hubble sequence we find today. This review further addresses how structural methods accurately measure the merger history out to z~3. The properties and evolution of bulges, disks, bars, and, at z>1, large star forming clumps are also described, along with how morphological galaxy quenching occurs. Furthermore, the role of environment in producing structure in galaxies over cosmic time is treated. Alongside the evolution of general structure, I also delineate how galaxy sizes change with time, with measured sizes up to a factor of 2-5 smaller at high redshift at a given stellar mass. This review concludes with a discussion of how galaxy structure reveals the formation mechanisms behind galaxies, providing a new and unique way to test theories of galaxy formation.
C. Conselice

Graceful exit from inflation to radiation era with rapidly decreasing agegraphic potentials [CEA]
We present a class of models where both the primordial inflation and the late-time de Sitter phase are driven by simple phenomenological agegraphic potentials. In this context, a possible new scenario for a smooth exit from inflation to the radiation era is discussed by resorting to the kination (stiff) era, but without the inefficient radiation production mechanism of these models. This is done by considering rapidly decreasing expressions for $V(t)$ soon after inflation. We show that the parameters of our models can reproduce the scalar spectral parameter $n_s$ predicted by Planck data, in particular for models with concave potentials.
S. Viaggiu

Mergers drive spin swings along the cosmic web [CEA]
The close relationship between mergers and the reorientation of the spin for galaxies and their host dark haloes is investigated using a cosmological hydrodynamical simulation (Horizon-AGN). Through a statistical analysis of merger trees, we show that spin swings are mainly driven by mergers along the filamentary structure of the cosmic web, and that these events account for the preferred perpendicular orientation of massive galaxies with respect to their nearest filament. By contrast, low-mass galaxies (M_s<10^10 M_sun at redshift 1.5), undergoing very few mergers, if any at all, tend to possess a spin well aligned with their filament. Haloes follow the same trend as galaxies but display a greater sensitivity to smooth anisotropic accretion. The relative effect of mergers on spin magnitude is qualitatively different for minor and major mergers: mergers (and diffuse accretion) generally increase the magnitude of the angular momentum, but the most massive major mergers also give rise to a population of objects with less spin left. Without mergers, secular accretion builds up the spin of galaxies but not that of haloes.
It also (re)aligns galaxies with their filament.
C. Welker, J. Devriendt, Y. Dubois, et al.

Herschel-ATLAS: Properties of dusty massive galaxies at low and high redshifts [CEA]
We present a comparison of the physical properties of a rest-frame $250\mu$m selected sample of massive, dusty galaxies at $0<z<5.3$. Our sample comprises 29 high-redshift submillimetre galaxies (SMGs) from the literature, and 843 dusty galaxies at $z<0.5$ from the Herschel-ATLAS, selected to have a similar stellar mass to the SMGs. The $z>1$ SMGs have an average SFR of $390^{+80}_{-70}\,$M$_\odot$yr$^{-1}$, which is 120 times that of the low-redshift sample matched in stellar mass to the SMGs (SFR$=3.3\pm{0.2}$ M$_\odot$yr$^{-1}$). The SMGs harbour a substantial mass of dust ($1.2^{+0.3}_{-0.2}\times{10}^9\,$M$_\odot$), compared to $(1.6\pm0.1)\times{10}^8\,$M$_\odot$ for low-redshift dusty galaxies. At low redshifts the dust luminosity is dominated by the diffuse ISM, whereas a large fraction of the dust luminosity in SMGs originates from star-forming regions. At the same dust mass, SMGs are offset towards a higher SFR compared to the low-redshift H-ATLAS galaxies. This is not only due to the higher gas fraction in SMGs but also because they are undergoing a more efficient mode of star formation, which is consistent with their bursty star-formation histories. The offset in SFR between SMGs and low-redshift galaxies is similar to that found in CO studies, suggesting that dust mass is as good a tracer of molecular gas as CO.
K. Rowlands, L. Dunne, S. Dye, et al.

AzTEC/ASTE 1.1 mm survey of SSA22: Counterpart identification and photometric redshift survey of submillimeter galaxies [CEA]
We present the results from a 1.1 mm imaging survey of the SSA22 field, known for having an overdensity of z=3.1 Lyman-alpha emitting galaxies (LAEs), taken with the AzTEC camera on the Atacama Submillimeter Telescope Experiment (ASTE). We imaged a 950 arcmin$^2$ field down to a 1 sigma sensitivity of 0.7-1.3 mJy/beam to find 125 submillimeter galaxies (SMGs) with a signal to noise ratio >= 3.5. Counterpart identification using radio and near/mid-infrared data was performed, and one or more counterpart candidates were found for 59 SMGs. Photometric redshifts based on optical to near-infrared images were evaluated for 45 of these SMGs with Spitzer/IRAC data, and the median value is found to be z=2.4. By combining these estimates with those from the literature we determined that 10 SMGs might lie within the large-scale structure at z=3.1. The two-point angular cross-correlation function between LAEs and SMGs indicates that the positions of the SMGs are correlated with the z=3.1 protocluster. These results suggest that the SMGs formed and evolved selectively in the dense environments of the high-redshift universe. This picture is consistent with the predictions of the standard model of hierarchical structure formation.
H. Umehata, Y. Tamura, K. Kohno, et al.

Major Cluster Mergers and the Location of the Brightest Cluster Galaxy [CEA]
Using a large N-body cosmological simulation combined with a subgrid treatment of galaxy formation, we study the formation and evolution of the galaxy and cluster population in a comoving volume (100 Mpc)^3 in a LCDM universe. At z = 0, our computational volume contains 1788 clusters with mass M_cl > 1.1×10^12 Msun, including 18 massive clusters with M_cl > 10^14 Msun. It also contains 1,088,797 galaxies with mass M_gal > 2×10^9 Msun and luminosity L > 9.5×10^5 Lsun.
For each cluster, we identified the brightest cluster galaxy (BCG). We then computed the fraction f_BNC of clusters in which the BCG is not the closest galaxy to the center of the cluster in projection, and the ratio Dv/s, where Dv is the difference in radial velocity between the BCG and the whole cluster, and s is the radial velocity dispersion of the cluster. f_BNC increases from 0.05 for low-mass clusters (M_cl ~ 10^12 Msun) to 0.5 for high-mass ones (M_cl > 10^14 Msun), with no dependence on cluster redshift. The values of Dv/s vary from 0 to 1.8. These results are consistent with previous observational studies, and indicate that the central galaxy paradigm, which states that the BCG should be at rest at the center of the cluster, is usually valid, but exceptions are too common to be ignored. Analysis of the merger trees for the 18 most massive clusters in the simulation reveals that 16 of these clusters have experienced major mergers in the past. These mergers leave each cluster in a non-equilibrium state, but eventually the cluster settles into an equilibrium configuration, unless it is disturbed by another major merger. We found evidence that these mergers are responsible for the off-center positions and peculiar velocities of some BCGs. Our results thus support the merging-group scenario, in which some clusters form by the merger of smaller groups in which the galaxies have already formed.
H. Martel, F. Robichaud and P. Barai

Atmospheric effects in astroparticle physics experiments and the challenge of ever greater precision in measurements [IMA]
Astroparticle physics and cosmology allow us to scan the universe through multiple messengers. It is the combination of these probes that improves our understanding of the universe, both in its composition and its dynamics. Unlike other areas in science, research in astroparticle physics has a real originality in its detection techniques, in its infrastructure locations, and in the observed physical phenomena, which are not created directly by humans. It is these features that make the minimisation of statistical and systematic errors a perpetual challenge. In all these projects, the environment is turned into a detector medium or a target. The atmosphere is probably the most common environmental component in astroparticle physics, and its properties require continuous monitoring to minimise the associated systematic uncertainties as much as possible. This paper introduces the different atmospheric effects to take into account in astroparticle physics measurements and provides a non-exhaustive list of techniques and instruments to monitor the different elements composing the atmosphere. A discussion of the close link between astroparticle physics and Earth sciences ends this paper.
K. Louedec
Posted in Instrumentation and Methods for Astrophysics | Tagged Atmospheric and Oceanic Physics, Cosmology and Extragalactic Astrophysics, High Energy Astrophysical Phenomena, High Energy Physics - Experiment, Instrumentation and Methods for Astrophysics

The extended ROSAT-ESO Flux Limited X-ray Galaxy Cluster Survey (REFLEX II) IV. X-ray Luminosity Function and First Constraints on Cosmological Parameters [CEA]
The X-ray luminosity function is an important statistic of the census of galaxy clusters and an important means to probe the cosmological model of our Universe.
Based on our recently completed REFLEX II cluster sample, we construct the X-ray luminosity function of galaxy clusters for several redshift slices from $z = 0$ to $z = 0.4$ and discuss its implications. We find no significant signature of redshift evolution of the luminosity function in this redshift interval. We provide the results of fits of a parameterized Schechter function and extensions of it, which provide a reasonable characterization of the data. Using a model for structure formation and galaxy cluster evolution, we compare the observed X-ray luminosity function with predictions for different cosmological models. For the most interesting constraints, on the cosmological parameters $\Omega_m$ and $\sigma_8$, we obtain $\Omega_m \sim 0.27 \pm 0.03$ and $\sigma_8 \sim 0.80 \pm 0.03$ based on the statistical uncertainty alone. Marginalizing over the most important uncertainties, the normalisation and slope of the $L_X – M$ scaling relation, we find $\Omega_m \sim 0.29 \pm 0.04$ and $\sigma_8 \sim 0.77 \pm 0.07$ ($1\sigma$ confidence limits). We compare our results with those of the SZ-cluster survey provided by the PLANCK mission, and we find very good agreement with the results using PLANCK clusters as cosmological probes, but some tension with the PLANCK cosmological results from the microwave background anisotropies. We also make a comparison with other cluster surveys. We find good agreement with these previous results and show that the REFLEX II survey provides a significant reduction in the uncertainties compared to earlier measurements.
H. Bohringer, G. Chon and C. Collins

NuSTAR and XMM-Newton Observations of Luminous, Heavily Obscured, WISE-Selected Quasars at z ~ 2 [GA]
We report on a NuSTAR and XMM-Newton program that has observed a sample of three extremely luminous, heavily obscured WISE-selected AGN at z~2 in a broad X-ray band (0.1-79 keV). The parent sample, selected to be faint or undetected in the WISE 3.4um (W1) and 4.6um (W2) bands but bright at 12um (W3) and 22um (W4), is extremely rare, with only ~1000 so-called W1W2-dropouts across the extragalactic sky. Optical spectroscopy reveals typical redshifts of z~2 for this population, implying rest-frame mid-IR luminosities of L(6um)~6e46 erg/s and bolometric luminosities that can exceed L(bol)~1e14 L(sun). The corresponding intrinsic, unobscured hard X-ray luminosities are L(2-10)~4e45 erg/s for typical quasar templates. These are amongst the most luminous AGN known, though the optical spectra rarely show evidence of a broad-line region and the selection criteria imply heavy obscuration even at rest-frame 1.5um. We designed our X-ray observations to obtain robust detections for gas column densities N(H)<1e24 /cm2. In fact, the sources prove to be fainter than these predictions. Two of the sources were observed by both NuSTAR and XMM-Newton, with neither being detected by NuSTAR and one being faintly detected by XMM-Newton. A third source was observed only with XMM-Newton, yielding a faint detection. The X-ray data require gas column densities N(H)>1e24 /cm2, implying the sources are extremely obscured, consistent with Compton-thick, luminous quasars. The discovery of a significant population of heavily obscured, extremely luminous AGN does not conform to the standard paradigm of a receding torus, in which more luminous quasars are less likely to be obscured. If a larger sample conforms with this finding, then this suggests an additional source of obscuration for these extreme sources.
D. Stern, G. Lansbury, R. Assef, et al.

The impact of dark energy perturbations on the growth index [CEA]
We show that in clustering dark energy models the growth index of linear matter perturbations, $\gamma$, can be much lower than in $\Lambda$CDM or smooth quintessence models and can present a strong variation with redshift. We find that the impact of dark energy perturbations on $\gamma$ is enhanced if the dark energy equation of state has a large and rapid decay at low redshift. We study four different models with these features and show that we may have $0.33<\gamma\left(z\right)<0.48$ at $0<z<3$. We also show that the constant $\gamma$ parametrization for the growth rate, $f=d\ln\delta_{m}/d\ln a=\Omega_{m}^{\gamma}$, is a few percent inaccurate for such models and that a redshift dependent parametrization for $\gamma$ can provide about four times more accurate fits for $f$. We discuss the robustness of the growth index for distinguishing between General Relativity with clustering dark energy and modified gravity models, finding that some $f\left(R\right)$ and clustering dark energy models can present similar values for $\gamma$.
R. Batista

Point source calibration of the AKARI/FIS all-sky survey maps for stacking analysis [IMA]
Investigations of the point spread functions (PSFs) and flux calibrations for stacking analysis have been performed with the far-infrared (wavelength range of 60 to 140 um) all-sky maps taken by the Far-Infrared Surveyor (FIS) onboard the AKARI satellite. The PSFs are investigated by stacking the maps at the positions of standard stars with fluxes of 0.02-10 Jy. The derived full widths at half maximum (FWHMs) of the PSFs are ~ 60 arcsec at 65 and 90 um and ~ 90 arcsec at 140 um, which are much smaller than those of the previous all-sky maps obtained with IRAS (~ 6 arcmin). No flux dependence of the PSFs is seen over the investigated flux range. By performing the flux calibrations, we found that absolute photometry for faint sources can be carried out with constant calibration factors, which range from 0.6 to 0.8. After applying the calibration factors, the photometric accuracies for the stacked sources in the 65, 90, and 140 um bands are 9, 3, and 21 %, respectively, even below the detection limits of the survey. No systematic dependence between the observed flux and the model flux is found. These results indicate that the FIS maps are a useful dataset for stacking analyses of faint sources at far-infrared wavelengths.
K. Arimatsu, Y. Doi, T. Wada, et al.

NuSTAR Observations of the Bullet Cluster: Constraints on Inverse Compton Emission [HEAP]
The search for diffuse non-thermal inverse Compton (IC) emission from galaxy clusters at hard X-ray energies has been undertaken with many instruments, with most detections being either of low significance or controversial. Background and contamination uncertainties present in the data of non-focusing observatories result in lower sensitivity to IC emission and a greater chance of false detection. We present 266 ks NuSTAR observations of the Bullet cluster, detected from 3-30 keV. NuSTAR's unprecedented hard X-ray focusing capability largely eliminates confusion between diffuse IC and point sources; however, at the highest energies the background still dominates and must be well understood.
To this end, we have developed a complete background model constructed of physically inspired components constrained by extragalactic survey field observations, the specific parameters of which are derived locally from data in non-source regions of the target observations. Applying the background model to the Bullet cluster data, we find that the spectrum is well – but not perfectly – described as an isothermal plasma with kT=14.2+/-0.2 keV. To slightly improve the fit, a second temperature component is added, which appears to account for lower temperature emission from the cool core, pushing the primary component to kT~15.3 keV. We see no convincing need to invoke an IC component to describe the spectrum of the Bullet cluster, and instead argue that it is dominated at all energies by emission from purely thermal gas. The conservatively derived 90% upper limit on the IC flux of 1.1e-12 erg/s/cm^2 (50-100 keV), implying a lower limit of B>0.2 $\mu$G on the magnetic field, is barely consistent with detected fluxes previously reported. In addition to discussing the possible origin of this discrepancy, we remark on the potential implications of this analysis for the prospects of detecting IC in galaxy clusters in the future.
D. Wik, A. Hornstrup, S. Molendi, et al.

Gravitational Lensing of the CMB: a Feynman Diagram Approach [CEA]
We develop a Feynman diagram approach to calculating correlations of the Cosmic Microwave Background (CMB) in the presence of distortions. As one application, we focus on CMB distortions due to gravitational lensing by Large Scale Structure (LSS). We study the Hu-Okamoto quadratic estimator for extracting lensing from the CMB and derive the noise of the estimator up to ${\mathcal O}(\phi^4)$ in the lensing potential $\phi$. The previously noted large ${\mathcal O}(\phi^4)$ term can be significantly reduced by a reorganization of the $\phi$ expansion. Our approach makes it simple to obtain expressions for quadratic estimators based on any CMB channel. We briefly discuss other applications to cosmology of this diagrammatic approach, such as distortions of the CMB due to patchy reionization, or due to Faraday rotation from primordial axion fields.
E. Jenkins, A. Manohar, W. Waalewijn, et al.

Wed, 12 Mar 14

Relating the anisotropic power spectrum to the CMB hemispherical anisotropy [CEA]
We relate the observed hemispherical anisotropy in the cosmic microwave background radiation data to an anisotropic power spectrum model. The hemispherical anisotropy can be parameterized in terms of the dipole modulation model. This model also leads to correlations between spherical harmonic coefficients corresponding to multipoles l and l+1. We extract the l dependence of the dipole modulation amplitude, A, by making a fit to the l dependence of the correlations between harmonic coefficients using PLANCK CMBR data. We propose an anisotropic power spectrum model which also leads to correlations between different multipoles. This power spectrum is determined by making a fit to the data. We find that the spectral index of the anisotropic power spectrum is consistent with zero.
P. Rath and P. Jain

Star forming filaments in warm dark matter models [CEA]
We performed a hydrodynamical cosmological simulation of the formation of a Milky Way-like galaxy in a warm dark matter (WDM) cosmology. Smooth and dense filaments, several co-moving megaparsecs long, form generically above z ~ 2 in this model.
Atomic line cooling allows gas in the centres of these filaments to cool to the base of the cooling function, resulting in a very striking pattern of extended Lyman-limit systems (LLSs). Observations of the correlation function of LLSs might hence provide useful limits on the nature of the dark matter. We argue that the self-shielding of filaments may lead to a thermal instability resulting in star formation. We implement a sub-grid model for this, and find that filaments rather than haloes dominate star formation until z ~ 6. Reionisation decreases the gas density in filaments, and the more usual star formation in haloes dominates below z ~ 6, although star formation in filaments continues until z=2. Fifteen per cent of the stars of the z=0 galaxy formed in filaments. At higher redshift, these stars give galaxies a stringy appearance, which, if observed, might be a strong indication that the dark matter is warm.
L. Gao, T. Theuns and V. Springel
Posted in Cosmology and Extragalactic Astrophysics | Tagged Cosmology and Extragalactic Astrophysics, High Energy Astrophysical Phenomena

Deep radio observations of the radio halo of the bullet cluster 1E 0657-55.8 [CEA]
We present deep 1.1-3.1 GHz Australia Telescope Compact Array observations of the radio halo of the bullet cluster, 1E 0657-55.8. In comparison to existing images of this radio halo, the detection in our images is at higher significance. The radio halo is as extended as the X-ray emission in the direction of the cluster merger but is significantly less extended than the X-ray emission in the perpendicular direction. At low significance we detect a faint second peak in the radio halo close to the X-ray centroid of the smaller sub-cluster (the bullet), suggesting that, similarly to the X-ray emission, the radio halo may consist of two components. Finally, we find that the distinctive shape of the western edge of the radio halo traces out the X-ray detected bow shock. The radio halo morphology and the lack of strong point-to-point correlations between radio, X-ray and weak-lensing properties suggest that the radio halo is still being formed. The colocation of the X-ray shock with a distinctive radio brightness edge illustrates that the shock is influencing the structure of the radio halo. These observations support the theory that shocks and turbulence influence the formation and evolution of radio halo synchrotron emission.
T. Shimwell, S. Brown, I. Feain, et al.

Satellite abundances around bright isolated galaxies II: radial distribution and environmental effects [CEA]
We use the SDSS/DR8 galaxy sample to study the radial distribution of satellite galaxies around isolated primaries, comparing it to semi-analytic models of galaxy formation based on the Millennium and Millennium-II simulations. SDSS satellites behave differently around high- and low-mass primaries: those orbiting objects with $M_*>10^{11}M_\odot$ are mostly red and are less concentrated towards their host than the inferred dark matter halo, an effect that is very pronounced for the few blue satellites. On the other hand, less massive primaries have steeper satellite profiles that agree quite well with the expected dark matter distribution and are dominated by blue satellites, even in the inner regions where strong environmental effects are expected. In fact, such effects appear to be strong only for primaries with $M_* > 10^{11}M_\odot$.
This behaviour is not reproduced by current semi-analytic simulations, where satellite profiles always parallel those of the dark matter and satellite populations are predominantly red for primaries of all masses. The disagreement with SDSS suggests that environmental effects are too efficient in the models. Modifying the treatment of environmental and star formation processes can substantially increase the fraction of blue satellites, but their radial distribution remains significantly shallower than observed. It seems that most satellites of low-mass primaries can continue to form stars even after orbiting within their joint halo for 5 Gyr or more.
W. Wang, L. Sales, B. Henriques, et al.

Evolution induced by dry minor mergers on to Fast Rotator S0 galaxies [GA]
We have analysed collisionless N-body simulations of intermediate and minor dry mergers on to S0s to test whether these mergers can generate S0 galaxies with intermediate kinematics between Fast and Slow Rotators. We find that minor mergers induce a lower decrease of the global rotational support than encounters of lower mass ratios, giving rise to S0s with intermediate properties between Fast and Slow Rotators. The resulting remnants are intrinsically more triaxial, less flattened, and span the whole range of apparent ellipticities up to $\epsilon_\mathrm{e} \sim 0.8$. They do not show lower apparent ellipticities in random projections than initially; on the contrary, the formation of oval distortions and the disc thickening raise the percentage of projections at $0.4 < \epsilon_\mathrm{e} < 0.7$. In the experiments with S0b progenitor galaxies, minor mergers tend to spin up the bulge and to slightly decrease its intrinsic ellipticity, whereas in the cases of primary S0c galaxies they keep the rotational support of the bulge nearly constant and significantly decrease its intrinsic ellipticity. The remnant bulges remain nearly spherical ($B/A \sim C/A > 0.9$), but exhibit a wide range of triaxialities ($0.20 < T < 1.00$). In the plane of global anisotropy of velocities ($\delta$) vs. intrinsic ellipticity ($\epsilon_\mathrm{e,intr}$), some of our models extend the linear trend found in previous major merger simulations towards higher $\epsilon_\mathrm{e,intr}$ values, while others depart from it. This is consistent with the wide dispersion exhibited by real S0s in this diagram as compared to ellipticals, which follow the linear trend drawn by major merger simulations. The different trends exhibited by ellipticals and S0 galaxies in the $\delta$ vs. $\epsilon_\mathrm{e}$ diagram may point to the different roles played by major mergers in the buildup of each morphological type.
T. Tapia, M. Eliche-Moral, M. Querejeta, et al.

A Broadband Polarization Catalog of Extragalactic Radio Sources [CEA]
An understanding of cosmic magnetism requires converting the polarization properties of extragalactic radio sources into the rest frame in which the corresponding polarized emission or Faraday rotation is produced. Motivated by this requirement, we present a catalog of multiwavelength linear polarization and total intensity radio data for polarized sources from the NRAO VLA Sky Survey (NVSS). We cross-match these sources with a number of complementary measurements, combining data from major radio polarization and total intensity surveys such as AT20G, B3-VLA, GB6, NORTH6CM, Texas, and WENSS, together with other polarization data published over the last 50 years.
For 951 sources, we present spectral energy distributions (SEDs) in both fractional polarization and total intensity, each containing between 3 and 56 independent measurements from 400 MHz to 100 GHz. We physically model these SEDs, and where available provide the redshift of the optical counterpart. For a superset of 25,649 sources we provide the total intensity spectral index, $\alpha$. Objects with steep versus flat $\alpha$ generally have different polarization SEDs: steep-spectrum sources exhibit depolarization, while flat-spectrum sources maintain constant polarized fractions over large ranges in wavelength. This suggests that the run of polarized fraction with wavelength is predominantly affected by the local source environment, rather than by unrelated foreground magnetoionic material. In addition, a significant fraction (21%) of sources exhibit "repolarization", which further suggests that polarized SEDs are affected by different emitting regions within the source, rather than by a particular depolarization law. This has implications for the physical interpretation of future broadband polarimetric surveys.
J. Farnes, B. Gaensler and E. Carretti

The Luminosity Function of Galaxies as modeled by a left truncated beta distribution [CEA]
A first new luminosity function of galaxies can be built starting from a left truncated beta probability density function, which is characterized by four parameters. In the astrophysical conversion, the number of parameters increases by one, due to the addition of the overall density of galaxies. A second new galaxy luminosity function is built starting from a left truncated beta probability for the mass of galaxies once a simple nonlinear relationship between mass and luminosity is assumed; in this case the number of parameters is six, because the overall density of galaxies and a parameter that regulates mass and luminosity are added. The two new galaxy luminosity functions with finite boundaries were tested on the Sloan Digital Sky Survey (SDSS) in five different bands; the results produce a "better fit" than the Schechter luminosity function in two of the five bands considered. A modified Schechter luminosity function with four parameters has also been analyzed.
L. Zaninetti

21cm fluctuations from primordial magnetic fields [CEA]
Recent discoveries of magnetic fields in intergalactic void regions and in high redshift galaxies may indicate that large scale magnetic fields have a primordial origin. If primordial magnetic fields were present soon after the recombination epoch, they would induce density fluctuations on the one hand and dissipate their energy into the primordial gas on the other, and thereby significantly alter the thermal history of the universe. Here we consider both effects and calculate the brightness temperature fluctuations of the 21cm line using simple Monte-Carlo simulations. We find that the fluctuations of the 21cm line from the energy dissipation appear only on very small scales, and that those from the density fluctuations always dominate on observationally relevant angular scales.
M. Shiraishi, H. Tashiro and K. Ichiki

The Stochastic Gravitational Wave Background Generated by Cosmic String Networks: the Small-Loop Regime [CEA]
We consider an alternative approach for the computation of the stochastic gravitational wave background generated by small loops produced throughout the cosmological evolution of cosmic string networks and use it to derive an analytical approximation to the corresponding power spectrum.
We show that this approximation produces an excellent fit to more elaborate results obtained using the Velocity-dependent One-Scale model to describe cosmic string network dynamics, over a wide frequency range, in the small-loop regime. L. Sousa and P. Avelino Constraints on Cosmological Models from Hubble Parameters Measurements [CEA] In this paper, we study the cosmological constraints from the measurements of Hubble parameters—$H(z)$ data. Here, we consider two kinds of $H(z)$ data: the direct $H_0$ probe from the Hubble Space Telescope (HST) observations of Cepheid variables with $H_0=73.8\pm2.4$ ${\rm km\,s^{-1}\,Mpc^{-1}}$ and several measurements of the Hubble parameter at high redshifts $H(z)$. Employing the Markov Chain Monte Carlo method, we also combine the WMAP nine-year data (WMAP9), the baryon acoustic oscillations (BAO) and type Ia supernovae (SNIa) "Union2.1" compilation to determine the cosmological parameters, such as the equation of state (EoS) of dark energy $w$, the curvature of the universe $\Omega_k$, the total neutrino mass $\sum{m_\nu}$, the effective number of neutrinos $N_{\rm eff}$, and the parameters associated with the power spectrum of primordial fluctuations. These $H(z)$ data provide extra information on the acceleration rate of our Universe at high redshifts. Therefore, adding these $H(z)$ data significantly improves the constraints on cosmological parameters, such as the number of relativistic species. Moreover, we find that a direct prior on $H_0$ from HST can also give good constraints on some parameters, due to the degeneracies between these parameters and $H_0$. W. Zheng, H. Li, J. Xia, et al. NuSTAR J033202-2746.8: direct constraints on the Compton reflection in a heavily obscured quasar at z~2 [GA] We report NuSTAR observations of NuSTAR J033202-2746.8, a heavily obscured, radio-loud quasar detected in the Extended Chandra Deep Field-South, the deepest layer of the NuSTAR extragalactic survey (~400 ks, at its deepest). NuSTAR J033202-2746.8 is reliably detected by NuSTAR only at E>8 keV and has a very flat spectral slope in the NuSTAR energy band (Gamma=0.55^{+0.62}_{-0.64}; 3-30 keV). Combining the NuSTAR data with extremely deep observations by Chandra and XMM-Newton (4 Ms and 3 Ms, respectively), we constrain the broad-band X-ray spectrum of NuSTAR J033202-2746.8, indicating that this source is a heavily obscured quasar (N_H=5.6^{+0.9}_{-0.8}x10^23 cm^-2) with luminosity L_{10-40 keV}~6.4×10^44 erg s^-1. Although existing optical and near-infrared (near-IR) data, as well as follow-up spectroscopy with the Keck and VLT telescopes, failed to provide a secure redshift identification for NuSTAR J033202-2746.8, we reliably constrain the redshift z=2.00+/-0.04 from the X-ray spectral features (primarily from the iron K edge). The NuSTAR spectrum shows a significant reflection component (R=0.55^{+0.44}_{-0.37}), which was not constrained by previous analyses of Chandra and XMM-Newton data alone. The measured reflection fraction is higher than the R~0 typically observed in bright radio-loud quasars such as NuSTAR J033202-2746.8, which has L_{1.4 GHz}~10^27 W Hz^-1. Constraining the spectral shape of AGN, including bright quasars, is very important for understanding the AGN population, and can have a strong impact on the modeling of the X-ray background. Our results show the importance of NuSTAR in investigating the broad-band spectral properties of quasars out to high redshift. A. Moro, J. Mullaney, D. Alexander, et al.
The Galaxy Cluster Mid-Infrared Luminosity Function at 1.3<z<3.2 [CEA] We present 4.5 {\mu}m luminosity functions for galaxies identified in 178 candidate galaxy clusters at 1.3 < z < 3.2. The clusters were identified as Spitzer/IRAC color-selected overdensities in the Clusters Around Radio-Loud AGN (CARLA) project, which imaged 421 powerful radio-loud AGN at z > 1.3. The luminosity functions are derived for different redshift and richness bins, and the IRAC imaging reaches depths of m*+2, allowing us to measure the faint end slopes of the luminosity functions. We find that {\alpha} = -1 describes the luminosity function very well in all redshifts bins and does not evolve significantly. This provides evidence that the rate at which the low mass galaxy population grows through star formation, gets quenched and is replenished by in-falling field galaxies does not have a major net effect on the shape of the luminosity function. Our measurements for m* are consistent with passive evolution models and high formation redshifts z_f ~ 3. We find a slight trend towards fainter m* for the richest clusters, implying that the most massive clusters in our sample could contain older stellar populations, yet another example of cosmic downsizing. Modelling shows that a contribution of a star-forming population of up to 40% cannot be ruled out. This value, found from our targeted survey, is significantly lower than the values found for slightly lower redshift, z ~ 1, clusters found in wide-field surveys. The results are consistent with cosmic downsizing, as the clusters studied here were all found in the vicinity of radio-loud AGNs — which have proven to be preferentially located in massive dark matter halos in the richest environments at high redshift — they may therefore be older and more evolved systems than the general protocluster population. D. Wylezalek, J. Vernet, C. Breuck, et. al. A thousand shadows of Andromeda: rotating planes of satellites in the Millennium-II cosmological simulation [CEA] In a recent contribution, Bahl \& Baumgardt investigated the incidence of planar alignments of satellite galaxies in the Millennium-II simulation, and concluded that vast thin planes of dwarf galaxies, similar to that observed in the Andromeda galaxy (M31), occur frequently by chance in $\Lambda$-Cold Dark Matter cosmology. However, their analysis did not capture the essential fact that the observed alignment is simultaneously radially extended, yet thin, and kinematically unusual. With the caveat that the Millennium-II simulation may not have sufficient mass resolution to identify confidently simulacra of low-luminosity dwarf galaxies, we re-examine that simulation for planar structures, using the same method as employed by Ibata et al. (2013) on the real M31 satellites. We find that 0.04\% of host galaxies display satellite alignments that are at least as extreme as the observations, when we consider their extent, thickness and number of members rotating in the same sense. We further investigate the angular momentum properties of the co-planar satellites, and find that the median of the specific angular momentum derived from the line of sight velocities in the real M31 structure ($1.3\times10^4$ km/s kpc) is very high compared to systems drawn from the simulations. This analysis confirms that it is highly unlikely that the observed structure around the Andromeda galaxy is due to a chance occurrence. 
Interestingly, the few extreme systems that are similar to M31 arise from the accretion of a massive sub-halo with its own spatially-concentrated entourage of orphan satellites. R. Ibata, N. Ibata, G. Lewis, et al. Debian Astro: An open computing platform for astronomy [IMA]
Geometry calibration in wireless acoustic sensor networks utilizing DoA and distance information Tobias Gburrek1, Joerg Schmalenstroeer1 & Reinhold Haeb-Umbach1 Due to the ad hoc nature of wireless acoustic sensor networks, the position of the sensor nodes is typically unknown. This contribution proposes a technique to estimate the position and orientation of the sensor nodes from the recorded speech signals. The method assumes that a node comprises a microphone array with synchronously sampled microphones rather than a single microphone, but does not require the sampling clocks of the nodes to be synchronized. From the observed audio signals, the distances between the acoustic sources and arrays, as well as the directions of arrival, are estimated. They serve as input to a non-linear least squares problem, from which both the sensor nodes' positions and orientations, as well as the source positions, are alternatingly estimated in an iterative process. Given one set of unknowns, i.e., either the source positions or the sensor nodes' geometry, the other set of unknowns can be computed in closed-form. The proposed approach is computationally efficient and the first one, which employs both distance and directional information for geometry calibration in a common cost function. Since both distance and direction of arrival measurements suffer from outliers, e.g., caused by strong reflections of the sound waves on the surfaces of the room, we introduce measures to deemphasize or remove unreliable measurements. Additionally, we discuss modifications of our previously proposed deep neural network-based acoustic distance estimator, to account not only for omnidirectional sources but also for directional sources. Simulation results show good positioning accuracy and compare very favorably with alternative approaches from the literature. A wireless acoustic sensor network (WASN) consists of sensor nodes, which are connected via a wireless link and where each node is equipped with one or more microphones, a computing and a networking module [1, 2]. A network of distributed microphones offers the advantage of superior signal capture, because it increases the probability that a sensor is close to every relevant sound source, be it a desired signal or an interfering source. Information about the position of an acoustic source may be used for acoustic beamforming and for realizing location-based functionality, such as switching on lights depending on a speaker's position or steering a camera to a speaker who is outside its field of view. Source position information is also beneficial for the estimation of the phase offset between the sampling oscillators of the distributed sensor nodes [3, 4]. However, source location information can only be obtained from the audio signals without using additional prior knowledge, e.g., about source position candidates, like it is used in fingerprinting-based methods [5, 6], if the position of the sensors, i.e., the microphones, is known. This, however, is an unrealistic assumption, because one of the key advantages of WASNs is that they are typically an ad hoc network formed by non-stationary devices, e.g., the smartphones of users, and, possibly, stationary devices, such as a TV or a smart speaker. For such a setup, the spatial configuration and even the number of sensor nodes is unknown a priori and may even be changing over time, e.g., with people, and thus smartphones, entering and leaving the setup. 
Geometry calibration refers to the task of determining the spatial position of the distributed microphones [7]. In case of sensor nodes equipped with an array of microphones [8], the orientation of the array is also of interest. An ideal calibration algorithm should infer the geometry of the network while the network is being used, i.e., solely from the recorded audio signals, neither requiring the playback of special calibration signals nor human assistance through manually measured distances. The calibration should be fast, not only during initial setup but also when detecting a change in the network configuration [9] which triggers a re-calibration. There is a further desirable feature, which is the independence from synchronized sampling clocks across the network (see [10–12]). Clearly, the tasks of geometry calibration and synchronization of the sensor nodes' sampling clocks are often closely linked [7]. Geometry calibration approaches relying on time difference of arrival (TDoA) [13, 14], time of arrival (ToA) [15], or time of flight (ToF) [16] information investigate time points of sound emission and/or intersignal delays, requiring that the clocks of the sensor nodes are synchronized. Only the direction of arrival (DoA)-based approach does not require clock synchronization at (sub-)sample precision. Here, the assumption is that sensor nodes are equipped with microphone arrays to be able to estimate the angle under which an acoustic source is observed. This requires that the microphones comprising the array share the same clock signal, while the clocks at different nodes only need to be coarsely synchronized, e.g., via [17–20]. That coarse synchronization, i.e., a synchronization with an accuracy of a few tens of milliseconds, is necessary to identify same signal segments across devices. DoA-based calibration obviously suffers from scale indeterminacy: only a relative geometry can be estimated, as no information is available to infer an absolute distance. Once measurements are given, be it ToA, TDoA, DoA or even combinations thereof [21, 22], the actual estimation of the spatial arrangement of the network amounts to the optimization of a cost function, which measures the agreement of an assumed geometry with the given measurements [13, 23–27]. This typically is a non-linear least squares (LS) problem [28, 29], for which no closed-form solution is known. Due to the non-convexity of the problem, iterative solutions depend on the initialization. What complicates matters further is the fact that the acoustic measurements, such as DoAs, suffer from reverberation, which results in outliers that can spoil the geometry calibration process. To combat those, the iterative optimization is often embedded in a random sample consensus (RANSAC) method [30], which, however, significantly increases the computational load. The approach presented here offers two innovations. First, we employ acoustic distance estimates, in addition to DoA measurements, which will solve the scale ambiguity of purely DoA-based geometry calibration and still renders clock synchronization at sample precision unnecessary. Compared to our previous approach presented in [31] which already utilized DoA and distance estimates in a two-stage manner, the approach proposed in the paper at hand combines both types of estimates directly in a common cost function. 
In [32, 33], it has been shown how the distance between an acoustic source and a microphone array can be estimated from the coherent-to-diffuse power ratio (CDR), the ratio between the power of the coherent, and the diffuse part of the received audio signal. The authors employed Gaussian processes (GPs) to estimate the distance between a close pair of microphones and the acoustic source. This technique performed well if the GP was trained in the target environment but generalized poorly to new acoustic environments. Better generalization capabilities were achieved by deep neural network (DNN)-based acoustic distance estimation, where the network was exposed to many different acoustic environments during training [31]. However, this approach to distance estimation needs signal segments where a coherent source is active for a time around 1 s to work well. This requirement excludes impulsive source signals but is generally fulfilled by speech. Therefore, we consider speech as source signal but do not exclude other acoustic sources. In the contribution at hand, we build upon the DNN approach and further generalize it to perform better in the presence of directional sources. The second contribution of this paper is the formulation of geometry calibration as a data set matching problem, similarly to [13], however, employing both distance and DoA estimates. Since data set matching can be efficiently realized, it greatly reduces the computational complexity of the task and thus the time it takes to estimate the geometry compared to a gradient-based optimization of a cost function. Moreover, we integrate the data set matching into an error-model-based re-weighting scheme and present a formal proof of convergence for it. The re-weighting scheme robustifies the geometry calibration process w.r.t. observations with large errors without the need of using a RANSAC. Additionally, a detailed experimental investigation of the proposed approach to geometry calibration is presented beside the mathematical analysis. Furthermore, the formulation as a data set matching problem allows the inference of the network's geometry even if it only consists of two sensor nodes, each equipped with at least three microphones which do not lie on a line. The paper is organized as follows: In Section 2, the geometry calibration problem and the notation is summarized, followed by the description of the cost function we investigate for geometry estimation in Section 3. Subsequently, the distance estimation via DNNs is briefly described in Section 4. In Section 5, the experimental results are summarized before we end the paper by drawing some conclusions in Section 6. Geometry calibration setup We consider a WASN, where a set of sensor nodes is randomly placed in a reverberant environment (see Fig. 1). Note that we investigate geometry calibration in a 2-dimensional space; however, the extension to 3-dimensional space is in principle straight-forward. Geometry calibration problem (red: sensor nodes; dark blue: acoustic sources; blue: source k; global coordinate system (x,y); local coordinate systems ( )) We assume that the internal geometric arrangement of each node's microphone array is known and that all microphones making up an array are synchronously sampled, which we consider a realistic assumption. 
To be able to identify which DoA and distance estimates made by the different sensor nodes correspond to the same source signal, we further assume that a coarse time synchronization, i.e., a synchronization with an accuracy of a few tens of milliseconds, exists between the clocks of the different sensor nodes. This can be established, e.g., by NTP [17] or PTP [18]. We do, however, not require time synchronization at the precision of a few parts per million (ppm). The WASN consists of L sensor nodes (red dots in Fig. 1), each equipped with a microphone array centered at positions \(\boldsymbol {n}_{l}{=}\left [\begin {array}{ll} n_{l,x} &n_{l,y} \end {array}\right ]^{\mathrm {T}}\)with an orientation θl,l∈{1,2,…,L} relative to the global coordinate system, which is spanned by the depicted coordinate axes x and y. Here, θl corresponds to the rotation angle between the local coordinate system of the l-th node and the global coordinate system, i.e., the angle between the positive x-axes of the global and the local coordinate system (measured counterclockwise from the positive x-axis to the positive y-axis). The K acoustic sources (blue dots in Fig. 1) are at positions \(\boldsymbol {s}_{k}{=}\left [\begin {array}{ll} s_{k,x} &s_{k,y} \end {array}\right ]^{\mathrm {T}}\), k∈{1,2,…,K}. We assume that only one source is active at any given time. Note that the positions of the sensor nodes nl, their orientations θl, and the positions of the acoustic sources sk are all unknown and will be estimated through a geometry calibration procedure from the observed acoustic source signals. The geometry calibration task amounts to determining the set Ωgeo={n1,…,nL,θ1,…,θL}. Furthermore, all source positions are gathered in the set Ωs={s1,…,sK}, which will be estimated alongside geometry calibration. This results in the set of all unknowns Ω=Ωgeo∪Ωs. Since a sensor node does not know its own position or orientation within the global coordinate system, all observations are given in the node's local coordinate system (see Fig. 2 for an illustration). In the following, the superscript (l) denotes that a quantity is measured in the local coordinate system of the l-th sensor node. Thus, the position of the k-th acoustic source, if expressed in the local coordinate system of the l-th sensor node, is denoted as \(\boldsymbol {s}_{k}^{(l)} {=} \left [s_{k,x}^{(l)},s_{k,y}^{(l)}\right ]^{\mathrm {T}}\). Quantities without a superscript are measured in the global coordinate system. For example, sk corresponds to the position of the k-th acoustic source described in the global coordinate system. Position of an acoustic source within the global coordinate system (x,y) and local coordinate system ( ) of node Each sensor node l, l∈{1,…,L}, computes DoA estimates \(\widehat {\varphi }_{k}^{(l)}\) and distance estimates \(\widehat {d}_{k}^{\:(l)}\) to the acoustic source k, k∈{1,…,K}, all w.r.t. the node's local coordinate system. Altogether, this results in K·L DoA estimates and K·L distance estimates available for geometry calibration. Geometry calibration using DoAs and source node distances To carry out geometry calibration, the given observations in the sensors' local coordinate systems have to be transferred to a common global coordinate system. Then, a cost function is defined that measures the fit of the transferred observations to an assumed geometry. The minimization of this cost function provides the positions and orientations of the sensor nodes, as well as the positions of the acoustic sources. 
Development of a cost function The position \(\boldsymbol {s}_{k}^{(l)}\) of source k w.r.t. the local coordinate system of sensor node l is given by $$\begin{array}{*{20}l} \boldsymbol{s}_{k}^{(l)} = d_{k}^{(l)} \left[\begin{array}{ll} \cos\left(\varphi_{k}^{(l)}\right) &\sin\left(\varphi_{k}^{(l)}\right) \end{array}\right]^{\mathrm{T}}. \end{array} $$ To project \(\boldsymbol {s}_{k}^{(l)}\) into the global coordinate system, the following translation and rotation operation is applied: $$\begin{array}{*{20}l} \boldsymbol{s}_{k} &= \boldsymbol{R}(\theta_{l}) \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l} \end{array} $$ $$\begin{array}{*{20}l} &= d_{k}^{(l)}\left[\begin{array}{ll} \cos\left(\varphi_{k}^{(l)} + \theta_{l}\right) \\ \sin\left(\varphi_{k}^{(l)} + \theta_{l}\right) \end{array}\right] + \boldsymbol{n}_{l}. \end{array} $$ $$\begin{array}{*{20}l} \boldsymbol{R}(\theta_{l}) = \left[\begin{array}{ll} \cos(\theta_{l}) & -\sin(\theta_{l}) \\ \sin(\theta_{l}) & \cos(\theta_{l})\end{array}\right] := \boldsymbol{R}_{l} \end{array} $$ denotes the rotation matrix corresponding to the rotation angle θl. If all distances and angles were perfectly known, all \(\boldsymbol {s}_{k}^{(l)}\) would map to a unique position sk. Hence, the geometry can be inferred by minimizing the deviation of the projected source positions from an assumed position sk by minimizing the LS cost function J(Ω): $$\begin{array}{*{20}l} \widehat{\Omega} = \underset{\Omega}{\operatorname{argmin}} \underbrace{\sum_{l=1}^{L} \sum_{k=1}^{K} \left\|\boldsymbol{s}_{k} - \left(\boldsymbol{R}_{l} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}\right)\right\|_{2}^{2}}_{:=J(\Omega)}, \end{array} $$ with ∥·∥2 denoting the Euclidean norm. Note that at least K=2 spatially different acoustic source positions have to be observed to arrive at an (over-)determined system of equations which is defined by \(\boldsymbol {s}_{k} = \boldsymbol {R}_{l} \boldsymbol {s}_{k}^{(l)} + \boldsymbol {n}_{l}\) with \(l \in \{1, \dots, L\}\) and \(k \in \{1, \dots, K\}\). There exists no closed-form solution for the non-linear optimization problem in (5). Thus, (5) has to be solved by an iterative optimization algorithm, e.g., by Newton's method as proposed in [23] or by gradient descent. Prior works, e.g., [23], have shown that the iterative optimization strongly depends on the initial values. Furthermore, the optimization is computationally demanding and, depending on the number of observed acoustic source positions, very time consuming, which limits its usefulness for WASNs with typically limited computational resources. In the following, we will present a computationally much more reasonable approach. Geometry calibration by data set matching We now interpret the relative acoustic source positions (see (1)) as the vertices of a rigid body. Matching the rigid body shapes as observed by the different sensor nodes will result in an efficient way for geometry calibration as described in [13]. In the following, we shortly recapitulate the concept of efficient geometry calibration based on data set matching [34, 35]. Let $$\begin{array}{*{20}l} \boldsymbol{S}^{(l)} = \left[\begin{array}{lll} \boldsymbol{s}_{1}^{(l)} &\cdots & \boldsymbol{s}_{K}^{(l)} \end{array}\right]. \end{array} $$ be the matrix of all K source positions, as measured in the local coordinate system of sensor node l. Similarly, let S be the same matrix of source positions, but now measured in the global coordinate system. 
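Before turning to the data set matching solution, the following Python sketch illustrates (1)-(5): it converts a node's distance/DoA observation into a relative source position, projects it into an assumed global geometry, and evaluates the least squares cost J(Ω). The array layout and function names are our own choices for illustration and are not prescribed by the paper.

```python
import numpy as np

def rotation_matrix(theta):
    """2-D rotation matrix R(theta), cf. (4)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def local_source_position(d, phi):
    """Relative source position s_k^(l) from distance and DoA, cf. (1)."""
    return d * np.array([np.cos(phi), np.sin(phi)])

def project_to_global(s_local, theta_l, n_l):
    """Map an observation s_k^(l) into the global coordinate system, cf. (2)-(3)."""
    return rotation_matrix(theta_l) @ s_local + n_l

def ls_cost(S_global, S_local, thetas, nodes):
    """Least squares cost J(Omega) of (5).

    S_global: (K, 2) assumed global source positions s_k
    S_local:  (L, K, 2) relative observations s_k^(l)
    thetas:   (L,) node orientations, nodes: (L, 2) node positions
    """
    cost = 0.0
    for theta_l, n_l, S_l in zip(thetas, nodes, S_local):
        proj = S_l @ rotation_matrix(theta_l).T + n_l   # rows are R_l s_k^(l) + n_l
        cost += np.sum((S_global - proj) ** 2)
    return cost
```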
The dispersion matrix Dl is defined as follows [35]: $$\begin{array}{*{20}l} \boldsymbol{D}_{l} = \frac{1}{K} \left(\boldsymbol{S}^{(l)} - \bar{\boldsymbol{s}}^{(l)}\boldsymbol{1}^{\mathrm{T}}\right) \boldsymbol{W}_{l} \left(\boldsymbol{S} -\bar{\boldsymbol{s}}\boldsymbol{1}^{\mathrm{T}}\right)^{\mathrm{T}}, \end{array} $$ where 1 denotes a vector of all ones. Wl is a diagonal matrix with (Wl)k,k=wkl, where (·)i,j denotes the i-th row and j-th column element of a matrix. \(\bar {\boldsymbol {s}}^{(l)}\) corresponds to the centroid of the observations made by sensor node l and \(\bar {\boldsymbol {s}}\) is the centroid of the source positions expressed in the global coordinate system: $$\begin{array}{*{20}l} \bar{\boldsymbol{s}}^{(l)} = \frac{\sum\limits_{k=1}^{K} w_{{kl}} \boldsymbol{s}_{k}^{(l)}}{\sum\limits_{k=1}^{K} w_{{kl}}} \:\:\:\: \text{ and} \:\:\:\: \bar{\boldsymbol{s}} = \frac{\sum\limits_{k=1}^{K} w_{{kl}} \boldsymbol{s}_{k}}{\sum\limits_{k=1}^{K} w_{{kl}}}. \end{array} $$ The weights wkl will be introduced in Section 3.3 to control the impact of an individual observation \(\boldsymbol {s}_{k}^{(l)}\) on the geometry estimates. Carrying out a singular value decomposition (SVD) of the dispersion matrix gives Dl=UΣVT. The estimate \(\widehat {\boldsymbol {R}}_{l}\) of the rotation matrix is then given by [34, 35] $$\begin{array}{*{20}l} \widehat{\boldsymbol{R}}_{l} = \boldsymbol{V}\boldsymbol{U}^{\mathrm{T}}, \end{array} $$ and the orientation of the corresponding sensor node by: $$\begin{array}{*{20}l} \widehat{\theta}_{l} = \arctan \! 2\left(\left(\widehat{\boldsymbol{R}}_{l}\right)_{1, 1}, \left(\widehat{\boldsymbol{R}}_{l}\right)_{2, 1}\right). \end{array} $$ Here, arctan 2 is the four-quadrant arc tangent. Thus, the l-th sensor node position estimate \(\widehat {\boldsymbol {n}}_{l}\) in the reference coordinate system is given by $$\begin{array}{*{20}l} \widehat{\boldsymbol{n}}_{l} = \bar{\boldsymbol{s}} - \widehat{\boldsymbol{R}}_{l} \bar{\boldsymbol{s}}^{(l)}. \end{array} $$ Note that the described data set matching procedure corresponds to minimizing the following cost function [34]: $$\begin{array}{*{20}l} J\left(\boldsymbol{n}_{l}, \boldsymbol{R}_{l}\right)= \sum_{k=1}^{K} w_{{kl}} \left|\left|\boldsymbol{s}_{k} - \left(\boldsymbol{R}_{l} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}\right)\right|\right|_{2}^{2}. \end{array} $$ Geometry calibration by iterative data set matching We now generalize the findings of the last section to an arbitrary number L of sensor nodes. Moreover, we consider the source positions as additional unknowns. The resulting cost function $$\begin{array}{*{20}l} J(\Omega) = \sum_{l=1}^{L} \sum_{k=1}^{K} w_{{kl}} \left|\left|\boldsymbol{s}_{k} - \left(\boldsymbol{R}_{l} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}\right)\right|\right|_{2}^{2} \end{array} $$ is optimized by alternating between the estimation of the set of source positions Ωs and the estimation of the sensor node parameters Ωgeo. Starting from an initial set of source positions Ωs, the geometry Ωgeo can be determined by optimizing (12) for each sensor node l∈{1,…,L} by data set matching as outlined in the last section. Note that the estimated positions are given relative to a reference coordinate system. The origin and orientation of this reference coordinate system is a result of the calibration process. 
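A minimal NumPy sketch of the weighted data set matching step (7)-(11) could look as follows. The determinant check that guards against an improper rotation (a reflection) is a common safeguard in Procrustes-type solutions and is not discussed in the paper, and the orientation is recovered here with NumPy's arctan2(y, x) convention applied to the first column of the estimated rotation matrix.

```python
import numpy as np

def dataset_matching(S_local_l, S_global, w_l):
    """Weighted data set matching for one sensor node l, cf. (7)-(11).

    S_local_l: (K, 2) observations s_k^(l), S_global: (K, 2) reference positions s_k,
    w_l: (K,) weights w_kl. Returns (theta_hat, R_hat, n_hat)."""
    w_norm = w_l / np.sum(w_l)
    s_bar_local = w_norm @ S_local_l          # weighted centroid, cf. (8)
    s_bar_global = w_norm @ S_global
    X = S_local_l - s_bar_local               # centered observations
    Y = S_global - s_bar_global               # centered reference positions
    D = (X * w_l[:, None]).T @ Y / len(w_l)   # dispersion matrix, cf. (7)
    U, _, Vt = np.linalg.svd(D)
    R_hat = Vt.T @ U.T                        # rotation estimate, cf. (9)
    if np.linalg.det(R_hat) < 0:              # avoid a reflection (safeguard, not in the paper)
        Vt[-1] *= -1
        R_hat = Vt.T @ U.T
    theta_hat = np.arctan2(R_hat[1, 0], R_hat[0, 0])    # orientation, cf. (10)
    n_hat = s_bar_global - R_hat @ s_bar_local          # node position, cf. (11)
    return theta_hat, R_hat, n_hat
```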
Given a geometry Ωgeo the positions sk can be estimated for each acoustic source k∈{1,…,K} via: $$\begin{array}{*{20}l} {\hat{\boldsymbol{s}}_{k}} = \underset{{\boldsymbol{s}_{k}}}{\operatorname{argmin}} \sum_{l=1}^{L} w_{{kl}} \left|\left|\boldsymbol{s}_{k} - \left(\boldsymbol{R}_{l} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}\right)\right|\right|_{2}^{2}. \end{array} $$ For this, a closed-form solution exists, which is given by $$\begin{array}{*{20}l} \hat{\boldsymbol{s}}_{k} = \frac{\sum\limits_{l=1}^{L}w_{{kl}}\left(\boldsymbol{R}_{l} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}\right) }{\sum\limits_{l=1}^{L}w_{{kl}}}. \end{array} $$ What remains is to describe how the weights wkl are chosen. They should reflect how well the observations \(\boldsymbol {s}_{k}^{(l)}\) fit to the model specified by \(\widehat {\Omega }_{{\text {geo}}}\) and \(\widehat {\Omega }_{\boldsymbol {s}}\). This can be achieved by setting $$\begin{array}{*{20}l} w_{{kl}} = \frac{1}{\left|\left|\hat{\boldsymbol{s}}_{k} - \left(\widehat{\boldsymbol{R}}_{l} \boldsymbol{s}_{k}^{(l)} + \hat{\boldsymbol{n}}_{l} \right)\right|\right|_{2}}. \end{array} $$ With these weights and the ideas of [36], (13) can be interpreted as an iteratively re-weighted least squares (IRLS) algorithm [37] which minimizes the following sum of Euclidean distances: $$\begin{array}{*{20}l} {\widehat{\Omega} = \underset{\Omega}{\operatorname{argmin}}\sum_{l=1}^{L} \sum_{k=1}^{K} \left|\left|\boldsymbol{s}_{k} - \left(\boldsymbol{R}_{l} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}\right)\right|\right|_{2}.} \end{array} $$ Consequently, the resulting optimization problem is less sensitive to outliers than the optimization problem in (5). Algorithm 1 summarizes the iterative data set matching used for geometry calibration. In the beginning the set of observations \(\mathcal {S}^{(1)} = \left \{\boldsymbol {s}_{1}^{(1)}, \boldsymbol {s}_{2}^{(1)}, \dots, \boldsymbol {s}_{K}^{(1)}\right \}\) made by sensor node 1 is used as initial estimate of the acoustic sources' position set \(\widehat {\Omega }_{\boldsymbol {s}}\). Experiments on the convergence behavior have shown that the effect of the choice of the sensor node, whose observations are used for initialization, is negligible (see Section 5.2). Due to the fact that at this point no statement can be made about the quality of the observations \(\boldsymbol {s}_{k}^{(l)}\), the initial weights are all set to one: wkl=1;∀k,l. Subsequently, a first estimate of the geometry \(\widehat {\Omega }_{{\text {geo}}}\) can be derived by data set matching (line 3) utilizing \(\widehat {\Omega }_{\boldsymbol {s}}\) as reference source positions. Then, \(\widehat {\Omega }_{{\text {geo}}}\) is used to estimate the sources' positions \(\widehat {\Omega }_{\boldsymbol {s}}\) (line 4) based on (15) with the weights still left as above. In the next iterations, the weights are chosen as described in (16). The iterative weighted data set matching procedure, i.e., lines 3–5 in Algorithm 1, is repeated until \(\widehat {\Omega }_{{\text {geo}}}\) and \(\widehat {\Omega }_{\boldsymbol {s}}\) converge. A detailed analysis of the convergence behavior of this part of the algorithm can be found in the Appendix. Although outliers are already addressed by the weights wkl to some extent, they can still have a detrimental influence on the results of the iterative optimization process if the corresponding errors are very large. 
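The alternating optimization of (13), i.e., lines 3-5 of Algorithm 1, can then be sketched as below, reusing dataset_matching from the previous snippet. A fixed iteration count stands in for the convergence test, and the small constant eps in the re-weighting of (16) merely avoids division by zero; both are implementation choices of this sketch, not part of the paper.

```python
import numpy as np

def project_all(S_local, R_hats, n_hats):
    """All observations projected into the global coordinate system; result is (L, K, 2)."""
    return np.einsum('lij,lkj->lki', R_hats, S_local) + n_hats[:, None, :]

def estimate_sources(S_local, R_hats, n_hats, W):
    """Closed-form source position update, cf. (15). W has shape (K, L)."""
    proj = project_all(S_local, R_hats, n_hats)        # (L, K, 2)
    w = W.T[:, :, None]                                # (L, K, 1)
    return np.sum(w * proj, axis=0) / np.sum(w, axis=0)

def calibrate(S_local, n_iter=50, eps=1e-9):
    """Iterative weighted data set matching (Algorithm 1, lines 3-5)."""
    L, K, _ = S_local.shape
    S_hat = S_local[0].copy()      # initialize with the observations of node 1
    W = np.ones((K, L))            # initial weights w_kl = 1
    for _ in range(n_iter):
        results = [dataset_matching(S_local[l], S_hat, W[:, l]) for l in range(L)]
        thetas = np.array([r[0] for r in results])
        R_hats = np.array([r[1] for r in results])
        n_hats = np.array([r[2] for r in results])
        S_hat = estimate_sources(S_local, R_hats, n_hats, W)
        residuals = np.linalg.norm(S_hat[None] - project_all(S_local, R_hats, n_hats), axis=-1)
        W = 1.0 / (residuals.T + eps)   # re-weighting, cf. (16)
    return thetas, n_hats, S_hat
```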
Therefore, after convergence, the iterative weighted data set matching procedure is repeated again (lines 7–12); however, only on that subset of observations \(\mathcal {S}_{{\text {fit}}}\) that best fits to the model defined by the current estimates \(\widehat {\Omega }_{{\text {geo}}}\) and \(\widehat {\Omega }_{\boldsymbol {s}}\). There are two criteria that describe how well the observations \(\boldsymbol {s}_{k}^{(l)}\) made by sensor node l fit to the model specified by \(\widehat {\Omega }_{{\text {geo}}}\) and \(\widehat {\Omega }_{\boldsymbol {s}}\). First, there are the distances between \(\boldsymbol {s}_{k}^{(l)}\) and the source position estimates \(\boldsymbol {s}_{k}^{(o)}, o \in \{1, \dots, L\} \backslash \{l\}\), made by the other sensor nodes: $$\begin{array}{*{20}l} \epsilon_{k}(l,o) &= \left|\left|\left(\widehat{\boldsymbol{R}}_{l} \boldsymbol{s}_{k}^{(l)} + \hat{\boldsymbol{n}}_{l} \right) - \left(\widehat{\boldsymbol{R}}_{o} \boldsymbol{s}_{k}^{(o)} + \hat{\boldsymbol{n}}_{o} \right)\right|\right|_{2}. \end{array} $$ Second, there is the distance between the observations after being projected and the estimated source position measured in the global coordinate system: $$\begin{array}{*{20}l} \sigma_{k}(l) = \left|\left|\hat{\boldsymbol{s}}_{k} - \left(\widehat{\boldsymbol{R}}_{l} \boldsymbol{s}_{k}^{(l)} + \hat{\boldsymbol{n}}_{l} \right)\right|\right|_{2}. \end{array} $$ Note that the choice of εk(l,o) and σk(l) is motivated by the fact that all relative source positions observed by the single sensor nodes would map on the same position in the global coordinate system if the observations are perfect. Combining the two criteria results in the function $$\begin{array}{*{20}l} C_{k}(l) = \sigma_{k}(l) + {\frac{1}{L-1}} \underset{o\neq l}{\sum_{o=1}^{L}} \epsilon_{k}(l,o), \end{array} $$ used for the selection of \(\mathcal {S}_{{\text {fit}}}\). The distance and DoA measurements of source k made by a node l are included in \(\mathcal {S}_{{\text {fit}}}\) only if the resulting relative source position belongs to the best γ measurements made by a node l. With Ck(l) outliers can be identified based on the fact that they do not align well with the source position estimates of the other nodes for the current geometry. In principle, this fitness selection could also be integrated in the first iterative data set matching rounds (lines 3–5). However, initial experiments have shown that this may lead to a degradation of performance if the number of observed source positions K is small. This can be explained by the fact that observations are discarded based on a model which is still not converged. Acoustic distance estimation To gather distance and, respectively, scaling information that can be used for geometry calibration, we propose to utilize the DNN-based distance estimator which we introduced in [31]. This distance estimator shows state-of-the-art performance and good generalization capabilities to different acoustic environments. In the following, we just concentrate on an adaptation of the distance estimator to directional sources and refer to [31] for a detailed description. Our approach to acoustic distance estimation considers a microphone pair recording a signal x(t) emitted by a single acoustic source. 
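A sketch of this fitness selection, based on the criterion C_k(l) of (18)-(20), is given below; the boolean-mask representation of S_fit and the rounding of the selected fraction γ are our own choices.

```python
import numpy as np

def fitness_selection(S_local, R_hats, n_hats, S_hat, gamma=0.5):
    """Select, per node, the fraction gamma of observations that best fit the model.

    Returns a boolean mask of shape (K, L); True marks observations kept in S_fit."""
    L, K, _ = S_local.shape
    proj = np.einsum('lij,lkj->lki', R_hats, S_local) + n_hats[:, None, :]  # (L, K, 2)
    sigma = np.linalg.norm(S_hat[None] - proj, axis=-1)                     # (L, K), cf. (19)
    # pairwise distances between the projections of different nodes, cf. (18)
    eps = np.linalg.norm(proj[:, None] - proj[None, :], axis=-1)            # (L, L, K)
    eps_mean = eps.sum(axis=1) / (L - 1)   # the o = l term is zero and drops out
    C = sigma + eps_mean                                                    # (L, K), cf. (20)
    n_keep = int(np.ceil(gamma * K))
    mask = np.zeros((K, L), dtype=bool)
    for l in range(L):
        mask[np.argsort(C[l])[:n_keep], l] = True   # smallest C_k(l) = best fit
    return mask
```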
The reverberant signal, being captured by the ν-th microphone, ν∈{1,2}, is modeled as follows [32]: $$\begin{array}{*{20}l} y_{\nu}(t) &= h_{\nu}(t) \ast x(t) + v_{\nu}(t) \\ &= \underbrace{h_{\nu,e}(t) \ast x(t)}_{c_{\nu}(t)} + \underbrace{{h_{\nu,\ell}}(t) \ast x(t) + v_{\nu}(t)}_{r_{\nu}(t)}, \end{array} $$ with vν(t) corresponding to white sensor noise and hν(t) corresponding to the room impulse response which models the sound propagation from the source to the ν-th microphone. The ∗ operator denotes a convolution. hν(t) can be divided into hν,e(t) modeling the direct path and the early reflections and hν,ℓ(t) modeling the late reflections. Thus, yν(t) can be split up into a coherent component cν(t) which corresponds to the direct path and the early reflections and a diffuse component rν(t) produced by the late reflections and the sensor noise. In [32] it was shown that the CDR, i.e., the power ratio of the coherent signal component cν(t) to diffuse signal component rν(t), is related to the distance between the microphone pair and the acoustic source (the larger the distance the smaller the value of the CDR). The DNN-based distance estimator utilizes a time-frequency representation of the CDR as an input feature. Due to the large effort needed to measure room impulse responses (RIRs) in various acoustic environments, we here stick to synthetic RIRs for the training of the distance estimator, using the RIR generator of [38]. However, there are a lot of simplifying assumptions for the simulation of RIRs. For example, the room is modeled as a cuboid, and an omnidirectional characteristic is typically assumed for the acoustic sources and microphones. Especially the omnidirectional characteristic of the acoustic sources is a large deviation from reality, because a real acoustic source, like a speaker, typically exhibits directivity. While an omnidirectional source emits sound waves with equal power in all directions, a directional source emits most of the power into one direction. In both cases, the sound waves are reflected multiple times on the surfaces of the room which mainly causes the late reflections and accumulates to hν,ℓ. Hence, a directional source pointing towards a microphone array causes a less diffuse signal compared to an omnidirectional source that is assumed in the simulated RIRs. Consequently, a distance estimator trained with simulated RIRs and applied to recordings of directional sources, pointing towards the microphone array, would exhibit a systematic error and underestimates the distance. Furthermore, a directional source may cause a more diffuse signal compared to an omnidirectional source if it does not point towards a microphone array, causing a systematic overestimation of the distance. However, this case is not further investigated as such recording conditions are not included in the MIRD database [39] which is used in the experimental section. We approach this mismatch by applying a recently proposed direct-to-reverberant ratio (DRR) data augmentation technique [40]. The DRR is defined as $$\begin{array}{*{20}l} \eta_{\nu} = \frac{\sum_{t} h_{\nu,e}^{2}(t)}{\sum_{t} {h_{\nu,\ell}^{2}}(t)}. \end{array} $$ Considering (21), it is obvious that CDR and DRR are equivalent [41] if the influence of the sensor noise is negligible. Consequently, an augmentation of the DRR results into an augmentation of the CDR. Therefore, during training, a scalar gain α is applied to hν,e(t) which contains the direct path and the early reflections of the RIRs. 
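For reference, the DRR of (22) can be computed from a single-channel RIR roughly as in the sketch below. The paper does not fix the boundary between early and late reflections here, so the 50 ms window after the direct path is an assumed, commonly used value.

```python
import numpy as np

def drr(h, fs, early_ms=50.0):
    """Direct-to-reverberant ratio of (22) for one RIR h sampled at fs Hz."""
    t_d = int(np.argmax(np.abs(h)))              # direct-path sample index
    t_split = t_d + int(early_ms * 1e-3 * fs)    # assumed end of the early part
    energy_early = np.sum(h[:t_split] ** 2)      # direct path + early reflections
    energy_late = np.sum(h[t_split:] ** 2)       # late reflections
    return energy_early / energy_late            # 10*log10(...) gives the DRR in dB
```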
To avoid discontinuities within the RIR caused by the scaling, a window wd(t) is employed to smooth the product α·hν,e(t): $$\begin{array}{*{20}l} \overline{h}_{\nu,e}(t) = w_{d}(t) \cdot \alpha \cdot h_{\nu,e}(t) + \left(1-w_{d}(t)\right) \cdot h_{\nu,e}(t). \end{array} $$ Here, wd(t) corresponds to a Hann window of 5 ms length, which is centered around the time delay td of the direct path. td is identified by the location of the maximum of |hν(t)|. Since the directivity of the acoustic source is generally unknown, it is also unclear how α has to be chosen to adapt the simulated RIRs to the real scenario. Nevertheless, it is known that the DRR of the simulated RIRs has to be increased if a directional source pointing towards the center of the microphone pair is considered. Thus, \(\alpha \sim \mathcal{U}(1, \alpha_{max})\) is used, where αmax corresponds to the fixed upper limit of α and \(\sim \mathcal{U}(\text{min}, \text{max})\) denotes drawing a value uniformly from the interval [min,max]. Furthermore, the DRR is only manipulated with probability Pr(aug). Hence, besides manipulated examples, examples that are not manipulated are also presented to the DNN during training. The non-manipulated examples should ease the process of learning that examples manipulated with different scaling factors α belong to the same distance.
In this section, the proposed approach to geometry calibration is evaluated. First, the adaptation of the DNN-based acoustic distance estimation method to directional sources is examined. For deeper insights into acoustic distance estimation see [31]. Afterwards, the proposed approach to geometry calibration is investigated based on simulations of the considered scenario.
In the following, the adaptation of the DNN-based distance estimator to directional sources is evaluated on the MIRD database [39]. This database consists of measured RIRs for multiple source positions on an angular grid at distances of 1 m and 2 m. The measurements took place in a 6 m×6 m×2.4 m room with a configurable reverberation time T60. From this data, we used the two subsets corresponding to T60=360 ms and T60=610 ms and considered the central microphone pair with an inter-microphone distance of 8 cm. The setups of the MIRD database are limited w.r.t. the number of source and sensor positions. Nevertheless, the experimental data is sufficient to prove that the approach works for directional acoustic sources and not only on simulated audio data of omnidirectional sources. We refer to [31] for a detailed investigation of a wider range of considered setups using simulated data. As described in Section 4, the distance estimator is trained utilizing RIRs which are simulated using the implementation of [38]. The training set consists of 100,000 source microphone pair constellations, whereby the properties of the considered room and the placement of the microphone pair and acoustic source are randomly drawn for each of these constellations. Table 1 summarizes the corresponding probability distributions. We first draw the position of the microphone pair and then place the acoustic source relative to this position at the same height using the distance d and the DoA φ.
Table 1 Description of the training set of the distance estimator used on the MIRD database
The RIRs are used to reverberate clean speech signals from the TIMIT database [42]. During training, these speech probes are randomly drawn from the database.
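The DRR augmentation of (23) applied during this training can be sketched as follows. Since the Hann window w_d(t) vanishes outside the 5 ms neighborhood of the direct path, blending the full RIR is equivalent to blending only h_{ν,e}(t); the sampling rate handling and the random number generator are implementation choices of this sketch.

```python
import numpy as np
from scipy.signal.windows import hann

def augment_drr(h, fs, alpha_max=3.0, p_aug=0.5, win_ms=5.0, rng=None):
    """DRR augmentation of an RIR h sampled at fs Hz, cf. (23)."""
    rng = rng or np.random.default_rng()
    if rng.random() > p_aug:                 # manipulate only with probability Pr(aug)
        return h
    alpha = rng.uniform(1.0, alpha_max)      # alpha ~ U(1, alpha_max)
    t_d = int(np.argmax(np.abs(h)))          # direct-path sample index
    win_len = int(win_ms * 1e-3 * fs)
    w_d = np.zeros_like(h, dtype=float)
    start = max(t_d - win_len // 2, 0)
    end = min(start + win_len, len(h))
    w_d[start:end] = hann(win_len)[:end - start]   # Hann window centered at t_d
    return w_d * alpha * h + (1.0 - w_d) * h       # smooth scaling, cf. (23)
```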
For the evaluation of the distance estimator on the MIRD database, we utilized R=100 speech probes which were randomly drawn from the TIMIT database and then reverberated by each of the RIRs. In the following, the configuration and training scheme of the distance estimator are explained. We employ 1 s long speech segments to calculate the CDR, which results in a feature map that is passed to the DNN. The short-time Fourier transform (STFT), which is needed to estimate the CDR, utilizes a Blackman window of size 25 ms and a frame shift of 10 ms. The CDR is calculated for frequencies between 125 Hz and 3.5 kHz, which corresponds to the frequency range where speech has significant power. Table 2 shows the architecture of the DNN used for distance estimation. The estimator is trained using Adam [43] with a mini-batch size of B=32 and a learning rate of 3·10−4 for 500,000 iterations. In addition, the maximum DRR augmentation factor αmax is chosen to be equal to 3. After training, we utilize the best performing checkpoint w.r.t. the mean-absolute error (MAE) of the distance estimates on an independent validation set.
Table 2 Architecture of the DNN used for distance estimation on the MIRD database
The influence of the DRR manipulation probability Pr(aug) can be seen in Table 3. Thereby, the MAE $$\begin{array}{*{20}l} e_{d} = \frac{1}{2 \cdot A \cdot R} \sum_{c=1}^{2}\sum_{a=1}^{A}\sum_{r=1}^{R}|d(c, a) - \widehat{d}_{r}(c, a)| \end{array} $$ is used as metric. Here, d(1,a)=1 m and d(2,a)=2 m correspond to the ground truth distance at DoA-candidate a. \(\widehat{d}_{r}(c, a)\) denotes the corresponding estimate using the r-th speech sample and A the number of DoAs in the angular grid of the MIRD database. Furthermore, results for distance estimation on a simulated version of the RIRs of the MIRD database with omnidirectional sources are provided (see Table 3).
Table 3 MAE ed/ m on the MIRD database and the corresponding simulated RIRs
Without DRR augmentation, i.e., for Pr(aug)=0, the distance estimation error is large compared to the error on simulated RIRs. This can be explained by the systematic error resulting from the fact that the simulated RIRs used during the training include more diffuse signal parts than the recorded RIRs. With DRR augmentation the error of the distance estimates on the MIRD database can be reduced, and the best performance is achieved if the DRR of all examples is manipulated during training. However, DRR augmentation makes the learning process more difficult, which increases the error on the simulated RIRs.
Geometry calibration
To evaluate the proposed approach to geometry calibration, we generated a data set consisting of G=100 simulated scenarios. Thereby, each scenario corresponds to a WASN with L=4 sensor nodes. Furthermore, each scenario contains acoustic sources at a fixed number of K=100 spatially independent positions within the room. This number can be justified by the fact that in realistic environments, e.g., living rooms, acoustic sources like speakers will move over time such that the number of observed acoustic source positions will also grow over time. All rooms have a random width rw∼U(6 m,7 m), random length \(r_{l} \sim \mathcal{U}({5}\text{ m}, {6}\text{ m}),\) and a fixed height rh of 3 m. In the experiments, we investigate reverberation times T60 from the set {300 ms,400 ms,500 ms,600 ms}.
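One evaluation scenario of this kind can be drawn, for example, as in the sketch below. The room size, reverberation times, and counts follow the text; the uniform 2-D placement with a small wall clearance is a simplification of the setup in Fig. 3, where the sensor nodes are restricted to dedicated regions.

```python
import numpy as np

def sample_scenario(rng, L=4, K=100, margin=0.1):
    """Draw one simulated 2-D evaluation scenario (room, T60, node and source positions)."""
    width = rng.uniform(6.0, 7.0)                   # r_w ~ U(6 m, 7 m)
    length = rng.uniform(5.0, 6.0)                  # r_l ~ U(5 m, 6 m); height fixed to 3 m
    t60 = rng.choice([0.3, 0.4, 0.5, 0.6])          # reverberation time in seconds
    nodes = np.column_stack([rng.uniform(margin, width - margin, L),
                             rng.uniform(margin, length - margin, L)])
    orientations = rng.uniform(-np.pi, np.pi, L)    # node orientations theta_l
    sources = np.column_stack([rng.uniform(margin, width - margin, K),
                               rng.uniform(margin, length - margin, K)])
    return dict(width=width, length=length, t60=t60,
                nodes=nodes, orientations=orientations, sources=sources)

# usage: scenario = sample_scenario(np.random.default_rng(0))
```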
Both, the nodes and the acoustic sources, are placed at a height of 1.4 m, whereby the sensor nodes are equipped with a circular array with six microphones and a diameter of 5 cm. The way how the sensor nodes and the acoustic sources are placed within the room is exemplarily shown in Fig. 3. Simulated setup; red: microphones; blue: acoustic sources; gray area: possible area to randomly place sensor nodes (microphone arrays); all sensor nodes and acoustic sources have a minimum distance of 0.1 m to the closest wall; 1 m spacing between the gray areas We assume that at each of the possible K=100 source positions, a 1 s long speech signal is emitted, whereby the speech signals are randomly drawn from the TIMIT database. The speech samples are reverberated by RIRs gathered from the RIR generator of [38]. Subsequently, the reverberant signals are used for distance and DoA estimation. We employ the convolutional recurrent neural network (CRNN) which we proposed in [31] to compute the distance estimates used for geometry calibration. Feature extraction, training set, and training scheme mainly coincide with the ones described in Section 5.1. The description of the corresponding training set which consists of 10,000 source node constellations can be found in Table 4. During training, DRR augmentation is used with a manipulation probability of Pr(aug)=0.5. Table 4 Description of the training set of the distance estimator used for geometry calibration We take the three microphone pairs formed by the opposing microphones of the considered circular microphone array for distance estimation. The CDR is estimated for each of these microphone pairs and the three resulting feature maps are jointly passed to the CRNN. DoA estimation is done using the complex Watson kernel method introduced in [44], where it was shown that this estimator is competitive to state-of-the-art estimators. The considered DoA candidates have an angular resolution of 1∘ and the concentration parameter of the complex Watson probability density function is chosen to be κ=5. The fitness selection contained in our approach to geometry calibration always selects the best 50% relative source positions for each sensor node. Figures 4 and 5 show the cumulative distribution function (CDF) of the distance and DoA estimation errors. The majority of distance and DoA estimates exhibits only small errors, so in general there will be enough reliable estimates for geometry calibration. But in both cases, there is also a non-negligible amount of estimates exhibiting large errors which have to be considered as outliers. It can also be observed that the amount of outliers increases with increasing reverberation time T60. We refer to [31, 44] for a comparison of the used estimators to alternative estimators. CDF of the distance estimation error CDF of the DoA estimation error After the geometry calibration process is started, more and more observed relative source positions \(\boldsymbol {s}_{k}^{(l)}\) will become available. The resulting effect on the geometry calibration results can be seen in Fig. 
6, which displays the MAE of the sensor nodes' position $$\begin{array}{*{20}l} e_{p} = \frac{1}{G \cdot L} \sum_{g=1}^{G} \sum_{l=1}^{L} \left|\left|\boldsymbol{n}_{l,g} - \widehat{\boldsymbol{n}}_{l,g}\right|\right|_{2} \end{array} $$ and orientation $$\begin{array}{*{20}l} e_{o} = \frac{1}{G \cdot L} \sum_{g=1}^{G} \sum_{l=1}^{L} \left|\angle\left(e^{j\left(\theta_{l,g} - \widehat{\theta}_{l,g}\right)}\right)\right|, \end{array} $$ where ∠(·) denotes the phase of a complex-valued number. Further, nl,g and θl,g are the ground truth values of the location parameters of the l-th node in the g-th scenario and \(\widehat{\boldsymbol{n}}_{l,g}\) and \(\widehat{\theta}_{l,g}\) denote the corresponding estimates. Note that the geometry estimates are projected into the coordinate system of the ground truth geometry using data set matching to align the sensor node positions before the errors are calculated.
Influence of number of source positions on calibration performance
Figure 6 shows that the geometry estimation error gets smaller when more source positions have been observed and thus more relative source position estimates exhibiting a small error are available. Hence, the estimate of the geometry will improve over time. However, reasonable results can already be achieved with a small number of observed source positions. This especially holds for scenarios with small reverberation times T60, where the estimates of the relative source positions are less error-prone. In addition to the MAE of the geometry estimates, the distribution of the corresponding error is displayed in Figs. 7 and 8 for K=20 and K=100 observed source positions. For a small number of observed source positions, i.e., K=20, the majority of node position and node orientation estimates shows acceptably small errors. As can be seen, there are still outliers exhibiting large errors, despite the error-model-based re-weighting and the fitness selection.
Distribution of the geometry calibration error for K=20
Distribution of the geometry calibration error for K=100
If more source positions are observed, e.g., K=100, the probability increases that a sufficient number of good relative source position estimates is available, thus improving the average calibration accuracy and also decreasing the number of outliers. Table 5 shows the influence of the individual outlier rejection and error handling steps of our approach to geometry calibration, namely the weighting in data set matching (WLS), the weighting in source localization (WLS SRC), and the fitness selection (Select). If all weights are set to wkl=1 ∀k,l and fitness selection is omitted, the geometry estimates are clearly worse compared to the other cases depicted in the table. Introducing weighting factors in data set matching and source localization improves the results. However, the experiment with active data selection reveals that the weighting is not powerful enough to completely suppress the detrimental effect of outliers, which can only be achieved by removing these outliers from the processed data via fitness selection.
Table 5 Influence of the weighting of the proposed geometry calibration procedure for K=20 and T60=500 ms
Figures 9 and 10 show the effect of fitness selection on the distribution of the DoA and distance estimation errors. Fitness selection causes larger errors to occur less frequently for both quantities, removing a large portion of the outliers. This especially holds for the distance estimates.
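After the estimated geometry has been aligned to the ground truth coordinate system by data set matching, the metrics (25) and (26) reduce to a few lines of NumPy; the array shapes used here are assumptions of this sketch.

```python
import numpy as np

def position_mae(n_true, n_est):
    """MAE e_p of the node positions, cf. (25); inputs have shape (G, L, 2)."""
    return np.mean(np.linalg.norm(n_true - n_est, axis=-1))

def orientation_mae(theta_true, theta_est):
    """MAE e_o of the node orientations, cf. (26); inputs have shape (G, L), in radians.

    Taking the phase of exp(j(theta - theta_hat)) wraps the difference to (-pi, pi]."""
    return np.mean(np.abs(np.angle(np.exp(1j * (theta_true - theta_est)))))
```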
Effect of fitness selection on the distribution of DoA estimation errors for K=20
Effect of fitness selection on the distribution of distance estimation errors for K=20
These outliers are often caused by strong early reflections of sound on surfaces in the room, e.g., when a sensor node is placed close to a wall, resulting in poor distance and DoA estimates. However, outliers can also occur if a source is too close to a sensor node, i.e., the far-field assumption for DoA estimation is not met, or if the distance between a sensor node and an acoustic source is too large, which leads to a challenging situation for distance estimation. Because of the large number of possible reasons for outliers in the DoA and distance estimates, we refer the reader to the relevant literature for a more detailed discussion [31, 44, 45]. The convergence behavior of the sensor nodes' positions is shown in Fig. 11 based on the CDF of the average spread of the sensor node position estimates $$\begin{array}{*{20}l} \zeta_{\boldsymbol{n}_{l}} = \frac{1}{I} \sum_{i=1}^{I} \left|\left|\widehat{\boldsymbol{n}}_{l, i} - \mu_{\boldsymbol{n}_{l}} \right|\right|_{2}, \end{array} $$ where \(\widehat{\boldsymbol{n}}_{l, i}\) denotes the estimate of the position of the l-th sensor node resulting from the i-th of the I considered initializations of \(\widehat{\Omega}_{\boldsymbol{s}}\) and \(\mu_{\boldsymbol{n}_{l}} = \frac{1}{I}\sum_{i=1}^{I} \widehat{\boldsymbol{n}}_{l, i}\) the corresponding mean.
Effect of the initialization on the convergence behavior of the sensor nodes' positions for K=20 and T60=500 ms
We compare two initialization strategies, namely the proposed initialization using the observed source positions of one sensor node and a random initialization. For the proposed initialization scheme, the geometry was estimated using the observations of each of the sensor nodes as initial values, resulting in I=L=4 different initializations. In the random case, all values of \(\widehat{\Omega}_{\boldsymbol{s}}\) are drawn from a normal distribution and I=100 initializations were considered. It can be seen that the proposed initialization scheme leads to smaller deviations in the results. In most cases, the spread of the sensor node positions is even vanishingly small. Consequently, the choice of the sensor node whose source position estimates were used as initial values is not critical for the proposed initialization scheme. Moreover, the experiments showed that the spread of the estimated node orientations is on the order of $10^{-13}$ degrees and can therefore be neglected. In addition to the geometry, our approach also provides estimates of the positions of the sound sources. The MAE of these estimates $$\begin{array}{*{20}l} e_{\boldsymbol{s}} = \frac{1}{G \cdot K} \sum_{g=1}^{G} \sum_{k=1}^{K} \left|\left|\boldsymbol{s}_{k,g} - \hat{\boldsymbol{s}}_{k,g}\right|\right|_{2} \end{array} $$ is given in Table 6. Again, the coordinate system of the geometry estimates is aligned with the coordinate system of the ground truth geometry using data set matching before the errors are calculated. These results are compared to the results of source localization, i.e., solving (14) for each acoustic source, using the ground truth geometry. It is shown that for small reverberation times T60, the proposed iterative geometry calibration procedure yields comparable results to source localization using the ground truth geometry of the sensor network.
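The spread (27) and the source position MAE (28) reported here are equally compact; again, the array layout is an assumption of this sketch.

```python
import numpy as np

def node_position_spread(n_est_l):
    """Average spread zeta_{n_l} of one node's position estimates over I initializations, cf. (27).

    n_est_l: (I, 2) position estimates of node l, one per initialization."""
    mu = np.mean(n_est_l, axis=0)
    return np.mean(np.linalg.norm(n_est_l - mu, axis=-1))

def source_mae(s_true, s_est):
    """MAE e_s of the source position estimates, cf. (28); inputs have shape (G, K, 2)."""
    return np.mean(np.linalg.norm(s_true - s_est, axis=-1))
```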
As the reverberation time increases and thus the observation errors increase, the geometry calibration error increases and consequently the source localization error increases. Table 6 MAE es/ m of source positions with and without fitness selection (Select) for K=20 Moreover, the effect of fitness selection is shown in Table 6. Calculating the MAE es only for the subset of observed source positions selected by the fitness selection always leads to a smaller error. Thus, the algorithm succeeds in selecting a set of observations with smaller errors. Finally, in Table 7, we compare the proposed approach to geometry calibration to state-of-the-art approaches solely using distance [46] or DoA estimates [29]. Hereby, the DoA-based approach utilizes the optional Maximum Likelihood refinement procedure which was proposed in [29]. Note that the considered distance-based approach called GARDE only delivers estimates for the positions of the sensor nodes and no orientations. Furthermore, the DoA-based approach estimates a relative geometry which has to be scaled subsequently. To this end, we employed the ground truth source node distances to fix the scaling as described in [31]. Table 7 Comparison of the calibration results and average computing time \(\overline {T}_{c}\) Table 7 shows that our approach is able to outperform both approaches by far. This can be explained by the additional information which results from the combined usage of distance and DoA information. In addition to that, the considered DoA-based approach contains no outlier handling while GARDE suffers from the outliers in the distance estimates. The proposed approach also compares favorably in terms of computational effort, when looking at the average computing time \(\overline {T}_{c}\), i.e., the average time which is needed to estimate the geometry once. The average computing time for distance estimation (47 ms) and the average computing time for DoA estimation (545 ms) are not included in \(\overline {T}_{c}\). Note that the DoA-based approach utilizes a Fortran accelerated implementation [47] to optimize the underlying cost function while all other approaches are based on a Python implementation. Moreover, Table 7 provides the average computing time required to solve the optimization problem in (5) by the Broyden-Fletcher–Goldfarb-Shanno (BFGS) method and the average computing time of the proposed approach if the weighting and the fitness selection is omitted which also can be interpreted as solving (5). Thereby, the latter leads to the same results as the BFGS method while being 70 times faster. This leaves room for the additional computing time required for the weighting and fitness selection in our approach. Consequently, despite its iterative character the proposed approach shows competitive computing time compared to the other considered approaches while providing better geometry estimates. In this paper, we proposed an approach to geometry calibration in a WASN using DoA and distance information. The DoA and distances are estimated from the microphone signals and are interpreted as estimates of the relative positions of acoustic sources w.r.t. the coordinate system of the sensor node. Our approach uses these observations to alternatingly estimate the geometry and the acoustic sources' positions. Hereby, geometry calibration is formulated as an iterative data set matching problem which can be efficiently solved using a SVD. 
In order to improve robustness against outliers and large errors contained in the observations, we integrate the iterative geometry estimation and source localization procedure into an error-model-based weighting and observation selection scheme. Simulations show that the proposed approach delivers reliable estimates of the geometry while being computationally efficient. Furthermore, it requires only a coarse synchronization between the sensor nodes.
Appendix
Convergence analysis of geometry calibration using iterative data set matching
We now analyze the convergence behavior of the iterative data set matching procedure, following the ideas of [48]. To this end, we consider the variant of the iterative data set matching procedure in which fitness selection is not used, as shown in Algorithm 2. In the following, the superscript [η] denotes the value after the update in the η-th iteration. Thus, the sets of quantities resulting from the η-th iteration of the alternating optimization procedure are defined as \(\Omega _{{\text {geo}} }^{[\eta ]} {=} \left \{\boldsymbol {n}_{1}^{[\eta ]}, \ldots, \boldsymbol {n}_{L}^{[\eta ]}, \theta _{1}^{[\eta ]}, \ldots, \theta _{L}^{[\eta ]}\right \}\), \(\Omega _{\boldsymbol {s} }^{[\eta ]} {=} \left \{\boldsymbol {s}_{1}^{[\eta ]}, \ldots, \boldsymbol {s}_{K}^{[\eta ]}\right \}\), and \(\Omega _{\mathrm {w} }^{[\eta ]}{=}\left \{w_{11}^{[\eta ]}, \dots, w_{{KL}}^{[\eta ]} \right \}\). \(\boldsymbol {R}_{l}^{[\eta ]}\) denotes the rotation matrix corresponding to \(\theta _{l}^{[\eta ]}\). Furthermore, the cost function is now interpreted as a function of \(\Omega _{{\text {geo}} }^{[\eta ]}, \Omega _{\boldsymbol {s} }^{[\eta ]}\) and \(\Omega _{\mathrm {w} }^{[\eta ]}\): $$\begin{array}{*{20}l} {J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right) = \sum_{l=1}^{L} \sum_{k=1}^{K} w_{{kl}}^{[\eta]} \left|\left|\boldsymbol{s}_{k}^{[\eta]} - \left(\boldsymbol{R}_{l}^{[\eta]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta]}\right)\right|\right|_{2}^{2}.} \end{array} $$ Considering the (η+1)-th iteration of the alternating optimization, the following monotonicity property of the cost function holds:
Lemma 6.1 The inequality $$\begin{array}{*{20}l} {J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta+1]}, \Omega_{w}^{[\eta+1]}\right) \leq J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right)} \end{array} $$ holds for all η>0, i.e., each iteration monotonically decreases the considered cost function.
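Before turning to the proof, note that the cost function (29) is straightforward to evaluate numerically; the following minimal numpy sketch is an illustration only (the data layout and names are our assumptions, not the paderwasn API):

```python
import numpy as np

def calibration_cost(rotations, positions, sources, weights, local_observations):
    """Weighted cost J(Omega_geo, Omega_s, Omega_w) as in (29).

    rotations:          list of L (d, d) rotation matrices R_l
    positions:          list of L (d,) node positions n_l
    sources:            (K, d) array of source position estimates s_k
    weights:            (K, L) array of weights w_kl
    local_observations: (L, K, d) array of observations s_k^(l) in node coordinates
    """
    cost = 0.0
    for l, (R_l, n_l) in enumerate(zip(rotations, positions)):
        residuals = sources - (local_observations[l] @ R_l.T + n_l)   # (K, d)
        cost += np.sum(weights[:, l] * np.sum(residuals**2, axis=1))
    return cost
```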
Proof Inserting the definition of the weights $$\begin{array}{*{20}l} {w_{{kl}}^{[\eta]} = \frac{1}{\left|\left|\boldsymbol{s}_{k}^{[\eta]} - \left(\boldsymbol{R}_{l}^{[\eta]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta]} \right)\right|\right|_{2}}} \end{array} $$ into (29) leads to $$\begin{array}{*{20}l} {}{J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right)} {=} &{ \sum_{l=1}^{L} \sum_{k=1}^{K} w_{{kl}}^{[\eta]} \!\!\left|\left|\boldsymbol{s}_{k}^{[\eta]} - \left(\boldsymbol{R}_{l}^{[\eta]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta]} \right)\right|\right|_{2}^{2}} \\ {=} &{ \sum_{l=1}^{L} \sum_{k=1}^{K} \frac{\left|\left|\boldsymbol{s}_{k}^{[\eta]} - \left(\boldsymbol{R}_{l}^{[\eta]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta]} \right)\right|\right|_{2}^{2}}{\left|\left|\boldsymbol{s}_{k}^{[\eta]} - \left(\boldsymbol{R}_{l}^{[\eta]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta]} \right)\right|\right|_{2}}} \\ {=} &{ \sum_{l=1}^{L} \sum_{k=1}^{K} \left|\left|\boldsymbol{s}_{k}^{[\eta]} - \left(\boldsymbol{R}_{l}^{[\eta]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta]} \right)\right|\right|_{2}} \end{array} $$ for the costs at the end of the η-th iteration. Firstly, data set matching is used to update the geometry Ωgeo (see line 3 in Algorithm 2). As described in [34] data set matching minimizes the cost function $$\begin{array}{*{20}l} {J_{\eta}\left(\boldsymbol{n}_{l}, \boldsymbol{R}_{l}\right) = \sum_{k=1}^{K} w_{{kl}}^{[\eta]} \left|\left|\boldsymbol{s}_{k}^{[\eta]} - \left(\boldsymbol{R}_{l} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l} \right)\right|\right|_{2}^{2}} \end{array} $$ for each of the L sensor nodes. Considering all L sensor nodes together results in $$\begin{array}{*{20}l} {}{\Omega_{{\text{geo}}}^{[\eta+1]}} {=} {\underset{\Omega_{{\text{geo}}}}{\operatorname{argmin}}\! \sum_{l=1}^{L} J_{\eta}\!\left(\boldsymbol{n}_{l}, \boldsymbol{R}_{l}\right)} {=\!\underset{\Omega_{{\text{geo}}}}{\operatorname{argmin}} \ J\!\left(\Omega_{{\text{geo}}}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right).} \end{array} $$ Consequently, $$\begin{array}{*{20}l} {J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right) \leq J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right)} \end{array} $$ holds. The next step, i.e., the update of the source positions sk (see line 4 in Algorithm 2), is done by minimizing $$\begin{array}{*{20}l} {J_{\eta}\left(\boldsymbol{s}_{k}\right)} &{= \sum_{l=1}^{L} w_{{kl}}^{[\eta]} \left|\left|\boldsymbol{s}_{k} - \left(\boldsymbol{R}_{l}^{[\eta+1]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta+1]} \right)\right|\right|_{2}^{2}} \end{array} $$ for all K source positions. Note that Jη(sk) corresponds to a sum of squared Euclidean distances, i.e, a convex function of sk, and, thus, is convex. Consequently, the resulting linear least squares solution (see (15)) corresponds to the global minimum of Jη(sk). 
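As an aside, the three update steps analyzed above can be made concrete in a few lines of numpy. The sketch below is an illustration of the procedure, not the authors' paderwasn implementation; the names, the initialization with the observations of the first node, and the small eps regularization of the weights are our own choices, and all callables expect numpy arrays:

```python
import numpy as np

def weighted_dataset_matching(p_local, q_global, w):
    """Weighted least-squares rigid motion (R, n), cf. [34]:
    minimizes sum_k w_k || q_k - (R p_k + n) ||^2.
    p_local, q_global: (K, d) arrays, w: (K,) positive weights."""
    w = w / w.sum()
    p_bar = w @ p_local                       # weighted centroids
    q_bar = w @ q_global
    H = (p_local - p_bar).T @ (w[:, None] * (q_global - q_bar))
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.linalg.det(Vt.T @ U.T)     # enforce a proper rotation (det = +1)
    R = Vt.T @ D @ U.T
    n = q_bar - R @ p_bar
    return R, n

def update_sources(obs, R, n, w):
    """Weighted LS source update: s_k = weighted mean over nodes of R_l s_k^(l) + n_l.
    obs: (L, K, d), R: list of (d, d), n: list of (d,), w: (K, L)."""
    L = obs.shape[0]
    mapped = np.stack([obs[l] @ R[l].T + n[l] for l in range(L)], axis=1)  # (K, L, d)
    return (w[:, :, None] * mapped).sum(axis=1) / w.sum(axis=1, keepdims=True)

def update_weights(obs, R, n, s, eps=1e-6):
    """IRLS weights w_kl = 1 / || s_k - (R_l s_k^(l) + n_l) ||; eps avoids division by zero."""
    L = obs.shape[0]
    mapped = np.stack([obs[l] @ R[l].T + n[l] for l in range(L)], axis=1)  # (K, L, d)
    return 1.0 / (np.linalg.norm(s[:, None, :] - mapped, axis=-1) + eps)

def calibrate(obs, n_iter=30):
    """Alternating estimation (Algorithm 2 without fitness selection), sketched.
    obs: (L, K, d) observed source positions in the nodes' local coordinate systems."""
    L, K, d = obs.shape
    R = [np.eye(d) for _ in range(L)]
    n = [np.zeros(d) for _ in range(L)]
    s = obs[0].copy()                  # proposed initialization: observations of one node
    w = np.ones((K, L))
    for _ in range(n_iter):            # a fixed iteration count stands in for a convergence test
        for l in range(L):             # geometry update per node (line 3)
            R[l], n[l] = weighted_dataset_matching(obs[l], s, w[:, l])
        s = update_sources(obs, R, n, w)   # source position update (line 4)
        w = update_weights(obs, R, n, s)   # weight update (line 5)
    return R, n, s
```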
Summarizing this step for all K acoustic sources gives $$\begin{array}{*{20}l} {}{\Omega_{\boldsymbol{s}}^{[\eta+1]}} {=} {\underset{\Omega_{\boldsymbol{s}}}{\operatorname{argmin}} \sum_{k=1}^{K} J_{\eta}\left(\boldsymbol{s}_{k}\right) =\underset{\Omega_{\boldsymbol{s}}}{\operatorname{argmin}} \ J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}, \Omega_{w}^{[\eta]}\right).} \end{array} $$ So it follows that $$\begin{array}{*{20}l} {J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta+1]}, \Omega_{w}^{[\eta]}\right) \leq J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right)} \end{array} $$ and with (35) it holds: $$\begin{array}{*{20}l} {J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta+1]}, \Omega_{w}^{[\eta]}\right) \leq J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right).} \end{array} $$ Finally, the influence of the weight update has to be discussed (see line 5 in Algorithm 2). Applying Titu's lemma to \(J\left (\Omega _{{\text {geo}}}^{[\eta +1]}, \Omega _{\boldsymbol {s}}^{[\eta +1]}, \Omega _{w}^{[\eta ]}\right)\) gives $$\begin{array}{*{20}l} &{J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta+1]}, \Omega_{w}^{[\eta]}\right)} \\ {=} &{\sum_{l=1}^{L} \sum_{k=1}^{K} \frac{\left|\left|\boldsymbol{s}_{k}^{[\eta+1]} - \left(\boldsymbol{R}_{l}^{[\eta+1]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta+1]} \right)\right|\right|_{2}^{2}}{\left|\left|\boldsymbol{s}_{k}^{[\eta]} - \left(\boldsymbol{R}_{l}^{[\eta]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta]} \right)\right|\right|_{2}}} \\ {\geq} & {\frac{\left(\sum\limits_{l=1}^{L} \sum\limits_{k=1}^{K} \left|\left|\boldsymbol{s}_{k}^{[\eta+1]} - \left(\boldsymbol{R}_{l}^{[\eta+1]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta+1]} \right)\right|\right|_{2}\right)^{2}}{\sum\limits_{l=1}^{L} \sum\limits_{k=1}^{K} \left|\left|\boldsymbol{s}_{k}^{[\eta]} - \left(\boldsymbol{R}_{l}^{[\eta]} \boldsymbol{s}_{k}^{(l)} + \boldsymbol{n}_{l}^{[\eta]} \right)\right|\right|_{2}}} \\ {=} &{ \frac{\left(J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta+1]}, \Omega_{w}^{[\eta+1]}\right)\right)^{2}}{J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right)}.} \end{array} $$ With (39) and (40) it follows: $$\begin{array}{*{20}l} {\frac{\left(J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta+1]}, \Omega_{w}^{[\eta+1]}\right)\right)^{2}}{J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right)}} {\leq} {J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right).} \end{array} $$ Since \(J\left (\Omega _{{\text {geo}}}^{[\eta ]}, \Omega _{\boldsymbol {s}}^{[\eta ]}, \Omega _{w}^{[\eta ]}\right) > 0\) holds this results in $$\begin{array}{*{20}l} {\left(J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta+1]}, \Omega_{w}^{[\eta+1]}\right)\right)^{2}} {\leq} {\left(J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right)\right)^{2}} \end{array} $$ and, finally, in $$\begin{array}{*{20}l} {J\left(\Omega_{{\text{geo}}}^{[\eta+1]}, \Omega_{\boldsymbol{s}}^{[\eta+1]}, \Omega_{w}^{[\eta+1]}\right)} &{\leq J\left(\Omega_{{\text{geo}}}^{[\eta]}, \Omega_{\boldsymbol{s}}^{[\eta]}, \Omega_{w}^{[\eta]}\right)}. 
\end{array} $$ Due to the fact that \(J\left (\Omega _{{\text {geo}}}^{[\eta ]}, \Omega _{\boldsymbol {s}}^{[\eta ]}, \Omega _{w}^{[\eta ]}\right)\) is monotonically decreasing and has the lower bound J∞≥0 it converges to J∞≥0 for η→∞. The datasets and Python software code supporting the conclusions of this article are available in the paderwasn repository, https://github.com/fgnt/paderwasn. The MIRD database [39] is available under the following link: https://www.iks.rwth-aachen.de/en/research/tools-downloads/databases/multi-channel-impulse-response-database/. BFGS: Broyden-Fletcher–Goldfarb-Shanno CDF: Cumulative distribution function CDR: Coherent-to-diffuse power ratio CRNN: Convolutional recurrent neural network DNN: Deep neural network DoA: Direction of arrival DRR: Direct-to-reverberant ratio Gaussian process GRU: Gated recurrent unit IRLS: Iteratively re-weighted least squares LS: Least squares MAE: Mean-absolute error Parts per million RANSAC: Random sample consensus RIR: Room impulse response STFT: Short-time Fourier transform SVD: Singular value decomposition TDoA: Time difference of arrival ToA: ToF: Time of flight WASN: Wireless acoustic sensor network A. Bertrand. Applications and trends in wireless acoustic sensor networks: a signal processing perspective, (2011). https://doi.org/10.1109/SCVT.2011.6101302. V. Potdar, A. Sharif, E. Chang, in Proc. International Conference on Advanced Information Networking and Applications Workshops (AINA). Wireless Sensor Networks: A Survey (IEEEBradford, 2009), pp. 636–641. https://doi.org/10.1109/WAINA.2009.192. N. Ono, H. Kohno, N. Ito, S. Sagayama, in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). Blind alignment of asynchronously recorded signals for distributed microphone array (IEEENew Paltz, 2009). https://doi.org/10.1109/ASPAA.2009.5346505. S. Wozniak, K. Kowalczyk, Passive Joint Localization and Synchronization of Distributed Microphone Arrays. IEEE Signal Proc. Lett.26(2), 292–296 (2019). https://doi.org/10.1109/LSP.2018.2889438. B. Laufer-Goldshtein, R. Talmon, S. Gannot, Semi-supervised source localization on multiple manifolds with distributed microphones. IEEE/ACM Trans. Audio Speech Lang. Process.25(7), 1477–1491 (2017). https://doi.org/10.1109/TASLP.2017.2696310. A. Plinge, F. Jacob, R. Haeb-Umbach, G. A. Fink, Acoustic Microphone Geometry Calibration: an overview and experimental evaluation of state-of-the-art algorithms. IEEE Signal Proc. Mag.33(4), 14–29 (2016). https://doi.org/10.1109/MSP.2016.2555198. H. Afifi, J. Schmalenstroeer, J. Ullmann, R. Haeb-Umbach, H. Karl, in Proc. ITG Fachtagung Sprachkommunikation (Speech Communication). MARVELO - A Framework for Signal Processing in Wireless Acoustic Sensor Networks (Oldenburg, Germany, 2018). G. Miller, A. Brendel, W. Kellermann, S. Gannot, Misalignment recognition in acoustic sensor networks using a semi-supervised source estimation method and Markov random fields (2020). http://arxiv.org/abs/arXiv:2011.03432. J. Elson, K. Roemer, in Proc. ACM Workshop on Hot Topics in Networks (HotNets). Wireless sensor networks: a new regime for time synchronization (Association for Computing MachineryPrinceton, 2002). R. Lienhart, I. V. Kozintsev, S. Wehr, M. Yeung, in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). On the importance of exact synchronization for distributed audio signal processing (IEEEHong Kong, 2003), p. 840. https://doi.org/10.1109/ICASSP.2003.1202774. I. -K. Rhee, J. Lee, J. 
Kim, E. Serpedin, Y. -C. Wu, Clock synchronization in wireless sensor networks: an overview. Sensors. 9(1), 56–85 (2009). https://doi.org/10.3390/s90100056. M. Hennecke, T. Plotz, G. A. Fink, J. Schmalenstroeer, R. Haeb-Umbach, in Proc. IEEE/SP Workshop on Statistical Signal Processing (SSP 2009). A hierarchical approach to unsupervised shape calibration of microphone array networks, (2009), pp. 257–260. https://doi.org/10.1109/SSP.2009.5278589. L. Wang, T. Hon, J. D. Reiss, A. Cavallaro, Self-localization of ad-hoc arrays using time difference of arrivals. IEEE Trans. Signal Process.64(4), 1018–1033 (2016). https://doi.org/10.1109/TSP.2015.2498130. M. H. Hennecke, G. A. Fink, in Proc. Joint Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA). Towards acoustic self-localization of ad hoc smartphone arrays (Edinburgh, United Kingdom, 2011), pp. 127–132. https://doi.org/10.1109/HSCMA.2011.5942378. V. C. Raykar, I. V. Kozintsev, R. Lienhart, Position calibration of microphones and loudspeakers in distributed computing platforms. IEEE Trans. Speech Audio Proc.13(1), 70–83 (2005). https://doi.org/10.1109/TSA.2004.838540. D. Mills, Internet Time Synchronization: The Network Time Protocol. IEEE Trans. Commun.39:, 1482–1493 (1991). https://doi.org/10.1109/TSA.2004.838540. M. Maróti, B. Kusy, G. Simon, A. Lédeczi, in Proceedings of the 2nd international conference on Embedded networked sensor systems. The flooding time synchronization protocol (Baltimore, 2004), pp. 39–49. https://doi.org/10.1145/1031495.1031501. M. Maroti, B. Kusy, G. Simon, A. Ledeczi, in Proc. International Conference on Embedded Networked Sensor Systems (SenSys). The flooding time synchronization protocol (Association for Computing MachineryBaltimore, 2004). https://doi.org/10.1145/1031495.1031501. M. Leng, Y. -C. Wu, Distributed clock synchronization for wireless sensor networks using belief propagation. IEEE Trans. Signal Process.59(11), 5404–5414 (2011). https://doi.org/10.1109/TSP.2011.2162832. A. Plinge, G. A. Fink, S. Gannot, Passive online geometry calibration of acoustic sensor networks. IEEE Signal Proc. Lett.24(3), 324–328 (2017). https://doi.org/10.1109/LSP.2017.2662065. Y. Dorfan, O. Schwartz, S. Gannot, Joint speaker localization and array calibration using expectation-maximization. EURASIP Journal on Audio, Speech, and Music Processing. 2020(9), 1–19 (2020). https://doi.org/10.1186/s13636-020-00177-1. J. Schmalenstroeer, F. Jacob, R. Haeb-Umbach, M. Hennecke, G. A. Fink, in Proc. Annual Conference of the International Speech Communication Association (Interspeech). Unsupervised geometry calibration of acoustic sensor networks using source correspondences (ISCAFlorence, 2011), pp. 597–600. F. Jacob, J. Schmalenstroeer, R. Haeb-Umbach, in Proc. International Workshop on Acoustic Signal Enhancement (IWAENC). Microphone array position self-calibration from reverberant speech input (VDEAachen, 2012). F. Jacob, J. Schmalenstroeer, R. Haeb-Umbach, in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). DOA-based microphone array postion self-calibration using circular statistics (IEEEVancouver, 2013), pp. 116–120. https://doi.org/10.1109/ICASSP.2013.6637620. F. Jacob, R. Haeb-Umbach, in Proc. ITG Fachtagung Sprachkommunikation (Speech Communication). Coordinate mapping between an acoustic and visual sensor network in the shape domain for a joint self-calibrating speaker tracking (VDEErlangen, 2014). F. Jacob, R. 
Haeb-Umbach, Absolute Geometry Calibration of Distributed Microphone Arrays in an Audio-Visual Sensor Network. ArXiv e-prints, abs/1504.03128 (2015). R. Wang, Z. Chen, F. Yin, DOA-based three-dimensional node geometry calibration in acoustic sensor networks and its Cramér–Rao bound and sensitivity analysis. IEEE/ACM Trans. Audio Speech Lang. Process.27(9), 1455–1468 (2019). https://doi.org/10.1109/TASLP.2019.2921892. S. Wozniak, K. Kowalczyk, in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Exploiting rays in blind localization of distributed sensor arrays (IEEEBarcelona, 2020), pp. 221–225. https://doi.org/10.1109/ICASSP40776.2020.9054752. M. A. Fischler, R. C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications ACM. 24(6), 381–395 (1981). https://doi.org/10.1145/358669.358692. T. Gburrek, J. Schmalenstroeer, A. Brendel, W. Kellermann, R. Haeb-Umbach, in Proc. European Signal Processing Conference (EUSIPCO). Deep neural network based distance estimation for geometry calibration in acoustic sensor networks (Amsterdam, The Netherlands, 2021). A. Brendel, W. Kellermann, Distributed source localization in acoustic sensor networks using the coherent-to-diffuse power ratio. IEEE J. Sel. Top. Signal Proc.13(1), 61–75 (2019). https://doi.org/10.1109/JSTSP.2019.2900911. A. Brendel, A. Regensky, W. Kellermann, in Proc. International Congress on Acoustics. Probabilistic modeling for learning-based distance estimation (Deutsche Gesellschaft für Akustik (DEGA e.V.)Aachen, 2019). J. M. Sachar, H. F. Silverman, W. R. Patterson, Microphone position and gain calibration for a large-aperture microphone array. IEEE Trans. Speech Audio Proc.13(1), 42–52 (2005). https://doi.org/10.1109/TSA.2004.834459. O. Sorkine-Hornung, M. Rabinovich, Least-squares rigid motion using svd. Computing. 1(1), 1–5 (2017). K. Aftab, R. Hartley, J. Trumpf, Generalized weiszfeld algorithms for lq optimization. IEEE Trans. Pattern Anal. Mach. Intell.37(4), 728–745 (2015). https://doi.org/10.1109/TPAMI.2014.2353625. I. Daubechies, R. DeVore, M. Fornasier, C. S. Güntürk, Iteratively reweighted least squares minimization for sparse recovery. Commun. Pur. Appl. Math.63(1), 1–38 (2010). https://doi.org/10.1002/cpa.20303. E. A. Habets, Room impulse response generator. Technische Universiteit Eindhoven, Tech. Rep. 2(2.4), 1 (2006). E. Hadad, F. Heese, P. Vary, S. Gannot, in Proc. International Workshop on Acoustic Signal Enhancement (IWAENC). Multichannel audio database in various acoustic environments (IEEEAntibes, 2014), pp. 313–317. https://doi.org/10.1109/IWAENC.2014.6954309. N. J. Bryan, in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Impulse response data augmentation and deep neural networks for blind room acoustic parameter estimation (IEEEBarcelona, 2020), pp. 1–5. https://doi.org/10.1109/ICASSP40776.2020.9052970. A. Schwarz, W. Kellermann, Coherent-to-diffuse power ratio estimation for dereverberation. IEEE/ACM Trans. Audio Speech Lang. Process.23(6), 1006–1018 (2015). https://doi.org/10.1109/TASLP.2015.2418571. J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, N. L. Dahlgren, DARPA TIMIT Acoustic Phonetic Continuous Speech Corpus CDROM. NIST (1993). https://doi.org/10.6028/nist.ir.4930. D. Kingma, J. Ba, in Proc. International Conference on Learning Representations (ICLR). 
Adam: a method for stochastic optimization (Banff, Canada, 2014). http://arxiv.org/abs/arXiv:1412.6980v9. L. Drude, F. Jacob, R. Haeb-Umbach, in Proc. European Signal Processing Conference (EUSIPCO). DOA-estimation based on a complex Watson kernel method (IEEENice, 2015). https://doi.org/10.1109/EUSIPCO.2015.7362384. J. R. Jensen, J. K. Nielsen, R. Heusdens, M. G. Christensen, in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). DOA estimation of audio sources in reverberant environments, (2016), pp. 176–180. https://doi.org/10.1109/ICASSP.2016.7471660. T. Gburrek, J. Schmalenstroeer, R. Haeb-Umbach, in Accepted for Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Iterative geometry calibration from distance estimates for wireless acoustic sensor networks, (2021). http://arxiv.org/abs/arXiv:2012.06142. R. H. Byrd, P. Lu, J. Nocedal, C. Zhu, A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput.16(5), 1190–1208 (1995). https://doi.org/10.1137/0916069. P. V. Giampouras, A. A. Rontogiannis, K. D. Koutroumbas, Alternating iteratively reweighted least squares minimization for low-rank matrix factorization. IEEE Trans. Signal Process.67(2), 490–503 (2019). https://doi.org/10.1109/TSP.2018.2883921. We would like to thank Mr. Andreas Brendel for the fruitful discussions on distance estimation. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project 282835863. Open Access funding enabled and organized by Projekt DEAL. Department of Communications Engineering, Paderborn University, Paderborn, Germany Tobias Gburrek, Joerg Schmalenstroeer & Reinhold Haeb-Umbach Tobias Gburrek Joerg Schmalenstroeer Reinhold Haeb-Umbach DNN model development and training: TG. Geometry calibration software and experiments: TG and JS. Writing paper: TG, JS, and RH. The authors read and approved the final manuscript. Reinhold Haeb-Umbach received the Dipl.-Ing. and Dr.-Ing. degrees from RWTH Aachen University of Technology in 1983 and 1988, respectively. He is currently a professor of Communications Engineering at Paderborn University, Germany. His main research interests are in the fields of statistical signal processing and machine learning, with applications to speech enhancement, acoustic beamforming and source separation, as well as automatic speech recognition and unsupervised learning from speech and audio. He is a fellow of the International Speech Communication Association(ISCA) and of the IEEE. Joerg Schmalenstroeer received the Dipl.-Ing. and Dr.-Ing. degree in electrical engineering from the University of Paderborn in 2004 and 2010, respectively. Since 2004, he has been a Research Staff Member with the Department of Communications Engineering of the University of Paderborn. His research interests are in acoustic sensor networks and statistical speech signal processing. Tobias Gburrek is a Ph.D. student at Paderborn University since 2019 where he also pursued his Bachelor's and Masters's degree in Electrical Engineering. His research interests include acoustic sensor networks with a focus on geometry calibration and signal processing with deep neural networks. Correspondence to Tobias Gburrek. Gburrek, T., Schmalenstroeer, J. & Haeb-Umbach, R. Geometry calibration in wireless acoustic sensor networks utilizing DoA and distance information. J AUDIO SPEECH MUSIC PROC. 2021, 25 (2021). 
https://doi.org/10.1186/s13636-021-00210-x Data-driven Approaches in Acoustic Signal Processing: Methods and Applications
The effect of surface pattern property on the advancing motion of three-dimensional droplets Existence and stability of generalized transition waves for time-dependent reaction-diffusion systems Optimal control strategies for an online game addiction model with low and high risk exposure Youming Guo and Tingting Li , School of Science, Guilin University of Technology, Guilin, Guangxi 541004, China * Corresponding author: Tingting Li Received May 2020 Revised August 2020 Published November 2020 Fund Project: The second author is supported by the Basic Competence Promotion Project for Young and Middle-aged Teachers in Guangxi, China (2019KY0269) Figure(14) / Table(2) In this paper, we establish a new online game addiction model with low and high risk exposure. With the help of the next generation matrix, the basic reproduction number $ R_{0} $ is obtained. By constructing a suitable Lyapunov function, the equilibria of the model are Globally Asymptotically Stable. We use the optimal control theory to study the optimal solution problem with three kinds of control measures (isolation, education and treatment) and get the expression of optimal control. In the simulation, we first verify the Globally Asymptotical Stability of Disease-Free Equilibrium and Endemic Equilibrium, and obtain that the different trajectories with different initial values converges to the equilibria. Then the simulations of nine control strategies are obtained by forward-backward sweep method, and they are compared with the situation of without control respectively. The results show that we should implement the three kinds of control measures according to the optimal control strategy at the same time, which can effectively reduce the situation of game addiction. Keywords: Game addiction model, globally asymptotically stable, Lyapunov function, forward-backward sweep method, optimal control. Mathematics Subject Classification: Primary: 34D23; Secondary: 49J15. Citation: Youming Guo, Tingting Li. Optimal control strategies for an online game addiction model with low and high risk exposure. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020347 F. B. Agusto and M. A. Khan, Optimal control strategies for dengue transmission in pakistan, Math. Biosci., 305 (2018), 102-121. doi: 10.1016/j.mbs.2018.09.007. Google Scholar J. O. Akanni, F. O. Akinpelu, S. Olaniyi, A. T. Oladipo and A. W. Ogunsola, Modelling financial crime population dynamics: Optimal control and cost-effectiveness analysis, Int. J. Dyn. Control, 8 (2020), 531-544. doi: 10.1007/s40435-019-00572-3. Google Scholar A. Barrea and M. E. Hernández, Optimal control of a delayed breast cancer stem cells nonlinear model, Optimal Control Appl. Methods, 37 (2016), 248-258. doi: 10.1002/oca.2164. Google Scholar E. Bonyah, M. A. Khan, K. O. Okosun and J. F. Gómez-Aguilar, Modelling the effects of heavy alcohol consumption on the transmission dynamics of gonorrhea with optimal control, Math. Biosci., 309 (2019), 1-11. doi: 10.1016/j.mbs.2018.12.015. Google Scholar D. K. Das, S. Khajanchi and T. K. Kar, The impact of the media awareness and optimal strategy on the prevalence of tuberculosis, Appl. Math. Comput., 366 (2020), 124732, 23 pp. doi: 10.1016/j.amc.2019.124732. Google Scholar C. Ding, Y. Sun and Y. Zhu, A schistosomiasis compartment model with incubation and its optimal control, Math. Methods Appl. Sci., 40 (2017), 5079-5094. doi: 10.1002/mma.4372. Google Scholar P. van den Driessche and J. 
Watmough, Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission, Math. Biosci., 180 (2002), 29-48. doi: 10.1016/S0025-5564(02)00108-6. Google Scholar G. Fan, H. R. Thieme and H. Zhu, Delay differential systems for tick population dynamics, J. Math. Biol., 71 (2015), 1017-1048. doi: 10.1007/s00285-014-0845-0. Google Scholar W. H. Fleming and R. W. Rishel, Deterministic and Stochastic Optimal Control, Springer-Verlag, New York, 1975. Google Scholar D. Gao and N. Huang, Optimal control analysis of a tuberculosis model, Appl. Math. Model., 58 (2018), 47-64. doi: 10.1016/j.apm.2017.12.027. Google Scholar Y. Guo and T. Li, Optimal control and stability analysis of an online game addiction model with two stages, Math. Method App. Sci., 43 (2020), 4391-4408. Google Scholar K. Hattaf, Optimal control of a delayed HIV infection model with immune response using an efficient numerical method, ISRN Biomathematics, (2012), Article ID215124. Google Scholar J. M. Heffernan, R. J. Smith and L. M. Wahl, Perspectives on the basic reproductive ratio, J. R. Soc. Interface, 2 (2005), 281-293. doi: 10.1098/rsif.2005.0042. Google Scholar H.-F. Huo, F.-F. Cui and H. Xiang, Dynamics of an SAITS alcoholism model on unweighted and weighted networks, Physica A, 496 (2018), 249-262. doi: 10.1016/j.physa.2018.01.003. Google Scholar H.-F. Huo and X.-M. Zhang, Complex dynamics in an alcoholism model with the impact of Twitter, Math. Biosci., 281 (2016), 24-35. doi: 10.1016/j.mbs.2016.08.009. Google Scholar M. A. Khan, S. W. Shah, S. Ullah and J. F. Gómez-Aguilar, A dynamical model of asymptomatic carrier zika virus with optimal control strategies, Nonlinear Anal. Real World Appl., 50 (2019), 144-170. doi: 10.1016/j.nonrwa.2019.04.006. Google Scholar [17] Y. Kuang, Delay Differential Equations with Application in Population Dynamics, Academic Press, Inc., Boston, MA, 1993. Google Scholar V. Lakshmikantham, S. Leela and A. A. Martynyuk, Stability Analysis of Nonlinear Systems, Marcel Dekker, Inc., New York, 1989. Google Scholar T. Li and Y. Guo, Stability and optimal control in a mathematical model of online game addiction, Filomat, 33 (2019), 5691-5711. Google Scholar Z. Lin and H. Zhu, Spatial spreading model and dynamics of West Nile virus in birds and mosquitoes with free boundary, J. Math. Biol., 75 (2017), 1381-1409. doi: 10.1007/s00285-017-1124-7. Google Scholar Z. Lu, From E-Heroin to E-sports: The development of competitive gaming in China, The International Journal of the History of Sport, 33 (2017), 2186-2206. doi: 10.1080/09523367.2017.1358167. Google Scholar [22] D. L. Lukes, Differential Equations: Classical to Controlled, Matheatics in Science and Engineering, Academia Press, New York, 1982. Google Scholar M. McAsey, L. Mou and W. Han, Convergence of the forward-backward sweep method in optimal control, Comput. Optim. Appl., 53 (2012), 207-226. doi: 10.1007/s10589-011-9454-7. Google Scholar K. O. Okosun, M. A. Khan, E. Bonyah and O. O. Okosun, Cholera-schistosomiasis coinfection dynamics, Optim. Contr. Appl. Met., 40 (2019), 703-727. doi: 10.1002/oca.2507. Google Scholar K. A. Pawelek, A. Oeldorf-Hirsch and L. Rong, Modeling the impact of Twitter on influenza epidemics, Math. Biosci. Eng., 11 (2014), 1337-1356. doi: 10.3934/mbe.2014.11.1337. Google Scholar M. Sana, R. Saleem, A. Manaf and M. 
Habib, Varying forward backward sweep method using Runge-Kutta, Euler and Trapezoidal scheme as applied to optimal control problems, Sci.Int.(Labore), 27 (2015), 839-843. Google Scholar O. Sharomi and A. B. Gumel, Curtailing smoking dynamics: A mathematical modeling approach, Appl. Math. Comput., 195 (2008), 475-499. doi: 10.1016/j.amc.2007.05.012. Google Scholar Statistical Classification of Sports Industry, 2019. Available from: http://www.stats.gov.cn/tjgz/tzgb/201904/t20190409_1658556.html. Google Scholar X. Sun, H. Nishiura and Y. Xiao, Modeling methods for estimating HIV incidence: A mathematical review, Theor. Biol. Med. Model, 17 (2020), 1-14. doi: 10.1186/s12976-019-0118-0. Google Scholar C. S. Tang, Y. W. Koh and Y. Q. Gan, Addiction to internet use, online gaming, and online social networking among young adults in China, Singapore, and the United States, Asia Pac. J. Public. He, 29 (2017), 673-682. doi: 10.1177/1010539517739558. Google Scholar The 43rd Statistical Report on Internet Development in China, 2019. Available from: http://www.cac.gov.cn. Google Scholar X. Tian, R. Xu and J. Lin, Mathematical analysis of a cholera infection model with vaccination strategy, Appl. Math. Comput., 361 (2019), 517-535. doi: 10.1016/j.amc.2019.05.055. Google Scholar S. Ullah, M. A. Khan and J. F. Gómez-Aguilar, Mathematical formulation of hepatitis B virus with optimal control analysis, Optim. Contr. Appl. Met., 40 (2019), 529-544. doi: 10.1002/oca.2493. Google Scholar R. Viriyapong and M. Sookpiam, Education campaign and family understanding affect stability and qualitative behavior ofan online game addiction model for children and youth in Thailand, Math. Method App. Sci., 42 (2019), 6906-6916. doi: 10.1002/mma.5796. Google Scholar X. Wang, M. Shen, Y. Xiao and L. Rong, Optimal control and cost-effectiveness analysis of a Zika virus infection model with comprehensive interventions, Appl. Math. Comput., 359 (2019), 165-185. doi: 10.1016/j.amc.2019.04.026. Google Scholar X. Wang, Y. Shi, D. Wang and C. Xu, Dynamic Analysis on a Kind of Mathematical Model Incorporating Online Game Addiction Model and Age-Structure, Journal of Beijing University of Civil Engineering and Architecture, 2 (2017), 54-58. Google Scholar World Health Statistics 2019, 2019. Available from: https://www.who.int/data/gho/publications/world-health-statistics. Google Scholar T. A. Yıldız and E. Karaoǧlu, Optimal control strategies for tuberculosis dynamics with exogenous reinfections in case of treatment at home and treatment in hospital, Nonlinear Dynam., 97 (2019), 2643-2659. Google Scholar Z.-K. Zhang, C. Liu, X.-X. Zhan, X. Lu, C.-X. Zhang and Y.-C. Zhang, Dynamics of information diffusion and its applications on complex networks, Phys. Rep., 651 (2016), 1-34. doi: 10.1016/j.physrep.2016.07.002. Google Scholar W. Zhou, Y. Xiao and J. M. Heffernan, Optimal media reporting intensity on mitigating spread of an emerging infectious disease, Plos. One, 3 (2019), E0213898. doi: 10.1371/journal.pone.0213898. Google Scholar Figure 1. Transfer diagram of model Figure 2. DFE $ D_{0} = (829,0,0,0,0,0) $ is Globally Asymptotically Stable when $ R_{0} = 0.5778 < 1 $ and $ \beta = 0.2 $ Figure 3. EE $ D^{*} = (358.829, 20.903, 31.354, 67.916, 25.648,324.35) $ is Globally Asymptotically Stable when $ R_{0} = 2.3111 > 1 $ and $ \beta = 0.8 $ Figure 4. Dynamical behavior of infected when $ R_{0} = 0.5778 $ and $ \beta = 0.2 $ Figure 6. Graphical results for strategy A Figure 7. Graphical results for strategy B Figure 8. 
Graphical results for strategy C
Figure 9. Graphical results for strategy D
Figure 10. Graphical results for strategy E
Figure 11. Graphical results for strategy F
Figure 12. Graphical results for strategy G
Figure 13. Graphical results for strategy H
Figure 14. Graphical results for strategy I
Table 1. Estimation of parameters
Parameter | Description | Value
$ \mu $ | Natural supplementary and death rate | 0.05 per week
$ \theta $ | Proportion of individuals who become low-risk exposed | 0.4 per week
$ \beta $ | Contact transmission rate | 0.1 $\sim$ 0.8 per week
$ v_{1} $ | Proportion of $ E_{1} $ who become infected | 0.2 per week
$ v_{2} $ | Proportion of $ E_{1} $ who become professional | 0.2 per week
$ w_{1} $ | Proportion of $ E_{2} $ who become infected | 0.3 per week
$ w_{2} $ | Proportion of $ E_{2} $ who become professional | 0.1 per week
$ k_{1} $ | Proportion of $ I $ who quit | 0.05 per week
$ k_{2} $ | Proportion of $ I $ who become professional | 0.1 per week
$ \delta $ | Proportion of $ P $ who quit | 0.5 per week
$ u_{1} $ | The decreased proportion by isolation | Variable
$ u_{2} $ | The decreased proportion in $ E_{1} $ by prevention | Variable
$ u_{4} $ | The decreased proportion in $ I $ by treatment | Variable
Table 2. Results of different control strategies
Strategy | Total infectious individuals $ \int_{0}^{t_f}(E_{1}+E_{2}+I)\,dt $ | Averted infectious individuals | Objective function $ J $
Without control | 7461.1302 | $ - $ | $ 8.5947\times 10^{6} $
Strategy A | 526.3468 | 6934.7835 | $ 1.3646\times 10^{6} $
Strategy B | 1426.9073 | 6034.2229 | $ 2.5242\times 10^{6} $
Strategy C | 701.3874 | 6759.7428 | $ 1.7413\times 10^{6} $
Strategy D | 524.2143 | 6936.9159 | $ 1.3592\times 10^{6} $
Strategy E | 525.4126 | 6935.7176 | $ 1.3619\times 10^{6} $
Strategy F | 525.0718 | 6936.0585 | $ 1.3618\times 10^{6} $
Strategy G | 579.8124 | 6881.3178 | $ 4.784\times 10^{6} $
Strategy H | 1626.7971 | 5834.3331 | $ 2.7511\times 10^{6} $
Strategy I | 658.0017 | 6803.1286 | $ 2.6232\times 10^{6} $
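The abstract states that the nine strategies in Table 2 were computed with the forward-backward sweep method. As a generic illustration of that scheme, here is a hedged Python sketch under our own naming, RK4 discretization, and relaxation choices (it is not the authors' code); the model-specific state equations, adjoint equations, and control characterization from the paper would be supplied as the callables f, adjoint_rhs, and optimal_control, all returning numpy arrays:

```python
import numpy as np

def forward_backward_sweep(f, adjoint_rhs, optimal_control, x0, lambda_T, t_grid,
                           n_controls, max_iter=200, tol=1e-3, relax=0.5):
    """Generic forward-backward sweep for an optimal control problem.

    f(t, x, u)                 state dynamics,   x' = f
    adjoint_rhs(t, x, u, lam)  adjoint dynamics, lam' = adjoint_rhs
    optimal_control(t, x, lam) control from the optimality condition (with bounds applied)
    x0, lambda_T               initial state and terminal adjoint (transversality condition)
    """
    t = np.asarray(t_grid, dtype=float)
    N = len(t) - 1
    h = t[1] - t[0]                                   # uniform time grid assumed
    x = np.zeros((N + 1, len(x0)))
    lam = np.zeros((N + 1, len(lambda_T)))
    u = np.zeros((N + 1, n_controls))

    for _ in range(max_iter):
        u_old = u.copy()

        # 1) forward sweep: state trajectory under the current control (RK4)
        x[0] = x0
        for i in range(N):
            um = 0.5 * (u[i] + u[i + 1])
            k1 = f(t[i], x[i], u[i])
            k2 = f(t[i] + h / 2, x[i] + h / 2 * k1, um)
            k3 = f(t[i] + h / 2, x[i] + h / 2 * k2, um)
            k4 = f(t[i] + h, x[i] + h * k3, u[i + 1])
            x[i + 1] = x[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # 2) backward sweep: adjoint trajectory (RK4 run backwards in time)
        lam[N] = lambda_T
        for i in range(N, 0, -1):
            xm, um = 0.5 * (x[i] + x[i - 1]), 0.5 * (u[i] + u[i - 1])
            k1 = adjoint_rhs(t[i], x[i], u[i], lam[i])
            k2 = adjoint_rhs(t[i] - h / 2, xm, um, lam[i] - h / 2 * k1)
            k3 = adjoint_rhs(t[i] - h / 2, xm, um, lam[i] - h / 2 * k2)
            k4 = adjoint_rhs(t[i] - h, x[i - 1], u[i - 1], lam[i] - h * k3)
            lam[i - 1] = lam[i] - h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # 3) control update from the optimality condition, with relaxation
        u_new = np.array([optimal_control(ti, xi, li) for ti, xi, li in zip(t, x, lam)])
        u = relax * u_new + (1 - relax) * u_old

        if np.max(np.abs(u - u_old)) <= tol * (1 + np.max(np.abs(u))):
            break

    return t, x, lam, u
```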
What is the average mass of galaxies according to Hubble Deep and Ultra Deep field observations? It is very widely known among people interested in astronomy that there are 100-400 billion stars in the Milky Way galaxy and there are ~ 100 billion galaxies in the observable universe, which is usually calculated as $10^{22}$ stars in the observable universe. I have also seen estimates an order or even two orders of magnitude higher than this. What I am thinking here is why it is assumed that the Milky Way is the average galaxy ? One way to check whether or not the Milky Way is an average galaxy in mass and so in stellar mass is to check the results of the Hubble deep and ultra deep field images. While searching for information, I came across this page, which I somehow doubt its source of information, and it states: "Within the Hubble Ultra Deep Field there are approximately 10,000 discrete objects. Most of these objects are very small and likely have masses in the range of $10^5$ to $10^7$ solar masses. Note the mass of the Milky Way galaxy is $10^{12}$ solar masses." So if "most" of the 10,000 galaxies seen in the Hubble ultra deep field image are that small in mass, then it is completely incorrect to consider the Milky Way an average mass galaxy. It is also clear from this diagram that the Hubble deep field extends to the "Normal Galaxies", while the Ultra deep field extends to the "First Galaxies". So it is expected to observe many small irregular and dwarf galaxies in the HUDF. As far as I understand, these small galaxies merge to form more massive spiral and elliptical galaxies that we see today. So again, if we are observing 10,000 small galaxies that should merge to form normal galaxies, we shouldn't be thinking that the image contains 10,000 galaxies similar to the Milky Way. So, am I correct that the Milky Way is not an average mass galaxy in the observable universe and that we should use a smaller number when referring to the number of stars in the universe ? astronomy astrophysics galaxies observational-astronomy Abanob EbrahimAbanob Ebrahim $\begingroup$ Who thinks the Milky Way is an average galaxy? It isn't. $\endgroup$ – Rob Jeffries Dec 17 '14 at 23:13 $\begingroup$ Might want to read Phil Plait and Emily Lackdalwalla's blogs. $\endgroup$ – Carl Witthoft Dec 17 '14 at 23:16 I think the following image, which comes from Tomczak et al. (2014) and the so-called ZFOURGE/CANDELS galaxy survey should do the trick. It shows how the galaxy stellar mass function (i.e. the number of galaxies per unit mass per cubic megaparsec that have a certain stellar mass) evolves as a function of redshift. As you might imagine this is not just a case of counting galaxies and estimating their masses - you have to account for the fact that it is harder to see low-mass galaxies. Anyway, these are their results and they clearly show that a galaxy like the Milky Way that has about 200 billion stars and a stellar mass of about $5\times10^{10}M_{\odot}$ (note that the total mass of the Milky Way is dominated by dark matter), is quite a massive galaxy (note the logarithmic y-axis). In other words, small galaxies dominate the statistics. However, when you look at the Hubble Deep or Ultradeep fields, it is quite difficult to use this information. You will always tend to see the most luminous and most massive galaxies and the low-mass galaxies will not be represented as shown in the mass functions shown in this picture. 
So there are actually two separate things here, and I'm not sure I can definitively answer either. (i) What is the average mass of a galaxy; (ii) what is the average mass of a galaxy seen in the Hubble Deep fields? The answer to (ii) will obviously be much bigger than the answer to (i). Fortunately you can see from the plot that the straight(ish) line section below about the mass of the Milky Way are a power laws with slope $\sim -0.5$. That means that $M\Phi(M) \propto M^{+0.5}$ and when you integrate this over some range, it is the upper limit that dominates. So low-mass galaxies, do not dominate the stellar mass. In fact, it is galaxies about the size of the Milky Way that dominate the stellar mass. Galaxies with $M>10^{11}M_{\odot}$ (in stars) become increasingly rare, so these do not contribute so much. Therefore, very roughly, the number of stars in the Universe will be given by the number of galaxies with mass within a factor of a few of the Milky Way multiplied by the number of stars in the Milky Way. I cannot provide an answer for the average mass for a galaxy in the UDF or any other survey volume because it is unclear how many of the lowest mass objects there are or what lower mass cut-off to work with. The plots shown for the CANDELS field below will be perfectly representative of the UDF or any other deep observation, the cosmic variance should not be an issue for order of magnitude estimates. EDIT: As an example, let's take the average space density of $5\times 10^{10}M_{\odot}$ galaxies to be $10^{-2.5}$ per dex per Mpc$^{3}$ in the low redshift universe and assume galaxies over a 1 order of magnitude (1 dex) range of mass contribute almost all the stellar mass. If the observable universe if 46 billion light years ($\sim 15,000$ Mpc - see Size of the Observable Universe) and the average star is $0.25M_{\odot}$; there are: $$N_* = 10^{-2.5} \times 5\times10^{10} \times \frac{4\pi}{3} \times (15000)^3/0.25 \simeq 10^{22}$$ stars in the observable universe. $\begingroup$ Great, that's exactly what I am looking for. But honestly I seem to be unable to read the plots so I hope you can help me with this. There are two things that should answer my question. 1) If there is a certain number of galaxies at redshifts < 3, what is the percentage of this number that should be in galaxies with stellar masses > $5\times10^{9}$ Solar masses ? 2) Is there a way to find the number of galaxies in the HUDF image with redshifts less than 3 ? $\endgroup$ – Abanob Ebrahim Dec 18 '14 at 0:35 $\begingroup$ @AbanobEbrahim Your (1) could be quite difficult. Where do you make the low-end cutoff? For a warmup, how many satellite galaxies orbit the Milky Way? This number was revised not long ago, and is still debated. And what is a galaxy anyway? Do you count globular clusters? $\endgroup$ – user10851 Dec 18 '14 at 4:37 $\begingroup$ @AbanobEbrahim You need to convert the differential frequency distributions to cumulative frequencies by integrating the mass functions. As Chris says, difficult unless you define a lower mass limit. Second point: you need to find papers on the UDF that do this - I'm sure you will find some. $\endgroup$ – Rob Jeffries Dec 18 '14 at 7:33 $\begingroup$ Let me try to make this a easier. Are there any examples for stellar masses of ANY galaxies in observed in the UDF ? I just need a general idea about how the stellar masses change with increasing redshifts. 
$\endgroup$ – Abanob Ebrahim Dec 18 '14 at 20:49 $\begingroup$ Why would you think the UDF is different to the CANDELS fields? The redshift distribution might be different, but the mass function at a given redshift will be the same. $\endgroup$ – Rob Jeffries Dec 18 '14 at 21:21
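For what it's worth, the order-of-magnitude estimate at the end of the accepted answer is easy to check numerically; the snippet below only reproduces that arithmetic with the same rough input numbers quoted there (it is not a rigorous calculation):

```python
import numpy as np

phi = 10**-2.5        # space density of ~Milky-Way-mass galaxies [per dex per Mpc^3]
m_stellar = 5e10      # stellar mass of a Milky-Way-like galaxy [solar masses]
r_mpc = 15_000        # comoving radius of the observable universe [Mpc]
m_avg_star = 0.25     # average stellar mass [solar masses]

volume = 4 / 3 * np.pi * r_mpc**3
n_stars = phi * m_stellar * volume / m_avg_star
print(f"N_* ~ {n_stars:.1e}")   # ~ 8.9e21, i.e. of order 10^22
```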
What exactly is a photon? Consider the question, "What is a photon?". The answers say, "an elementary particle" and not much else. They don't actually answer the question. Moreover, the question is flagged as a duplicate of, "What exactly is a quantum of light?" – the answers there don't tell me what a photon is either. Nor do any of the answers to this question mentioned in the comments. When I search on "photon", I can't find anything useful. Questions such as, "Wave function of a photon" look promising, but bear no fruit. Others say things like, "the photon is an excitation of the photon field." That tells me nothing. Nor does the tag description, which says: The photon is the quantum of the electromagnetic four-potential, and therefore the massless bosonic particle associated with the electromagnetic force, commonly also called the 'particle of light'... I'd say that's less than helpful because it gives the impression that photons are forever popping into existence and flying back and forth exerting force. This same concept is in the photon Wikipedia article too - but it isn't true. As as anna said, "Virtual particles only exist in the mathematics of the model." So, who can tell me what a real photon is, or refer me to some kind of authoritative informative definition that is accepted and trusted by particle physicists? I say all this because I think it's of paramount importance. If we have no clear idea of what a photon actually is, we lack foundation. It's like what kotozna said: Photons seem to be one of the foundation ideas of quantum mechanics, so I am concerned that without a clear definition or set of concrete examples, the basis for understanding quantum experiments is a little fuzzy. I second that, only more so. How can we understand pair production if we don't understand what the photon is? Or the electron? Or the electromagnetic field? Or everything else? It all starts with the photon. I will give a 400-point bounty to the least-worst answer to the question. One answer will get the bounty, even if I don't like it. And the question is this: particle-physics photons John DuffieldJohn Duffield $\begingroup$ People might find this article useful; Are there photons in fact? S.A. Rashkovskiy $\endgroup$ – Riad Dec 7 '18 at 7:40 $\begingroup$ Interesting! The author claims that many things that are usually though to require discrete photons can be explained using more-or-less continuous waves of radiation and discrete atoms, charges, etc. Do you know whether there are any things left that do require discrete quanta of light, beyond the ones he looks at? $\endgroup$ – J Thomas Dec 7 '18 at 14:56 The photon is a construct that was introduced to explain the experimental observations that showed that the electromagnetic field is absorbed and radiated in quanta. Many physicists take this construct as an indication that the electromagnetic field consists of dimensionless point particles, however of this particular fact one cannot be absolutely certain. All experimental observations associated with the electromagnetic field necessarily involve the absorption and/or radiation process. So when it comes to a strictly ontological answer to the question "What is a photon?" we need to be honest and say that we don't really know. It is like those old questions about the essence of things; question that could never really be answered in a satisfactory way. The way to a better understanding often requires that one becomes comfortable with uncertainty. 
Epanoui flippiefanusflippiefanus $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – David Z♦ Aug 8 '16 at 1:38 $\begingroup$ This is a rather superficial answer. There are technical meanings to the word "photon" in quantum field theory. DavidZ hints at that here and I explain the notion of particle a bit further e.g. here. It can further be found in most standard treatments of QED. Your statements about ontology are true but not really the physics answer to the question. That the asker seems dissatisfied with or ignorant of those technical meanings doesn't mean we shouldn't explain them. $\endgroup$ – ACuriousMind♦ Aug 8 '16 at 22:12 $\begingroup$ @ACuriousMind, Yes it is true that we have the more technical treatment of the photon in quantum field theory etc. However, it seemed to me that the person that asked the question is aware of these treatments and is unsatisfied with them. The question seems to require an ontological answer. $\endgroup$ – flippiefanus Aug 10 '16 at 4:32 $\begingroup$ Wasn't a quantum of light "invented" to help with Quantum Mechanics? Or was it the other way around... $\endgroup$ – Fizikly Q-Ryus Aug 15 '16 at 12:03 $\begingroup$ "The photon is a construct that was introduced to explain the experimental observations " This can be said about any product of human intellect. The question is whether this construct represents a particle - whatever that means - or a convenient fiction. $\endgroup$ – my2cts May 25 '18 at 8:30 The word photon is one of the most confusing and misused words in physics. Probably much more than other words in physics, it is being used with several different meanings and one can only try to find which one is meant based on the source and context of the message. The photon that spectroscopy experimenter uses to explain how spectra are connected to the atoms and molecules is a different concept from the photon quantum optics experimenters talk about when explaining their experiments. Those are different from the photon that the high energy experimenters talk about and there are still other photons the high energy theorists talk about. There are probably even more variants (and countless personal modifications) in use. The term was introduced by G. N. Lewis in 1926 for the concept of "atom of light": [...] one might have been tempted to adopt the hypothesis that we are dealing here with a new type of atom, an identifiable entity, uncreatable and indestructible, which acts as the carrier of radiant energy and, after absorption, persists as an essential constituent of the absorbing atom until it is later sent out again bearing a new amount of energy [...] –"The origin of the word "photon"" I therefore take the liberty of proposing for this hypothetical new atom, which is not light but plays an essential part in every process of radiation, the name photon. –"The Conservation of Photons" (1926-12-18) As far as I know, this original meaning of the word photon is not used anymore, because all the modern variants allow for creation and destruction of photons. The photon the experimenter in visible-UV spectroscopy usually talks about is an object that has definite frequency $\nu$ and definite energy $h\nu$; its size and position are unknown, perhaps undefined; yet it can be absorbed and emitted by a molecule. 
The photon the experimenter in quantum optics (detection correlation studies) usually talks about is a purposely mysterious "quantum object" that is more complicated: it has no definite frequency, has somewhat defined position and size, but can span whole experimental apparatus and only looks like a localized particle when it gets detected in a light detector. The photon the high energy experimenter talks about is a small particle that is not possible to see in photos of the particle tracks and their scattering events, but makes it easy to explain the curvature of tracks of matter particles with common point of origin within the framework of energy and momentum conservation (e. g. appearance of pair of oppositely charged particles, or the Compton scattering). This photon has usually definite momentum and energy (hence also definite frequency), and fairly definite position, since it participates in fairly localized scattering events. Theorists use the word photon with several meanings as well. The common denominator is the mathematics used to describe electromagnetic field and its interaction with matter. Certain special quantum states of EM field - so-called Fock states - behave mathematically in a way that allows one to use the language of "photons as countable things with definite energy". More precisely, there are states of the EM field that can be specified by stating an infinite set of non-negative whole numbers. When one of these numbers change by one, this is described by a figure of speech as "creation of photon" or "destruction of photon". This way of describing state allows one to easily calculate the total energy of the system and its frequency distribution. However, this kind of photon cannot be localized except to the whole system. In the general case, the state of the EM field is not of such a special kind, and the number of photons itself is not definite. This means the primary object of the mathematical theory of EM field is not a set of point particles with definite number of members, but a continuous EM field. Photons are merely a figure of speech useful when the field is of a special kind. Theorists still talk about photons a lot though, partially because: it is quite entrenched in the curriculum and textbooks for historical and inertia reasons; experimenters use it to describe their experiments; partially because it makes a good impression on people reading popular accounts of physics; it is hard to talk interestingly about $\psi$ function or the Fock space, but it is easy to talk about "particles of light"; partially because of how the Feynman diagram method is taught. (In the Feynman diagram, a wavy line in spacetime is often introduced as representing a photon. But these diagrams are a calculational aid for perturbation theory for complicated field equations; the wavy line in the Feynman diagram does not necessarily represent actual point particle moving through spacetime. The diagram, together with the photon it refers to, is just a useful graphical representation of certain complicated integrals.) Note on the necessity of the concept of photon Many famous experiments once regarded as evidence for photons were later explained qualitatively or semi-quantitatively based solely based on the theory of waves (classical EM theory of light, sometimes with Schroedinger's equation added). These are for example the photoelectric effect, Compton scattering, black-body radiation and perhaps others. 
There always was a minority group of physicists who avoided the concept of photon altogether for this kind of phenomena and preferred the idea that the possibilities of EM theory are not exhausted. Check out these papers for non-photon approaches to physics: R. Kidd, J. Ardini, A. Anton, Evolution of the modern photon, Am. J. Phys. 57, 27 (1989) http://www.optica.machorro.net/Lecturas/ModernPhoton_AJP000027.pdf C. V. Raman, A classical derivation of the Compton effect. Indian Journal of Physics, 3, 357-369. (1928) http://dspace.rri.res.in/jspui/bitstream/2289/2125/1/1928%20IJP%20V3%20p357-369.pdf Trevor W. Marshall, Emilio Santos: The myth of the photon, Arxiv (1997) https://arxiv.org/abs/quant-ph/9711046v1 Timothy H. Boyer, Derivation of the Blackbody Radiation Spectrum without Quantum Assumptions, Phys. Rev. 182, 1374 (1969) https://dx.doi.org/10.1103/PhysRev.182.1374 Ján LalinskýJán Lalinský $\begingroup$ It's probably worth linking to Lamb's "Anti-photon" paper/opinion-piece. Unfortunately I don't have a non-paywalled link. $\endgroup$ – dmckee♦ Aug 7 '16 at 16:16 $\begingroup$ @dmckee : thanks. I found the paper. Because you referred to it a couple of years back! $\endgroup$ – John Duffield Aug 8 '16 at 8:54 $\begingroup$ So, in short, a photon is not exactly. $\endgroup$ – Yakk Aug 8 '16 at 15:25 $\begingroup$ Before Lewis, there was a different definition of photon, one candle per square meter, on an area of one square millimeter. See page 173, footnote 20, in the 1920 article here: books.google.com/… , the unit now called the troland en.wikipedia.org/wiki/Troland $\endgroup$ – DavePhD Aug 11 '16 at 12:10 $\begingroup$ This is a very strange definition of how high-energy experimenters regard photons. Real photons are hard gamma rays which are observend via electromagnetic showers in calorimeters. They are also produced in scintillation detectors, and via bremstrahlung (cf. track curvature). We also do talk a lot about them as Feynman integral propagators, and are probably a bit more cavalier than we should be about calling them "particles" in that context ;-) $\endgroup$ – andybuckley Jan 20 '17 at 22:56 This is the elementary particle table used in the standard model of particle physics, you know, the one that is continuously validated at LHC despite hopeful searches for extensions. The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth. Note the word "particle" and note that whenever a physical process is calculated so as to give numbers to compare with experimental measurements, these particles are treated as point particles. i.e. in these Feynman diagrams for photon scattering: The incoming photon is real, i.e. on mass shell, 0 mass, energy $h\nu$. The vertex is a point and is the reason that particle physicists keep talking of point particles (until maybe string theory is validated, and we will then be talking of string particles). The concept of a photon as a particle is as realistic as the concept of an electron, and its existence is validated by the fit to the data of the standard model predictions. So the answer is that the photon is a particle in the standard model of physics which fits the measurements in quantum mechanical dimensions, i.e. dimensions commensurate with $\hbar$. To navel-gaze over the "meaning of a photon" more than over the "meaning of an electron" in a mathematical model is no longer physics, but metaphysics. i.e. 
people transfer their belief prejudices on the explanation. We call an electron a "particle" in our experimental setups because the macroscopic footprint as it goes through the detectors is that of a classical particle. The same is true for the photons measured in the calorimeters of the LHC, their macroscopic "footprint" is a zero-mass particle with energy $h\nu$ and spin one. This CMS diphoton event is in no doubt whether the footprint is a photon or not. It is a photon of the elementary particle table. It is only at the vertices of the interaction that the quantum mechanical indeterminacy is important. You ask: How can we understand pair production if we don't understand what the photon is It seems that one has to continually stress that physics theories are modelling data; they are not a metaphysical proposition of how the world began. We have a successful QFT model for particle physics, which describes the behavior of the elementary particles as their footprint is recorded in experiments and successfully predicts new outcomes. That is all. We understand the processes as modelled by QFT, the understanding of the nature of the axiomatic particle setup in the table belongs to metaphysics. Assuming the quantum mechanical postulates and assuming the particles in the table, we can model particle interactions. It is similar to asking "why $SU(3)\times SU(2)\times U(1)$" The only answer is because the model on these assumptions describes existing particle data and predicts new setups successfully. I would like to give the link on a blog post of Motl which helps in understanding how the classical electromagnetic field emerges from a large confluence of photons. It needs the mathematics of quantum field theory. The electric and magnetic fields are present in the photon wave function, which is a complex number function and is not measurable, except its complex conjugate squared, a real number gives the probability density of finding the photon in $(x,y,z,t)$. It is the superposition of the innumerable photons wave functions which builds up the classical EM wave. The frequency for the individual photon wave function appears in the complex exponents describing it. It should not be surprising that it is the same frequency for the probability as the frequency in the classical electromagnetic wave that emerges from innumerable same energy photons (same frequency). Both mathematical expressions are based on the structure of the Maxwell equations, the photon a quantized form, the EM the classical equations. xray0 anna vanna v $\begingroup$ Yes and they don't call it particle physics for nothing. $\endgroup$ – Bill Alsept Aug 8 '16 at 4:56 $\begingroup$ Can I get some insight into what can be seen on the picture? It seems way too complicated for a collision of two photons, even if we look at the yellow traces only. $\endgroup$ – John Dvorak Aug 8 '16 at 12:03 $\begingroup$ @JanDvorak if you look at the link above the picture, it is proton proton scattering in the CMS detector and the two blue histograms are the two high energy photons coming out of the interaction, measured in the electromagnetic calorimeters. The rest are charged particles of various types and one would need the specific analysis of the event to see if there is something more interesting there. $\endgroup$ – anna v Aug 8 '16 at 12:08 $\begingroup$ Anna, I have a question specifically for you because I think you may be one of the very few people who can answer it. It relates to the LHC scatter image. 
Assuming the depicted collision could be repeated IDENTICALLY as it happened, would the two photons scatter in exactly the same way? Again, assume the collision could be repeated absolutely identically. $\endgroup$ – Inquisitive Aug 13 '16 at 16:29 $\begingroup$ @Inquisitive Identically can only be controlled by the incoming scattering of proton on proton, i.e. same energy/momentum. This is what an LHC experiment is about: identical input scatters and measurement of the outgoing created particles. So the experiment says that each event is different, even though the input is identical. The study depends on the standard model of particle physics en.wikipedia.org/wiki/Standard_Model which allows for categories to be studied, as for example cds.cern.ch/record/1378102. In the quantum mechanical regime the output is probabilistic. $\endgroup$ – anna v Aug 13 '16 at 16:41 The starting point to explain photons from a theoretical point of view should be the Maxwell equations. In covariant form, the equations in vacuum without sources are \begin{align} \partial_\mu F^{\mu\nu}&=0\\ \partial_\mu(\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}) &=0 \end{align} It is well known that the second equation is automatically satisfied if $F$ is defined in terms of the potential $A$, $$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$$ The Maxwell equations can be obtained from the Lagrangian $$\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$$ when the Euler-Lagrange equations are applied, varying the potential. This classical Lagrangian is the basis for the formulation of the quantum field theory. Since the Maxwell equations define a classical field theory, it is natural to look for a QFT description, and not just a QM description. Without entering into a discussion of the meaning of quantization (which would be too mathematical-philosophical and wouldn't illuminate your question), let's assume that the formulation of a QFT can be done, equivalently, via path integral and canonical quantization. I will only discuss the latter. In canonical quantization, the potential $A_\mu$ and its conjugate momentum $\Pi^\mu=\frac{\partial \mathcal{L}}{\partial(\partial_0 A_\mu)}$ become field-valued operators that act on some Hilbert space. These operators are forced to satisfy the commutation relation $$[A_\mu(t,x), \Pi_\nu(t, x')]=i \eta_{\mu\nu}\delta(x-x')$$ Because of this relation, the two physical polarizations of $A$ can be expanded in normal modes that are to be interpreted as annihilation and creation operators, $a$ and $a^\dagger$. If the vacuum state (i.e., the state of minimum energy of the theory) is $|0\rangle$, then the states $a^\dagger|0\rangle$ are called 1-photon states. Therefore, the photon is the minimum excitation of the quantum electromagnetic potential. Everything above considers only free electromagnetic fields. That means that photons propagate forever; they can't be emitted or absorbed. This is clearly in conflict with real life (and it's too boring). Going back to classical electromagnetism, the Lagrangian for the EM field with a 4-current $J$ that acts as a source is $$\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-A_\mu J^\mu$$ The most common example is the current created by a charged fermion (for example, an electron or a muon), $$J^\mu = e\bar{\psi}\gamma^\mu\psi$$ But these Lagrangians present a huge drawback: we don't know how to quantize them in an exact way.* Things get messy with interactions.
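For reference, the free-field construction described a few paragraphs above looks, in one common convention (a Coulomb-gauge-style treatment keeping only the two physical polarizations, units with $c=1$, Peskin & Schroeder-like normalization; factors of $2\pi$ and signs differ between textbooks), roughly like this:
$$A_\mu(x) = \int \frac{d^3k}{(2\pi)^3}\, \frac{1}{\sqrt{2\omega_k}} \sum_{\lambda=1,2} \left( \epsilon^{\lambda}_\mu(\vec k)\, a_\lambda(\vec k)\, e^{-ik\cdot x} + \epsilon^{\lambda *}_\mu(\vec k)\, a^\dagger_\lambda(\vec k)\, e^{ik\cdot x} \right), \qquad \omega_k = |\vec k|,$$
$$[a_\lambda(\vec k),\, a^\dagger_{\lambda'}(\vec k')] = (2\pi)^3\, \delta_{\lambda\lambda'}\, \delta^3(\vec k - \vec k'), \qquad a^\dagger_\lambda(\vec k)\,|0\rangle = \text{1-photon state of momentum } \vec k \text{ and polarization } \lambda.$$
Everything in this display refers to the free field only; it is the construction whose interacting generalization is discussed next.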
We can only talk, with some rigor, about asymptotic states: states long before or long after any interactions resemble those of the free fields. Therefore, the real photon is the excitation of the quantum electromagnetic potential that in the limit $t\to \pm\infty$ tends to the free photon as defined above. So yes, in a sense, you are right that we don't know what a photon is. But this [formal] obstacle doesn't prevent us from making predictions, like the case of pair production that worries you. The key point is that we don't know what happens during the interaction; we cannot know it and we don't need to know it. We only need to compare the asymptotic states before and after the interaction. In order to do that, we need to perform some approximation, usually a perturbative expansion (that results in Feynman diagrams, the wrongly-called "virtual particles" and all that). The comparison between in and out states, encoded in the $S$ matrix${}^\dagger$, is enough to predict decay rates, cross sections and branching ratios for any process you can imagine. And those observables are the only ones that we can measure. In conclusion, the things that you can't precisely define are the things that you can't experimentally verify. This answer is only a sketch; a complete answer would require me to write a book on the topic. If you want to know more, I encourage you to read any book on QFT, like Peskin & Schroeder, Weinberg, Srednicki, etc. * In an interacting theory, the classical equations of motion are non-linear, and can't be solved using a Fourier expansion that produces creation and annihilation operators. In the path integral formulation, we only know how to solve Gaussian integrals (i.e. free fields). To solve the path integrals for interacting fields we still need approximate methods like perturbative expansions or lattice QFT. According to Peskin & Schroeder: No exactly solvable interacting field theories are known in more than two spacetime dimensions, and even there the solvable models involve special symmetries and considerable technical complication. ${}^\dagger$ For more details on this, I refer you to this excellent answer by ACuriousMind to another question of yours. Bosoneando $\begingroup$ I like this answer best simply because it doesn't hold back on the stark reality that it's the interpretation of the math that defines these objects. $\endgroup$ – Zach466920 Aug 9 '16 at 6:34 $\begingroup$ What do you mean by "we don't know how to quantize them in an exact way"? $\endgroup$ – gented Aug 9 '16 at 7:46 $\begingroup$ @GennaroTedesco That in an interacting theory, the classical equations of motion are non-linear, and can't be solved using a Fourier expansion that produces creation and annihilation operators. According to Peskin & Schroeder: "No exactly solvable interacting field theories are known in more than two spacetime dimensions, and even there the solvable models involve special symmetries and considerable technical complication" $\endgroup$ – Bosoneando Aug 9 '16 at 9:01 $\begingroup$ Well, you need not write the quantisation in terms of annihilation and creation operators; you may just write the path integral for the interacting Lagrangian. That it would then be non-renormalisable or ill-defined or whatever else, I agree, but conceptually you can quantise the interacting theories too. $\endgroup$ – gented Aug 9 '16 at 9:07 $\begingroup$ Bosoneando : I do not share your sentiment: "we don't know what a photon is...
we don't know what happens during the interaction, we cannot know it and we don't need to know it." $\endgroup$ – John Duffield Aug 18 '16 at 8:52 Ontological answer There is no short answer. The photon is exactly what you get when you study all our knowledge about it in form of mathematical theories, most of which having the big Q in their names. And then probably some more we didn't find yet. There are no shortcuts. Justification, reference This is by all intents and purposes a cop-out answer. A question that asks "how does a photon behave?", "what do we know about the interactions of a photon with XXX?", etc. would be easy(ish) to answer. But I suggest that the question "what is a photon" (in the sense of "what is it really, all the mathematics aside?") can have no meaningful answer whatsoever, in the same respect as the question "what is a XXX" (where XXX is any particle or field of the Standard Model) has no meaning. Instead of typing a lot, I suggest the Feynman interview, Richard P Feynman - FUN TO IMAGINE (full); the part relevant to this answer ("true meaning of things") goes from 01:03:00 right to the end (summary: even if we have the theories about particles right, and thus can explain their effects, we still have no way to explain them in everyday/practical terms without the maths, and there won't ever be one, as there are no "mundane" laws underneath it). Also partly, in respect to easy, short, answers a portion starting at 17:20 (summary: it is hard to describe anything completely at a particular "resolution", it goes ever deeper; as I said, only partially related, but quite insightful still). EDIT: summaries added. AnoEAnoE $\begingroup$ "Instead of typing a lot". Still, a summary would be in order. $\endgroup$ – Peter Mortensen Aug 13 '16 at 12:01 $\begingroup$ Added, @PeterMortensen. $\endgroup$ – AnoE Aug 14 '16 at 21:28 It takes many pages to really answer the question what a photon is, and different experts give different answers. This can be seen from an interesting collection of articles explaining different current views: The Nature of Light: What Is a Photon? Optics and Photonics News, October 2003 My theoretical physics FAQ contains an entry with the title ''What is a photon?''. Here is a short excerpt; but to answer the question in some depth needs read the FAQ entry itself. From the beginning: According to quantum electrodynamics, the most accurately verified theory in physics, a photon is a single-particle excitation of the free quantum electromagnetic field. More formally, it is a state of the free electromagnetic field which is an eigenstate of the photon number operator with eigenvalue 1. The pure states of the free quantum electromagnetic field are elements of a Fock space constructed from 1-photon states. A general n-photon state vector is an arbitrary linear combinations of tensor products of n 1-photon state vectors; and a general pure state of the free quantum electromagnetic field is a sum of n-photon state vectors, one for each n. If only the 0-photon term contributes, we have the dark state, usually called the vacuum; if only the 1-photon term contributes, we have a single photon. A single photon has the same degrees of freedom as a classical vacuum radiation field. Its shape is characterized by an arbitrary nonzero real 4-potential A(x) satisfying the free Maxwell equations, which in the Lorentz gauge take the form $\nabla \cdot A(x) = 0$, expressing the zero mass and the transversality of photons. 
Thus for every such A there is a corresponding pure photon state $|A\rangle$. Here $A(x)$ is not a field operator but a photon amplitude; photons whose amplitudes differ by an x-independent phase factor are the same. And from near the end: The talk about photons is usually done inconsistently; almost everything said in the literature about photons should be taken with a grain of salt. There are even people like the Nobel prize winner Willis E. Lamb (the discoverer of the Lamb shift) who maintain that photons don't exist. See towards the end of http://web.archive.org/web/20040203032630/www.aro.army.mil/phys/proceed.htm The reference mentioned there at the end appeared as W.E. Lamb, Jr., Anti-Photon, Applied Physics B 60 (1995), 77-84. This, together with the other reference mentioned by Lamb, is reprinted in W.E. Lamb, Jr., The interpretation of quantum mechanics, Rinton Press, Princeton 2001. I think the most apt interpretation of an 'observed' photon as used in practice (in contrast to the photon formally defined as above) is as a low intensity coherent state, cut arbitrarily into time slices carrying an energy of $h\nu = \hbar\omega$, the energy of a photon at frequency $\nu$ and angular frequency $\omega$. Such a state consists mostly of the vacuum (which is not directly observable hence can usually be neglected), and the contributions of the multiphoton states are negligible compared to the single photon contribution. With such a notion of photon, most of the actual experiments done make sense, though it does not explain the quantum randomness of the detection process (which comes from the quantized electrons in the detector). See also the slides of my lectures here and here Arnold Neumaier To me, the other elementary particles are equally mysterious. This is because of their non-intuitive nature. Naive ontology of the world We humans grow up in a world with objects. These objects have mass and volume, they have discrete boundaries. Our brains are wont to consider the objects as separate things, and each thing can, for instance, be picked up and otherwise inspected by the senses. Things are accordingly denoted by ordinary nouns. Then there are fluids, which are almost like things but not quite. They do have mass and volume, and they can be interacted with through the senses (although to realize that air is not vacuum by noting air resistance is a non-trivial leap of insight), but they are not separate. They merge with each other and can be arbitrarily divided, unlike things, so you can never put your hands around exactly what a fluid is. Accordingly, we know them by uncountable nouns, and understand that ultimately it makes sense to think of parts of a single universal fluid. The universal fluid becomes the only thing, since other subdivisions are moot, except in the special case of droplets or quantities sequestered in vessels, where the thingness is being artificially enforced (i.e. you can pretend the coffee and the milk are separate things so long as they are in their separate cups, but the moment you let them come in contact this pretense falls apart). Everything besides these we think of as phenomena.
For instance, fire is something that happens: It is not a thing you can pick up and manipulate (you can only pick up the fuel, and the fire clings weirdly to it), nor is it something that you can trap in containers and subdivide or combine (though the fuel may be either a thing or a fluid, and it can be operated on, with fire sometimes coming along for the ride). Likewise for sound, light, temperature and similar concepts. Of course we now know fire is just plasma, ie. a fluid that "spoils" into something else very quickly, and that it is in fact possible to trap it with the right exotic vessel. Thus the imagination may be coerced into accepting fire as a fluid, although in everyday life it still appears as a phenomenon, thus it is not a truly intuitive thing. The theory of the atom When the atom theory came about, I'm certain the Greeks at some point expected the "atoms" to have some mass, shape and size, ie. to be things. Thus, a neat conceptual trick is employed: To the careless eye, sand seems like a fluid, since quantities of it appear to freely merge and split (note that sand is uncountable). But on closer inspection, the sand is just a bunch of tiny objects, which themselves are clearly things. The realization then came that the world would make sense if everything was a kind of sand made up of many tiny particles, which are themselves things. This is nice because it unites fluids and things: The fluid is only an apparent class, and deep down they're all things - which is nice. Things are very intuitive for us humans, and it can be easier to reason about a pile of millions of tiny things than a single bit of fluid because of how weird fluids are. Luckily, it turned out that both fluids and things were made up of particles, and these particles do seem like bona fide things. Here the caveats begin. Molecules are not quite Newtonian solids. They behave almost like them: For instance, they can have mass and volume. Almost all of them can be broken apart, but if you consider the very rigid rule of breaking up a molecule vs. breaking a rock, it starts looking funny already. They do have a boundary and bump off of each other... But watch out that you don't bump them too hard, or they weirdly merge together (unlike rocks). But the worst part is the boundary, which is only a fake boundary: The Van der Waals radius is not a binary "can/cannot pass" delimiter, but is a consequence of a continuous force equation. It isn't really that much harder to be slightly inside the molecule than slightly outside. Compare being slightly inside of a rock -- impossible. As an aside, I think it's interesting that the Greeks came up with a theory of the atom rather than a theory of the fluid, where all solid objects are in fact fluids in some temporary state of rigidity. The various theories of the elements come to mind, but they don't make the right physical observations: One could observe discrete pieces of iron can be melted and seemlessly combined, and then conclude that surely it must be possible to melt any thing, therefore there are only apparent things, and everything is essentially a fluid. Perhaps it is because this theory of fluids makes the world more confusing, not less. Subatomic particles Molecules, as it turned out, where simply small aggregations of things -- surely when you break up a thing the result must be smaller things? We soon found out about atoms, and then the parts of the atom. 
This is where we stop, since to my knowledge, none of the elemntary particles are known to be divisible into further constituents. Photons are one such elemntary particle. The pretense of a thing may be maintained for atoms and molecules through devices like the Van der Waals volume. For elementary particles, this pretense is hopeless. As has been famously shown time and time again, not only do elementary particles not have volume, they blatantly don't have volume: If they did, physics just clearly doesn't work, and you get things like "surfaces" of electrons spinning faster than the speed of light. The observation was then made that the world would make sense if only these particles were points. Of course, nobody actually knows what a point mass is. Nobody has ever seen such a thing (well, except that which we wish to christen a point mass in the first place). Its implications seem bizarre: For instance, its density is infinite, and in theory the entire universe could be squeezed to a single point. Had particles been things, such insanities would be comfortably precluded: Rocks cannot be squeezed arbitrarily, not even with infinite force. Not being squeezable, by the way, is another property of things. Even soft things like sponges turn out to just have pockets of air in them. Once the holes are all squeezed out, a thing cannot be compressed further: Liquids politely pay lip service to this principle, although you can tell (eg. by the water in a syringe experiment) that their hearts really aren't in it, and gases just couldn't care less -- another way in which fluids are strange and unlike things. Or at least, to a naive observer without access to the extreme energies required by our modern physical experiments. The subatomic level is where intuition completely breaks down. You can create analogies, such as to strings and pots of water, but you can never really imagine what a particle is like in terms of what objects from everyday life. The Universe has played a very cruel trick on us, in that it is one way, but it is such that at the macro level at which we necessarily started to understand it, it is entirely in an other way, with no semblence of the one way to be seen. We are then doomed to grow up expecting and getting used to the other way, only to take Physics 201 in college and find out everything we know is an illusion and no intuition is possible for the true nature of the world. Intuition, indeed, is an comprehension based on experience: Who can experience the subatomic? At best we may experience experimental apparatus. The top-down approach to understanding the Universe fails, and it fails precisely at the subatomic level. The bottom up version One can debate the true meaning of intuition, but I think some sanity can be restored by instead starting over and setting everything right. We can forget all the naive baggage about things and fluids, wipe the slate clean, and start with the fundamental truth that in the world, there are particles. Particles have momentum, they are points, they interact with each other and the vacuum in certain ways described by quantum mechanics. They are elementary, and not made of any smaller units. Photons, then, are one such particle, with specific properties described elsewhere (I won't repeat them, since you explicitly said in your question that these descriptions are not what you want). When truly large numbers of the particles act together, at the macro level some bizarre phenomena come about, such as "volumes" and "state transitions". 
You can't really get an intuition for these bizarrities from our knowledge of particles. But logically, ie. if you follow the math, you know it is a simple and straightforward consequence, albeit non-intuitive. Unfortunately, this bottom up intuition is not very useful. All of our daily lives concern macro phenomena. A lot of the interesting things in the universe (basically, all the disciplines other than subatomic physics) are macro scale. One expects that after learning physics, the world will become easier to understand -- but learning the bottom up intuition only makes everything harder. I suspect even subatomic physics is not made much easier, since all the real work is done with math, not intuition. So, in conclusion, the question cannot be answered satisfactorily. There are two ways of understanding a question like "what is an X": "Tell me the salient properties of X": For the photon, there is no shortage of various texts, and even on this site there are perfectly serviceable answers that you have found lacking. "Help me intuitively understand X": As I said, no intuition is possible without tearing down all the intuition you have built up over your life. If you do tear it down, what intuition may be gotten is unsatisfying, and only serves to give you a headache. But that said, a photon is an elementary particle. It behaves as if a point. It has momentum, and moves at light speed (implying that it cannot stop). It has an associated electromagnetic wave. The energy carried by this wave is quantized. The photon can interact with other molecules, for instance by being absorbed and emitted; with enough energy you can create them "from scratch", and they appear to always carry packets of energy with them. One can wonder, if all particles are merely some form or arrangement of discrete bits of energy, which when aggregated in a certain way, leads to an appearance or seeming of point masses (or should I say "masses"?) and particles, and if individual Planck units of energy are really the basis for everything else in the universe, and perhaps the photon is very close to what these "energies" look like "on their own". Perhaps this is closer to what you were asking, but at this point I'm firmly into navelgazing territory, so I'll stop here. SuperbestSuperbest $\begingroup$ +1 I'm not sure that this directly answers the question, but it does definitely imply a worthwhile and highly interesting critique of the implicit demand for "intuition" in the OP's question. I myself didn't hazard an answer because the conclusions I have reached place me firmly in the category of being able to tell the OP nothing (by his definition). I especially liked your clear description of our innate tendency to seek "thinghood". Much of our thought is simply evolved behaviors that reflect our evolutionary history, and not needfully useful in the worlds we now explore as physicits. $\endgroup$ – WetSavannaAnimal Aug 9 '16 at 13:56 $\begingroup$ Incidentally, and interestingly, though there are sometimes ways where we can remake contact with the "thinghood" notions we seem hard-wired to love through quite abstract ways: I speak of Wigner's notion of a particle as the "atomic" subspaces of a system's quantum state space that are left invariant by an irreducible representation of the Poincaré group. These are not"things" as we are hardwired to conceive them, but, behind a pretty thin veil of abstraction, they have most of the properties you describe in your first section that we'd innately ken as being of a "thing". 
$\endgroup$ – WetSavannaAnimal Aug 9 '16 at 14:13 $\begingroup$ Perhaps the Greeks came up with atoms because they had such problems with infinity. For example, Zeno's paradoxes. Atoms a way of keeping a fluid from being infinitely divisible. $\endgroup$ – mmesser314 Dec 16 '17 at 16:31 Who can tell me what a real photon is? Or refer me to some kind of authoritative informative definition that is accepted and trusted by particle physicists? I say all this because I think it's of paramount importance. If we have no clear idea of what a photon actually is, we lack foundation... How can we understand pair production if we don't understand what the photon is? Or the electron? Or the electromagnetic field? Or everything else? It all starts with the photon. I think this is really a philosophical problem, not a physics problem. No answer will satisfy you, because you are asking a question which it is impossible to answer with finality : What is the essence of a thing? Exactly the same problem exists with every concept of human thought, not only in science (What is energy? What is time? What is colour? What is consciousness?...) but also in the humanities (What is love? What is beauty? What is happiness?...). In each case the more we try to define something, the more elusive it becomes, the less we seem to really understand what the essence of it is. And when we think we have a grasp of it, some new property emerges to throw our understanding into disarray again. I agree with AnoE (perhaps because I am a disciple of Richard Feynman) that things can only be understood as the sum of their properties, their inter-relations with other things. In life, it is not necessary to know what love is in order to experience it, or to know what justice is in order to act justly or recognise injustice. The only definition we can give is to summarise our experience of a thing into one or more idealised "models" which isolate the features we consider to be "essential". In the same way, it is not necessary to have an ultimate definition of a photon as a solid foundation before we can study light or develop powerful theories like QED. A working definition or model is adequate, one which allows us to identify and agree on the common experience and properties which we are investigating. The history of science shows that the concepts we use are gradually refined over decades or centuries, in particular the question of "What is light?" This lack of ultimate definition has not prevented us from developing elaborate theories like QED and General Relativity which allow us to predict with astonishing accuracy and expand our understanding of how the universe works. "Photon" and "electron" and "magnetic field" are only our models of things we find in the universe, to help us predict and find relationships between things. As Elias puts it, these models are, of necessity, approximate concepts. They are not what really exists. It is inevitable that they will change as we refine our approximations to try to fit new properties, new observations, into the framework of our understanding, our theories. sammy gerbilsammy gerbil $\begingroup$ approximate concepts ? of what ? of some reality ? I don't agree unless you say that they are approximations of the next step of human knowledge. They are concepts in a theory that answers to a set of questions. That's all and it's a lot. $\endgroup$ – user46925 Aug 11 '16 at 17:24 $\begingroup$ What I mean is that a working model does not have to be realistic in all its attributes. 
I agree that the model is particular to a theory. But I think we can only categorise models as useful not true. Maxwell developed his equations from a model of molecular vortices which look ridiculous today - but that model worked for him. I think this agrees with your final definition : the photon is a concept in a theory. To specify the concept, we need to specify the theory. As with Maxwell, when the theory is complete, we can discard the model. $\endgroup$ – sammy gerbil Aug 11 '16 at 18:46 $\begingroup$ Might this be described as the infinite process of condensation of a concept into the space of human language, which by nature is a set of evolving approximations? $\endgroup$ – dan Aug 12 '16 at 8:05 $\begingroup$ @danielAzuelos: nice, but convergence from approximations to an ideal involve the concept of reality, that is not very helpful and is a little difficult to handle. Caution is a virtue in this matter ... $\endgroup$ – user46925 Aug 12 '16 at 12:01 I am annoyed by the definitions of photon as described in the question. It is not that they are wrong, but because I was mislead by them almost as if they were preventing me to understand what a photon is. Below is what I think now. That is of course no new physics, and every interpretation is subjective. I will go through this by introducing few antithesis. 1. Photons are not discrete The terms like 'particle', 'quantum of light' or 'unit of energy exchange' lead to believe that photons are something discrete and sudden. Second quantization supplements this idea. For example, in second quantization, the Hamiltonian of a single state (say a particular standing wave in a cavity) can be written as $$ H = \hbar \omega (a^\dagger a + 1/2)$$ This is also the Hamiltonian for a harmonic oscillator. Consequently, we can easily then write the 'wave function' of this state as $\Psi(q)$ and Hamiltonian with classical kinetic energy like $p^2$ and potential energy like $q^2$ terms. We can write this wave function as a linear combination, $$ \Psi(q,t) = \sum_n c_n(t) \psi_n(q), $$ and we realize that the dynamics of photons are not that different from dynamics of electrons. In middle of quantum dynamics (between measurements that is), there can be any kind of wave packet described by $\Psi(q,t)$ or the linear combination coefficients $c_n(t)$. Therefore, the number of photons are not discrete, and they are not exchanged instantaneously in discrete quantities. Instead, all that is, is the field, and it is subject to typical quantum wave evolution. This field couples to matter. 2. Quantization is not unique Let's discuss the two transverse modes of a propagating photon (there are actually two more, longitudinal and energy-like, but that is out of scope). It is often said that a photon has angular momentum of $\pm \hbar$, which corresponds to circularly polarized light particles. This leads to a spinor-like representation for a photon. $$ \left[ \begin{array}{c} \Psi_L(q) \\ \Psi_R(q) \end{array} \right] $$ However, in some applications, it better to analyze only linearly polarized photons ($\Psi_{x,y}(q) = \frac{1}{2} (\Psi_L(q) \pm i \Psi_R(q))$). Now, it is easy to see, that just like electron spin, one has chosen just a preferred frame of reference and there is nothing extremely special about the choice of these discrete coordinates. (Of course, there is something special in choice of coordinates: the physical intuition to describe a problem well.) But in fact, I think that even the transverseness of a polarization is a choice of reference. 3. 
Wave function collapse creates the apparent discreteness Say a dye molecule gets excited in our eye-receptor, and it subsequently changes its form, and a nerve impulse is transmitted. Such a process resembles a quantum measurement since it involves so many uncontrolled degrees of freedom at high temperatures, and a phenomenon called decoherence happens. Thus, if the photon wave function was previously $\frac{1}{\sqrt{2}}\left(|1\rangle + e^{i\theta}|0\rangle\right)$, the effective wave function (integrating out the macroscopic degrees of freedom) is in a discrete state with probabilities given by the squared amplitudes. That is why photons can be seen and heard as clicks. With a grain of salt, it is the collapse of the wave function which makes the sound :) 4. Far-field and near-field photons are different It is often said that a photon has definite energy and momentum which must be conserved, i.e. it follows the dispersion relation $E=\hbar k$, and one photon which hits the detector always has this $(E,k)$. But for example, there are photonic crystals, where photon energies have band gaps and photons appear to have masses (non-linear dispersion relations). Again, one can quantize the Maxwell equations in a photonic crystal by some choice of states, and assign particles to these states. One can speak of photons here as well, and even say that they have mass since their equations of motion behave as if they have mass. However, since the measurement is usually done in the far field, where the photons are asymptotically free, one measures photons with $E=\hbar k$. 5. Modes are not unique Now picture more modes than just the one before. The wave function is now $\Psi(q_1,q_2 \ldots q_N)$. Now imagine creating a linear combination of these modes $q'_i = \sum_j A_{ij} q_j$ to localize them as much as possible. In fact, let's localize to the extent that one mode $q'$ will correspond to a particular location in space. Now you have a 'wave function' of a photon, which gives a probability amplitude of the photon field at different positions in space, $$\Psi(r_1, r_2, \ldots r_N)$$ By limiting ourselves to N coordinates which describe a photon roughly around positions ($r_1 \ldots r_N$), we have effectively imposed an energy cutoff on our equations and everything is fine. Now imagine extending this process to a continuum limit (far from trivial) and switching on the light-matter interaction, and we have encountered the problem of renormalization and all really hard and hardcore stuff. Given all that, one wants for practical reasons and for physical intuition's sake to go back to the second quantization and talk about one photon in mode 15. In other words, second quantization and the talk about particles as excitations of harmonic oscillators are just instruments created by and for the physical intuition. But if one wants to understand what a photon is, one needs to go under the hood. Mikael Kuisma $\begingroup$ You're making confusion among so many different things, mixing QM and QFT above all (and what with the fact that quantisation is not unique?). What is so unsatisfactory in the classical QFT description of the quantisation of the electromagnetic field? $\endgroup$ – gented Aug 7 '16 at 22:47 $\begingroup$ @GennaroTedesco I mixed QM and QFT here because I wanted to show that the very basics of quantum fields can be understood quite well with regular QM. (But basically, the only reference to QFT I make is about renormalization, the rest is some form of cavity QED.)
But I can agree that it was perhaps confusingly :) And perhaps there are also some confusions as well, I would love if you could elaborate. However, this is as far as my current understanding of photons go and so far I have found this the most satisfactory. I will change by view as soon as I gather more knowledge. $\endgroup$ – Mikael Kuisma Aug 7 '16 at 22:58 $\begingroup$ The thing is, there is no photon at all in QM; it only emerges as force carrier of the electromagnetic field in QFT (whatever procedure of quantisation you want to choose), which has not to be confused with the discretisation of the energy in processes like absorption, radiation and all the rest. Then whether or not the QFT description is well posed is another kind of question, but no contradictions or confusions arise from QM just because there is no photon involved at all. $\endgroup$ – gented Aug 7 '16 at 23:03 $\begingroup$ @GennaroTedesco I was doing a QM treatment to electromagnetic field degrees of freedom. The quantum evolution I describe can be as an example as such in Eq. 21 of this article nano-bio.ehu.es/files/articles/… Now, the couplings are given ad hoc in this formulation, and I agree that proper QFT with (beyond me) renormalizations have to be done to properly get them. So what I did was to use a model cavity QED system to simplify the description of a photon. I think your QM vs. QFT line with no photons in QM is too strict. $\endgroup$ – Mikael Kuisma Aug 7 '16 at 23:18 I am going to start my answer by referring to a different one: Which is more fundamental, fields or particles The photon is really just a special case of what is outlined in this answer. Quoting DanielSank: Consider a violin string which has a set of vibrational modes. If you want to specify the state of the string, you enumerate the modes and specify the amplitude of each one, eg with a Fourier series $$\text{string displacement}(x) = \sum_{\text{mode }n=0}^{\infty}c_n \,\,\text{[shape of mode }n](x).$$ The vibrational modes are like the quantum eigenstates, and the amplitudes $c_n$ are like the number of particles in each state. With that analogy, the first quantization notation, where you index over the particles and specify each one's state, is like indexing over units of amplitude and specifying each one's mode. That's obviously backwards. In particular, you now see why particles are indistinguishable. If a particle is just a unit of excitation of a quantum state, then just like units of amplitude of a vibrating string, it doesn't make any sense to say that the particle has identity. All units of excitation are the same because they're just mathematical constructs to keep track of how excited a particular mode is. A better way to specify a quantum state is to list each possible state and say how excited it is. A photon is exactly that: a unit of excitation (1) of a mode of the electromagnetic field. The main problem with the photon is that people try to over trivialise it. This has roots in history. In the early days of quantum mechanics particles and in particular photons were invoked to explain the "particle features of light". In the modern view of quantum field theory this picture gets replaced by what DanielSank describes in the linked question. As such a photon is complicated. It is not a priori a wavepacket or a small pointlike particle. The field theory unifies both these pictures. Real photon wavefields then are superpositions of these fundamental excitations and they can display both field and particle behaviour. 
The answer to the OPs following question... How can we understand pair production if we don't understand what the photon is? Or the electron? Or the electromagnetic field? Or everything else? ...lies therein. If you want to know what happens to the real physical objects, you are moving away from photons. Single photon states in nature are rare if not non-existent. So what is a photon fundamentally? Quoting from the question: [...] "the photon is an excitation of the photon field". That tells me nothing. It tells a lot, the mathematical formalism is very clear and many people have tried to explain it in answers here and elsewhere. [...] because it gives the impression that photons are forever popping into existence and flying back and forth exerting force. This concept is there in the photon Wikipedia article too. It isn't true. As as anna said virtual particles only exist in the mathematics of the model. So, who can tell me what a real photon is? [...] The problem here is really the relation between the mathematical formalism and "reality". A "real" photon is not a thing, the photon is a mathematical construct (that was described above) and we use it (successfully) to describe experimental outcomes. (1) courtesy of DanielSank. WolpertingerWolpertinger When Max Planck was working to understand the problem of black-body radiation, he only had success when he assumed that electromagnetic energy could only be emitted in quantitized form. In other words, he assumed there was a minimal unit of light that could be emitted. In assuming this, he of course found $E = h v$ where $h$ is Planck's constant. In 1905, Einstein took this seriously and assumed that light exists as these fundamental units (photons) with their energy given by $hv$ where $v$ is the frequency of the radiation. This photon explained many an experimental result, and gave light its wave-particle duality. Many things have a minimum "chunk": the Planck length, below which distance becomes meaningless, the quark, gluon, and other fundmental units of matter, Planck time (supposed to be the minimum measurement of time, it is the time required for light to travel in a vacuum the distance of one Planck length). So, what is a photon? It is the minimum "unit" of light, the fundamental piece. I would say the atom of light, but that doesn't quite convey the right image. (The "quark" of light?) It's also important to remember that we can't "see" a photon, just as (well, even more so) than we can't "see" a quark. What we know about them is from experiments and calculations, and as such, there isn't really a physical picture of them, as is true for most of quantum mechanics. heatherheather $\begingroup$ A photon is just a unit of excitation of an electromagnetic mode. $\endgroup$ – DanielSank Aug 9 '16 at 16:49 $\begingroup$ We can get closer to seeing a photon than you might think: nature.com/news/… $\endgroup$ – Rococo Aug 10 '16 at 2:55 $\begingroup$ Most physicists, including string theorists, strongy disapproves that length would be meaningless below the Planck length. $\endgroup$ – peterh Aug 13 '16 at 21:51 $\begingroup$ How do you mean that a photon be a 'minimum' unit when it can have any energy in a continuous range of energies? $\endgroup$ – user50229 Aug 14 '16 at 1:19 $\begingroup$ @heather: Would you agree that we can only talk about this unit of energy in the context of interactions, i.e. when the photon is absorb or emitted? While light propagates we don't know whether there are photons. 
$\endgroup$ – flippiefanus Aug 18 '16 at 4:37 With any concept in physics there is a dichotomy between model and physical system. In practice we forget about this dichotomy and act as if model and physical system are one and the same. Many answers to the question "what is a photon?" will reflect this identification of model and physical system, i.e., a photon is an ideal point particles; a photon is a field quanta; a photon is a line in a Feynamn diagrams, etc. These definitions of a photon are deeply rooted in models. Our predilection for identifying model and physical system is rooted in a false assumption that the intuition and imagery we develop for understanding the model applies equally well to the physical system. We convince ourselves that the little white speck whizzing through 3-space at the speed of light in our head is a photon, when in reality it is imagery associated with a model. In light of this, there are two essential ways to answer the question "what is a photon?" The first way is to refer to a model and say "the photon is concept X in model Y." Many users have taken this route. The second way is to refer to an experiment and say "the photon is the thing that is responsible for this data value." I tend to prefer this route when answering the question "what is a ___?" because it avoids the assumption that model and physical system are identical. Applied to the photon I would say "a photon is a packet of electromagnetic radiation that satisfies $E=h\nu$ and it is the smallest packet of electromagnetic radiation." If you are dissatisfied with both types of definition then you are out of luck. Our models will always remain in our heads and the physical world they so accurately describe will always remain just out of reach. James RowlandJames Rowland $\begingroup$ This is a nice point that will probably be helpful to anyone who is bewildered by this variety of descriptions. $\endgroup$ – Rococo Aug 12 '16 at 15:42 My entry: For a free or weakly-interacting electromagnetic field, which has radiation in some region at a definite frequency and energy, there is a minimum nonzero amount of energy that one can add to or take away from the field. That amount is a "photon." Now the fine print: Of course, many other answers here are more precise, and I think quite a few are insightful as well, but I took the question to be "explain it like I'm a child, but truthfully," and tried to get as close as possible to this. As others have noted, photon is not always used consistently, but for virtually every use I can think of the above statement is true (if you think you have a counterexample, please point it out to me). I say "virtually" because the one exception I can think of is the so-called "virtual photon." However, I think this terminology is wildly overused by non-experts anyway and should be avoided, or at least should be discussed separately. "Strong" vs "weak" coupling does have a standard precise definition among physicists, but really the transition between free photons plus matter excitations to strongly coupled excitations like polaritons happens smoothly, and there is at no point a sharp qualitative change. Experimentally, the requirement of "a definite frequency" will often be relaxed slightly to "a well-defined frequency," because any real source of light always has some finite spectral broadness. This is one of the issues that sometimes causes a slight difference between the experimentalist and theorist notions of "photon." 
This definition, phrased in terms of the energy of a particular part of a field, is hard at first to square away with the "billiard ball particle-like" picture you might have of photons as discrete objects that fly around and bounce off of things. This is simply because that picture is, in many cases, severely flawed. In some very specific situations (perhaps Compton scattering), you might be able to get away with it. However, it is so often misleading that it would probably be better to jettison it entirely until you understand the conditions under which it is roughly valid, which is a subtle issue worthy of an entirely separate discussion. Most of the time, photons are really not at all like little "billiard balls of light". RococoRococo I agree fully with flippiefanus' answer. First and foremost, a photon is a useful concept introduced to describe phenomena to which we do not have an intuitive approach, and apart from that we do not really know what a photon is. Even though it's true, this is not particularly satisfactory. What I want to add to his answer is why the concept of a photon was introduced. For a long time there was a dispute if light is a particle or a wave. Newton supported and developed much of the "corpuscular theory of light". His strongest argument was that light travels in straight lines, while waves tend to disperse spatially. Huygens, on the other hand, argued for light being a wave. The wave theory of light could explain phenomena like diffraction, which the corpuscular theory failed to explain. When Young performed his famous double-slit experiment, which showed interference patterns just like the ones known from sound waves or water waves, the question seemed to be settled once and for all. [An interesting side note: waves need a medium to propagate in, but light also propagates through vacuum. This led to the postulation of the so-called aether, a medium which was supposed to permeate all space. The properties of this aether, however, were contradictory and no experimental evidence of it was found. This played a role in the development of the theory of relativity, but that's another story] Then, around 1900, Max Planck was able to describe the spectrum of a black body correctly with what he at first thought was just a mathematical trick: for his calculations he assumed that the energy is radiated in tiny portions, rather than continuously, as you would suppose if light was a wave. The black body spectrum was one of the most important unresolved questions at that time and its explanation was a scientific break-through. In consequence, his method received a lot of attention. Shortly after, Einstein used Planck's method to explain another unresolved problem in physics: the photoelectric effect. Again, this phenomenon could be described if light is imagined as small packets of energy. But unlike Planck, Einstein considered these packets of energy a physical reality, which were later called photons. This neologism was by all means justified, because by that time it was already clear that photons had to be something else than just small billiard balls as Newton imagined. Sometimes it exhibits wave-like properties that cannot be explained with classical particles, sometimes it exhibits particle-like properties that cannot be explained with classical waves. Here is what we know: Light is emitted in discrete numbers of packets. This means that there are countable entities of light (which we call photons today). 
This statement is probably the most fundamental to the photon idea. It would also be my "one-sentence-answer" if someone asked me what a photon is. They carry physical properties like energy and momentum and can transfer them between physical objects (e.g. when thermal radiation is absorbed and heats up a body). Since photons have a momentum, they must also have mass. Later it was also shown that photons have other properties such as spin. They propagate in straight lines without the need of a medium (the aether I mentioned before was proven not to exist). All of these properties are commonly associated with particles. At the very least they show that a photon is something (in the sense that a collection of physical properties typically qualifies as a thing). But a photon also has properties that are different from classical objects: When photons propagate they show diffraction, refraction and interference. The energy and momentum of a photon correspond to wavelength and frequency of light, which govern the interference and diffraction behavior. The bottom line is that a photon is neither wave nor particle, but a quantum object that has both wave-like and particle-like properties. In general one can say that the particle-like properties are dominant when you look at small numbers of photons, at their interactions with matter (like the pair production process you mentioned), and at high energies, while the wave-like properties are dominant when you look at large numbers of photons, at their propagation in space, and at low energies. I think this is as far as one can get with classical analogies. A photon is what its properties and its behavior tell us, and everything else is just an incomplete analogy. Personally, I like to imagine photons (as with any visualization this is by no means correct, but it works nicely in many situations and helps to get a grip) as small, hard, discrete particles that move around in space like waves would. Sentry $\begingroup$ I disagree that "light is emitted in discrete numbers of packets". As far as I understand, it is the measurement which 'collapses' the photon wave function so that photons appear as discrete clicks on a photon counter etc. In other words, during quantum propagation of an excited state of an atom, many photon states get amplitude. These, however, entangle with measurement device outcomes in such a way that discreteness appears due to decoherence. I'd say: "light is measured in discrete packets". $\endgroup$ – Mikael Kuisma Aug 11 '16 at 15:34 $\begingroup$ "Since photons have a momentum, they must also have mass." Nonsense. A relativistic understanding of momentum is pretty much essential for an intrinsically relativistic entity, no? $\endgroup$ – dmckee♦ May 3 '17 at 3:07 $\begingroup$ The energy of light is not a quantized value. The energy of light of a given specific frequency is a quantized value, based on the formula $W=h\nu$. The total energy of light of frequency $\nu$ must be $N$ times $h\nu$. But this is self-evident and seems to carry little meaning: whatever we take as a unit, the total value must be $N$ times that unit. $\endgroup$ – Cang Ye Jul 11 at 7:54 Radiation is emitted when an electron decelerates and absorbed when it accelerates, according to the well-known Larmor formula. This radiation is a continuous electromagnetic field. Inside the atom, electrons change orbit and accelerate fast in the process, resulting in emitting and absorbing radiation so fast that it appears as sharp spectral lines.
But other than the spectral lines, atoms and molecules emit at so many other frequencies due to the various oscillatory motions in them and the corresponding acceleration/deceleration accompanying them. These background frequencies would naturally be of lower frequency than that of the line spectra for the same system. Cherenkov radiation is perhaps the nearest to a continuous spectrum. That is why radiation from all matter is composed of sharp spectral lines on a background of continuous radiation. The photon is the unit of radiation energy exchanged between bound (not free) electrons. It is like the currency of money, and like currency, photons are not all of the same denomination/energy. The formula $E=nhf$ gives $n$ as the number of photons of frequency $f$ exchanged, resulting in energy $E$. But since $f$ is variable and not even discrete, $E$ is not discrete, but the photon is. The photon is also described as a packet of energy. This is correct, but only means the minimum ($n=1$) energy of a certain color that can be exchanged in a certain interaction between atoms. Normally the energy of waves is directly proportional to the square of their amplitude and has nothing to do with frequency. To reconcile this with the definition of the photon, which is very much needed for nuclear interactions, the number of photons $n$ is introduced to supplement the original formula $E=hf$. Note that while a photon is a packet of energy, the amount of energy in the packet can vary. A single blue photon has a lot more energy than a red photon, for example. Gamma rays have the highest energy content due to their higher frequency. If you read a book on photonics, you'll find that the word photon crops up in nearly every line of it. This shows how important the concept of a photon is, despite its unusual and somewhat confusing definition and use. Riad A friend asked me this in college, and this is more or less what I told him. Experimenters were figuring out the behavior of electric & magnetic phenomena in the late 18th and early 19th centuries, and by about the mid-19th century it was all coming together. James Clerk Maxwell put the "finishing touches" on the equations that described (classical) electrical and magnetic phenomena. One of those equations (Faraday's law) describes how a changing magnetic field can induce an electric current, while another equation (Ampère's circuital law) describes how an electric current can induce a magnetic field. So think of electrons: they have electric charge, and if we "jiggle" one just right, we can create a changing electric field, which induces a changing magnetic field, which induces a changing electric field, and so on... These little ripples of the electric and magnetic fields, inducing one another, are what photons are. At some point I read a fascinating account of Maxwell, about how once he had worked out the equations to describe electromagnetism, he observed they could be used to derive a wave equation. Wave equations have a constant that describes the propagation speed of the waves, and his derived wave equation had $\frac{1}{\sqrt{\mu\epsilon}}$ for that constant (with $\mu$ being the permeability and $\epsilon$ the permittivity). If I recall right, these values could be measured by experiments with electricity. Something like sending a known amount of current through two parallel wires: they'll generate a magnetic force that pulls the wires together (if the currents are in the same direction, and pushes them apart if they're in opposite directions).
Measuring the resulting force can tell you the value (I suppose of only one of them). So these values had been measured, and plugging them in, Maxwell got something close to the speed of light, which experimenters had been measuring with increasing accuracy around that time (notably in 1849 and 1862). And this was the first time someone (Maxwell) could realize that light was some kind of electromagnetic phenomenon. [Looking it up I see that actually, Wilhelm Eduard Weber and Rudolf Kohlrausch in 1855 had noticed the units of $\mu$ and $\epsilon$ could produce a velocity, and they measured them experimentally and came up with a number very close to the speed of light, but didn't make that final leap of logic, which Maxwell did in 1861.] (From the Wikipedia article History of Maxwell's equations.) I'm not an expert, but my impression is that Maxwell also noted that his equations seemed incomplete, because they suggested that the speed of light remained constant regardless of the speed of the observer or the emitter. It's common for people to think that Einstein's work on special relativity was resolving the famous null result of Michelson & Morley's interferometer experiments looking for a luminiferous aether, but Einstein was actually addressing this invariance indicated by Maxwell's equations, if I understand right. (Note, it's been a long time since I've recounted most of this, and I just have a B.Sc. in math-physics, and haven't used any of this knowledge in a long time, so it's possible I'm getting some details wrong, but I think the general gist is pretty close.) – Yrast
Before even attempting to ask what a photon is "exactly", we have to ask: do photons exist? You can go a long way believing they do not. Atoms, molecules and crystals have discrete states that determine the quantum nature of matter, so they emit and absorb quanta of energy while the entity itself can be continuous, much like wine is a continuous entity that is only quantised to 70 cl because of the bottles it is sold in. Quantum mechanics uses the classical EM field. The wavy lines in Feynman diagrams, often loosely called photons, are just a graphical notation for terms in a perturbation expansion. Yet a problem remains: how come photons are absorbed in very local reactions? How can an extended classical electromagnetic wave be absorbed by a single atom? To me a sensible interpretation of these phenomena is that the EM field describes the probability that an absorption/emission takes place. For this reason I am now convinced that discrete photons exist and that the wave equation underlying the Maxwell equations is a relativistic quantum wave equation describing massless quantum particles, just like the Schrödinger, Dirac and Klein-Gordon equations describe massive quantum particles. The electromagnetic wave equation in my interpretation is a massless Klein-Gordon equation describing the quantum particles known as photons. This does not answer the question of what a photon exactly is. It does propose an answer to the preceding question of whether photons exist. – my2cts
This has been the big "zen" question of physics for centuries, thanks for asking it. Other answers are good/acceptable; this one (reputationally risky, but sincere/detailed) takes, in some ways, a radically different angle/approach. Other answers look toward the past; this one will attempt to do the near-impossible of anticipating the future in a visionary yet still scientifically grounded way.
In other words, the questionable parts of it can be considered hypothetical, i.e. hypotheses-under-consideration, but all carefully backed by current, solid (in some cases very recent) research findings. The photon story has a dramatic "blind men and elephant" aspect that crosses many centuries.[1] The wave versus particle nature of light was already debated in Newton's time in the 17th century, now close to ~4 centuries ago, and light units or particles were named "corpuscles" in contrast to the Huygens wave theory.[2] Newton's theory held sway over the latter for something like a century, "partly because of Newton's great prestige", even though Huygens' theory was formulated at nearly the same time. This is an example of the "reverse" effect of human reputation on scientific thinking in that era. The recent La Cour-Ott experiments are a breakthrough and show "a locally deterministic, detector-based model of quantum measurement".[3][4] This is a startling finding that has not been widely considered yet. It proves that a complete quantum mechanical formalism can arise in the analysis of mere classical systems. So this calls into serious question the nearly century-long assertion that quantum mechanics is inherently different from classical mechanics, now seen as not merely a belief system but virtually a dogma of the field. There are many other recent developments that put chinks/dents in this long-standing armor and seem to force a re-evaluation/reconsideration[7] (but this will surely be a lengthy process and it is only beginning). The new theories are being compared to Bohmian mechanics but have distinctly different and new aspects, and should not be knee-jerk dismissed as refuted. One of the most comprehensive surveys so far is by Bush,[5] and it is newly supported by experiments![6] So how is this possible conceptually/theoretically? One striking new realization is that Born's probabilistic law in quantum mechanics can arise in classical systems. See e.g. Qiaochu Yuan, "Finite noncommutative probability, the Born rule, and wave function collapse".[8] Other, much more detailed analysis of detectors comes from Khrennikov and "PCSFT", "Prequantum Classical Statistical Field Theory", which is roughly what are also known as semi-classical theories.[9][10] Reference [9] talks about a detector discarding energy where the incoming energy does not match the detector energy threshold (p. 9) and the detector "eating a portion of energy" (p. 10). Let's call this a dissipative detector. Another similar concept in measurement is the detector dead time [10, p. 5], where "the detector cannot interact with an incoming pulse". It appears these concepts are similar to a very sophisticated/comprehensive study of Bell's theorem where the signalling system may have so-called "abort" events, and which finds that stricter versions of Bell inequalities are not violated by current experiments.[11][13] These are similar to the "sampling loophole",[12] which is not necessarily the same as the so-called efficiency loophole, because the former may still persist even when detector efficiency is measured at 100%! Let's explore the concept of a dissipative detector more carefully and how it would look theoretically. Consider the following sketch. A spherical single wavefront travels through space. Now imagine it passing through a detector. The detector may be in its dead-time region and will not detect the wavefront. Or, it may detect it. This is the probabilistic nature of light.
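As a toy quantification of this sketch (an illustrative assumption introduced here, not established physics and not part of the original answer): if each independent detector misses a given wavefront with probability $q$, for instance because it happens to sit in its dead time, then a single detector clicks with probability $1-q$, while an array of $N$ such detectors misses the wavefront only with probability $q^{N}$:
$$P(\text{at least one click}) \;=\; 1 - q^{N} \;\longrightarrow\; 1 \quad \text{as } N \to \infty,$$
which is the sense in which the detector-array thought experiment described below claims greater wavefront-detection sensitivity.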
It appears that dead time possibly cannot be reduced to zero, as a physical law related/similar to the Heisenberg uncertainty principle. Another way of saying all this is that perfect detectors do not exist. The only detectors we have are made of atoms, aka particles. The mystery of the photon is then finally unravelled. A photon is a (probabilistic) interaction between a wavefront and a measuring device, namely an atom or other particle. The interaction can only be referenced a posteriori and not a priori. In other words, even a detector made of a single atom will have this dead time and energy-dissipating property. So there we also have some interpretation for so-called virtual particles. Others may question "a spherical single wavefront" travelling through space. Exactly that picture is now supported by a new model of spacetime outlined comprehensively by Tenev/Horstemeyer, the "spacetime fabric".[14] They don't seem to consider the EM stress tensor much, but one obvious generalization of their work is that EM waves are s-waves in the spacetime fabric. A fairly straightforward experiment demonstrating these ideas is the HBT effect. Imagine a line of detectors all at the exact same distance from a "single-photon" source as a straightforward way of increasing wavefront detection sensitivity. The idea of a "single-photon" source may be better visualized as a "single-wavefront" source. As the wavefront passes through the detectors, each detector may or may not click. If any click, there was a wavefront. If none click, the wavefront may have passed but they may all have been in their "unresponsive" dead-time period. The overall combined array will detect the wavefront with greater accuracy. This effect is already observed but not interpreted under this point of view. It's called photon (anti)bunching in the literature. Many other effects are currently misinterpreted under the fog/haze of our currently cloudy theory. It will take a long time to rework it all. But such re-workings are not unheard of in the history of science, although they tend to be roughly once-a-century events and literally lead to/require e.g. textbooks to be rewritten (but not all at once!).[17] They cannot be timed exactly (analogously to earthquakes) and are even difficult to recognize in the middle, but some signs (collected refs, e.g. also [18], many others not cited due to space/format limitations, the following quote, etc.) are currently present and we seem to be overdue for one. "I wish that the people who were developing quantum mechanics at the beginning of last century had access to these experiments," Milewski said, "because then the whole history of quantum mechanics might be different."[7]
[1] Blind men and an elephant / Wikipedia
[2] Corpuscular theory of light / Wikipedia
[3] A Locally Deterministic, Detector-Based Model of Quantum Measurement / La Cour
[4] Quantum computer emulated by a classical system / Phys.org
[5] Pilot-Wave Hydrodynamics / Bush
[6] New support for alternative quantum view / Wolchover
[7] Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? / Wolchover
[8] Finite noncommutative probability, the Born rule, and wave function collapse / Qiaochu Yuan
[9] Born's rule from measurements of classical signals by threshold detectors which are properly calibrated / Khrennikov
[10] Prequantum Classical Statistical Field Theory: Simulation of Probabilities of Photon Detection with the Aid of Classical Brownian Motion / Khrennikov
[11] Robust Bell inequalities from communication complexity / Laplante
[12] Loopholes in Bell test experiments / Wikipedia
[13] Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres / Hensen et al.
[14] The Mechanics of Spacetime - A Solid Mechanics Perspective on the Theory of General Relativity / Tenev, Horstemeyer
[15] EM stress-energy tensor / Wikipedia
[16] HBT effect / Wikipedia
[17] Paradigm shift / Wikipedia
[18] EmQM13: Emergent Quantum Mechanics 2013 conference / proceedings
Pertinent Einstein quotes: "All the fifty years of conscious brooding have brought me no closer to answer the question, 'What are light quanta?' Of course today every rascal thinks he knows the answer, but he is deluding himself." "For the rest of my life I will reflect on what light is."
Is there anything more to the photon than can be accounted for by the changes in the properties of its emitter and absorber alone? (Answer re-expressed 6-11-16.) After slightly more negative than positive responses, and also a helpful correction, overall it's time to stop digging myself into a hole. Nevertheless, checking recent literature, may I finish by drawing the attention of the critics to a recent paper in Quantum Stud.: Math. Found. (2016) 3:147–160 (copy attached: Rashkovskiy (2016)), which also argues for the photon's non-existence, but via the semi-classical approach. Although it goes into the matter in far greater depth and detail than I could achieve, I believe it is consistent with the view that, ultimately, the photon's characteristics might best be sought in the properties of the emitter and absorber, rather than being attributed to a fictitious photon… or does it make any difference at all, whether or not we regard it as 'real'? As comments indicated that part of my argument was flawed, references to "space-time co-incident points" and "Minkowski-space 'contact interaction'" have been removed. Nevertheless, references in Arthur Neumaier's contribution above include instances of eminent physicists arguing the non-existence of photons. P. C. W. Davies: "In his provocatively titled paper 'Particles do not Exist,' Paul Davies advances several profound difficulties for any conventional particle conception of the photon"; and the Nobelist Willis Lamb was said to "maintain that photons don't exist." I offer an elementary argument, relying on Feynman's lectures (Vol 1, end of section 17-2 on page 17-4): "In our diagram of space-time, therefore, we would have a representation something like this: at 45° there are two lines (actually, in four dimensions these will be "cones," called light cones) and points on these lines are all at zero interval from the origin. Where light goes from a given point is always separated from it by a zero interval, as we see from Eq. (17.5). Incidentally, we have just proved that if light travels with speed c in one system, it travels with speed c in another, for if the interval is the same in both systems, i.e., zero in one and zero in the other, then to state that the propagation speed of light is invariant is the same as saying that the interval is zero."
Here he states that points on the light-cone are zero distance from the origin and from each other. Of course, he means zero 4-distance in space-time, as a result of the Minkowski metric requiring a subtraction between the 3-space interval and c times the time interval. So one can argue that photons have no independent existence in space-time: they "appear" only as a loss, by emission from a source, and a gain by absorption [rest of original sentence deleted]. In this view, photons are, in a sense (as others have mentioned in more sophisticated replies), simply convenient fictions of the maths that account for what happens in the observed energy transfer between emitter and absorber, across a 3-space and time gap that we bridge using the mathematics describing wave motions. This suggests that the photon's characteristics (e.g. spin) should be sought in the properties of the emitter and absorber, rather than being attributed to the fictitious photon… – iSeeker
Replying to the downvoters (NB there have been a couple of upvoters too): can you please indicate exactly where the idea falls down? I am simply puzzled by what, in my chemist's relative ignorance, appears to be the clear implication of SR that the passage of what we describe as a photon is only an interaction of emitter and absorber that are, according to Feynman, in direct "4-contact" – no finite space-time interval separates them. My question is an attempt to get clarification on this seeming puzzle. Thank you – iSeeker Aug 13 '16 at 16:41
Zero spacetime distance is not the same as "coincident points"; the Minkowski "metric" crucially does not induce a proper metric in the mathematical sense that two points with zero distance are the same, so the part about "contact interaction" is just nonsensical. There are infinitely many distinct points that are at zero spacetime interval to each other, and this is not a transitive relation (consider the intersection of two lightcones of spatially separated points - the intersection is at zero distance to both, but the two points are not). – ACuriousMind♦ Aug 15 '16 at 14:43
@igael Thanks for the edit - improves presentation – iSeeker Aug 15 '16 at 14:47
@ACuriousMind - Thank you. Where can I find (or what would be a keyword for) something about the difference between the Minkowski "metric" and what you call a proper metric? I've had another communication disabusing me of the notion of a contact interaction, so I appreciate the confirmation (assuming you're not also "UM") – iSeeker Aug 15 '16 at 15:00
What I call a "proper" metric would be a metric in the strict mathematical sense. It conforms exactly to our intuition of distance, in particular the fact that two points with zero metric distance are the same point, i.e. "in contact". The SR/GR notion of "metric" is that of a metric tensor, which gives a "proper" metric only in the Riemannian case, but SR/GR is only Lorentzian/pseudo-Riemannian. – ACuriousMind♦ Aug 15 '16 at 15:04
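To make the point about zero interval concrete, a minimal worked example in units with $c=1$ and signature $(-,+,+,+)$, where the Minkowski interval between two events is
$$\Delta s^{2} \;=\; -\Delta t^{2} + \Delta x^{2} + \Delta y^{2} + \Delta z^{2}:$$
the events $(0,0,0,0)$ and $(1,1,0,0)$ are distinct yet satisfy $\Delta s^{2} = -1 + 1 = 0$, and both $(0,0,0,0)$ and $(0,2,0,0)$ are at zero interval from $(1,1,0,0)$ while being spacelike separated from each other ($\Delta s^{2} = 4$), so "zero interval" picks out neither a single point nor a transitive relation.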
CALF SEMINAR Past Abstracts
This page contains abstracts from previous Calf seminars, listed alphabetically by speaker. Please contact one of the organisers if you spot any errors or dead links.
Arkadij Bojko (University of Oxford) Computing with virtual fundamental classes of Hilbert schemes on Calabi--Yau 4-folds. Joyce introduced a new approach to wall-crossing for \C-linear enumerative theories by constructing a vertex algebra on the homology of the stack of coherent sheaves. A conjectural application of this theory is a wall-crossing formula of virtual fundamental cycles in Calabi--Yau 4-folds which were introduced in recent years. We apply this framework to virtual fundamental classes of Hilbert schemes, by considering an insertion given by the top Chern class of a tautological vector bundle $L^{[n]}$ associated to any line bundle $L$ on $X$. We prove that the conjectural formula of Cao--Kool for these invariants holds for any $L$ if it is true for any smooth divisor. Assuming this, we also give an explicit expression of the virtual fundamental class of Hilbert schemes and 0-dimensional sheaves, which allows us to compute with other insertions.
Tiago Guerreiro (Loughborough University) On singular Fano 3-fold complete intersections and their birational geometry. The Minimal Model Program aims at finding good representatives in a birational equivalence class. Starting from a 3-dimensional smooth projective variety W, Mori has shown that W can be dismantled by a sequence of birational transformations to either a Minimal Model or a Mori Fibre Space. The outcomes of the minimal model program are rarely unique, although relations between minimal models are well understood in general. On the other hand, Mori Fibre Spaces relate to each other in more complicated and unexpected ways. In this talk I will aim to give a flavour of these relations and explain how this problem relates to rationality questions.
Tarig Abdel Gaidr (University of Glasgow) [A^n/G] from the McKay quiver. Take G an abelian subgroup of SL(n,C) (n<4); it is well known that the moduli space of certain representations of the McKay quiver of G is isomorphic to G-Hilb (a crepant resolution of A^n/G). We will use the McKay quiver to recover the stack [A^n/G] (a noncommutative crepant resolution) and relate it to G-Hilb by a finite number of wall crossings in some GIT chamber decomposition.
Mohammad Akhtar (Imperial College) Mutations and Fano Varieties. This talk is an introduction to the theory of mutations. We will discuss two closely related viewpoints on the subject: the algebraic approach interprets mutations as birational transformations, and is concerned with their action on Laurent polynomials. The combinatorial approach sees mutations as operations on lattice polytopes, and allows one to construct deformations of the corresponding toric varieties. We will explain the role played by algebraic mutations in the program to classify Fano 4-folds. We also discuss recent results concerning combinatorial mutations of weighted projective surfaces. The contents of this talk are joint work with Tom Coates, Alexander Kasprzyk and Sergey Galkin.
Oliver E. Anderson (University of Liverpool) Chow sheaves and h-representability. Abstract: We start by giving an introduction to Suslin and Voevodsky's theory of relative cycles and h-topologies.
After this is done we briefly describe Kollar's theory of families of algebraic cycles before proving that Suslin and Voevodsky's presheaf of effective relative cycles of dimension r and degree d is h-represented by Kollar's coarse moduli space Chow_{r,d}(X/S). As an application we give a modern proof of the following classical result: Let X be a scheme over an algebraically closed field (of arbitrary characteristic), then two cycles Z,Z' on X are algebraically equivalent if and only if there is a cycle W such that Z + W and Z' + W are both positive and there passes an irreducible curve through the corresponding points of the Chow scheme.
Elizabeth Baldwin (University of Oxford) Introduction to Deligne-Mumford Stacks; Parts I & II. Stacks are more general objects than schemes. Their definition is very abstract; they are in fact categories, and only after some work can their geometric structure be understood. We will start part I by defining categories fibred in groupoids, with emphasis on examples and moduli interpretations. From there we will move on to Deligne-Mumford stacks, and define the etale topology on these. In part II we review the definition of categories fibered in groupoids. We then go on to look at how these may be assigned geometric properties, via representable morphisms, and define Deligne-Mumford stacks.
Moduli of stable maps as a GIT quotient. The moduli space of curves is one of the most fundamental objects in attempts to classify algebraic varieties or schemes. This may be generalised to the moduli space of pointed curves, and further to moduli spaces of pointed maps (where the maps are from curves to a specified space). Geometric invariant theory is a powerful tool for creating quotients in algebraic geometry. It is hence a way to construct moduli spaces. An early construction of the moduli space of curves was done in this way, by Gieseker. Along with David Swinaski, I have extended his method to construct the moduli space of pointed maps. We will review the definitions of the spaces in question, and of geometric invariant theory, before going through an outline of the method.
Federico Barbacovi (UCL) Flops and derived equivalences. In algebraic geometry every time we encounter a flop we expect to find a derived equivalence. Many examples of this phenomenon are known, and more are appearing. In this talk I will start explaining the case of the Atiyah flop and its generalisations, so as to get a flavour of how to tackle the problem of proving the derived equivalence. Then, I will talk about the Abuaf flop, an example of a 5-fold flop. To conclude, I will survey the reasons why understanding the derived equivalence is interesting from a purely geometric point of view.
Gergely Berczi (University of Budapest) Multidegrees of Singular Maps. Let G be a compact, semisimple Lie group. Given an affine algebraic variety in a complex vector space, invariant under the linear G-action on the vector space, one can ask for the multidegree of the variety, which is an element of the equivariant cohomology ring of the vector space. This is a polynomial in dim T variables (T is the maximal torus), which stores more information about the variety than the ordinary degree of its projective closure. The goal is to show a toy example for calculating multidegrees coming from singularities of maps, using equivariant localization, and show how nonreductive quotients come into the picture.
Fabio Bernasconi (Imperial College) Some remarks on singular Fano threefolds in positive characteristic.
According to the Minimal Model Program, Fano varieties constitute one of the building blocks of algebraic varieties. Over the complex numbers we now have a good understanding of their behavior, e.g. we know the vanishing of the cohomology groups of the structure sheaf, that they are rationally connected varieties and, with some assumptions on the singularities, that they are bounded. However their geometry is still shrouded in mystery over fields of positive characteristic. We will survey some of the known results, explaining where difficulties and subtleties arise, and I will discuss some of my recent work aimed at bounding their possible pathological behavior.
Alberto Besana (University of Milan) Symplectic aspects of framed knots. We propose an interpretation of the topological framing of a knot as a generating function for a Lagrangian submanifold of a symplectic manifold; the setting is Brylinski's space of knots (embeddings of S^1 in R^3) and Maslov theory for Lagrangian submanifolds. Examples and applications to the existence of line bundles with prescribed curvature will be given.
Matt Booth (University of Edinburgh) Contraction algebras and noncommutative derived geometry. Given a threefold flopping contraction, one can associate to it a finite-dimensional noncommutative algebra, the contraction algebra, which controls the noncommutative deformation theory of the flopping curves. If the threefold was smooth, this algebra is conjectured to determine the complete local geometry of the base. I'll talk about a new invariant, the derived contraction algebra (which has an interpretation in terms of derived deformation theory), and explain (via singularity categories) why the derived version of the above conjecture holds. Time permitting, I'll talk about the flop-flop autoequivalence and indicate some aspects of the theory for surfaces.
Pawel Borowka (University of Bath) Abelian surfaces and genus 4 curves. I would like to present some basic facts on abelian varieties. In particular I am interested in (1,3) polarised abelian surfaces. In analogy with the theta divisor I will distinguish a curve in the linear system of the polarization. Using an idea of Andreotti and Mayer I will prove that on a general surface the resulting curve is smooth.
An easy exercise or an open problem? In my talk I will present the paper of H. Graf von Bothmer et al. (arXiv:math/0605090v2). Using a nice scheme theory they partially proved the Casas-Alvero question about polynomials in one variable.
Non-simple abelian varieties. Abstract: In dimension two, the locus of non-simple principally polarised abelian varieties has infinitely many irreducible components called Humbert surfaces. I will briefly explain the situation there and show how to generalise the notion of Humbert surface to higher dimensions to find irreducible components of the locus of non-simple principally polarised abelian varieties.
Nathan Broomhead (University of Bath) Cohomology of line bundles on toric varieties. One of the reasons for studying toric varieties is the ability to do certain computations simply, using the combinatorial structure. In this talk we consider an example of this. We recall with reference to P2 the usual construction of a toric variety from its fan. We then introduce Cech cohomology, and use this to look at the cohomology of line bundles on toric varieties in terms of cohomology calculated on the support of the fan. As an example of its use, we prove a vanishing theorem for toric varieties.
The Dimer Model and Calabi-Yau Algebras.
From dimer models, first studied in physics, we can produce a class of Calabi-Yau algebras which are candidates for non-commutative crepant resolutions of Gorenstein 3-fold affine toric singularities. In this talk I will introduce, via examples, dimers and their corresponding toric varieties. I will then talk briefly about the "consistency" condition that underlies the Calabi-Yau property.
Tim Browning (University of Oxford) Arithmetic of del Pezzo surfaces. Del Pezzo surfaces provide beautiful examples of rational surfaces. Whilst their geometry is classical, they are still somewhat mysterious from the point of view of a number theorist. A basic property of such surfaces is that there are infinitely many rational points on the surface provided there is at least one. I will discuss two basic questions: 1) When does there exist a rational point? 2) Whenever the set of rational points is non-empty, how dense is it?
Jaroslaw Buczynski (University of Warsaw) Linear sections of some Segre products. I will explicitly describe and identify general linear sections of some products of varieties (mainly P^1 \times P^n and P^1 \times Q^{n-1}) under their Segre embeddings.
Legendrian varieties. For a given vector space V with a symplectic form we define a subvariety in P(V) to be legendrian if its affine cone has a Lagrangian tangent space at each smooth point. We can prove that the ideal defining a legendrian subvariety is a Lie subalgebra of the ring of functions on V with Poisson bracket. Especially interesting is the case where the ideal is generated by quadratic functions - then we can restrict our considerations to a finite dimensional Lie algebra which happens to be isomorphic to the symplectic algebra of V. Next we prove that the subgroup of Sp(V) corresponding to the subvariety acts transitively on smooth points of the subvariety (in particular, if it is smooth then it is homogeneous). The next goal is to fully classify legendrian subvarieties generated by quadrics. There are not too many examples: the twisted cubic in P^3, the product P^1 times Q_{n-2} in P^{2n-1} (where Q_{n-2} is a smooth quadric in P^{n-1} and the embedding is the Segre embedding) and four more exceptional examples. This list appears in a paper of Landsberg and Manivel and also in Mukai "Legendre varieties and simple Lie algebras". The conjecture is that these are all possible smooth legendrian subvarieties generated by quadrics. Moreover no singular example is known - so possibly the assumption of smoothness is not necessary. For more details see the preprint math.AG/0503528.
Vittoria Bussi (Oxford) Categorification of Donaldson-Thomas invariants and of Lagrangian intersections. We study the behaviour of perverse sheaves of vanishing cycles under the action of symmetries and stabilization, and we investigate to what extent they depend on the function which defines them. We investigate the relation between perverse sheaves of vanishing cycles associated to isomorphic critical loci with their symmetric obstruction theories, pointing out the necessity for an extra "derived data". Similar results are proved for mixed Hodge modules and motivic Milnor fibres. These results will be used to construct perverse sheaves and mixed Hodge modules on moduli schemes of stable coherent sheaves on Calabi-Yau 3-folds equipped with 'orientation data', giving a categorification of Donaldson-Thomas invariants.
This will be a consequence of the more general fact that a quasi-smooth derived scheme with a (-1)-shifted symplectic structure and orientation data has a "categorification". Finally we categorify intersections of Lagrangians in a complex symplectic manifold, describing the relation with Fukaya categories and deformation-quantization.
Paul Cadman (University of Warwick) Deformations of singularities and the intersection form. The nonsingular level manifolds of a miniversal deformation of a singularity carry an intersection pairing in homology which can be thought of geometrically as intersection of cycles. By a procedure of Givental' and Varchenko it is possible to use a nondegenerate intersection pairing to furnish the base space of the deformation with a closed 2-form. This is a symplectic form if the base space is even dimensional. The symplectic form identifies a Lagrangian submanifold in the discriminant of the deformation over which level sets share the same degeneracy type. I will explain this construction, give examples of the computation of these symplectic forms and discuss the relationship between the coefficients of the form and the equations of the Lagrangian submanifold.
Livia Campo (University of Nottingham) Birational non-rigidity of codimension 4 Fano 3-folds. As possible outcomes of the Minimal Model Program, Mori fibre spaces play a crucial role in Birational Geometry, and it is therefore important to study how they are connected to each other. The Sarkisov Program provides tools to analyse birational transformations between Mori fibre spaces. In this talk we examine the case of codimension 4 Fano 3-folds with Fano index 1 and explain how we link them to other Mori fibre spaces.
Francesca Carocci (Imperial College) Homological projective duality and blow ups. Kuznetsov's homological projective duality is a powerful tool for investigating semiorthogonal decompositions of algebraic varieties, which in turn are interesting as they seem to contain a lot of information about the geometry of the variety in question. I will recall the notion of homological projective duals and present a new example of geometric HP duals. Its construction is a special case of a more general story coming from blowing up base loci of linear systems. The example also highlights an interesting phenomenon: starting with a noncommutative HP dual pair one can obtain a commutative HP dual via the blowing up process. This example is a generalisation of other people's work on rationality of cubic fourfolds. This is joint work with Zak Turcinovic, Imperial College.
Gil Cavalcanti (University of Oxford) Massey products in Symplectic Geometry. For complex manifolds the "ddbar" lemma implies that the Massey products vanish, and this is how it is proved that Kaehler manifolds are formal. For symplectic manifolds satisfying the Lefschetz property Merkulov proved a similar lemma using symplectic operators analogous to d and dbar. The question that arises is "do Massey products vanish for such manifolds?" I intend to give an example of a symplectic manifold satisfying the Lefschetz property with nonvanishing Massey products. Notes for this talk are available.
Examples of generalized complex structures. I'll introduce generalized complex structures and go through their basic properties, as determined by Gualtieri in his thesis. I'll show how symplectic and complex structures fit into this generalized setting.
Then I'll present some results about the existence of such structures on manifolds that do not admit either complex or symplectic structures. I'll also go through a classification of generalized structures on 6-nilmanifolds and give results about their moduli space.
Andrew Chan (Warwick) Gröbner Bases over Fields with Valuations. Gröbner bases have several nice properties that mean that certain problems in algebraic geometry can be reduced to the construction of a Gröbner basis. For example Gröbner bases allow us to easily determine whether a polynomial lives in some ideal, find the solutions to systems of polynomial equations, as well as having applications in robotics. In this talk I shall introduce Gröbner bases and see the problems that arise when trying to adapt this theory to polynomial rings over fields with valuations. We shall discuss how these Gröbner bases are interesting to algebraic geometers and how they have important applications to tropical geometry.
Emily Cliff (Oxford) Universal D-modules. A universal D-module of dimension n is a rule assigning to every family of smooth n-dimensional varieties a family of D-modules, in a compatible way. This seems like a huge amount of data, but it turns out to be entirely determined by its value over a single formal disc. We begin by recalling (or perhaps introducing) the notion of a D-module, and proceed to define the category M_n of universal D-modules. Following Beilinson and Drinfeld we define the Gelfand-Kazhdan structure over a smooth variety (or family of varieties) of dimension n, and use it to build examples of universal D-modules and to exhibit a correspondence between M_n and the category of modules over the group-scheme of continuous automorphisms of formal power series in n variables.
Giulio Codogni (Cambridge) Curves, Jacobians and Modular Forms. In the first part of the talk, I will introduce the classical theory of Jacobians. In particular, I will stress the relations between the singularities of the Theta divisor and the projective geometry of the curve. In the second part, I will focus on the Schottky problem and on modular forms. I will discuss modular forms arising from lattices: these might provide an upper bound on the slope of the moduli space of curves.
Alex Collins (University of Bath) Representations of quivers and weighted projective lines.
Gaia Comaschi (Université des Sciences et Technologies de Lille 1) Pfaffian representations of cubic threefolds and instanton bundles. Given a hypersurface X ⊂ P^n, we can determine whether its equation might be expressed as the determinant of a matrix of linear forms, by showing the existence of certain ACM sheaves on X. One of the most efficient ways to produce these sheaves is to use Serre's correspondence, starting from AG subschemes of X. In this talk I will treat the case where X is a cubic threefold. I will illustrate how we can construct explicitly AG curves corresponding to Pfaffian representations of X and how Serre's correspondence yields a component of the moduli space of instanton bundles on X.
Barrie Cooper (University of Bath) Koszul Duality and Twisted Group Algebras. Let V be a representation of a finite group G. Then the symmetric (S) and exterior (L) algebras of V are Koszul dual (over k), in the sense that S \otimes L^* is a bigraded algebra with a natural differential of degree (1,-1) which is exact (except in degree (0,0)).
The exactness of the differential gives a well-known recurrence for the symmetric powers of a representation in terms of tensor products of exterior and symmetric powers. In particular, this gives a recurrence on the McKay matrices of these representations. In order to see how the matrix recurrence arises in a more direct way, we should consider the following: Given a left kG-module W, define a twisted bimodule structure on Twist(W) = W \otimes kG, where the right action is (right) multiplication in kG and the left action is both the left action on W and (left) multiplication in kG. The McKay matrix of W now coincides with its decomposition matrix in terms of the irreducible kG-bimodules. Furthermore, it can be shown that Twist(S) and Twist(L) are Koszul dual rings (over kG). Hence, the recurrence on the McKay matrices reflects the fact that the differential respects the grading induced by the projectors onto the irreducible kG-bimodules. We also discuss the related theory of almost-Koszul rings and their connection with periodic recurrences.
An introduction to Derived Categories. The derived category is a powerful homological tool and is the correct way of understanding derived functors such as Tor and Ext. Many important papers in algebraic geometry and mathematical physics are now being written in the language of derived categories, most notably Bridgeland, King and Reid's "Mukai implies McKay: the McKay correspondence as an equivalence of derived categories". We begin by discussing abelian categories, which satisfy precisely the axioms needed to define homology. Next we encounter a problem for which homology isn't quite powerful enough to find a solution - this leads naturally to the definition of the derived category of an abelian category. The algebraic structure of the derived category is that of a triangulated category and we finish by trying to understand the interplay between abelian, derived and triangulated categories.
McKay Matrices, CFT Graphs, and Koszul Duality (Part I). To a finite subgroup of SL(n) we associate a graph. We explore the possibility of classifying such graphs, and representation theory highlights a recurrence relation for which these graphs exhibit unusual behaviour. We reduce the qualitative behaviour under this recurrence to a quantitative test, which the rational conformal field theory graphs also appear to satisfy. In subsequent talks we discuss how this test may betray the existence of a pair of Koszul or almost-Koszul dual algebras associated to the path algebra of the graph.
Stephen Coughlan (University of Warwick) Introduction to graded rings and varieties. Graded rings are a basic component in studying the birational geometry of algebraic surfaces and 3-folds. I illustrate graded rings and their relation to (weighted) projective space via the Proj construction. I will go on to describe some algebraic varieties using graded ring methods.
Alice Cuzzucoli (Warwick) A glimpse at the classification of Orbifold del Pezzo surfaces. In this talk, we will discuss the main ingredients involved in the classification of del Pezzo surfaces with orbifold points, i.e. complex projective varieties of dimension 2 admitting log terminal singularities. The smooth case is a well-known classical topic: indeed, a smooth del Pezzo is either $\mathbb{P}^1\times\mathbb{P}^1$ or $\mathbb{P}^2$ blown up in $9-d$ points, and the proof dates back to the 19th century. In the singular case we are still missing a classification just as complete.
Nevertheless, in the case of cyclic quotient singularities we can still have some interesting constructions. We will introduce the most crucial aspects of such constructions, which are divided into two main steps: first, with the help of Mori Theory, we can give a first representation of our birational models; secondly, by having a brief look at the toric case, we will describe how toric degenerations come into play in this classification. Ultimately, we can use graded ring methods to recreate constructions analogous to the cascade of blow ups for the smooth case by means of qG-deformations.
Dougal Davis (LSGNT) Some stacks of principal bundles over elliptic curves and their shifted symplectic geometry. In a 2015 paper, I. Grojnowski and N. Shepherd-Barron give a recipe which produces an algebraic variety from the ingredients of an elliptic curve E, a simple algebraic group G, and an unstable principal G-bundle on E. In the case where G = D_5, E_6, E_7 or E_8, they show that a particular choice of G-bundle yields a del Pezzo surface of the same type as G. It is an open question which varieties arise for different choices of G-bundle. In this talk, I will describe how certain stacks of principal bundles on E, which are the main players in this construction, carry natural shifted symplectic and Lagrangian structures over the locus of semi-stable bundles. Time permitting, I will show how a very crude study of the degeneration of these structures at the unstable locus gives a much more direct computation of some of the canonical bundles appearing in the Grojnowski-Shepherd-Barron paper, which works for all groups and all bundles.
Ruadhaí Dervan (Cambridge) An introduction to K-stability. A central problem in complex geometry is to find necessary and sufficient conditions for the existence of a constant scalar curvature Kahler metric on an ample line bundle. The Yau-Tian-Donaldson conjecture states that this is equivalent to the algebro-geometric notion of K-stability, related to geometric invariant theory. I will give a gentle introduction to K-stability and time permitting there will be some applications.
Carmelo Di Natale (Cambridge) A period map for global derived stacks. In the sixties Griffiths constructed a holomorphic map, known as the local period map, which relates the classification of smooth projective varieties to the associated Hodge structures. Fiorenza and Manetti have recently described it in terms of Schlessinger's deformation functors and, together with Martinengo, have started to look at it in the context of Derived Deformation Theory. In this talk we propose a rigorous way to lift such an extended version of Griffiths' period map to a morphism of derived deformation functors and use this to construct a period morphism for global derived stacks.
Will Donovan (Imperial College) Tilting, derived categories and non-commutative algebras. We can describe the derived categories of coherent sheaves on certain simple spaces by a method called tilting. This gives an equivalence of the derived category with another category built from a certain non-commutative algebra. We will work this out in some simple cases.
The McKay Correspondence. Kleinian surface singularities are obtained by taking a quotient of the affine plane, under the action of a finite subgroup G of SL(2). There exist certain minimal resolutions of such singularities: the McKay correspondence tells us that the geometry of the resolution remembers some of the representation theory of G, albeit in a subtle manner.
In particular, the irreducible components of the exceptional locus turn out to be in bijection with non-trivial irreps of G. This is part of a long (and continuing) story, bridging geometry and algebra in deep ways, which I will not have time to go into. However I will explain, following Bridgeland-King-Reid, how derived categories can give us an elegant insight into this correspondence.
Bradley Doyle (UCL) Non-commutative crepant resolutions. Instead of looking for a geometric resolution of a singular space, one could instead look for an algebraic (or categorical) resolution; these are known as NC(C)Rs. I will briefly introduce NC(C)Rs and then will explain a method to find NC(C)Rs for quotients by reductive groups. This method was found by Špenko and Van den Bergh. Finally I will sketch how these techniques can be strengthened to find NC(C)Rs for the affine cone of the Grassmannian.
Vivien Easson (University of Oxford) Applying algebraic geometry to 3-manifold topology. In the study of 3-dimensional manifolds, two of the most useful structures to have are hyperbolic geometric structures and essential surfaces lying in the manifold. Each of these relates to a different kind of representation of the fundamental group. A series of papers by Marc Culler and Peter Shalen has examined the interaction between these, using some algebraic geometry of the character variety to provide the connection. Culler-Shalen theory continues to provide new insights and deep theorems in 3-manifold topology. I intend to give an accessible introduction to the various ideas involved.
Vladimir Eremichev (University of Warwick) G-Hilbert schemes and related constructions. Let X be a complex quasiprojective variety and G a finite group acting on X by automorphisms. The resulting quotient space X/G is singular. In dimensions 2 and 3 we have a preferred crepant (symplectic) resolution, namely the G-Hilbert scheme, defined by Nakamura as the moduli space of G-clusters. In dimensions 4 and higher the situation is more complicated -- G-Hilb X usually fails to resolve singularities and crepant resolutions exist only in very special cases. In my talk I will introduce the G-Hilb, explain that it is a resolution in lower dimensions and show what goes wrong in higher dimensions, including possible ways of fixing it.
Daniel Evans (University of Liverpool) Birationally rigid complete intersections. In this talk I will introduce the method of maximal singularities and how it is applied to prove birational (super)rigidity of Fano varieties. In particular, I will consider the case of Fano complete intersections of index one with certain singularities.
Andrea Fanelli (Imperial College London) Lifting Theorems in Birational Geometry. In this talk, I will try to convince you of how important lifting pluri-canonical sections is. Two main approaches can be used: the algebraic one, based on vanishing and injectivity theorems, and the analytic one, which relies on Ohsawa-Takegoshi type L^2-extension theorems. Bypassing as much as possible the birational mumbo jumbo, I will eventually discuss the Dlt Extension Conjecture proposed by Demailly, Hacon and Păun.
Enrico Fatighenti (University of Warwick) Hodge Theory via deformations of affine cones. Hodge Theory and Deformation Theory are known to be closely related: many examples of this phenomenon occur in the literature, such as the theory of Variation of Hodge Structure or the Griffiths Residues Calculus.
In this talk we show in particular how part of the Hodge Theory of a smooth projective variety X with canonical bundle either ample, antiample or trivial can be reconstructed by looking at some specific graded component of the infinitesimal deformations module of its affine cone A. In an attempt at a global reconstruction theorem we then move to the study of the Derived deformations of the (punctured) affine cone, showing how to find amongst them the missing Hodge spaces.
Aeran Fleming (University of Liverpool) Kähler packings of projective, complex manifolds. In this talk I will introduce the notion of Kähler packings and explore their connections to multipoint Seshadri constants and Nagata's conjecture. I will then briefly present a general strategy to explicitly construct Kähler packings on projective, complex manifolds and if time permits discuss some examples of blow ups of the complex projective plane.
Joel Fine (Imperial College London) Constant scalar curvature Kahler metrics on fibred complex surfaces. I will spend half the talk motivating the search for constant scalar curvature Kahler metrics. In particular I will explain why these special metrics should be of use in studying "the majority of" smooth algebraic varieties (i.e. stably polarised ones). In the other half of the talk I will explain how to use an analytic technique called an adiabatic limit to prove the existence of constant scalar curvature Kahler metrics on a special type of complex surface. This talk is based on the preprint math.DG/0401275. Slides for this talk are available.
Peter Frenkel (Budapest University of Technology & Economics) Fixed point data of finite groups acting on 3-manifolds. We consider fully effective orientation-preserving smooth actions of a given finite group G on smooth, closed, oriented 3-manifolds. We investigate the relations that necessarily hold between the numbers of fixed points of various non-cyclic subgroups. All such relations are in fact equations mod 2, and the number of independent equations yields information concerning low-dimensional equivariant cobordism groups. We determine all the equations for non-cyclic subgroups G of SO(3). This talk is based on the preprint math.AT/0301159.
Tim Grange (Loughborough) The classification of Mori Dream Spaces is an open problem in algebraic geometry. Castravet and Tevelev gave a criterion to determine whether the blow-up of products of projective spaces of the form (ℙ^a)^b at points in general position is a Mori Dream Space. The case of blowing up points in very general position in arbitrary products of projective spaces remains largely unknown. In this talk, I will discuss results in this direction, compute cones of effective divisors, and describe the Cox rings of some of these varieties.
Jacob Gross (Oxford) Homology of moduli stacks of complexes. There are many known ways to compute the homology of the moduli space of algebraic vector bundles on a curve. For higher-dimensional varieties, however, this problem is very difficult. It turns out that the moduli stack of objects in the derived category of a variety X is topologically simpler than the moduli stack of vector bundles on X. We compute the rational homology of the moduli stack of complexes in the derived category of a smooth complex projective variety.
For a certain class of varieties X, including curves, surfaces, flag varieties, and certain 3- and 4-folds, we get that the rational cohomology is freely generated by Künneth components of Chern characters of the universal complex; this allows us to identify Joyce's vertex algebra construction with a super-lattice vertex algebra on the rational cohomology of X in these cases.
Giulia Gugiatti (LSGNT) Hyperelliptic integrals and mirrors of the Johnson-Kollár del Pezzo surfaces. In this talk I will consider the regularised I-function of the family of del Pezzo surfaces of degree 8k+4 in P(2,2k+1,2k+1,4k+1), first constructed by Johnson and Kollár, and I will ask the following two equivalent questions: 1) Is this function a period of a pencil of curves? 2) Does the family admit a Landau-Ginzburg (LG) mirror? After some background on the Fano-LG correspondence, I will explain why these two questions are interesting on their own, and I will give a positive answer to them by explicitly constructing a pencil of hyperelliptic curves of genus 3k+1 as a LG mirror. To conclude, I will sketch how to find this pencil starting from the work of Beukers, Cohen and Mellit on hypergeometric functions. This is joint work with Alessio Corti.
Pierre Guillot (University of Cambridge) Algebraic cycles in the cohomology of finite groups. The classifying space BG of an algebraic group G can be approximated by algebraic varieties and therefore has a well-defined Chow ring CH^*BG, which is useful in the study of varieties acted on by G. Conjecturally this is the same as another ring defined topologically, namely using complex cobordism. These rings come equipped with a natural map to the ordinary cohomology ring. After explaining this in some detail I will give some examples of computations, using tools like the Steenrod algebra or the Morava K-theories.
Eloïse Hamilton (University of Oxford) Moduli spaces for Higgs bundles, semistable and unstable. The aim of this talk is to describe the classification problem for Higgs bundles and to explain how a combination of classical and Non-Reductive Geometric Invariant Theory (GIT) might be used to obtain moduli spaces for these objects. I will start by defining Higgs bundles and explaining the classification problem for Higgs bundles. This will involve introducing the "stack" of Higgs bundles, a purely formal object which allows us to consider all isomorphism classes of Higgs bundles at once. Then, I will explain how this stack can be described geometrically. As we will see, the stack of Higgs bundles can be decomposed into disjoint strata, each consisting of Higgs bundles of a given "instability type". I will explain how classical GIT can be used to obtain a moduli space for the substack of semistable Higgs bundles, and how non-reductive GIT might be applied to obtain moduli spaces for the remaining unstable strata.
Umar Hayat (University of Warwick) Gorenstein Quasi-homogeneous Affine Varieties. We study quasi-homogeneous affine algebraic varieties, in particular their tangent bundle and canonical class, with the aim of characterising the case in which the variety is Gorenstein.
Thomas Hawes (University of Oxford) GIT for non-reductive groups. Geometric invariant theory (GIT) is concerned with the question of constructing quotients of algebraic group actions within the category of varieties. This problem turns out to be sensitive to the kind of group being considered.
When a reductive group G acts on a projective variety X, Mumford showed how to find an open subset X^s of X (depending on a linearisation of the action) that admits an honest orbit space variety X^s/G. Moreover, this admits a canonical compactification X//G, obtained by taking Proj of the finitely generated ring of invariant sections of the linearisation. This rather nice picture breaks down when the group G is not reductive, since there is the possibility of non-finitely generated rings of invariants. This talk will look at work being done to describe a similar Mumford-style picture for non-reductive group actions. After reviewing Mumford's result for reductive groups, we will look at the work done by Doran and Kirwan on GIT for unipotent group actions, which provides the key for formulating GIT for general algebraic groups. We will finish by looking at work in progress on how to extend the ideas of Doran and Kirwan to the case where the group is not unipotent.
David Holmes (University of Warwick) Jacobians of hyperelliptic curves. Jacobians of curves are the natural higher-dimensional analogues of elliptic curves, and many of the familiar properties of elliptic curves carry over. In particular, the Mordell-Weil theorem (that the group of rational points over a number field is finitely generated) holds on any Jacobian, and the proof is again based on a theory of heights. After giving basic definitions, we will look at how to use this to find an algorithm to compute the torsion part of the Mordell-Weil group of the Jacobian of a hyperelliptic curve, giving a method to explicitly construct the Jacobian and exploring why this isn't enough.
Julian Holstein (University of Cambridge) Preserving K(pi,1)'s - Hyperplane arrangements and homotopy type. Katzarkov, Pantev and Toen define schematic homotopy types as algebraic models for topological spaces. In this talk I will look at some properties of their construction in the case of hyperplane arrangements.
Vicky Hoskins (University of Oxford) An introduction to stacks. Stacks are needed to give a geometric space to moduli functors that are not representable by a scheme, e.g. M_g the stack of smooth curves of genus g. In this sense we can see stacks as generalisations of schemes. In this talk we approach stacks from two different viewpoints. Firstly we view them as pseudofunctors from the category of schemes to the 2-category of groupoids; this point of view originates from our motivation, moduli functors. Secondly we describe the slightly more common definition, using categories fibred in groupoids. The aim is by the end of the talk to give the definition of a Deligne-Mumford stack and also give some examples.
Daniel Hoyt (University of Cardiff) Braided categories and TQFTs. Topological quantum field theories (TQFTs) have proven an interesting tool in topology, providing invariants of 3-manifolds; to every (three-dimensional) TQFT there is a "quantum invariant". But how does one construct a TQFT? One solution is through using categories with extra structure, such as a tensor product. In this talk I plan to define both TQFTs and an important class of categories (braided categories) that can be used in the construction of TQFTs. I will also give a few examples that demonstrate how familiar braided categories really are.
Anton Isopoussu (Cambridge) K-stability, convex cones and fibrations. Test configurations are a basic object in the study of canonical metrics and K-stability. We introduce two ideas into the theory.
We extend the convex structure on the ample cone to the set of test configurations. The asymptotics of a filtration are described by a convex transform on the Okounkov body of a polarisation. We describe how these convex transforms change under a convex combination of test configurations. We also discuss the K-stability of varieties which have a natural projection to a base variety. Our construction appears to unify several known examples into a single framework where we can roughly classify degenerations of fibrations into three different types: degenerations of the cocycle, degenerations of the general fibre and degenerations of the base. Seung-Jo Jung (Warwick) Moduli of representations of McKay quiver This talk describes representations of the McKay quiver and their moduli spaces. Specifically, for a finite group A in SL(3), I introduce A-HilbC^3 in terms of a moduli space of McKay quiver representations. If time permits, we can discuss moduli spaces of McKay quiver representations for finite groups in GL(3). Anne-Sophie Kaloghiros (Cambridge) The defect of terminal quartic 3-folds. Let $X \subset \mathbb{P}^4$ be a quartic 3-fold with terminal singularities. The Grothendieck-Lefschetz theorem states that any Cartier divisor on X is the restriction of a Cartier divisor on $\mathbb{P}^4$. However, no such result holds for Weil divisors. If the quartic X is not assumed to be $\mathbb{Q}$-factorial, very little is known about its group of Weil divisors. $\mathbb{Q}$-factoriality is a global topological property, and very ''simple'' quartics fail to be $\mathbb{Q}$-factorial. More generally, one could consider Gorenstein terminal Fano 3-folds of Picard rank 1. Can one bound the rank of the group of Weil divisors of a terminal Gorenstein quartic (Fano) 3-fold of Picard rank 1? I will give such a bound for quartics and for some Fanos. I will also show that if a quartic is not $\mathbb{Q}$-factorial, then it contains a (Weil, non-Cartier) surface of low degree. There is a finite number of possibilities for these surfaces. Grzegorz Kapustka and Michal Kapustka (Jagiellonian University, Krakow) Some geometric properties of singular del Pezzo surfaces. In this talk we study geometric properties of singular del Pezzo surfaces with log terminal singularities of index less than or equal to 2. We study their (in some way) canonical embedding and use it to describe them with equations in some weighted projective space. Grzegorz Kapustka (Jagiellonian University, Krakow) Linear systems on an Enriques surface. The aim of the talk is to describe linear systems of an irreducible curve on an Enriques surface, and the maps associated with these linear systems. We ask when a map is a morphism, what is the degree of this morphism, and describe the eventual singularities by looking at the image. Michal Kapustka (Jagiellonian University, Krakow) Linear Systems on a K3 surface. The aim of this talk is to describe linear systems on K3 surfaces. We are mostly concerned with their base points (or components), the morphism associated with them and its image. We also try to introduce the notion of Seshadri constants and we show some examples of linear systems on some K3 surfaces where we can compute them. Alexander Kasprzyk (University of Bath) Introduction to Toric Varieties. Toric varieties form an important class of algebraic varieties whose particular strength lies in methods of construction via combinatorial data.
Understanding this construction has led to the development of a rich dictionary allowing combinatorial statements to be translated into algebraic statements, and vice versa. In the first talk the basic details of the combinatorial approach to constructing toric varieties are given. The construction is motivated by specific examples from which the more general methods can be deduced. The second talk will concentrate on the torus action on the variety. We will discuss the orbit closure and introduce the "star" construction. Finally, we shall apply what we have learnt to toric surfaces, analysing the singularities and seeing how they are resolved. Notes for the first talk are available. Recognising toric Fano singularities. It is a well known fact that toric Fano varieties of dimension n correspond to convex polytopes in R^n. In particular, if the variety has at worst terminal singularities, then the associated polytope is a lattice polytope P such that P\cap Z^n consists of the vertices of P and the origin. A similar condition on the polytope exists when the variety is allowed to possess canonical singularities. In this talk I intend to review the definitions of the singularities involved, and hopefully shed some light on these equivalencies. Some basic knowledge of toric geometry will be assumed. What little I know about Fake Weighted Projective Space. At the December 2003 Calf in Warwick, Weronika Krych introduced me to the idea of fake (or false) weighted projective space. These are objects which arise naturally in the context of toric geometry, and are quotients of bona fide weighted projective space. Fake weighted projective spaces also arise in toric Mori theory. Loosely speaking, they appear as the fibres of an elementary contraction. We shall see that a great deal of information about the singularities of a fake weighted projective space can be deduced from weighted projective space. We shall also establish bounds on how ``far away'' these fake weighted projective spaces can be from weighted projective space whilst still remaining sufficiently ``nice''. Jonathan Kirby (University of Oxford) Model Theory and Geometry - An Introduction. The talk will be in two parts. In the first I will explain what model theory is, and how it can be thought of as a generalization of the study of zeros of polynomials (aka Algebraic Geometry). In the second I will explain how simple geometric ideas crop up naturally in model theory. The aim is to give an overview of the ideas rather than any technicalities, and no familiarity with logic will be assumed. Weronika Krych (University of Warsaw) False weighted projective spaces and Mori theorem for orbifolds. We define false weighted projective spaces as toric varieties with fan constructed from vectors v_0,...,v_n in lattice Z^n with sum_{i=0}^n (a_i * v_i) = 0 for some integers a_i. The only difference of this fan and the one of weighted projective space is that the v_i's do not span the lattice. False weighted projective space are quotients of P(a_0,...,a_n) by the action of a finite group. We distinguish false ones by introducing the fundamental group in codimension 1 and proving it is non-trivial exactly for false ones. False projective spaces are quotients of P^n and they are orbifolds. We conjecture a generalization of Mori theorem characterizing P^n as the only projective varieties with ample tangent bundle. 
Roberto Laface (Leibniz Universität Hannover) Decompositions of singular Abelian surfaces Inspired by a work of Ma, in which he counts the number of decompositions of abelian surfaces by lattice-theoretical tools, we explicitly find all such decompositions in the case of singular abelian surfaces. This is done by computing the transcendental lattice of products of isogenous elliptic curves with complex multiplication, generalizing a technique of Shioda and Mitani, and by studying the action of a certain class group on the factors of a given decomposition. Incidentally, our construction provides us with an alternative and simpler formula for the number of decompositions, which is obtained via an enumeration argument. Also, we give an application of this result to singular K3 surfaces. Marco Lo Giudice (University of Bath, and University of Milan) Scheme-theoretic projective geometry. We will introduce projective geometry in the language of schemes. Starting from the projective spectrum of a graded ring we will explain some basic properties of projective varieties. Introduction to schemes. People tend to think of "schemes" as being synonymous with "Algebraic Geometry", but this is not quite true. As a result learning the machinery can be really frustrating, as our geometric intuition doesn't seem to fit into the picture. Actually the theory of schemes is far more general than Algebraic Geometry, and many concepts arising in the geometric context make sense only for a particular kind of schemes usually called "algebraic schemes". I will define algebraic varieties from this point of view, avoiding too much abstract nonsense and retaining the geometric point of view in evidence. Detailed notes on scheme theory are available. Artin level algebras. Artin level algebras are zero-dimensional graded algebras; they are a generalization of Gorenstein algebras. I will describe their Hilbert function and their graded minimal resolution. Cormac Long (University of Southampton) Some results on Coxeter groups. We give necessary conditions for the {3,5,3} Coxeter group to surject onto PSL(2,p^n). We also look at some of the manifolds arising from the low index normal subgroups of this group. Andrew MacPherson (Imperial College London) Mirror Symmetry is T-duality The SYZ conjecture suggests that mirror manifolds should admit fibrations by dual special Lagrangian tori. I'll talk about some of the motivation for, and consequences of, this conjecture, and then I'll say something about what some people are trying to do about it. Be warned that this talk will be both i) very imprecise and ii) not particularly algebraic. A non-archimedean analogue of the SYZ conjecture The SYZ conjecture is a statement, or rather, a framework of statements, about the geometry of the large complex structure and large radius limit points of the moduli space of CY n-folds, and the mirror involution that exchanges them. Following proposals of Kontsevich, I'll talk about how non-Archimedean geometry can be used to study this limit in a more algebro-geometric setting. Diletta Martinelli (Imperial College London) Semiampleness of line bundles in positive characteristic I will explain why the property of semiampleness is very important in algebraic geometry and I will present some sufficient conditions for the semiampleness of a line bundle on a variety defined over the algebraic closure of a finite field.
In the second part of the talk I will present some results that are part of a joint work with Jakub Witaszek and Yusuke Nakamura. Mirko Mauri (LSGNT) Dual complexes of log Calabi-Yau pairs and Mori fibre spaces. Dual complexes are CW-complexes, encoding the combinatorial data of how the irreducible components of a simple normal crossing pair intersect. They have been finding useful applications for instance in the study of degenerations of projective varieties, mirror symmetry and nonabelian Hodge theory. In particular, Kollár and Xu conjectures that the dual complex of a log Calabi-Yau pair should be a sphere or a finite quotient of a sphere. It is natural to ask whether the conjecture holds on the end products of minimal model programs. In this talk, we will validate the conjecture for Mori fibre spaces of Picard rank two. Francesco Meazzini (Sapienza Università di Roma) QUIVER REPRESENTATIONS AND GORENSTEIN-PROJECTIVE MODULES. We consider a finite acyclic quiver Q and a quasi-Frobenius ring R. We then characterise Gorenstein-projective modules over the path algebra RQ in terms of the corresponding quiver representations over R, generalising the work of X.-H. Luo and P. Zhang to the case of not necessarily finitely generated RQ-modules. We recover the stable category of Gorenstein-projective RQ-modules as the homotopy category of a certain model structure on quiver representations over R. Caitlin McAuley (University of Sheffield) The spaces of stability conditions of the Kronecker quiver. It is well known that the space of stability conditions of a triangulated category is a complex manifold. In fact, mirror symmetry predicts that this space carries a richer geometric structure: that of a Frobenius manifold. From a quiver, one can construct a sequence of triangulated categories which are indexed by the integers. It is then natural to study the stability manifolds of these categories, and in particular to consider any changes to the manifolds as the integer indexing the triangulated category varies. We will study this construction for the Kronecker quiver, and discuss how the results provide evidence for a Frobenius structure on these stability manifolds. Carl McTague (University of Cambridge) The Cayley plane genus. I will give a new geometric characterization of the Witten genus. Ciaran Meachan (University of Edinburgh) Moduli of Bridgeland-stable objects. In the spirit of Arcara & Bertram, we investigate wall-crossing phenomena in the stability manifold of an irreducible principally polarized abelian surface for objects with the same invariants as (twists of) ideal sheaves of points. In particular, we construct a sequence of fine moduli spaces which are related by Mukai flops and observe that the stability of these objects is completely determined by the configuration of points. Finally, we use Fourier-Mukai theory to show that these moduli are projective. Ben Morley (University of Cambridge) Motivating mirror symmetry and the Gross-Siebert program. I'll try to explain why mirror symmetry is an interesting phenomenon, and motivate (parts of) the current Gross-Siebert approach to constructing and understanding mirrors. Very little knowledge of mirror symmetry or anything symplectic will be assumed. Jasbir Nagi (University of Cambridge) Graded Riemann spheres. Riemann spheres are extremely useful in the study of two-dimensional conformal field theories. One can ask what is the corresponding structure to look at if one wishes to study a superconformal field theory. 
One way of introducing anti-commuting co-ordinates is to consider the sheaf of functions on the Riemann sphere, and extend them by anti-commuting variables. This can be more useful than a superspace formalism, since there is still a notion of a "patching function" on intersections of "co-ordinate patches". This talk is based on the preprint hep-th/0309243. Oliver Nash (University of Oxford) An Introduction to Twistor Theory. An introduction to the Penrose twistor correspondence will be presented. We will begin by discussing the correspondence between conformal four-manifolds and appropriate complex three-manifolds. In particular, our discussion will include the usual Penrose transform in this case. We will then discuss the various generalisations of the correspondence to other dimensions and geometric structures. We will conclude by describing some of the applications of twistor theory to gauge theory (monopoles and instantons), existence of complex structures and deformations of hypercomplex structures. Igor Netay (HSE, Moscow) On A-infinity algebras of highest weight orbits I will present recent results on syzygy algebras. For any algebraic variety X --> P^n with an embedding into projective space the syzygy spaces have a natural structure of an A-infinity algebra. I will discuss the case of projectivization of highest weight orbits in irreducible representations of reductive groups. Alvaro Nolla de Celis (University of Warwick) Introduction to cyclic quotient singularities. I will introduce quotient singularities and their resolution, in particular I will talk about Du Val singularities or rational double points, giving a description of their resolution in terms of Hirzebruch-Jung continued fractions and Dynkin diagrams. Claudio Onorati (University of Bath) Moduli spaces of generalised Kummer varieties are not connected Using the recent computation of the monodromy group of irreducible holomorphic symplectic (IHS) manifolds deformation equivalent to generalised Kummer varieties, we count the number of connected components of the moduli space of both marked and polarised such manifolds. After recalling basic facts about IHS manifolds, their moduli spaces and parallel transport operators, we show how to construct a monodromy invariant which translates this problem into a combinatorial one and eventually solve this last problem. John Christian Ottem (University of Cambridge) Ample subschemes We discuss how various notions of positivity of vector bundles are related to the geometry of subschemes. Asymptotic cohomological functions. Asymptotic cohomological functions were introduced by Demailly and Küronya to measure the growth rate of the cohomology of high tensor powers of a line bundle L. These functions generalize the volume function of a line bundle and capture a lot of the positivity properties of L. In this talk I will review some recent results on them by Demailly, Küronya and Matsumura and explain how they compare with other notions of weaker positivity of a line bundle. Kyriakos Papadopoulos (University of Liverpool) Reflection Groups, Generalised Cartan Matrices & Kac-Moody Algebras. This talk will be the continuation of my talk in the Calf seminar in Liverpool (January 2005). I will spend a few minutes talking about reflection groups in integral hyperbolic lattices, and use this machinery to define the geometric realisation of a generalised Cartan matrix.
There will be a short introduction to infinite-dimensional Lie algebras, based on the theory that we will give for generalised Cartan matrices. Reflection groups of integral hyperbolic lattices. This is an introductory talk on reflection groups of integral hyperbolic symmetric bilinear forms. Lobachevskii (hyperbolic) geometry is a strong tool in mathematics, and lots of problems which appeared in algebraic geometry have been attacked using this tool. We will divide the lecture into two parts; in the first one we will present all the preliminaries, and in the second part we will formulate Vinberg's algorithm. This algorithm permits us to find all cells of a polyhedron C of an acceptable set P(C) of orthogonal vectors to C, where C is the fundamental chamber for a subgroup W of the group W(M) (the group generated by reflections in all elements of M), where S:MxM -> Z is a given quadratic form. Hopefully, this material will be used as a basis, for a future lecture on hyperbolic Kac-Moody algebras. Nebojsa Pavic (University of Sheffield) Quotient singularities and Grothendieck groups We study the K-groups of the singularity category for quasi-projective schemes. Particularly, we show for isolated quotient singularities that the Grothendieck group of the singularity category is finite torsion and that rational Poincare duality is satisfied on the level of Grothendieck groups. We consider also consequences for the resolution of singularities of such quotient singularities and study dual properties in this setting. More concretely, we prove a conjecture of Bondal and Orlov in the case of (not necessarily isolated) quotient singularities. Andrea Petracci (Imperial College London) On the quantum periods of del Pezzo surfaces I will discuss a conjecture, due to Coates, Corti, Kasprzyk et al., which relates the quantum cohomology of del Pezzo surfaces with isolated cyclic quotient singularities to combinatorial data coming from lattice polygons and Laurent polynomials. I will present evidence for this conjecture in the case of del Pezzo surfaces with 1/3(1,1) singularities. The ideas discussed are in the spirit of recent work by Coates-Corti-Galkin-Kasprzyk, who used quantum cohomology to reproduce the Iskovskikh-Mori-Mukai classification of smooth Fano 3-folds. This is joint work with A. Oneto. James Plowman (Warwick) The Witt complex of a scheme with a dualising complex The Witt complex of a scheme can be thought of as the negative part of a Grothendieck-Witt analogue to the Gersten resolution of algebraic K-theory. Grothendieck-Witt theory can be described as K-theory "with duality" - and direct constructions of Witt complexes rely upon careful manipulation of the local dualising objects involved. The main aim of this talk is to present a construction of a Witt complex in greater generality than is currently available in the literature by extracting the local dualities required from residual complexes - which are the minimal injective resolutions of dualising complexes. Matthew Pressland (University of Bath) Labelled Seeds and Mutation Groups This talk will introduce labelled seeds, whose definition is a modification of that of seeds of a cluster algebra. Under this new definition, the cluster algebra itself will be unchanged, but the set of labelled seeds will form a homogeneous space for a a group of mutations and permutations. 
We will study the automorphism group of this space, and conclude that for certain mutation classes, the orbits of this automorphism group consist of seeds with "the same cluster combinatorics", in the sense that their quivers are all related by opposing some connected components. Knowledge of cluster algebras will not be assumed, and indeed one goal is to provide an introduction to the subject, albeit in a slightly esoteric way. Ice Quivers with Potential and Internally 3CY Algebras. A dimer model, which is a bipartite graph on a closed orientable surface, gives rise to a Jacobian algebra. Under consistency conditions on the dimer model, this algebra satisfies a very strong symmetry condition; it is 3-Calabi-Yau. However, the consistency condition forces the surface to be a torus. This can be avoided by allowing surfaces with boundary, on which dimer models give rise to frozen Jacobian algebras. We define a suitable modification of the 3-Calabi-Yau property for these algebras, and explain some interesting cluster-theoretic results that follow from it. Thomas Prince (Cambridge) From scattering diagrams to Gromov-Witten theory This talk will be a survey of the paper of Gross, Pandharipande and Siebert on enumerative consequences of their scattering diagram calculations. In particular, I will recall the notions of scattering diagrams, tropical curves and the Kontsevich-Soibelman lemma before discussing the holomorphic analogues of the tropical curve counts. This is also supposed to be valuable background material for reading recent papers of Gross, Hacking and Keel on mirror symmetry for log Calabi-Yaus. Qiu Yu (University of Bath) Stability space of quivers/species of two vertices I'm going to describe the stability space (in the sense of Bridgeland) of the quiver A_2. As a comparison, I will show that this space 'contains' the fundamental domain of the stability space of the Kronecker quiver P_2 (or equivalently, of the projective space of dimension 1) in some sense. Then I will explain the folding techniques to describe the stability space of the species of type B_2=C_2 and G_2. Lisema Rammea (University of Bath) Some Interesting Surfaces of General Type in Projective 4-space A well known theorem of Gieseker says that there exists a quasi-projective coarse moduli scheme for canonical models of surfaces of general type S with fixed K^{2}_{S} and c_{2}(S). However there are some classical inequalities which a surface of general type must satisfy. Beyond these numerical restrictions the study of surfaces of general type largely consists of studying examples in: (1) "Geography"--deciding which Chern numbers or other topological invariants arise as the invariants of a minimal surface of general type, and (2) "Botany"--describing all the deformation types within a fixed topological type. In this talk we look at (1). Construction of Non-General Type surfaces in P^4_w. We wish to generate smooth Non-General Type surfaces in four dimensional weighted projective space, P^{4}_w. For trivial weights (all weights equal to 1), a lot of work has been done by various people. In this case it is known that all surfaces of degree greater than or equal to 52 are of general type. The conjectured bound is 15. Decker et al. generated examples of smooth Non-General Type surfaces using an earlier version of the computer algebra system Macaulay2 in the case of trivial weights.
We study their construction methods to try and come up with an efficient method to generate Non-General Type surfaces in P^{4}_w, where not all the weights equal one. For now we insist that our weights are pairwise coprime. Nontrivial weights lead naturally to cyclic quotient singularities. Examples of K3 surfaces have been found by Altinok et al. in P^{4}_w. We discuss construction of an Enriques surface in P^{4}_w by taking an example with w=(1,1,1,1,2). Nils Henry Rasmussen (University of Bergen) The dimension of W^1_d(C) where C is a smooth curve on a K3 surface Jorgen Rennemo (Imperial College) Göttsche's Ex-Conjecture and the Hilbert Scheme of Points on a Surface Consider a smooth, projective surface with a line bundle L on it. We say a curve is d-nodal if it has d singular points that are nodes and no other singularities. The Göttsche Conjecture (now a theorem) is a statement about the number of d-nodal curves in a d-dimensional linear system of divisors of class L. The first aim of this talk will be to explain this statement in some detail. I will then introduce the Hilbert scheme of points on a surface and show how the conjecture can be reduced to the computation of a cohomology class on the Hilbert scheme. This is the first step in one of the known proofs of the conjecture. Sönke Rollenske (Imperial College London) Some very non-Kahler manifolds. In the first part of the talk I want to give an elementary solution to the classical question of how much de Rham and Dolbeault cohomology can differ on a compact complex manifold (cf. [Griffiths-Harris 78], p.444). In the second part I will explain how this fits into the more general framework of nilmanifolds with left-invariant complex structures and how these can be used to produce manifolds with interesting properties. (reference: arXiv:0709.0481) Taro Sano (University of Warwick) Deformation theoretic approach to the classification of singular Fano 3-folds. Smooth Fano 3-folds are classified classically and there are around 100 different families of them. If I allow terminal singularities on Fano 3-folds, things get much more complicated and the classification is not complete. I will explain difficulties in the classification of those Fano 3-folds and how to make the classification easier by considering their deformations. Deformations of weak Fano manifolds The Kuranishi space of a projective variety is the parameter space of small deformations of the variety. It is important in the study of moduli spaces of projective varieties. In many cases, the Kuranishi space is singular. However, it is smooth in some important cases. I will explain when the smoothness holds. Shu Sasaki (Imperial College London) Crystalline cohomology and crystals. Crystalline cohomology originated from the observation that l-adic cohomology groups of a smooth projective (connected) variety over an algebraically closed field of characteristic p=l are "miserable" in comparison to the p\neq l case. Roughly speaking, Grothendieck's idea (outlined in his lectures at IHES in 1966) was to lift varieties to characteristic zero and then take the de Rham cohomology to obtain "nice" (p-adic) cohomology. However, some questions still remained. Most notably: Is it always the case that one can lift varieties? To remedy this situation, a more sophisticated and subtle theory was needed. The answer was... the theory of crystalline cohomology!
In my talk, I'd like to explain why this crystalline cohomology is the "right" one and, if time permits, I would hope to talk about things like F-crystals to illustrate how mind-boggling this theory can sometimes be. I shall start from the very basics, such as Grothendieck topologies, so don't be scared of what I've just said above. Danny Scarponi (Oxford/Toulouse) The degree zero part of the motivic polylogarithm and the Deligne-Beilinson cohomology. Last year, G. Kings and D. Rössler related the degree zero part of the motivic polylogarithm on abelian schemes, pol0, with another object previously defined by V. Maillot and D. Rössler. More precisely, the canonical class of currents constructed by Maillot and Rössler provides us with the realization of pol0 in analytic Deligne cohomology. I will show that, adding some properness conditions, it is possible to give a refinement of Kings and Rössler's result involving Deligne-Beilinson instead of analytic Deligne cohomology. Chris Seaman (Cardiff) An introduction to H-schobers The group of autoequivalences of a given derived category of coherent sheaves on a variety X, D^b(X), has been the subject of much study. In this talk I will start by recalling some well-known results about Aut(D^b(X)), then introduce some more recent technology in the form of schobers on a hyperplane arrangement \mathcal{H}. These can be thought of as `categorifications' of perverse sheaves on the same space. Ed Segal (Imperial College London) Operads and the Moduli of Curves. This talk will be an (attempted) explanation of Kevin Costello's paper math.AG/0402015. I'll go through the definition of A-infinity algebras and show how the universal structure (operad) describing them relates to moduli spaces of Riemann surfaces with boundary. We'll see that up to homotopy equivalence, these moduli spaces have a simple combinatorial description. Crepant resolutions and quiver algebras A resolution of a singularity is called 'crepant' if its canonical bundle is trivial. For some singularities it's possible to find a non-commutative algebra A, which we can draw as a quiver, such that modules over A are 'the same' as sheaves on a crepant resolution of the singularity (the derived categories are equivalent). In Van den Bergh's terminology A is a 'non-commutative resolution'. I'll describe the ways that this can be done and discuss the various interpretations of the resulting quivers and their representations. If time permits I might explain the conjectural significance of A_infinity deformation theory in this context. Despite some of the high-tech material in the above paragraph, most of this talk will be about a simple example. Superpotential algebras from three-fold singularities. The orbifold X = C^3 / Z_3 is a simple but interesting example of a (non-compact) Calabi-Yau threefold. Physicists predict that type II string theory on X reduces in the low-energy limit to a gauge theory, which is described by a quiver and a superpotential. We'll discuss how these objects arise mathematically. Lars Sektnan (Imperial College) Algebro-geometric obstructions to the existence of cscK metrics on toric varieties. The existence of constant scalar curvature (cscK) metrics on Kähler manifolds is a central problem in Kähler geometry. There are several known obstructions to the existence of such metrics and the algebro-geometric notion of K-stability is conjectured to be equivalent to this.
We will present a classical obstruction, the Futaki invariant, in the toric setting and use it to show that the blow-up of P^2 with its anti-canonical polarisation does not admit a cscK metric. We will then show that this is not enough, by exhibiting an example due to Wang-Zhou of a toric variety with vanishing Futaki invariant, which is not K-stable. Along the way we will introduce filtrations of the homogeneous coordinate ring of a polarised projective variety and discuss how these relate to K-stability and also give a stronger stability criterion. I will begin with a reminder on toric geometry. Yuhi Sekiya (University of Nagoya) Moduli spaces of McKay quiver representations. The derived category of the minimal resolution of a Kleinian singularity is equivalent to the derived category of a certain non-commutative algebra. I will illustrate that the minimal resolution is recovered as a moduli space of modules over the non-commutative algebra. Michael Selig (Warwick University) Orbifold Riemann-Roch in high dimensions. We are interested in explicit constructions of 3-folds and 4-folds with given invariants. We use the following well-known graded ring construction: given a polarised variety (X,D), under certain assumptions the graded ring R(X,D) = ⊕_{n≥0} H^0(X,nD) gives an embedding X = Proj(R(X,D)) ⊂ wℙ. It is well known that the numerical data of (X,D) is encoded in the Hilbert series P_X(t) := ∑_{n≥0} h^0(X,nD) t^n. We aim to break down the Hilbert series into terms associated to the orbifold loci of X. The talk should be fairly introductory. I will explain the ideas behind the work from scratch, exhibit some results in 3-D and explain some ideas for the 4-D case. Orbifold Riemann-Roch and Hilbert Series. Given a polarised orbifold (X,D) and its associated graded ring R = R(X,D), its numerical invariants (such as the plurigenera and the singularity basket) are encoded in its Hilbert series P_X(t). Studying the Hilbert series is therefore a sensible thing, as we could hope to use it to find generators and relations for the graded ring R. We deconstruct the Hilbert series into a sum of terms where each term corresponds clearly to an orbifold locus; using these methods we find a similar more general deconstruction of rational functions with poles only at roots of unity. Kenneth Shackleton (University of Southampton) Tightness and Computing Distances in the Curve Complex. We give explicit bounds on the intersection number between any curve on a tight geodesic and the two ending curves. We use this to construct all tight geodesics and so conclude that distances are computable. The algorithm applies to all surfaces. The central argument makes no use of the geometric limit arguments seen in the recent work of Bowditch (2003) and Masur-Minsky (2000). From this we recover the finiteness result of Masur-Minsky for tight geodesics. This talk is based on the preprint math.GT/0412078. Alexander Shannon (University of Cambridge) Twistor D-modules. A desire to extend Hodge theory to ever more general geometric settings necessitates a corresponding generalisation in the structures we use to describe it. I will review some aspects of Saito's theory of Hodge modules, which play the role of sheaves of Hodge structures on varieties, and give an indication of how the algebraic data can be recast in a more geometric way to give the more flexible twistor D-modules of Sabbah. Geometry without geometry.
We all know how to compute the (topological) cohomology of an elliptic curve in various standard ways, but let's pretend we've forgotten, and all we know about is a small piece of the derived category of coherent sheaves (I'll start with a reminder of what this is), but large enough that it generates the whole thing. Then we can get the answer purely algebraically, along with the Hodge structure and its variation with the parameter defining the elliptic curve, by looking at structures on the cyclic homology of what turns out to be a fairly small (and thus easy to work with explicitly) dg category. Time permitting, I shall try to suggest why this is a potentially interesting point of view for exploring how elliptic curves might degenerate in the world of non-commutative geometry. Dirk Schlueter (University of Oxford) DM stacks in toric geometry and moduli theory This talk will be a follow-up to last term's introduction to stacks. The aim will be to show algebraic stacks in action: as a first example, I will discuss weighted projective spaces and toric geometry from the point of view of Deligne-Mumford stacks. The second part of the talk will focus on how algebraic stacks come up in moduli problems and in what sense they record more information than the classical coarse moduli schemes. As a guiding example, I will discuss moduli spaces of (marked) curves and some of the maps between them. YongJoo Shin (Sogang University) Classification of involutions on a surface of general type with p_g=q=0 We would like to understand involutions on a minimal surface of general type with p_g=q=0. In particular, for the surface with K^2=7 we give a table classifying branch divisors and birational models of the quotient surface induced by an involution. We explain how to get the table, and which cases are supported by examples. James Smith (University of Warwick) Introduction to K3 surfaces. The two talks will cover some basic aspects of K3 surfaces with the following aims: First, to say what a K3 surface is and how to recognise one. Examples will be given and the difficult question of why K3s are interesting may be tackled. Second, to become familiar with some of the methods used in the study of K3 surfaces such as lattice theory and Hodge structures. Time permitting, we may look at some deeper aspects of the subject and try and build an understanding of the moduli space of K3 surfaces. In the second talk, by looking at explicit examples, we shall illustrate some general properties of K3 surfaces. In particular, we look at variations of Hodge structure, periods and the associated Picard-Fuchs differential equation, and use these to visualise the moduli space of certain one-parameter families of K3 surfaces. Notes for the second part of this talk are available. K3s as quotients of symmetric surfaces. We consider the action of finite subgroups of SO(4) on P^3. Recent work of W. Barth and A. Sarti provides three examples of families of K3 surfaces that arise as the quotient of invariant surfaces modulo this group action. We describe an easy way to prove this and to find more examples using graded ring methods and invariant theory. This talk will cover a basic introduction to algebraic K3 surfaces and will demonstrate the use of graded rings and weighted projective spaces in their study. David Stern (University of Sheffield) Tilting T-structures, Mutating Exceptional Collections, Seiberg Duality... It's all quivers to me.
In this talk I will be working in the context of the bounded derived categories of coherent sheaves of a Fano surface Z and of its canonical bundle \omega_{Z}, which is a Calabi-Yau 3-fold. I will briefly state how quivers relate to t-structures and tilting them, to exceptional collections and their mutations, and, if any physicists are present, to Seiberg duality. I will then use this to explain Tom Bridgeland's result in "T-structures on some local Calabi-Yau varieties" and if all goes well give a brief description of my current work. Vocabulary made easy. The aim of this talk is to provide an alternative understanding of derived categories, focusing on using the formal definitions of a t-structure and the heart of a t-structure to get a visual understanding of what a derived category is, even for those with little prior knowledge. Then, depending on time and people's interest, I will use this 'picture' to give simple explanations of things like torsion pairs, tilting with respect to torsion pairs, stability conditions (Tom Bridgeland's description), etc. Jacopo Stoppa (Imperial College) Stability and blowups. We show that K- and Chow-stability of the blowup of some polystable variety along a 0-cycle is related to the Chow stability of the cycle itself. This can be used to give almost a converse to a well known result of Arezzo and Pacard in the theory of constant scalar curvature Kaehler metrics. Andrew Strangeway (Imperial College) A Reconstruction Theorem for the Quantum Cohomology of Fano Bundles A vector bundle E is said to be Fano if the projectivisation P(E) is a Fano manifold. I will present a reconstruction theorem for Fano vector bundles, which recovers the small quantum cohomology of the projectivisation of the bundle from a small number of low degree Gromov-Witten invariants. In special cases the quantum cohomology is entirely determined by this theorem. I will give an example where the theorem is used to calculate the quantum cohomology of a certain Fano 9-fold. Tom Sutherland (University of Oxford) Stability conditions for the one-arrow quiver. Stability conditions are needed in order to construct nice moduli spaces, the classical example being vector bundles over a curve. Spaces of stability conditions of Calabi-Yau threefolds are also important in studying mirror symmetry which is a duality for Calabi-Yau threefolds arising in string theory. In this talk we will give an introduction to stability conditions in algebraic geometry and then study the space of stability conditions of a particularly simple CY3 category described by the one-arrow quiver. Affine cubic surfaces and cluster varieties In this talk we will consider affine cubic surfaces obtained as the complement of three lines in a cubic surface where it intersects a tritangent plane. We will interpret certain families of these affine cubic surfaces as moduli spaces of local systems on the punctured Riemann sphere. We will see how to draw quivers on the sphere so that the associated cluster variety is related to the total space of these families. Rosemary Taylor (University of Warwick) Constructions of Fano 3-folds using unprojections. The Graded Ring Database uses numerical data to create a list of the Fano 3-folds in weighted projective space which could exist. In codimensions 1, 2 and 3 we know those that exist. But what do we know of codimension 4? Unprojections provide a method for constructing explicit examples of these Fano varieties.
This talk will provide an overview of the current research, beginning with an introduction to unprojections and concluding with recent progress in type II_1 unprojections. Elisa Tenni (University of Warwick) Surface fibrations and their relative canonical algebras. The aim of this talk is to introduce some properties of the relative canonical algebra of a surface fibration. It has been shown (by works of Konno, Reid, Catanese and Pignatelli, and others) that this algebra encodes important information about the geography of the surface. In particular I will show how such methods apply to the case of a fibration with genus 5 fibres, and I will prove a relation between the most important invariants of the surface. Alan Thompson (University of Oxford) Tjurina and Milnor numbers of matrix singularities. The Tjurina and Milnor numbers are two numbers that arise in the study of the singularity theory of composed mappings. This talk aims firstly to define these numbers and provide the means to calculate them in specific examples. This will then lead into a discussion of the fascinating relationship between the two numbers, focussing specifically on the case where the spaces in consideration are spaces of matrices and one of the functions to be composed is the determinant function. Here one can obtain explicit formulas relating the two numbers in certain dimensions, but little is known about the general case. Models for Threefolds Fibred by K3 surfaces of Degree Two. It is well known that a K3 surface of degree two can be seen as a double cover of the complex projective plane ramified over a smooth sextic curve. This talk will be concerned with finding explicit birational models for threefolds that admit fibrations by such surfaces. It will be shown that the nature of K3 surfaces of degree two allows these models to be constructed as double covers of rational surface bundles, a structure which in turn enables many of their properties to be explicitly calculated. Andrey Trepalin (HSE, Moscow) Rationality of the quotient of P^2 by a finite group of automorphisms over an arbitrary field of characteristic zero It's well known that any quotient of P^2 by a finite group is rational over an algebraically closed field. We will prove that any quotient of P^2 is rational over an arbitrary field of characteristic 0. Jorge Vitoria (University of Warwick) t-structures and coherent sheaves. Let D be the derived category of coherent sheaves on a projective variety X. In this talk we will study methods of constructing t-structures on D and explore examples. Anna Lena Winstel (TU Kaiserslautern) The Relative Tropical Inverse Problem for Curves in a Fixed Plane. Tropical Geometry is a rather new tool in algebraic geometry, in which an algebraic variety is assigned a polyhedral complex, called its tropical variety. By studying these tropical varieties, one can obtain information about the original algebraic variety. Since these new objects are combinatorial, problems can often be solved more easily. There is hope to find new results in algebraic geometry by looking at the tropical counterpart and then transferring the result into algebraic geometry. However, it is not always clear how this transformation from tropical into algebraic geometry can work. This problem is called the Tropical Inverse Problem: given a polyhedral complex, one asks if it is the tropical variety of an algebraic variety.
So far, there are answers to this problem for special cases such as a polyhedral complex of codimension one or a polyhedral fan of dimension one, but there is no general solution. One may also ask the question in a relative setting: given an algebraic variety X and a polyhedral complex which is set-theoretically contained in the tropical variety trop(X) assigned to X, does there exist a subvariety of X such that this polyhedral complex is the tropical variety of this subvariety? This question is called the Relative Tropical Inverse Problem. It is the aim of this talk to present an algorithm able to decide the Relative Tropical Inverse Problem in the case that X is the projective plane V(x+y+z+w). John Wunderle (University of Liverpool) Properties of higher genus curves. We will investigate some of the geometric and number theoretic properties of curves of genus two, which admit various types of isogenies. We will look at these via covering techniques and go on to extend some of the results regarding curves with bad reduction at 2 and p, where p is some prime. The resolution of Diophantine equations over the rationals is a problem with a deep history. In this talk I will consider ways to solve a general family of curves - specifically the Fermat Quartic curves ($x^4+y^4=c$). We present work of Flynn and Wetherell and expand upon it. We consider a "flow diagram" approach to solving these curves and present explicit examples as well as a general method for approaching the curves. Finally we explain how these methods can be adapted to suit other Diophantine equations over the rationals. Christian Wuthrich (University of Cambridge) On p-adic heights in families of elliptic curves. About twenty years ago, following an initial idea of Bernardi and Neron, Perrin-Riou and Schneider found a canonical p-adic height pairing on an elliptic curve defined over a number field. The associated p-adic regulator appears in the p-adic version of the conjecture of Birch and Swinnerton-Dyer, but it is still unknown if this pairing is non-degenerate except for special cases. Following the work done for the real-valued pairing, one can analyse the behaviour of the p-adic height as a point varies in a family of elliptic curves, and so get new information about this pairing.
Do we have a quantum field theory of monopoles? Recently, I read a review on magnetic monopoles published in the late 1970s, wherein some conjectures about the properties of a long-sought quantum field theory of monopoles are stated. My question is what our contemporary understanding of the quantum field theory of monopoles is. Do we have a fully developed one? Any useful ref. is also helpful. xiaohuamao This answer is based on David Tong's lectures on solitons - Chapter 2 - Monopoles. The general answer to the question is that it is known how to construct a quantum mechanical theory of magnetic monopoles acting as individual particles among themselves and also perturbatively in the background of the standard model fields. 't Hooft-Polyakov monopoles appear as solitons in non-Abelian gauge theories, i.e. as stable static solutions of the classical Yang-Mills-Higgs equations. These solutions depend on some free parameters called moduli. For example, the center of mass vector of the monopole is a modulus: monopoles centered around any point in space are solutions, since the basic theory is translation invariant. The full moduli space in this case is: $\mathcal{M_1} = \mathbb{R}^3 \times S^1$. The first factor is the monopole center of mass; the second factor $S^1$ will provide, after quantization, an electric charge to the monopole by means of its winding number. A two-monopole solution will have, apart from its geometric coordinates and charge, another compact manifold giving it more internal dynamics. This part is called the Atiyah-Hitchin manifold after Atiyah and Hitchin, who were the first to investigate the monopole moduli spaces and compute many of their characteristics: $\mathcal{M_2} = \mathbb{R}^3 \times \frac{S^1 \times \mathcal{M_{AH}}}{\mathbb{Z}_2}$. The knowledge about general Atiyah-Hitchin manifolds is not complete. We can compute their metric and their symplectic structure. It is known that they are hyperkähler, which suggests that they can be quantized in a supersymmetric theory. Some topological invariants are also known. These moduli spaces can be quantized (i.e., associated with Hilbert spaces on which the relevant operators can act), and the resulting theory will be a quantum mechanical theory of the monopoles. For example, for the charge-2 monopole one can in principle find the solutions representing the scattering of the two monopoles. It should be emphasized that this is a quantum mechanical theory and not a quantum field theory. One way to understand that is to let the moduli vary very slowly (although strictly speaking the solutions are only for constant moduli). Then the resulting solutions will correspond to the classical scattering of the monopoles. Basically, one can find the interaction of the monopoles with the usual fields of the theory by expanding the Yang-Mills theory around the monopole solution, then quantize the moduli space. In particular, the Dirac equation in the monopole background has zero modes which can be viewed as particles in the infrared limit. David Bar Moshe This is almost, but not quite, a duplicate of "What tree-level Feynman diagrams are added to QED if magnetic monopoles exist?". In principle quantum electrodynamics includes magnetic monopoles as well as electrons, so yes we do have a theory to describe them.
However we expect monopoles to be many orders of magnitude heavier than electrons, and that causes problems trying to describe both with a perturbative calculation. John Rennie
Prospects for beam-based study of dodecapole nonlinearities in the CERN High-Luminosity Large Hadron Collider Part of a collection: Focus Point on High-Energy Accelerators: Advances, Challenges, and Applications Regular Article E. H. Maclean (ORCID: orcid.org/0000-0002-1578-5176), F. S. Carlier, J. Dilly, M. Le Garrec, M. Giovannozzi & R. Tomás The European Physical Journal Plus volume 137, Article number: 1249 (2022) Nonlinear magnetic errors in low-\(\beta\) insertions can have a significant impact on the beam-dynamics of a collider, such as the CERN Large Hadron Collider (LHC) and its luminosity upgrade (HL-LHC). Indeed, correction of sextupole and octupole magnetic errors in LHC experimental insertions has yielded clear operational benefits in recent years. Numerous studies predict, however, that even correction of nonlinearities up to dodecapole order will be required to ensure successful exploitation of the HL-LHC. It was envisaged during HL-LHC design that compensation of high-order errors would be based upon correction of specific resonances, as determined from magnetic measurements during construction. Experience at the LHC demonstrated that beam-based measurement and correction of the sextupole and octupole errors was an essential complement to this strategy. As such, significant interest also exists regarding the practicality of beam-based observables of multipoles up to dodecapole order. Based on experience during the LHC's second operational run, the viability of beam-based observables relevant to dodecapole order errors in the experimental insertions of the HL-LHC is assessed and discussed in detail in this paper. To achieve the desired collision rates in the High-Luminosity Large Hadron Collider (HL-LHC) [1], beam intensities will be significantly increased relative to LHC operation (achieved via upgrades of the injector chain [2]) and optics will be squeezed to significantly smaller \(\beta ^{*}\) in the experimental insertion regions (IRs) than for LHC operation. This latter requirement necessitates large \(\beta\)-functions in nearby elements of the lattice, notably the quadrupole triplets and the separation/recombination dipoles. Nonlinear errors in these insertion magnets can thus significantly perturb the beam dynamics. Control of such IR-errors during design and construction has been an issue of longstanding concern to the collider community, notably at the Tevatron [3], RHIC [4] and LHC [5,6,7,8]. Beam-based optimisation of lifetime using nonlinear corrector magnets in the experimental IRs yielded operational benefits at the RHIC collider [9, 10], where measurements of feed-down to tune also showed significant promise for compensation of IR errors [11]. Operational benefits were obtained at the LHC through beam-based compensation of sextupole and octupole errors in the ATLAS and CMS insertions [12]. Control of IR-nonlinearities is also a key ingredient in the design and development of the Future Circular Collider [13,14,15] and is expected to represent a challenge for operation of SuperKEKB [16]. In the HL-LHC it is proposed to correct IR-errors up to dodecapole order in the experimental insertions [1]. During the HL-LHC design it was envisaged that such corrections would be calculated based upon magnetic measurements performed during construction, following the procedure described in [6, 17]. This nominal correction strategy is heavily dependent on the quality of the magnetic model [18,19,20].
In practice, at the LHC it was found that discrepancies existed between predictions of the magnetic model of the IR sextupole and octupole errors, and beam-based observations [12, 21]. It is therefore desirable to have beam-based observables up to dodecapole order which could be used to validate (and if necessary refine) corrections. During its second operational run several sessions of LHC beam-time were devoted to testing beam-based observables with a view to dodecapole measurement in HL-LHC, with further experience gained during regular optics commissioning. The aim of this paper is to present the relevant LHC experience, and assess the viability of the techniques for application in the HL-LHC. The structure of this paper is as follows: in Sect. 2 notation and assumptions regarding the expected dodecapole errors in the HL-LHC used in the rest of the paper are introduced, while Sect. 3 summarises briefly the main motivations for correction to such high order. Section 4 discusses the prospect to study dodecapole errors via methods based on amplitude detuning: viability of measuring quadratic detuning directly generated by dodecapoles is assessed via measurement of artificially introduced sources in the LHC, and LHC experience of measuring feed-down to linear (octupole-like) detuning is reviewed and compared to the expected feed-down in HL-LHC. In Sect. 5 LHC experience of measurement of high-order resonance driving terms is reviewed, and compared to expectations in HL-LHC. Finally, Sects. 6 and 7 present the prospect to measure changes in dynamic aperture of free motion and forced-oscillations, due to the dodecapole errors expected in HL-LHC, based upon measurements of artificially introduced sources in the LHC. A technical report providing further details of the studies in this paper is available at [22]. Expected dodecapole errors in HL-LHC A key element of the high-luminosity LHC upgrade (HL-LHC) will be replacement of existing triplet quadrupoles with larger aperture magnets, allowing for a baseline optics squeeze to \(\beta ^{*}=0.15\,\text {m}\) (the LHC currently operates with an ultimate squeeze in the range \(\beta ^{*}=0.3\,\text {m} - 0.25\,\text {m}\)). Dodecapole errors in the new HL-LHC triplets are expected to be the dominant source of \(b_{6}\) in the HL-LHC during operation with squeezed beams. Target values for dodecapole errors in the triplets were initially specified as a systematic value of \(b_{6} = -0.64\,\text {units}\) in the body of the magnet together with a random component \(\sigma (b_{6}) = 1.1\,\text {units}\), where a dimensionless '\(\text {unit}\)' of the multipole error (\(b_{\mathrm{n}}\), of multipole order \(\mathrm{n}\)) is defined relative to the main field of the magnet (\(B_{\mathrm{N,main}}\), of multipole order \(\mathrm{N}\)) at a reference radius (\(R_{\mathrm{ref}}\), equal to \(0.05\,\text {m}\) in the HL-LHC triplets) $$\begin{aligned} b_{\mathrm{n}} [\text {unit}]&= \frac{B_{\mathrm{n}}}{B_{\mathrm{N,main}}}R_{\mathrm{ref}}^{n-N} \times 10^{-4} \end{aligned}$$ where the field gradients (\(B_{\mathrm{n}}\)) and normalised field strength (\(K_{\mathrm{n}}\)) are defined $$\begin{aligned} B_{n}\,[\text {Tm}^{1-n}]&= \frac{1}{(n-1)!}\frac{\partial ^{n-1} B_{y}}{\partial x^{n-1}} \\ K_{n}\,[\text {m}^{-n}]&= \frac{(n-1)!}{B\rho }\,B_{n} \end{aligned}$$ and \(1/(B\rho )\) is the beam rigidity. Skew multipoles are similarly denoted \(a_{n}\).
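To make the conventions above concrete, the short Python sketch below evaluates the standard binomial re-expansion of a single normal dodecapole about a displaced closed orbit, which is the mechanism behind the feed-down effects discussed in the following sections. The values b6 = -4 units and R_ref = 0.05 m are taken from the text above, whereas the 2 mm horizontal offset is an arbitrary illustrative assumption; the sketch shows only the generic feed-down rule and is not a reproduction of the HL-LHC correction procedure of [17].

```python
# Minimal sketch (illustrative only): feed-down of a normal dodecapole (b6) to
# lower-order multipoles when the orbit is displaced by dz = dx + i*dy inside the
# magnet, via the binomial re-expansion of (z + dz)^(n-1) about the displaced axis.
from math import comb

def feed_down(b_n, n, dz, r_ref):
    """Effective multipoles of order m <= n generated by a normal b_n (in 'units')
    for a complex orbit offset dz [m]; returns {m: (normal part, skew part)}."""
    effective = {}
    for m in range(1, n + 1):
        coeff = b_n * comb(n - 1, m - 1) * (dz / r_ref) ** (n - m)
        effective[m] = (coeff.real, coeff.imag)  # skew-part sign is convention dependent
    return effective

R_REF = 0.05    # reference radius of the HL-LHC triplets [m] (value quoted above)
B6 = -4.0       # pessimistic systematic dodecapole error [units] (value quoted above)
DZ = 2e-3 + 0j  # 2 mm horizontal orbit offset (purely illustrative assumption)

result = feed_down(B6, 6, DZ, R_REF)
for m in range(6, 0, -1):
    b_m, a_m = result[m]
    print(f"order {m}: normal = {b_m:+.5f} units, skew = {a_m:+.5f} units")
```

With an orbit offset in either transverse plane the dodecapole feeds down to, among other terms, a normal octupole-like component, which is the origin of the linear amplitude detuning generated by the crossing scheme discussed in Sect. 3.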
Recent work on the development of HL-LHC triplet magnets suggested dodecapole errors in HL-LHC could exceed the original target [23,24,25], though significant work is also still underway to improve magnet designs [24, 25]. A more pessimistic expectation of the normal dodecapole errors can be taken to be a systematic value of \(b_{6} = -4\,\text {units}\) in the body of the magnet, with an unchanged random part (\(\sigma (b_{6}) = 1.1\,\text {units}\)). Expectations for decapole and lower-order multipoles can be found in [23,24,25]. Motivation for correction to dodecapole order in HL-LHC A major concern over correction of IR-nonlinearities in HL-LHC arises from potential reduction of beam-lifetime and dynamic aperture (DA). DA is defined as the extent of the phase-space volume in which particle motion remains bounded (see, e.g. [26] and references therein for more detailed discussion). Numerous simulation-based studies predict that correction to dodecapole order is necessary to guarantee sufficient DA for effective exploitation of the collider [27,28,29]. As an example, Fig. 1 shows the reduction of the simulated DA after \(10^{6}\,\text {turns}\) (about \(1.5\,\text {minutes}\)) if dodecapole errors (with systematic \(b_{6} = -4\,\text {units}\)) in the ATLAS and CMS insertion triplets were left uncorrected. Simulation was performed for the baseline HL-LHC configuration at end-of-squeeze (\(\beta ^{*}=0.15\,\text {m}\)) including the beam-beam interactions, with nominal correction of all other linear and nonlinear errors (following the procedure [17]). Values shown are the minimum DA over sixty instances ('seeds') of the magnetic model to account for uncertainties in the errors. Failure to correct the normal dodecapole errors leads to a substantial (\(\approx 25\,\%\)) reduction in predicted DA, which poses a risk to productive operation. The impact of uncorrected dodecapole sources on operation of the HL-LHC with colliding beams has been discussed in detail in [30]. Simulated DA of the HL-LHC in collision at end-of-squeeze (\(\beta ^{*}=0.15\,\text {m}\), with beam-beam included), with (blue) and without (red) correction of normal dodecapole errors in the ATLAS and CMS insertion triplets In addition to concerns over dynamic aperture, feed-down from the high-order errors can also represent a challenge to HL-LHC operation. To stabilise non-colliding beams during operation with high intensity, the HL-LHC will operate with amplitude detuning purposefully introduced via Landau octupole magnets in the arcs. Limited margin will be available in the Landau octupole strength to maintain beam-stability while also maintaining sufficient dynamic aperture [31]. Existing estimates [31] anticipate a tolerance on detuning from the IR-errors of \(\approx 12\times 10^{3}\,{\hbox {m}^{-1}}\). In the absence of limitation from the cryogenic system, ultimate HL-LHC scenarios foresee collisions beginning from \(\beta ^{*}= 0.4\,\text {m}\): Fig. 2 shows predictions of linear amplitude-dependent tune shifts generated by feed-down from decapole and dodecapole errors in the ATLAS and CMS IRs upon introduction of a \(190\,\mu \text {rad}\) crossing-scheme at \(\beta ^{*}=0.4\,\text {m}\) in the HL-LHC (where the pessimistic estimate of a systematic \(b_{6}=-4\,\text {units}\) has been considered). In the absence of correction up to dodecapole order the feed-down to linear amplitude detuning can significantly exceed the expected \(12\times 10^{3}\,{\hbox {m}^{-1}}\) margin for the Landau octupoles (indicated in purple in Fig. 
2), though this issue will be significantly alleviated during early years of HL-LHC operation, and non-ultimate scenarios, by starting collisions at higher \(\beta ^{*}\). Predicted detuning generated by decapole and dodecapole feed-down upon application of \(190\,\mu \text {rad}\) crossing-scheme in the HL-LHC at \(\beta ^{*}=0.4\,\text {m}\). Histograms are shown before (red) and after (green) application of decapole and dodecapole corrections in the ATLAS and CMS insertions. Dodecapole errors are considered for a systematic \(b_{6}=-4\,\text {units}\) together with a random component of \(1.1\,\text {unit}\) It has also been observed in the LHC that amplitude detuning at the level of \(40\times 10^{3}\,\mathrm{m}^{-1}\) caused a substantial degradation to performance of the online tune and coupling measurement [12]. This posed an obstacle even to commissioning of the linear optics in the LHC during 2016 [12] (via techniques such as K-modulation). In the most pessimistic cases considered in Fig. 2 a comparable detuning can be generated through feed-down (increasing for larger crossing-angles and smaller \(\beta ^{*}\) configurations). Maintaining the performance of beam-instrumentation represents a further motivation for correction up to dodecapole order in HL-LHC. To facilitate local correction of nonlinear errors in the experimental insertions, dedicated correctors are located on the left and right sides of each experimental IR. In the LHC correctors exist for normal/skew sextupole, normal/skew octupole, and normal dodecapole errors. In the HL-LHC additional correctors will be available for normal/skew decapole and skew dodecapole errors. Figure 3 displays a schematic of one side of an LHC experimental IR. The nonlinear correctors in the LHC and HL-LHC are located on the non-IP side of the \(\text {Q3}\) triplet quadrupole (location \(\text {C3}\) in Fig. 3). Further details regarding the lattice and the corrector magnets may be found in [5, 17, 29, 32]. During several studies presented in this paper dodecapole errors were artificially introduced into the LHC lattice: in all cases this was done using the \(b_{6}\) correctors in the experimental IRs. Linear and nonlinear corrector layout in LHC experimental IRs [17] Amplitude detuning Amplitude detuning is the variation of tune as a function of Courant-Snyder invariant (\(\epsilon _{x,y}\)) or particle action (\(J_{x,y}\), with \(\epsilon _{x,y} = 2J_{x,y}\)). It is described as a Taylor expansion about the unperturbed tune, $$\begin{aligned} Q_{z}(\epsilon _{x},\epsilon _{y})\ &=\ Q_{z0} \ \ + \ \ \frac{\partial Q_{z}}{\partial \epsilon _{x}}\epsilon _{x}+\frac{\partial Q_{z}}{\partial \epsilon _{y}}\epsilon _{y} \ \ + \ \ \frac{1}{2!}\left( \frac{\partial ^2 Q_{z}}{\partial \epsilon _{x}^2}\epsilon _{x}^2+2\frac{\partial ^2 Q_{z}}{\partial \epsilon _{x}\partial \epsilon _{y}}\epsilon _{x}\epsilon _{y}+\frac{\partial ^2 Q_{z}}{\partial \epsilon _{y}^2}\epsilon _{y}^2\right) +... \end{aligned}$$ where \(z=x,y\). For the purpose of this paper, terms \(\frac{\partial ^{(1)}Q}{\partial \epsilon ^{(1)}}\) are denoted as 'linear detuning coefficients' and \(\frac{\partial ^{(2)}Q}{\partial \epsilon ^{(2)}}\) the 'quadratic detuning coefficients'. Terms as in Eq. (3) which depend only on the horizontal, or only on the vertical planes (e.g. \(\frac{\partial Q_{x}}{\partial \epsilon _{x}}\) and \(\frac{\partial Q_{y}}{\partial \epsilon _{y}}\)) are denoted as the 'direct detuning coefficients'. 
Terms which depend on both the horizontal and vertical planes (e.g. \(\frac{\partial Q_{x}}{\partial \epsilon _{y}} = \frac{\partial Q_{y}}{\partial \epsilon _{x}}\)) are denoted as 'cross-term detuning coefficients'. To first order in the multipole strength a linear detuning is generated by normal octupole fields, and a quadratic detuning by normal dodecapole fields. Higher-order multipoles can contribute to a given detuning order through feed-down: thus a normal dodecapole directly generates quadratic detuning and can generate linear detuning via feed-down to a normal octupole field. Lower-order and skew multipoles may also contribute to a given detuning order through perturbations of higher order in the multipole strength [33]. Beam-based measurement of the high-order multipoles in HL-LHC will only take place after correction of sextupole and octupole errors [34], at which point any contributions from such lower-order multipoles are predicted to be negligible compared to the contribution to linear detuning from feed-down and the contribution to quadratic detuning directly from the dodecapoles. Amplitude detuning at top energy in the LHC is measured using an AC-dipole, which excites driven oscillations of the beam that can be ramped up and down adiabatically, allowing repeated excitation of the same bunch [35,36,37]. This is in contrast to measurement via single kicks, which can be employed at injection [38] but is impossible to apply at top-energy due to constraints from machine protection and practical limitations due to bunch decoherence [18]. Detuning measured via AC-dipole differs from that of free oscillations according to a well-defined relation [39]; in particular, direct detuning coefficients are enhanced by a factor which varies according to the detuning order. Where measured detuning coefficients are quoted in this paper the effect of the driven oscillation has been compensated to give the equivalent free oscillation coefficients. Further details of the amplitude detuning measurement techniques are provided in [22, 39]. Quadratic amplitude detuning from normal dodecapoles (\(b_{6}\)) Figure 4 shows the magnitude of quadratic detuning anticipated for the HL-LHC at \(\beta ^{*}=0.15\,\text {m}\), flat-orbit. Two distributions are shown, for the original target error specification (with systematic \(b_{6}=-0.64\,\text {units}\), blue) and for the more pessimistic estimate based on early magnet designs (with systematic \(b_{6}=-4\,\text {units}\), red). Histograms are shown over sixty instances of the model to account for uncertainties in the errors. The anticipated magnitude of the detuning is dominated by the uncorrected normal-dodecapole errors in the ATLAS and CMS IRs. Predicted quadratic detuning coefficients for the HL-LHC at \(\beta ^{*}=0.15\,\text {m}\). Histograms are shown over sixty instances of the model to account for uncertainties in the errors. Two configurations of the systematic part of the normal dodecapole error in the magnet body are considered, the HL-LHC target value (systematic \(b_{6}=-0.64\,\text {units}\), blue) and a more pessimistic expectation based on early magnet designs (systematic \(b_{6}=-4\,\text {units}\), red) Amplitude detuning measurements with AC-dipole have become routine in the LHC at top-energy as a way to study octupole errors [12]. During this time no quadratic variation of the tune shift with amplitude was observed. 
To test the prospect for measurement of quadratic detuning in HL-LHC therefore, dodecapole correctors in the LHC experimental IRs were used to increase the \(b_6\) content of the ATLAS and CMS insertions during dedicated machine tests at \(\beta ^{*}=0.4\,\text {m}\). The four dodecapole correctors were uniformly powered to a strength of \(K_{6}=24\,950\,{\hbox {m}}^{-6}\), generating a predicted quadratic detuning of \(\frac{\partial ^{2}Q_{x}}{\partial \epsilon _{x}^{2}}=-4.5\times 10^{12}\,{\hbox {m}}^{-2}\) (in the LHC at \(\beta ^{*}=0.4\,\text {m}\)). This quadratic detuning is representative of that anticipated in the HL-LHC at \(\beta ^{*}=0.15\,\text {m}\) (as seen in Fig. 4). Sextupole and octupole corrections determined during commissioning [12] were applied. A detailed description of the study is found in [40]. Having increased the LHC dodecapole content in this manner, amplitude detuning measurements with AC-dipole were attempted for the horizontal action. Figure 5 shows the outcome of the detuning measurement. Data shown in the plot corresponds to the difference between the natural tune determined from spectral analysis of the AC-dipole excitation, and the unkicked tune measured in the LHC BBQ [41, 42]. In the absence of enhanced \(b_{6}\), no second-order detuning is observed in the LHC. With enhanced \(b_{6}\) however, a quadratic component to the variation of tune with amplitude can be observed. Measurement of tune change with action of AC-dipole excitation, with and without an artificially enhanced dodecapole content of the ATLAS and CMS insertions Table 1 shows the quadratic detuning coefficient and \(\chi ^{2}_{\mathrm{red}}\) statistic determined from fits of the measurement with the enhanced \(b_{6}\) sources in Fig. 5 (black/red). The expected value from the model is also shown. Attempting to fit the measurement with only a linear detuning returned a \(\chi ^{2}_{\mathrm{red}}\) statistic which was significantly worse than fits including the quadratic term, demonstrating the identification of quadratic detuning with the enhanced \(b_{6}\) sources. Table 1 Second-order detuning coefficients and reduced chi-squared statistics obtained from fits to the AC-dipole detuning data. The expected second-order detuning coefficient obtained from \({\hbox {PTC}}\_{\hbox {NORMAL}}\) [43] for the applied powering of MCTX is also shown The measured quadratic detuning agrees with the expected value within the standard error on the fitted coefficient, and within \(20\,\%\) of the expected value. The achievable uncertainty on the measurement of \(\sigma \le 1\times 10^{12}\,{\hbox {m}}^{-2}\) can be compared to the predicted HL-LHC quadratic detuning in Fig. 4, and implies that good precision on the global IR1\(+\)IR5 \(b_6\) correction should be achievable at end-of-squeeze for the expected dodecapole errors, particularly for scenarios with strong \(b_{6}\) sources which are of greatest relevance to operation. Conventional detuning measurements based upon single-kicks cannot be applied in the HL-LHC at top energy. The results presented in this section represent a first demonstration of measurement of quadratic detuning with driven oscillations at top energy in the LHC. Results were consistent with predictions of the model for a well defined \(b_{6}\) source introduced using dodecapole correctors in the ATLAS and CMS insertions. 
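A minimal sketch of the type of fit used to identify the quadratic component is given below. The data are synthetic: the quadratic coefficient is set to the \(-4.5\times 10^{12}\,{\hbox {m}}^{-2}\) value quoted above, while the action range and noise level are illustrative assumptions. The sketch simply compares weighted polynomial fits with and without a quadratic term via the reduced chi-squared, as described for the measurement.

```python
import numpy as np

def fit_detuning(action, tune_shift, sigma, order):
    """Weighted polynomial fit of tune shift vs action, returning the fitted
    coefficients, their standard errors, and the reduced chi-squared."""
    coeffs, cov = np.polyfit(action, tune_shift, deg=order, w=1.0 / sigma, cov=True)
    residuals = tune_shift - np.polyval(coeffs, action)
    ndof = len(action) - (order + 1)
    chi2_red = np.sum((residuals / sigma) ** 2) / ndof
    return coeffs, np.sqrt(np.diag(cov)), chi2_red

# Synthetic data: a quadratic detuning of -4.5e12 m^-2 (the value quoted above)
# plus a small linear term, with 1e-4 tune noise. Action range and noise level
# are illustrative only, not LHC measurement data.
rng = np.random.default_rng(1)
eps_x = np.linspace(2e-9, 1e-8, 12)                      # 2J_x [m]
true_dq = -4.5e12 * eps_x**2 + 2.0e4 * eps_x
meas_dq = true_dq + rng.normal(0.0, 1e-4, eps_x.size)
sigma_dq = np.full(eps_x.size, 1e-4)

for order in (1, 2):                                     # linear-only vs quadratic fit
    coeffs, errs, chi2_red = fit_detuning(eps_x, meas_dq, sigma_dq, order)
    print(f"order {order}: coefficients {coeffs}, reduced chi2 = {chi2_red:.2f}")
```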
The technique shows sufficient precision to provide a direct quantitative probe for \(b_{6}\) errors in experimental insertions of the HL-LHC at end-of-squeeze. Feed-down to linear amplitude detuning from normal dodecapoles and crossing-angle orbit bumps Measurement of quadratic detuning described above appears a promising technique, but suffers from two weaknesses. As a global variable it does not distinguish locally between errors in the ATLAS and CMS insertions. Secondly, given the strong scaling of the quadratic detuning with \((\beta ^{*})^{-3}\), it is only expected to be measurable at very low-\(\beta ^{*}\), such as the \(\beta ^{*}=0.15\,\text {m}\) optics considered in Fig. 4. Feed-down to linear detuning provides a potential observable for both decapole and normal dodecapole errors, and can overcome some of the weaknesses of a quadratic detuning measurement. Crossing-angles can be varied independently in the IRs providing a local observable, and due to weaker scaling with \((\beta ^{*})^{-2}\) it may be a viable observable earlier in the HL-LHC squeeze. The feed-down is also of direct operational relevance as discussed in Sect. 3. In the LHC multiple measurements of shifts to linear detuning with changes in crossing-scheme have been performed, both to test measurement viability and to study higher-order errors in the existing triplets. An example of one such study is shown in Fig. 6, which shows a detuning measurement performed at flat-orbit (blue), with IR1 and IR5 crossing-angles applied at \(160\,\mu \text {rad}\) (black), and with only the crossing-angle in IR5 applied at \(160\,\mu \text {rad}\) (red). Measurements were performed in 2018 at \(\beta ^{*}=0.3\,\text {m}\). At flat-orbit the detuning is consistent with zero (normal octupole corrections having been applied). Upon application of crossing-angles in the IRs clear shifts to the linear detuning can be observed, indicating the presence of errors of decapole or higher-order in the existing LHC IRs. Table 2 presents the direct detuning coefficients measured with crossing-angle bumps applied in Fig. 6. Example of amplitude detuning measurements performed for various configurations of the crossing-scheme in the LHC at \(\beta ^{*}=0.3\,\text {m}\) Table 2 Direct detuning coefficients measured in the LHC at \(\beta ^{*}=0.3\,\text {m}\) with the full LHC crossing-scheme applied and with only the IR5 crossing-scheme applied Figure 6 and Table 2 illustrate that high-quality measurements of the linear detuning can be performed in the LHC even when large crossing-angles are applied in the IRs, and that feed-down to linear detuning can be measured at high-energy using driven oscillations with an AC-dipole. A review of all detuning measurements performed at top energy in the LHC is provided in [44], and prospects for correction in the LHC are discussed in [45]. Figure 7 presents a histogram of the standard errors obtained from all successful linear detuning measurements performed throughout the LHC's second Run, for measurements performed at flat-orbit (blue) and with crossing-angles applied in the IRs (red). No distinction is made between measurements of different detuning terms or between the LHC beams since all show comparable measurement uncertainties. Studies performed with a crossing-scheme applied achieved a comparable measurement quality to that obtained at flat-orbit. 
Histograms of standard errors on linear detuning coefficients measured in the LHC, measurements performed with flat-orbit (blue) and with some crossing-scheme applied (red) It is desired to extrapolate this LHC experience to the HL-LHC, in order to consider the viability of measuring feed-down to amplitude detuning at both \(\beta ^{*}=0.4\,\text {m}\) (representing a \(\beta ^{*}\) of operational relevance to the early years of HL-LHC operation) and \(\beta ^{*}=0.15\,\text {m}\) (end-of-squeeze: where feed-down effects will be most significant). To this end a series of simulated measurements were considered at each \(\beta ^{*}\). Each simulated measurement consisted of a scan of the crossing-angle (individually for each IR) over a \(\pm 250\,\mu \text {rad}\) range (\(250\,\mu \text {rad}\) is the nominal HL-LHC crossing-scheme), with each scan consisting of linear detuning measurements at five different crossing-angles. A total of 6000 simulated scans were performed for each \(\beta ^{*}\) and IR studied. The linear and quadratic variations of the detuning coefficients were determined from PTC [43] models of the HL-LHC with a systematic \(b_{6}=-4\,\text {units}\) and a random \(\sigma _{{\mathrm{b}}_{6}}=1.1\,\text {units}\). Sixty instances ('seeds') of the errors were considered to account for the random \(b_{6}\) component. For each seed of the magnetic model 100 instances of random Gaussian errors were applied to the individual detuning points in each scan (giving the total of 6000 simulated scans). To generate the random errors on the crossing-angle and detuning values at each point in the simulated scans, the LHC precision of the crossing-angle was taken (\(\sigma _{\mathrm{xing}}=10\,\mu \text {rad}\)), and the precision in the detuning measurement was taken to be \(\sigma _{\mathrm{detuning}}=4000\,{\hbox {m}}^{-1}\) (based on Fig. 7). These values were truncated at \(3\,{\upsigma }\). For each of the 6000 scans the linear and quadratic variation of detuning vs crossing-angle was determined from a polynomial fit of the five measurement points and compared to the expected values from the PTC models. Figure 8 shows two examples of the simulated scans of detuning vs crossing-angle at \(\beta ^{*}=0.4\,\text {m}\), corresponding to different seeds of the magnetic model, and different random Gaussian errors applied to the individual crossing-angle and detuning values. The detuning vs crossing-angle in the model is shown in blue, the simulated measurement in red, and polynomial fits to simulated measurements in orange. The left and right plots show particularly good and bad instances of the artificial measurement data, respectively. Two examples of artificial measurements of linear detuning versus crossing-angle in IR5 at \(\beta ^{*}=0.4\,\text {m}\), corresponding to different seeds of the magnetic model, and different random Gaussian errors applied to the individual crossing-angle and detuning values. The true variation of detuning from the PTC model is shown in blue, and the artificial measurement in red Decapole errors can be quantified by the linear variation of detuning with crossing-angle and normal dodecapole errors can be quantified by the quadratic variation. Figure 9 shows histograms over the 6000 simulated scans, of the true linear (left, pale blue) and quadratic (right, pale blue) variations of the vertical direct detuning coefficient with IR5 crossing-angle, as obtained from the PTC models. 
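To make the scan-and-fit procedure just described concrete, the sketch below simulates one such five-point crossing-angle scan and the recovery of the linear (decapole-like) and quadratic (dodecapole-like) terms by a polynomial fit. The assumed error levels follow the values quoted above; the 'true' feed-down coefficients are placeholders rather than values taken from the HL-LHC model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed "true" variation of a linear detuning coefficient with crossing-angle
# for one seed: dQ/d(eps) = p2*theta^2 + p1*theta + p0. Placeholder coefficients.
p_true = np.array([1.0, 60.0, 0.0])          # [m^-1 urad^-2, m^-1 urad^-1, m^-1]

theta_scan = np.linspace(-250.0, 250.0, 5)   # five crossing-angles per scan [urad]
sig_theta, sig_det = 10.0, 4000.0            # error levels quoted in the text

def truncated_normal(sigma, size):
    """Gaussian errors truncated at 3 sigma, as in the procedure described above."""
    return np.clip(rng.normal(0.0, sigma, size), -3 * sigma, 3 * sigma)

fitted = []
for _ in range(100):                          # 100 noise realisations for this seed
    theta_true = theta_scan + truncated_normal(sig_theta, theta_scan.size)
    det_meas = np.polyval(p_true, theta_true) + truncated_normal(sig_det, theta_scan.size)
    fitted.append(np.polyfit(theta_scan, det_meas, deg=2))
fitted = np.array(fitted)

# Spread of the recovered quadratic (dodecapole-like) and linear (decapole-like) terms
print("quadratic term: true", p_true[0], "fitted", fitted[:, 0].mean(), "+/-", fitted[:, 0].std())
print("linear term   : true", p_true[1], "fitted", fitted[:, 1].mean(), "+/-", fitted[:, 1].std())
```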
The true variations shown in Fig. 9 can be compared to the difference between the artificially measured and true values (dark blue). Measurement based on feed-down is viable if the measurement discrepancy (dark blue) is small compared to the expected value (pale blue). Histograms of expected linear (left) and quadratic (right) variations of the vertical direct detuning coefficient as a function of the crossing-angle in IR5, for models of the HL-LHC at \(\beta ^{*}=0.4\,\text {m}\) (pale blue). The difference between the modelled linear and quadratic variations and those determined from artificial crossing-scan measurements is shown in dark blue For the decapole (linear) feed-down the distribution of the measurement discrepancy (Fig. 9 left, dark blue) is comparable to the uncorrected value (Fig. 9 left, pale blue). This implies that at \(\beta ^{*}=0.4\,\text {m}\) the crossing-scan technique lacks sufficient precision to measure the decapole errors. By contrast, the measurement discrepancy in the quadratic variation of detuning with crossing-angle (Fig. 9 right, dark blue) is significantly smaller than the expected values (Fig. 9 right, pale blue) for uncorrected dodecapole errors, which implies that at \(\beta ^{*}=0.4\,\text {m}\) the crossing-scan technique does have sufficient precision to measure the normal dodecapole errors. This reflects that, for this \(\beta ^{*}\) and configuration of the errors, feed-down is dominated by the dodecapole contribution and the decapole contribution is too small to measure reliably. At \(\beta ^{*}=0.15\,\text {m}\) the expected feed-down to linear detuning increases significantly (due to the larger \(\beta\) functions in the IR magnets), and the relative precision of the feed-down scan measurements increases correspondingly. Figure 10 compares distributions, over another 6000 simulated scans at \(\beta ^{*}=0.15\,\text {m}\), of the true linear and quadratic variations with crossing-angle (pale blue) to distributions of the measurement discrepancy (dark blue). At this smaller \(\beta ^{*}\) the precision is sufficient to study both the decapole and dodecapole components of the feed-down. Histograms of expected linear (left) and quadratic (right) variations of the vertical direct detuning coefficient as a function of the crossing-angle in IR5, for models of the HL-LHC at \(\beta ^{*}=0.15\,\text {m}\) (pale blue). The difference between the modelled linear and quadratic variations and those determined from artificial crossing-scan measurements is shown in dark blue While improved precision of the crossing-angle scans can be obtained by going to smaller \(\beta ^{*}\), measurement at \(\beta ^{*}\approx 0.4\,\text {m}\) will still be of interest in the initial years of HL-LHC operation (before the full \(\beta ^{*}\)-reach of the HL-LHC is achieved). Of particular concern is whether the feed-down can be measured with good precision compared to the tolerance on residual amplitude detuning defined by the available margin in the Landau octupoles, as presented in Sect. 3 (\(12\times 10^{3}\,{\hbox {m}}^{-1}\)). To assess this, fitted polynomial terms from the 6000 simulated scans at \(\beta ^{*}=0.4\,\text {m}\) were used to infer the detuning value at a crossing-angle of \(190\,\mu \text {rad}\) and \(\beta ^{*}=0.4\,\text {m}\). Figure 11 shows the discrepancy between the true direct detuning terms and those obtained from simulated crossing-angle scans. 
Difference obtained at \(\beta ^{*}=0.4\,\text {m}\), \(190\,\mu \text {rad}\), between the direct detuning value via fits to simulated measurement of detuning vs crossing-angle, versus the true value obtained from the model. A histogram is shown for simulated crossing-scan measurements with detuning measurements at 5 individual crossing-angles and applied random detuning errors of \(\sigma _{\mathrm{detuning}}=4000\,{\hbox {m}}^{-1}\) (blue) and \(\sigma _{\mathrm{detuning}}=2000\,{\hbox {m}}^{-1}\) (red). A histogram is also shown for simulated crossing-scan measurements with detuning measurements at 10 individual crossing-angles and applied random detuning errors of \(\sigma _{\mathrm{detuning}}=4000\,{\hbox {m}}^{-1}\) (black) The precision of the detuning variation due to feed-down, as inferred from the simulated crossing-angle scans, is equivalent to \(\approx 10\times 10^{3}\,{\hbox {m}}^{-1}\) in the worst cases, which is within the available detuning margin, and is significantly better than the level at which detuning was observed to impact the performance of beam instrumentation (as discussed in Sect. 3). Under the assumption of random measurement errors, better precision could also be achieved via reduction in the uncertainty on the individual detuning measurements (Fig. 11, red), which considering Fig. 7 represents an optimistic but achievable target, and by increasing the number of scanned crossing-angles (Fig. 11, black). A potential weakness of this method is that to measure feed-down to linear detuning vs crossing-angle multiple high-quality detuning measurements are necessary. Detuning measurements such as that seen in black in Fig. 6 necessitate many AC-dipole excitations. In the mentioned example about 20 kicks were performed at varying amplitudes, taking between 30 and 60 minutes. Scaling this to a realistic HL-LHC measurement scenario would constitute a significant investment of beam-time (approximately \(800\,\text {kicks}\), requiring 20–40 hours of measurements [22, 34]). LHC experience has shown that high-quality detuning measurements using forced oscillations with an AC-dipole could be consistently obtained even with large crossing-schemes applied, and measurement of feed-down to linear detuning coefficients in experimental insertions has been demonstrated at top-energy. On the basis of this LHC experience, measurement of feed-down to the linear detuning does appear to be a viable observable to quantitatively study the nonlinear errors up to dodecapole order in the HL-LHC. Resonance driving terms (RDT) The baseline correction strategy for nonlinear errors in LHC and HL-LHC experimental IRs assumes local minimisation of selected resonances [6, 17, 46, 47]. The prospect for direct beam-based measurement and correction of resonance driving terms (RDT) is therefore of interest. Measurement via the conventional approach (spectral analysis following single kicks) is not viable at top-energy in LHC or HL-LHC due to restrictions from machine protection (relating to the risk of a superconducting magnet quench), limits from available kicker strength, and practical limitations due to the decoherence of the bunches following single kicks [18, 34]. 'Forced RDTs' of oscillations driven with the AC-dipole, however, can be considered [36, 48]. Such AC-dipole-based measurements have been employed extensively in the LHC to validate sextupole and octupole corrections [12], and to directly determine skew-octupole corrections [48]. 
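Forced RDTs are extracted from harmonic analysis of turn-by-turn BPM data, in which a given RDT manifests as a spectral line at a combination of the tunes (for example the \((0,4Q_{y})\) line associated with a normal-decapole RDT, discussed below). A minimal sketch of picking out such a line from a synthetic signal is shown here; the tunes, amplitudes and noise level are placeholders, not LHC data.

```python
import numpy as np

# Synthetic turn-by-turn signal: a strong driven-tune line plus a weak 4*Qy
# combination line of the kind associated with a normal-decapole RDT.
n_turns = 6600
qx, qy = 0.268, 0.325
turns = np.arange(n_turns)
rng = np.random.default_rng(0)
x = (1.0e-3 * np.cos(2 * np.pi * qx * turns)
     + 2.0e-6 * np.cos(2 * np.pi * (4 * qy) * turns + 0.7)
     + rng.normal(0.0, 1.0e-6, n_turns))      # BPM noise

def line_amplitude(signal, freq):
    """Amplitude of the spectral line at 'freq' (in tune units), using a Hann
    window to suppress leakage from nearby strong lines."""
    n = np.arange(signal.size)
    window = np.hanning(signal.size)
    return 2.0 * np.abs(np.sum(signal * window * np.exp(-2j * np.pi * freq * n))) / window.sum()

print("main line (Qx):", line_amplitude(x, qx))
print("4*Qy line     :", line_amplitude(x, (4 * qy) % 1.0))
```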
A review of the methodology for AC-dipole based RDT measurement in the LHC is provided in [48]. Dodecapole RDTs have never been successfully identified or measured in beam-based studies of the LHC. This includes occasions where \(b_{6}\) was enhanced to generate perturbations on a scale anticipated at HL-LHC end-of-squeeze (such as the quadratic detuning studies in Sect. 4.1, and [49]). Without significant hardware improvements (for example reductions to BPM noise, or upgrades to increase AC-dipole excitation length and maximum amplitude) it does not appear viable to measure dodecapole RDTs with existing measurement procedures. Forced normal- and skew-decapole RDTs were, however, measured for the first time in the LHC at top-energy in 2018 [48]. Figure 12 shows an example of the horizontal spectrum obtained from large amplitude AC-dipole kicks in the LHC at \(\beta ^{*}=0.3\,\text {m}\) (for non-colliding beams, with no enhanced error sources). Spectral lines corresponding to normal decapole RDT \(f_{1004}\) and skew decapole RDT \(f_{1130}\) can be identified at frequencies \((0,4 Q_{y})\) (highlighted in orange) and \((Q_{x},-3 Q_{y})\) (highlighted in pink), respectively. The spectral line corresponding to the tune of the free oscillation is shown in black. Example of tune spectrum obtained from a large amplitude AC-dipole excitation in the LHC at \(\beta ^{*}=0.3\,\text {m}\), showing visible spectral components corresponding to normal and skew decapole resonances, highlighted in orange and pink, respectively The crossing-scheme in the experimental insertions of the HL-LHC may generate feed-down from normal- and skew-dodecapole errors to RDTs of lower-order multipoles. In fact, changes to octupole and decapole forced RDT strength with crossing-angle have already been observed in the LHC. Figure 13 (left) shows measurements of the forced normal-decapole RDT \(|f_{0140}|\) at flat-orbit (blue) and with a horizontal crossing-angle (\(-160\,\mu \text {rad}\)) introduced in the IR5 (CMS) insertion (red). Figure 13 (right) shows measurements of the skew-octupole RDT \(f_{1210}\) at flat-orbit and with \(145\,\mu \text {rad}\) crossing-angles applied in both the ATLAS (IR1) and CMS (IR5) insertions. Forced RDT amplitude is plotted for BPMs in the LHC arcs, and feed-down to linear coupling and tune were corrected between measurements at flat-orbit and with crossing-angles applied. Left: measurement of forced decapole RDT \(|f_{0140}|\) performed during 2022 LHC commissioning at \(\beta ^{*}=0.3\,\text {m}\). Measurements were performed for flat-orbit (blue) and with a \(-160\,\mu \text {rad}\) horizontal crossing-angle orbit bump applied only in IR5 (red). Right: measurement of forced skew-octupole RDT \(f_{1210}\) performed during dedicated machine tests in 2018 at \(\beta ^{*}=0.3\,\text {m}\). Measurements were performed for flat-orbit (teal) and with a \(145\,\mu \text {rad}\) crossing-scheme applied in IR1 and IR5 (red) Skew-octupole and decapole-order RDTs have now been measured on multiple occasions in the LHC at top-energy. Table 5 in Appendix A summarises amplitudes and uncertainties of relevant forced RDT measurements from the LHC in 2018 [48] and 2022. Based on this experience, Table 3 details minimum RDT amplitudes successfully measured to date and gives estimates for achievable RDT measurement uncertainties. 
Table 3 Minimum RDT amplitudes successfully measured in LHC, and typical measurement uncertainties, for selected skew-octupole, skew-decapole, and normal decapole RDTs Figure 14 shows the predicted feed-down from dodecapole-order errors to selected lower-order RDTs, expected in the HL-LHC at end-of-squeeze, due to introduction of a \(250\,\mu \text {rad}\) vertical crossing-angle in the IR5 (CMS) insertion (sixty seeds of the magnetic model are considered). Similar results are obtained for the IR1 (ATLAS) insertion. Predicted feed-down to selected RDTs in the HL-LHC at \(\beta ^{*}=0.15\,\text {m}\), upon introduction of a \(250\,\mu \text {rad}\) vertical crossing-angle in the IR5 (CMS) insertion. Values shown are the change in mean RDT amplitude in the arc BPMs, compared to the value expected at flat-orbit. Histograms are shown over sixty seeds of the magnetic model to account for uncertainty in the expected errors. Plots show feed-down from skew-dodecapole errors to normal-decapole RDT \(f_{0140}\) (left), feed-down from normal-dodecapole errors to skew-decapole RDT \(f_{0014}\) (centre), and feed-down from skew-dodecapole errors to skew-octupole RDT \(f_{1210}\) (right). For each RDT the smallest amplitude measured in the LHC at top energy is shown (green) With a vertical crossing-angle in IR5, normal-dodecapole errors feed-down linearly to skew-decapole RDTs. The expected feed-down to \(|f_{0014}|\) (Fig. 14, left) is significant compared to both the minimum RDT amplitude successfully measured in the LHC, and compared to the typical uncertainty on the RDT measurement detailed in Tab. 3. By contrast the expected feed-down from skew-dodecapole errors to normal-decapole RDTs (Fig. 14, center) is small in comparison with the successfully measured decapole RDTs and uncertainties. Interestingly, however, the expected feed-down from skew-dodecapole errors to skew-octupole RDT \(f_{1210}\) (Fig. 14, right) is, for many seeds of the magnetic model, significant in comparison with the minimum amplitude measured and to the typical measurement uncertainty (skew-octupole RDTs are considerably more easily measured in the LHC than decapole RDTs). Figure 15 shows two examples of simulated measurements of RDT feed-down in the HL-LHC at \(\beta ^{*}=0.15\,\text {m}\). Feed-down is shown as a function of vertical crossing-angle in the CMS (IR5) insertion, from normal-dodecapole errors to skew-decapole RDT \(f_{0014}\) (Fig. 15, left) and from skew-dodecapole errors to skew-octupole RDT \(f_{1210}\) (Fig. 15, right). The true variation of the RDT with crossing-angle in the model (dark blue) is obtained from PTC simulations. Corrections for octupole- and decapole-order errors were applied in the model, and tune and coupling were corrected at each crossing-angle. RDT values quoted are the mean amplitudes in the arc BPMs. Simulated measurements (red) were generated in \(20\,\mu \text {rad}\) steps, by adding random Gaussian errors to the true model values, with \(\sigma _{|\mathrm{f}_{\mathrm{jklm}}|}\) defined as in Table 3 (truncated to \(3\,{\upsigma }\)). Errors on the crossing-angle were taken to be \(\sigma _{\theta }=10\,\mu \text {rad}\) (truncated at \(3\,{\upsigma }\)) as in Sect. 4.2. 
Simulated measurements (in the HL-LHC at \(\beta ^{*}=0.15\,\text {m}\)) of linear feed-down from normal-dodecapole errors to skew-decapole RDT \(f_{0014}\) as a function of vertical crossing-angle in the IR5 insertion (left), and of quadratic feed-down from skew-dodecapole errors to skew-octupole RDT \(f_{1210}\) (right). RDT values shown are the mean amplitude in the arc BPMs Linear and quadratic variations of the decapole and skew-octupole RDTs due to feed-down can be clearly seen in Fig. 15 left and right, respectively. In Fig. 15 (left) grey lines indicate the feed-down expected upon changing the systematic normal-dodecapole component of the HL-LHC triplets by \(\Delta {\hbox {b}}_{6} = \pm 0.5\,\text {units}\) (relative to the original systematic \(b_{6} = -4\,\text {units}\)). In Fig. 15 (right) grey lines represent the feed-down expected due to changing the systematic skew-dodecapole component of the triplets by \(\Delta {\hbox {a}}_{6} = \pm 0.3\,\text {units}\). Figures 14 and 15 imply that, based on the quality of the forced skew-octupole and decapole-order RDT measurements which have been achieved in the LHC so far, it should be possible to measure feed-down to such RDTs from normal- and skew-dodecapole errors in the HL-LHC at end-of-squeeze. Beam-based tests of this subject are currently less advanced than the detuning-based methods presented in the previous section, but represent a promising topic for further development in future LHC operational runs. Direct measurement of dynamic aperture Given its close relationship to lifetime [50] (and hence delivered luminosity), dynamic aperture represents a key figure of merit for high-order nonlinear correction in HL-LHC. As such, direct measurement of DA is of significant interest to HL-LHC commissioning. Direct DA measurement could be particularly useful as a means to validate dodecapole corrections determined from the magnetic model or via other beam-based observables: for example, by confirming DA reduction upon removal of the corrections, or as a means to directly benchmark the magnetic model by comparing the measurements to tracking simulations. During its first two Runs significant experience of direct DA measurement was obtained in the LHC. Conventionally, DA measurements have been performed in the LHC at injection, based upon analysis of beam-losses following large amplitude single kicks, with results found to agree well with that predicted by the LHC model, within about \(10\,\%\) [38]. Unfortunately, single-kick based study of DA is not viable at top-energy due to strength requirements in the kicker and the risk of quenching the superconducting magnets. An alternative technique was, however, demonstrated at LHC injection [51], based upon controlled blow-up of a pilot-bunch to large emittance using noise generated in the transverse damper (ADT), followed by examination of beam-losses upon changes in the nonlinear corrector circuits. The method showed a similar level of agreement (\(\approx 10\,\%\)) with model predictions as the kick-based technique [51]. Dedicated beam-tests were therefore performed in the LHC in 2017 to test the viability of using this latter technique to directly measure changes in dynamic aperture due to normal dodecapole errors, on the scale expected in HL-LHC at end-of-squeeze. At \(\beta ^{*}=0.4\,\text {m}\), the \(b_{6}\) correctors in the ATLAS and CMS insertions were used to introduce a large dodecapole perturbation, representative of that expected in the HL-LHC at \(\beta ^{*}=0.15\,\text {m}\). 
Such a procedure mirrors one potential use case for this observable in HL-LHC: introduction of the enhanced dodecapole source in the LHC (via the IR correctors) proxies removal of a \(b_{6}\) correction at end-of-squeeze in HL-LHC. If a DA reduction can be measured due to the enhanced dodecapoles in the LHC, it implies that a comparable \(b_{6}\) correction can also be tested in HL-LHC via direct DA measurement. In practice the enhanced dodecapole sources in the LHC at \(\beta ^{*}=0.4\,\text {m}\) were scaled to generate a quadratic detuning of \(|\partial ^{2}Q/\partial \epsilon ^{2}|=6.8\times 10^{12}\,{\hbox {m}}^{-2}\), which is representative of that expected in HL-LHC at end-of-squeeze (Fig. 4). To measure the impact of the introduced \(b_{6}\) sources on dynamic aperture, three low-intensity bunches were initially blown-up with the transverse damper to very large normalised emittance of \(\epsilon \approx 25\,\mu \text {m}\) in either the horizontal, vertical, or horizontal and vertical planes (the nominal LHC normalised emittance is \(\epsilon =3.75\,\mu \text {m}\)). The enhanced dodecapole sources in the ATLAS and CMS insertions were then applied and resulting beam-losses measured over an extended period (about \(1\,\text {hour}\)). The tests were performed with non-colliding beams at flat-orbit (limiting any feed-down from the introduced \(b_{6}\)), and with linear optics and lower-order nonlinear errors corrected. More detailed discussion of the measurement procedure is provided in [22, 52] and a discussion of the outcome of the measurements in the context of tests of a diffusion model for DA evolution is also provided in [53]. In the absence of the enhanced \(b_{6}\) sources (and with lower-order errors well corrected) the dynamic aperture was predicted to lie outside of the collimator aperture. During blow-up of the bunch emittances, prior to application of the enhanced \(b_{6}\), losses were observed due to scraping on the collimator aperture, but no persistent losses associated with DA were seen [52], consistent with the model prediction. Upon application of the enhanced \(b_{6}\), clearly measurable beam-losses could be observed due to reduction of the DA. The losses were observed to persist for more than \(1\,\text {hour}\) following completion of the dodecapole corrector trims (up to the end of the fill), and the evolution of the loss-rate was characteristic of the expected laws for DA evolution with time [54,55,56]. Figure 16 shows bunch intensity measured during introduction of the enhanced \(b_{6}\) sources for a bunch blown up in both horizontal and vertical planes (left). The evolution of the surviving fractional intensity following the dodecapole corrector trim is also shown (right). Measurement of beam-intensity of bunch with \(\epsilon _{x}=\epsilon _{y}\approx 25\,\mu \text {m}\) as dodecapole correctors in the ATLAS and CMS insertions are powered on to increase the dodecapole perturbation at \(\beta ^{*}=0.4\,\text {m}\) to a level representative of that expected in the HL-LHC end-of-squeeze. 
Measured intensity is shown in the left plot, while the right plot shows the evolution of surviving fractional intensity as a function of the number of turns since the end of the dodecapole trim The surviving bunch intensity following introduction of the enhanced dodecapole sources can be related to DA for a Gaussian charge distribution, \(\rho = e^{-\left[ (x^{2}/2\sigma _{x}^{2}) + (y^{2}/2\sigma _{y}^{2}) \right] }\), according to [55], $$\begin{aligned} \frac{I(N)}{I(0)}&= 1 - \frac{1}{2\pi \sigma _{x}\sigma _{y}}\iint \limits _{D(N)}^{\infty } \rho \, \text {d}x\,\text {d}y \\&= 1 - e^{-\frac{D(N)^{2}}{2}} \end{aligned}$$ where N indicates the turn number, D(N) (in units of the beam-\(\sigma\)) represents the average DA over the \((\sigma _{x},\sigma _{y})\) parameter space as a function of turn number, and over time the DA is expected to decrease towards the limit of long-term stable motion [26]. Figure 17 compares the dynamic aperture measured after introduction of the enhanced \(b_{6}\) (Fig. 17, black) as a function of time, to the simulated DA (Fig. 17, blue) obtained from SixTrack [57, 58] tracking simulations (sixty different DA simulations are shown corresponding to different seeds of the magnetic model). Tracking is only performed up to \(10^{6}\,\text {turns}\) due to computational limitations, and the DA is expressed in units of the measured beam-\(\sigma\) (which is much larger than the nominal LHC or HL-LHC emittance). A good agreement between the measured and predicted DA was obtained. Evolution of DA inferred from measured beam losses, compared to predictions from SixTrack simulations for evolution of average DA vs turns Figures 16 and 17 demonstrate that (given an appropriate experimental configuration) compensation/introduction of a dodecapole error representative of those expected in HL-LHC end-of-squeeze, leads to a change in the dynamic aperture which can be directly measured using the proposed method based on controlled blow-up of the beam via the transverse damper. With known dodecapole sources, the DA reduction due to the enhanced \(b_{6}\) also agreed well with tracking simulations, which further implies that direct DA measurement should allow tests of the HL-LHC magnetic model via comparison to tracking simulations. Upon closer inspection of the tracking simulations, it was noticed that the majority of the predicted DA reduction due to the enhanced \(b_{6}\) sources occurred in the vertical plane, while the horizontal DA was largely unaffected by the dodecapoles. This can be seen in Fig. 18 which shows survival plots obtained from the SixTrack tracking simulations without (left) and with (right) the enhanced \(b_{6}\) sources. Initial conditions which survived \(>10^{6}\,\text {turns}\) are shown in red, and amplitude is quoted in units of the measured beam-\(\sigma\). This prediction is reflected in Fig. 19, which compares the surviving fractional intensity of the bunches blown-up only horizontally (green) and only vertically (orange), after introduction of the enhanced dodecapole sources. The pattern of observed losses, with no significant beam-loss from the bunch with large horizontal emittance, but significant losses for the bunch with the large vertical emittance, matches that expected from the model prediction, further demonstrating an ability to probe the impact of dodecapoles on the shape of the DA in the \((\sigma _{x},\sigma _{y})\) plane. 
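The intensity–DA relation quoted above can be inverted to estimate \(D(N)\) directly from the measured surviving fraction; a minimal sketch is given below (the intensity values are illustrative, not measured LHC data).

```python
import numpy as np

def da_from_intensity(I_over_I0):
    """Invert I(N)/I(0) = 1 - exp(-D(N)^2/2) to obtain the average DA D(N),
    in units of the beam sigma, for the Gaussian distribution assumed above."""
    lost_fraction = 1.0 - np.asarray(I_over_I0, dtype=float)
    return np.sqrt(-2.0 * np.log(lost_fraction))

# Illustrative surviving fractions (not measured LHC values):
print(da_from_intensity([0.995, 0.99, 0.97, 0.95]))
```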
SixTrack simulation of the \(10^{6}\,\text {turn}\) LHC DA at \(\beta ^{*}=0.4\,\text {m}\) without (left) and with (right) artificially enhanced \(b_{6}\) sources in the ATLAS and CMS insertions. Amplitude is quoted in units of the measured beam-\(\sigma\) Fractional beam-loss observed after introduction of enhanced dodecapole sources in the ATLAS and CMS insertions. Losses are shown for two bunches: one blown-up in only the H plane (green), and one blown-up in only the V plane (orange) While conventional single-kick type DA measurements are not possible at top-energy in the LHC and HL-LHC, the results presented in this section demonstrate that direct dynamic aperture measurement is viable at top-energy based on a method of heating the beam to large emittance with noise from the transverse damper. In particular, deliberately introduced dodecapole sources at the level anticipated in the HL-LHC at end-of-squeeze generated a change in DA which could be clearly measured, and agreed well with predictions of the magnetic model. The measurement procedure utilised in these LHC studies was also straightforward and required relatively little beam time. As such it is of particular interest in regard to rapid validation of dodecapole corrections determined, for example, via magnetic measurements. Ultimately, these results also suggest that the impact on DA from the expected dodecapole errors and corrections at HL-LHC end-of-squeeze is large enough to be readily observed, implying that direct optimisation of the DA will be a viable option in HL-LHC, and that the magnetic model of the HL-LHC dodecapoles can be benchmarked via direct DA measurement. Short-term dynamic aperture of driven oscillations The DA of a beam under the influence of driven oscillations from an AC-dipole is, in general, substantially smaller than the DA of free betatron oscillations. A discussion of the concept of DA for driven oscillations is provided in [59]. This can pose a challenge to HL-LHC optics commissioning [18], but also provides an observable to test the impact of \(b_{6}\) errors and their corrections. To test the practicality of the short-term DA of driven oscillations as an observable for the \(b_{6}\), dedicated beam-based tests were performed in the LHC at \(6.5\,\text {TeV}\). Beam-losses upon excitation for ten thousand turns with an AC-dipole were examined as a function of the AC-dipole kick amplitude, for configurations of the LHC at \(\beta ^{*}=0.4\,\text {m}\) with and without an enhanced configuration of the \(b_{6}\) sources which replicated the \(b_{6}\) perturbation expected in HL-LHC end-of-squeeze. The same enhanced \(b_{6}\) sources were used for this study as for the free-DA tests described above. A detailed review of the measurements is provided in [52]. Figure 20 shows measured beam losses upon AC-dipole excitation with (red) and without (green) the enhanced \(b_{6}\) sources. A clearly measurable increase to the beam-losses can be observed in the enhanced \(b_{6}\) configuration (for unchanged emittance). This demonstrates that \(b_{6}\) sources representative of those expected in HL-LHC end-of-squeeze can lead to a measurable shift in the short-term DA of driven oscillations. Measured beam-losses upon AC-dipole excitation with (red) and without (green) artificially enhanced \(b_{6}\) sources in the ATLAS and CMS insertions of the LHC at \(\beta ^{*}=0.4\,\text {m}\). 
Solid lines show the beam-loss expected for a single-Gaussian profile with \(DA_{forced}\) equal to \(2.8\,\sigma _{\mathrm{nom}}\) and \(3.3\,\sigma _{\mathrm{nom}}\) for configurations with and without the enhanced \(b_{6},\) respectively An expression for beam-losses upon AC-dipole excitation as a function of the action of the forced-DA and kick is given in Eq. (14) of [59], which can be used to infer the DA by minimisation of the residual to data in Fig. 20. The resulting DA before and after application of the enhanced \(b_{6}\) sources is shown in Table 4. Table 4 Measured DA of driven oscillations before and after application of \(b_{6}\) sources representative of those expected in HL-LHC end-of-squeeze The forced DA does not directly relate to that of free-betatron oscillations due to the different detuning pattern and the presence of additional resonances of the driven motion [59]. As such it does not provide an alternative to direct measurement of the dynamic aperture of free oscillations. Nonetheless, since dedicated tests in the LHC imply that dodecapole compensation in HL-LHC at end-of-squeeze should lead to measurable changes in the DA of driven oscillations, it represents an additional observable which can provide qualitative information regarding the quality of \(b_{6}\) corrections in the HL-LHC. The prospect for beam-based study of high-order nonlinear errors, up to dodecapole order, is a topic of significant and immediate interest to the High-Luminosity upgrade of the LHC, as well as of more general interest to the accelerator community in the context of future collider projects. It also represents an intensely challenging topic within the field of beam-optics, particularly at high-energy where many conventional measurement techniques for the nonlinear dynamics cannot be applied. Looking forwards, specifically to the HL-LHC era, it is important to understand whether beam-based measurement of the dodecapole errors can be assumed to be feasible, and if so which techniques are of interest. Such conclusions will inform not only development of theoretical studies, but also the commissioning strategy. Although challenging, multiple beam-based measurement techniques have been identified which, based upon real-world experience in the LHC, appear viable options to use during the optics commissioning of the HL-LHC. Detuning-based methods show significant potential. Measurement of quadratic tune change with the action of AC-dipole kicks was demonstrated in the LHC at top-energy, and a precision around \(20\,{\%}\) was achieved for a \(b_{6}\) perturbation representative of that expected at HL-LHC end-of-squeeze. LHC experience also demonstrated that high-quality measurements of the change to linear detuning coefficients with crossing-angle due to feed-down can be achieved with the AC-dipole at top-energy. Based upon achievable quality of the linear detuning measurements in the LHC, measurement of feed-down to linear detuning during crossing-angle scans in the experimental IRs represents a viable observable for the HL-LHC, even at \(\beta ^{*}\) as high as \(0.4\,\text {m}\), before the ultimate \(\beta ^{*}\)-reach has been achieved. In contrast it has never proved possible to identify spectral lines corresponding to dodecapole RDTs in the LHC, even where dodecapole perturbations (representative of those expected at HL-LHC end-of-squeeze) have been artificially introduced. 
Consequently, direct RDT measurement appears unlikely to be a viable observable for dodecapole errors without significant hardware improvements. Robust measurements of forced skew-octupole and normal- and skew-decapole RDTs have been achieved with AC-dipole excitation at top energy in the LHC, however. Under the influence of changes in crossing-angle in the experimental IRs, feed-down from normal-dodecapole errors is predicted to give shifts in decapole RDTs which are significant compared to typical measurement uncertainties achieved already in the LHC. Similarly the feed-down from skew-dodecapole errors to skew-octupole RDTs is also expected to be measurable in HL-LHC at end-of-squeeze. This is of particular interest since few quantitative beam-based observables exist for skew-dodecapole sources. RDT methods have been less thoroughly tested in the LHC compared to detuning-based techniques, and further development and testing will be a priority during the next operational run. Finally, direct measurement of changes to dynamic aperture have also been demonstrated in the LHC at \(6.5\,\text {TeV}\), under the influence of normal dodecapole perturbations representative of those anticipated in HL-LHC at end-of-squeeze. Demonstrations were achieved for both the long-term DA of free betatron-oscillations, via heating of bunch emittance with the transverse damper, and for the short-term DA of forced oscillations under the influence of an AC-dipole. Both methods are comparatively straightforward and fast to employ. As such, they represent a promising tool for the rapid validation of corrections based upon magnetic measurements or even a direct optimisation. This is significant as the DA represents an important figure of merit for operation of the collider at end-of-squeeze. This manuscript has associated data in a data repository. [Authors' comment: The datasets generated and analysed in this manuscript are available upon reasonable request, by contacting the corresponding author.] I. Béjar Alonso, O. Bruning, P. Fessia, M. Lamont, L. Rossi, L. Tavian, M. Zerlauth, High-Luminosity Large Hadron Collider (HL-LHC): Technical design report. Technical report, (2015). CERN-2020-010 J. Coupard, H. Damerau, A. Funken, R. Garoby, S. Gilardoni, B. Goddard, K. Hanke, A. Lombardi, D. Manglunki, M. Meddahi, B. Mikulec, G. Rumolo, E. Shaposhnikova, M. Vretena, (eds.), LHC Injectors Upgrade, Technical Design Report. CERN, (2014) L.C. Teng, Error analysis for the low-\(\beta\) quadrupoles of the Tevatron collider. Technical report (1982). FERMILAB-TM-1097 J. Wei, M. Harrison, The RHIC project - design, status, challenges, and perspectives. In Multi-GeV high-performance accelerators and related technology. Proceedings, 16th RCNP International Symposium, Osaka, Japan, March 12-14, 1997, number C97-03-12.1 p.198-206, (1997). FERMILAB-TM-1097 O. Brüning et al. (eds.), LHC Design Report v.1: the LHC Main Ring. CERN (2004) J. Wei, W. Fischer, V. Ptitsin, Interaction Region Local Correction for the Large Hadron Collider. In Proceedings of the 1999 Particle Accelerator Conference, New York (1999) H. Grote, F. Schmidt, L.H.A. Leunissen. LHC dynamic aperture at collision. Technical report (1999). LHC-PROJECT-NOTE-197 O.S. Brüning, S. Fartoukh, M. Giovannozzi, Field quality issues for LHC magnets: analysis and perspectives for quadrupoles and separation dipoles. Technical report (2004). CERN-AB-2004-014-ADM F. Pilat, Y. Luo, N. Malitsky, V. Ptitsyn, Beam-based non-linear optics corrections in colliders. 
In Proceedings of PAC'05, number WOAC007 (2005) W. Fischer, J. Beebe-Wang, Y. Luo, S. Nemesure, L. Rajulapati, RHIC proton beam lifetime increase with 10- and 12-pole correctors. In Proceedings of IPAC 2010, number THPE099 (2010) J. Koutchouk, F. Pilat, V. Ptitisyn, Beam-based measurements of field multipoles in the RHIC low-beta insertions and extrapolation of the method to the LHC. In Proceedings of the 2001 Particle Accelerator Conference, Chicago (2001) E.H. Maclean, R. Tomás, F.S. Carlier, M.S. Camillocci, J.W. Dilly, J.M. Coello de Portugal, E. Fol, K. Fuchsberger, A. Garcia-Tabares Valdivieso, M. Giovannozzi, M. Hofer, L. Malina, T.H.B. Persson, P.K. Skowronski, A. Wegscheider, New approach to LHC optics commissioning for the nonlinear era. Phys. Rev. Accel. Beams 22, 061004 (2019) D. Schulte, Optics challenges for future hadron colliders. CERN-ICFA Workshop on Advanced Optics Control (2015) M. Benedikt, D. Schulte, J. Wenninger, F. Zimmerman, Challenges for highest energy circular colliders. Technical report (2014). CERN-ACC-2014-0153 E. Cruz-Alaniz, A. Seryi, E.H. Maclean, R. Martin, R. Tomás, Non linear field correction effects on the dynamic aperture of the FCC-hh. In Proc. IPAC 17. Copenhagen, Denmark, number TUPVA038 (2017) H. Sugimoto, SuperKEKB. Presentation at CERN-ICFA Workshop on Advanced Optics Control (2015) O. Bruning, S. Fartoukh, M. Giovannozzi, T. Risselada, Dynamic Aperture Studies for the LHC Separation Dipoles. Technical report (2004). LHC Project Note 349 F. Carlier, J. Coello, S. Fartoukh, E. Fol, A. García-Tabares, M. Giovannozzi, M. Hofer, A. Langer, E.H. Maclean, L. Malina, L. Medina, T.H.B. Persson, P. Skowronski, R. Tomás, F. Van der Veken, A. Wegscheider, Optics Measurement and Correction Challenges for the HL-LHC. Technical report (2017). CERN-ACC-2017-0088 T. Pugnat, B. Dalena, A. Simona, L. Bonaventur, Computation of beam based quantities with 3D final focus quadrupoles field in circular hadronic accelerators. Nucl Instrum. Methods Phys. Res. Sect. A: Accel. Spectrom. Detect. Assoc. Equip. 978, 164350 (2020) T. Pugnat, 3D magnetic field analysis of LHC final focus quadrupoles with Beam Screen. In Proceedings of IPAC 21, (2021) E.H. Maclean, R. Tomás, M. Giovannozzi, T.H.B. Persson, First measurement and correction of nonlinear errors in the experimental insertions of the CERN Large Hadron Collider. Phys. Rev. Spec. Top. Accel. Beams 18, 121002 (2015) E.H. Maclean, F. Carlier, J. Dilly, M. Giovannozzi, R. Tomás, Prospects for beam-based study of dodecapole nonlinearities in the CERN High-Luminosity Large Hadron Collider. Technical report. CERN-ACC-NOTE-2022-0020 G. Ambrosio, P. Ferracin, MQXF (results of all type of tests and global plan CERN/AUP). Presentation to 7th HL-LHC Collaboration Meeting, CIEMAT, Madrid, 13–16 November (2017) S.I. Bermudez, MQXFB status. Presentation to 10th HL-LHC Collaboration Meeting, CERN, 5–7 October (2020) G. Ambrosio, Results of the US triplet pre-series magnet tests and measurements. Presentation to 10th HL-LHC Collaboration Meeting, CERN, 5–7 October (2020) E. Todesco, M. Giovannozzi, Dynamic aperture estimates and phase-space distortions in nonlinear betatron motion. Phys. Rev. E 53, 4067 (1996) G. Apollinari, I. Béjar Alonso, O. Bruning, M. Lamont, L. Rossi, High-Luminosity Large Hadron Collider (HL-LHC): preliminary Design Report. Technical report. CERN-2015-005 M. Giovannozzi, Field quality and DA. 6th HL-LHC Collaboration Meeting (14–16 November 2016, Paris) I. Béjar Alonso, L. 
Rossi, HiLumi LHC Technical Design Report: Deliverable: D1.10. Technical report. CERN-ACC-2015-0140 N. Karastathis, Y. Papaphilippou, Beam-beam simulations for optimizing the performance of the High-Luminosity Large Hadron Collider Proton Physics. Technical report (2020). CERN-ACC-NOTE-2020-0026 S. Kostoglou, DA simulations with beam-beam for HL-LHC. 194th HiLumi WP2 Meeting, CERN, 27 July (2021) CERN FiDeL group documentation on the magnetic model of the LHC. Technical report A. Bazzani, E. Todesco, G. Turchetti, A normal form approach to the theory of nonlinear betatronic motion. CERN 94(02), (1994) X. Buffat, F.S. Carlier, J. Coello De Portugal, R. De Maria, J. Dilly, E. Fol, N. Fuster Martinez, D. Gamba, H. Garcia Morales, A. García-Tabares, M. Giovannozzi, M. Hofer, N. Karastathis, J. Keintzel, M. Le Garec, E.H. Maclean, L. Malina, T.H.B. Persson, P. Skowronski, F. Soubelet, R. Tomás, F. Van der Veken, L. Van Riesen-Haupt, A. Wegscheider, D.W. Wolf, J.F. Cardona, Optics Measurement and Correction Strategies for HL-LHC. Technical report (2022). CERN-ACC-2022-0004 J. Serrano, M. Cattin, The LHC AC Dipole system: an introduction. Technical report (2010). CERN-BE-Note-2010-014 R. Tomás, Normal form of particle motion under the influence of an ac dipole. Phys. Rev. ST. Accel. Beams 5, 054001 (2002) R. Tomás, Adiabaticity of the ramping process of an ac dipole. Phys. Rev. ST. Accel. Beams 8, 024401 (2005) E.H. Maclean, R. Tomás, F. Schmidt, T.H.B. Persson, Measurement of nonlinear observables in the Large Hadron Collider using kicked beams. Phys. Rev. ST. Accel. Beams 17, 081002 (2014) S. White, E. Maclean, R. Tomás, Direct amplitude detuning measurement with ac dipole. Phys. Rev. ST. Accel. Beams 16, 071002 (2013) E.H. Maclean, F.S.. Carlier, E. Cruz Alaniz, B. Dalena, J.W. Dilly, E. Fol, M. Giovannozzi, M. Hofer, L. Malina, T.H.B. Persson, J.M. Coello de Portugal, P.K. Skowronski, M.S Camillocci, R. Tomás, A. Garcia-Tabares Valdivieso, A. Wegscheider, Report from LHC MD 2158: IR-nonlinear studies. Technical report (2017). CERN-ACC-NOTE-2018-0021 M. Gasior, R. Jones, The principle and first results of betatron tune measurement by direct diode detection. Technical report (2005). LHC-Project-Report 853 A. Boccardi, M. Gasior, O.R. Jones, P. Karlsson, R.J. Steinhagen, First Results from the LHC BBQ Tune and Chromaticity Systems. Technical report (2009). CERN-LHC-Performance-Note-007 É. Forest, F. Schmidt, E. McIntosh, Introduction to the Polymorphic Tracking Code. Technical report (2002). CERN-SL-2002-044 (AP) J. Dilly, A. Markus, A. Theodoros, F. Carlier, M. Hofer, L. Malina, E.H. Maclean, E. Solfaroli Camillocci, R. Tomás, Report and Analysis from LHC MD 3311: amplitude detuning at end-of-squeeze. Technical report (2019). CERN-ACC-NOTE-2019-0042 J. Dilly, E.H. Maclean, R. Tomás, Controlling Landau Damping via Feed-Down From High-Order Correctors in the LHC and HL-LHC. In Proceedings of the 13th International Particle Accelerator Conference, page WEPOPT060, Bangkok, Thailand (2022). JACoW J. Dilly, R. Tomás, A flexible nonlinear Resonance Driving Term based Correction Algorithm with Feed-Down. In Proceedings of the 13th International Particle Accelerator Conference, page WEPOPT061, Bangkok, Thailand (2022). JACoW J. Dilly, M. Giovannozzi, R. Tomás, F. Van Der Veken, Corrections of Systematic Normal Decapole Field Errors in the HL-LHC Separation/Recombination Dipoles. In Proceedings of the 13th International Particle Accelerator Conference, page WEPOPT059, Bangkok, Thailand (2022). JACoW F.S. 
Carlier, A Nonlinear Future—Measurements and corrections of nonlinear beam dynamics using forced transverse oscillations. PhD thesis, University of Amsterdam (2020). CERN-THESIS-2020-025 J.W. Dilly, M. Albert, F.S. Carlier, J. Coello De Portugal, B. Dalena, E. Fol, M. Hofer, E.H. Maclean, L. Malina, T.H.B. Persson, M.S. Camillocci, M.L Spitznagel, R. Tomás, A. Garcia Tabares Valdiviesco, Report from LHC MD 3312: Replicating HL-LHC DA. Technical report (2022). CERN-ACC-NOTE-2022-0021 M. Giovannozzi, F. Van der Veken, Description of the luminosity evolution for the CERN LHC including dynamic aperture effects. Part II: application to Run 1 data. Nucl. Instrum. Methods Phys. Res. A 908, 1–9 (2018) E.H. Maclean, M. Giovannozzi, R.B. Appleby, Innovative method to measure the extent of the stable phase-space region of proton synchrotrons. Phys. Rev. Accel. Beams 22, 034002 (2019) E.H. Maclean, F.S. Carlier, M. Giovannozzi, R. Tomás, Report from LHC MD 2171: dynamic aperture at 6.5 TeV. Technical report (2018). CERN-ACC-NOTE-2018-0054 A. Bazzani, M. Giovannozzi, E.H. Maclean, Analysis of the non-linear beam dynamics at top energy for the CERN Large Hadron Collider by means of a diffusion model. Eur. Phys. J. Plus 135, 77 (2020) M. Giovannozzi, W. Scandale, E. Todesco, Dynamic aperture extrapolation in the presence of tune modulation. Phys. Rev. E 57, 3432 (1998) M. Giovannozzi, Proposed scaling law for intensity evolution in hadron storage rings based on dynamic aperture variation with time. Phys. Rev. ST. Accel. Beams 15, 024001 (2012) M. Giovannozzi, F. Lang, R. de Maria, Analysis of Possible Functional Forms of the Scaling Law for Dynamic Aperture as a Function of Time. Technical report (2013). CERN-ACC-2013-0170 SixTrack - 6D Tracking Code. http://sixtrack-ng.web.cern.ch/SixTrack/index.php R. De Maria, J. Andersson, V.K. Berglyd Olsen, L. Field, M. Giovannozzi, P.D. Hermes, N. Høimyr, S. Kostoglou, G. Iadarola, E. Mcintosh, A. Mereghetti, J. Molson, D. Pellegrini, T. Persson, M. Schwinzerl, E.H. Maclean, K.N. Sjobak, I. Zacharov, S. Singh, Sixtrack v and runtime environment. Int. J. Mod. Phys. A 34, 36 (2019) F.S. Carlier, R. Tomás, E.H. Maclean, T.H.B. Persson, First Demonstration of Dynamic Aperture Measurements with an AC dipole. Phys. Rev. Accel. Beams 22, 031002 (2019)
The authors' copious thanks go to the CERN Operations Group and LHC Operators and Engineers in Charge for the significant amount of support lent to the beam-based studies presented here. Similarly, deep thanks go to the LHC Collimation team and CERN Beam Instrumentation Group, without whose work none of the methods presented in this paper would be viable. We are deeply indebted to the CERN Magnets, Superconductors, and Cryostats Group for their extensive work on the magnetic models of both the LHC and HL-LHC, which underpins all the studies presented here. Great thanks also go to the LHC Optics Measurement and Correction team for their broad support of the beam-based optics studies in the LHC. Dynamic aperture simulations presented in this paper were performed using the LHC@home citizen computing project. The LHC@home volunteers are warmly thanked for the CPU-time they have donated to the project, which is essential to facilitate the large-scale tracking studies needed to study dynamic aperture in the HL-LHC.
Open access funding provided by CERN (European Organization for Nuclear Research).
Beams Department, CERN, Meyrin, Switzerland
E. H. Maclean, F. S. Carlier, J. Dilly, M. Le Garrec, M. Giovannozzi & R. Tomás
University of Malta, Msida, Malta
E. H. Maclean
EPFL, Lausanne, Switzerland
F. S. Carlier
Humboldt University of Berlin, Berlin, Germany
J. Dilly
Goethe University of Frankfurt, Frankfurt, Germany
M. Le Garrec
M. Giovannozzi
R. Tomás
Correspondence to E. H. Maclean.
A Summary of successful skew-octupole and decapole-order RDT measurements in the LHC
Skew-octupole and normal- and skew-decapole forced RDTs have been successfully measured on several occasions in the LHC at top energy. Table 5 summarises measured amplitudes and uncertainties of relevant forced RDT measurements performed in the LHC at top energy in 2018 (detailed discussion of Run 2 studies can be found in [48]) and 2022.
Table 5 Summary of successful measurements of skew-octupole, normal-decapole and skew-decapole forced RDT in the LHC at top-energy. RDT values quoted are the mean and standard deviation amplitudes over all BPMs in the LHC arcs.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Maclean, E.H., Carlier, F.S., Dilly, J. et al. Prospects for beam-based study of dodecapole nonlinearities in the CERN High-Luminosity Large Hadron Collider. Eur. Phys. J. Plus 137, 1249 (2022). https://doi.org/10.1140/epjp/s13360-022-03367-2
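The feed-down estimates discussed in the conclusion above follow directly from the transverse multipole expansion. As a purely illustrative aid, and not part of the original analysis, the short sketch below assumes the usual convention B_y + iB_x ∝ Σ_n (b_n + i a_n)((x + iy)/R_r)^(n−1) and uses hypothetical values for the reference radius, the dodecapole coefficient and the orbit offset; it simply evaluates which lower-order coefficients a normal dodecapole (n = 6) generates when the closed orbit is displaced, for example by a crossing-angle bump.

```python
from math import comb

def feed_down(n_src, bn, an, dx, dy, r_ref):
    """Lower-order multipole coefficients generated by one source multipole
    of order n_src when the beam is displaced by (dx, dy) from its centre.

    With B_y + i B_x ~ sum_n (b_n + i a_n) ((x + i y)/R_r)^(n-1), substituting
    x -> x + dx, y -> y + dy and expanding binomially gives
    (b_m + i a_m)_eff = (b_n + i a_n) * C(n-1, m-1) * ((dx + i dy)/R_r)^(n-m).
    """
    c_src = complex(bn, an)
    z = complex(dx, dy) / r_ref
    return {m: c_src * comb(n_src - 1, m - 1) * z ** (n_src - m)
            for m in range(1, n_src + 1)}

# Hypothetical illustration only: a normal dodecapole b6 = 1 unit at a
# reference radius of 17 mm, with a 2 mm horizontal orbit excursion.
for m, c in sorted(feed_down(6, bn=1.0, an=0.0, dx=2e-3, dy=0.0,
                             r_ref=17e-3).items()):
    print(f"order {m}: b{m} = {c.real:+.3e}, a{m} = {c.imag:+.3e}")
```

In this toy example the decapole (m = 5) and octupole (m = 4) contributions scale with the first and second power of the offset respectively, which is the mechanism by which crossing-angle scans are expected to shift the decapole RDTs.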
Deterministically fabricated spectrally-tunable quantum dot based single-photon source
Marco Schmidt, Martin V. Helversen, Sarah Fischbach, Arsenty Kaganskiy, Ronny Schmidt, Andrei Schliwa, Tobias Heindel, Sven Rodt, and Stephan Reitzenstein*
Institut für Festkörperphysik, Technische Universität Berlin, Hardenbergstraße 36, 10623 Berlin, Germany
*Corresponding author: [email protected]
https://doi.org/10.1364/OME.10.000076
Marco Schmidt, Martin V. Helversen, Sarah Fischbach, Arsenty Kaganskiy, Ronny Schmidt, Andrei Schliwa, Tobias Heindel, Sven Rodt, and Stephan Reitzenstein, "Deterministically fabricated spectrally-tunable quantum dot based single-photon source," Opt. Mater. Express 10, 76-87 (2020)
Original Manuscript: October 8, 2019; Revised Manuscript: November 14, 2019; Manuscript Accepted: November 14, 2019
Optical Materials Express, Materials and Devices for Quantum Photonics (2020)
Spectrally-tunable quantum light sources are key elements for the realization of long-distance quantum communication. A deterministically fabricated single-photon source with a photon extraction efficiency of η = (20 ± 2) %, a maximum tuning range of ΔE = 2.5 meV and a minimum g(2)(τ = 0) = 0.03 ± 0.02 is presented. The device consists of a single pre-selected quantum dot (QD) monolithically integrated into a microlens that is bonded onto a piezoelectric actuator via gold thermocompression bonding. Here, a thin gold layer simultaneously provides strain transfer and acts as a backside mirror for the QD-microlens to maximize the photon extraction efficiency. The QD-microlens structure is patterned via 3D in-situ electron-beam lithography (EBL), which allows us to pre-select and integrate suitable QDs based on their emission intensity and energy with a spectral accuracy of 1 meV for the final device. Together with strain fine-tuning, this enables the scalable realization of single-photon sources with identical emission energy. Moreover, we show that the emission energy of the source can be stabilized to µeV accuracy by closed-loop optical feedback. Thus, the combination of deterministic fabrication, spectral-tunability and high broadband photon-extraction efficiency makes the QD-microlens single-photon source an interesting building block for the realization of quantum communication networks.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement Quantum communication protocols promise secure data transmission based on single-photon technology [1–3]. In this context, implementations of long-distance quantum key distribution require Bell-state measurements in quantum repeaters [4] to transfer quantum states between different nodes of a communication network. Two recent experiments, which demonstrate entanglement swapping of entangled photon pairs consecutively emitted by the same emitter, impressively underline the high potential of semiconductor QDs in this regard [5,6]. Beyond such proof-of-principle experiments and to enable large-scale quantum repeater networks, sources emitting at the same energy, on the order of the homogeneous linewidth of the emitters, are required in each node of the network. Semiconductor QDs are promising candidates for such applications, as they emit photons with simultaneously close-to-ideal indistinguishability, entanglement fidelity and extraction efficiency when integrated into suitable photonic structures like circular Bragg gratings in a hybrid device design [7,8]. However, one has to note that the self-assembled Stranski-Krastanov growth mode, which is typically used to realize high-quality InGaAs QDs, leads to randomly distributed emitters with varying shape and size, resulting in an emission band with inhomogeneous broadening of typically 10-50 meV. Noteworthy, values of only a few meV have been realized for QDs grown on inverted pyramids [9], which, however, is still three orders of magnitude larger than the homogenous linewidth of the QDs. Therefore, post-growth processing is required to meet the demands of advanced photonic quantum technology. With respect to the requirement of realizing spectrally precisely matched single-photon sources, deterministic in-situ optical and electron beam lithography techniques [10,11] allow one to pre-select and integrate bright emitters within the QD ensemble with a spectral accuracy of better than 1 meV. In combination with spectral fine-tuning, that is key to achieve spectral resonance of multiple single-photon sources within the QD's homogeneous linewidth of about 1-2 µeV, which has high potential to enable entanglement swapping between remote sources in large-scale quantum repeater networks in the future. Moreover, the precise tunability of single-photon sources is also beneficial for the coupling of single-photon emitters to other key components of advanced quantum networks, namely quantum memories, realized e.g. by atomic vapors [12], trapped atoms [13] or solid state quantum memories [14]. Various methods have been applied to achieve spectral control of the QD emission characteristics, often accompanied with drawbacks: Temperature tuning [15], for instance, suffers from increased phonon-contributions finally limiting the photon indistinguishability already above 10-15 K [16]. Electric fields can be applied to influence the QD emission via the quantum-confined Stark effect [17,18]. This scheme, however, requires complex doping and electrical contacts which complicates the device processing. Strain-tuning proved to be an excellent alternative, which can be implemented by integration of the emitter onto a piezoelectric material such as Pb(Mg1/3Nb2/3)O3-PbTiO3 (PMN-PT) [19,20]. 
In addition to the spectral-tunability, strain-tuning can be used to control the exciton binding energies and the fine structure splitting of QD states, which enables the generation of polarization-entangled photon pairs [21]. In view of applications of single-photon sources in secure quantum communication scenarios, high photon extraction and collection efficiencies are desirable to achieve high data transmission rates. So far, only a few attempts have been made to increase the efficiency of strain-tunable single-photon sources. In one example, an extraction efficiency of 57% into a numerical aperture of 0.8 has been achieved using strain-tunable nanowire antennas [22]. In this work, we present a bright spectrally-tunable single-photon source based on a deterministically fabricated QD microlens combined with a piezoelectric actuator by a flip-chip gold-bonding technique. The applied in-situ EBL technique has the important advantage that suitable QDs can be pre-selected by their emission intensity and emission energy with a spectral accuracy better than 1 meV before integrating them into photonic nanostructures. Moreover, with a positioning accuracy of about 30-40 nm [23], broadband enhancement of the photon-extraction efficiency is achieved. The mentioned uncertainty in emission energy of approximately 1 meV is attributed to different charge configurations after integration of the QD into a photonic microstructure with etched surfaces [24]. We show that piezo strain-tuning can compensate for this spectral uncertainty and, thus, promises a scalable route towards large-scale quantum networks based on entanglement distribution between quantum light sources with identical emission energy. 2. Device design and fabrication The fabrication of our device involves three main processing steps: First, a semiconductor heterostructure is grown by metal-organic chemical vapor deposition. Subsequently, a flip-chip gold thermocompression bonding process is applied, which results in a thin GaAs membrane including the QDs attached to the piezoelectric actuator. In a final step, single QDs are deterministically integrated into microlenses by means of in-situ EBL. The growth process starts with an Al0.97Ga0.03As layer with a thickness of 1 µm which is deposited on a GaAs (100) substrate, acting as an etch stop layer later on. Above this layer, 570 nm of GaAs are grown, including the InGaAs QDs at a distance of 200 nm from the sample surface. The QD layer has a wafer-position-dependent density of $10^8$–$10^9$ cm−2 and an emission band with an inhomogeneous broadening of 30 meV centered at 1.33 eV (930 nm). For the flip-chip bonding process, 200 nm of gold are deposited onto the sample using electron-beam evaporation. Additionally, a 300 nm gold layer is evaporated on a PIN-PMN-PT (Pb(In1/2Nb1/2)O3-Pb(Mg1/3Nb2/3)O3-PbTiO3) crystal. This material is chosen as it has an increased depoling temperature of ${T_C} = 140\; ^\circ \textrm{C}$ and a higher coercive field of ${E_c} = 6\; \textrm{kV}\;{\textrm{cm}^{ - 1}}$ as compared to the more commonly used PMN-PT with ${T_C} = 90\; ^\circ \textrm{C}$ and ${E_c} = 2.5\; \textrm{kV}\;{\textrm{cm}^{ - 1}}$ [25]. Next, the QD sample is placed upside-down onto the piezoelectric actuator with the two gold layers facing each other (cf. Fig. 1(a)). A pressure of 6 MPa at a temperature of approximately 600 K is applied for 4 hours to achieve strong cohesion of the gold layers. Fig. 1.
Schematic illustration of the fabrication process of a tunable QD microlens: (a) Gold thermocompression bonding of the layer structure including InGaAs QDs, followed by a wet etching step to remove the GaAs substrate and the etch stop layer. (b) Mapping process for the in-situ EBL. Suitable QDs are chosen and integrated into microlens structures. (c) The PIN-PMN-PT is contacted to transfer strain to the QD microlens for spectral-tuning of the single-photon emission. In the next step, the upper GaAs substrate is removed by a stirred solution of hydrogen peroxide and ammonium hydroxide until the etching stops at the Al0.97Ga0.03As layer. The latter is removed by hydrochloric acid such that a semiconductor membrane with a thickness of 570 nm remains on top of the gold layer. To enhance the photon-extraction efficiency and to pre-select bright QDs with a specific emission energy, 3D in-situ EBL at 10 K is applied. This method allows us to conveniently choose QDs with a target emission energy and high emission intensity within a scanned area of the sample by their cathodoluminescence (CL) characteristics. Figure 1(b) illustrates the CL mapping process. Sample areas of 20 µm x 20 µm are scanned and suitable QDs are chosen. A microlens is written into the resist on top of it, which is afterwards developed such that the structure can be transferred into the GaAs top layer by reactive-ion-enhanced plasma etching. The whole selection and EBL process takes less than 10 minutes per write field, each including up to about 5 QD-microlenses, so that tens of such devices with emission at the target wavelength can be realized in a few hours. For more details on the 3D in-situ EBL process we refer to [11]. The final device is shown in Fig. 1(c). The device and lens geometry were optimized beforehand using the commercially available software-package JCMsuite by the company JCMwave, which is based on a finite-element method. The optimum lens geometry leads to a photon extraction efficiency of 42% for a numerical aperture of 0.4 and is identified as a spherical segment with a height of 370 nm and a radius of 1264 nm. 3. Micro-photoluminescence characterization The optical properties of the final device are investigated by means of micro-photoluminescence spectroscopy under non-resonant excitation (laser wavelength: 665 nm) at a temperature of 10 K with a spectral resolution of 27 µeV. Figure 2(a) shows a spectrum of a QD microlens device (QDM1) at saturation of the excitonic lines. Excitation-power- and polarization dependent measurements are used for the assignment of the emission lines to respective quantum dot states. The most intense line at ${E_{{X^ - }}} = 1.3520$ eV is identified as a charged excitonic transition (X−), the transition at ${E_X} = 1.3536$ eV as the neutral excitonic transition (X) due to its polarization splitting of ${\Delta }{E_{FSS}} = 7$ µeV, while a charged biexcitonic line is observed at ${E_{X{X^{ +{/} - }}}} = 1.3490 $ eV. To evaluate the photon-extraction efficiency $\eta $ of the microlens device, we use a Titan-Sapphire laser ($f = 80$ MHz) to excite the QD state X− at saturation and detect the emitted photons using a calibrated experimental setup (cf. Experimental Section). At zero bias voltage applied to the piezo element we observe $\eta ({{X^ - }} )= ({17 \pm 2} )$ % for the charged excitonic transition with a linewidth of 46 µeV (FWHM). 
This value is smaller than 42% expected for an optimized spherical microlens, where the deviation is mainly attributed to the nonideal shape of the realized structure with noticeable surface roughness and a rather flat top. Indeed, a micromesa with similar geometry would yield a photon extraction efficiency of 18% [26]. Thus, further work needs to focus on a more precise lithography and processing of spherical microlenses or circular Bragg reflectors on top of a gold bonded structure to enhance the extraction efficiency. Fig. 2. (a) Microscope image of CL map areas taken during in-situ EBL with QD microlenses. (b) Scanning electron microscope image of a microlens. (c) Micro-photoluminescence spectrum of a QD microlens (QDM1) at T = 10 K. (d) Photon-autocorrelation measurements stating single-photon emission with g(2)(τ=0) = 0.03 ± 0.02. Next, we verify the single-photon emission of our spectrally-tunable microlens device under pulsed wetting-layer excitation at $\lambda = 897$ nm. The photon-autocorrelation measurement at saturation of the X− line in Fig. 2(d) shows pronounced antibunching at $\tau = 0$. To quantitatively evaluate the suppression of multi-photon emission events, the experimental data was fitted with a sequence of equidistant two-sided exponential functions $${g^{(2 )}}(\tau )= \left( {{p_0}{\textrm{e}^{ - \left|{\frac{\tau }{{{t_\textrm{d}}}}} \right|}} + {p_\textrm{t}}\mathop \sum \limits_{\begin{array}{{c}} {i ={-} 5}\\ {i \ne 0} \end{array}}^5 {\textrm{e}^{ - \left|{\frac{{\tau - \left( {\frac{i}{f}} \right)}}{{{t_\textrm{d}}}}} \right|}}} \right) \otimes G({\tau ,{\sigma_{\textrm{res}}}} )$$ with decay time ${t_\textrm{d}}$ convoluted with a Gaussian $G(\tau )$ with ${\sigma _{\textrm{res}}} = $300 ps /$2\sqrt {2\textrm{ln}2} $ width, accounting for the timing resolution of the Hanbury-Brown and Twiss setup. The ratio of the peak amplitudes at zero-time delay ${p_0}$ and at finite time delays ${p_\textrm{t}}$ reveals the second-order photon-autocorrelation value ${g^{(2 )}}({\tau = 0} )= 0.03 \pm 0.02$ (${t_\textrm{d}} = ({0.69 \pm 0.01} )$ ns). These results confirm that our advanced multi-step device processing enables the realization of bright single-photon sources with a high suppression of multi-photon emission events. 4. Strain-tunability of single-photon emission To demonstrate the spectral tunability of QD emission, a voltage of −600 to + 600 V is applied to the PIN-PMN-PT material, corresponding to an electric field F of −20 to + 20 kVcm−1. A positive (negative) voltage corresponds to an in-plane compression (extension) of the piezoelectric crystal transferred to the semiconductor material and the QD layer. Using the full tuning range results in a shift of the X− emission by ${\Delta }E = 2.5$ meV as shown in Fig. 3(a). Fig. 3. (a) Energy tuning of the X− emission line of QDM1 by application of an electric field F to the piezoelectric actuator. (b) Extraction efficiency (black, left axis), equal-time second-order photon autocorrelation (g(2) (τ=0)) results (red squares, right axis) and calculated g(2)(τ=0) taking the F-dependent extraction efficiency into account (red circles, right axis). X− emission energy for the full tuning range (blue, right axis). Besides the tunability of the emission energy, Fig. 3(a) also reveals a change in the emission intensity with the applied electric field, which we further investigated by measuring the photon extraction efficiency in pulsed excitation. As can be found in Fig. 
3(b), the highest efficiency is achieved at an applied field of ${F_{\textrm{max}}} = 12\; \textrm{kV}\;{\textrm{cm}^{ - 1}}$ with $\eta ({{X^ - },\; \; {F_\textrm{max}}} )= ({20 \pm \; 2} )$%. The efficiency decreases down to $\eta ({{X^ - },\; \; {F_{\textrm{min}}}} )= ({6 \pm 1} )$ % at the lowest field value ${F_{\textrm{min}}} ={-} 20 $ kV cm−1. Additionally, we investigated the second-order photon autocorrelation function for different detunings. The suppression of multi-photon emission events ${g^{(2 )}}({\tau = 0} )$ remains constant and below 0.05 over a wide tuning range and increases at high negative electric fields to ${g^{(2 )}}(0 )= 0.10 \pm 0.03$ at $F ={-} 15 $ kV cm−1. The associated X− emission energy plotted in blue shows that we can obtain an effective tuning range of about 1 meV in which a high extraction efficiency of $\gtrsim 15$% and a high multi-photon suppression with ${g^{(2 )}}(0 )$ < 0.05 can be achieved. This tuning range covers the in-situ EBL spectral accuracy well, so that a combination of both enables the scalable realization of single-photon sources (SPSs) with identical emission energy, as we demonstrate in the next section. An increased g(2)(0) can be explained by a decrease of the signal (S) to uncorrelated background (B) ratio. To support this statement we consider $g^{(2 )}(\tau )= 1 + {\rho ^2}({g_\textrm{BF}^{(2 )}(\tau )- 1} )$, with $\rho = S/({S + B} )$ and the background-free value $g_\textrm{BF}^{(2 )}(\tau )$ [27], to describe the field dependence of g(2)(0). A direct connection to the photon extraction efficiency is obtained by taking into account that S is proportional to the measured extraction efficiency (black data points in Fig. 3(b)) and that a constant uncorrelated background contribution of ${\eta _\textrm{B}}$ is present, which leads to $g^{(2 )}(0 )= 1 - {({{\eta_\textrm{Device}}/({{\eta_\textrm{Device}} + {\eta_\textrm{B}}} )} )^2}$ under the assumption that the background-free $g_\textrm{BF}^{(2 )}(0 )$ is zero. Very good agreement between experimental data (red squares in Fig. 3(b), with ${\eta _\textrm{Device}} = {\eta _{\textrm{NA} = 0.4}}$) and the calculated values (open red circles in Fig. 3(b)) is obtained for ${\eta _\textrm{B}}$ = 0.0035, which supports our interpretation of a signal-to-background dependent increase of g(2)(0) for negative F (see the short numerical consistency check below). The strain influence on the extraction efficiency could be connected to electric fields caused by charge states on the surface of the microlens. The charge states create a field distribution around the QD which depends on the external strain. Previous studies showed that the processing of microstructures by in-situ EBL gives a lateral positioning accuracy of 34 nm [23]. Such a deviation from the center could be sufficient for the QD to be influenced by the mentioned strain-induced electric field distribution, leading to a slight separation of the electron and hole wavefunctions, which in turn can reduce the emission rate, as we observe in the experiment. Measurements of the decay time of the QD X− emission yield a value of approximately 0.65 ns, which is not significantly influenced by the electric field F applied to the piezo actuator, while the rise time increases from about 200 ps to 350 ps with decreasing F below zero. This change of rise time could indicate a lower capture probability, in agreement with the reduced photon extraction efficiency in this field range.
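The short consistency check referred to above is a minimal numerical sketch, not part of the original analysis: it takes $g_\textrm{BF}^{(2)}(0) = 0$ and the quoted background level ${\eta _\textrm{B}}$ = 0.0035, and evaluates the model at the extraction efficiencies reported at the two ends of the tuning range.

```python
# Background-limited g2(0):  g2(0) = 1 - (eta / (eta + eta_B))**2,
# i.e. rho = S/(S+B) with S taken proportional to the extraction efficiency.
eta_B = 0.0035  # uncorrelated background level quoted in the text

def g2_zero(eta_device, eta_b=eta_B):
    rho = eta_device / (eta_device + eta_b)
    return 1.0 - rho ** 2

for eta in (0.20, 0.06):  # efficiencies reported at the extremes of the field scan
    print(f"eta = {eta:.2f}  ->  g2(0) ~ {g2_zero(eta):.3f}")
# prints ~0.03 at 20% and ~0.11 at 6%, in line with the measured trend from
# g2(0) = 0.03 +/- 0.02 at high efficiency to 0.10 +/- 0.03 at negative F.
```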
A more detailed description would require a detailed knowledge of the QD position in the microlens which is beyond the scope of the present work. To demonstrate the scalability of our device concept, we evaluated the strain-tuning behavior of four additional QD-microlenses QDM2-5 which were fabricated together with QDM1 on the same sample with the same target emission energy. In Fig. 4 the excitonic emission energies of these microlenses are plotted relative to the X− emission of QDM1 as function of the electric field applied to the piezo actuator. The emission energy of all four lenses can be tuned through resonance with the emission energy of QDM1 (indicated by the dashed line). This feature will be very helpful in future experiments aiming at entanglement swapping between remote QD-SPSs. The fact that spectral resonance between the five sources cannot be achieved at the same electric field is not relevant for this application, for which the sample could either be split or different samples with the same target emission energy could be realized by in-situ EBL. Fig. 4. Strain-tuning of four QD microlenses (QDM2-5) which were deterministically fabricated together with device QD1 on the same piece of sample. The excitonic emission energies of these QDM2-5 microlenses are plotted relative to the X− emission energy of QDM1. They can be tuned through resonance with QD microlens QDM1 by applying suitable electrical fields between about −20 kVcm−1 and 10 kVcm−1 to the piezo actuator. 5. Theoretical analysis of strain-transfer and strain-tuning of QD structures To further analyze the effects of the external strain we compare our measurements to results obtained by theoretical modeling of the microlens device. The additional strain exerted by the piezoelectric actuator is accounted for by adjusting the lattice constant ${a_0}$ of the lowest GaAs layer above the gold mirror to $\tilde{a} = {a_0} - c \cdot {a_0}$, and the strain distribution inside the full GaAs device is calculated in the framework of continuum elasticity. One has to distinguish between the permanent strain caused by the inherent lattice mismatch between the GaAs substrate and the InGaAs QD, and the effects of the external strain caused by the piezo-tuning. Moreover, the hydrostatic strain component can be separated from the biaxial strain component. Figure 5 shows the calculation results for the permanent strain without external influence ((a1) and (b1)) as well as the additional strain effects induced by an applied external compressive as well as tensile strain ((a2) and (b2)). The distribution across the lens structure is almost uniform, only a slight relaxation effect is visible for the hydrostatic strain component as compared to the planar area around the lens. Possible shear strain was not taken into account in the simulations, because this would add a complexity to the calculation that is outside of the scope of this work. Fig. 5. Calculated hydrostatic (a1/a2) and biaxial (b1/b2) strain distributions in a QD microlens. (a1/b1) refer to the situation in absence of external strain, while (a2) and (b2) show the additional effects by external tensile (left) and compressive strain (right). The domain is divided into (i) air, (ii) lens, (iii) QD, (iv) wetting layer, (v) spacer layer, and (vi) the piezoelectric actuator. Red (blue) color indicates the relative tensile (compressive) strain. 
Applied strain may affect the energies of the localized electronic states via (i) deformation potentials, thus, changing the local band positions, (ii) the alteration of the quantization energies, and (iii) the change in electron-hole Coulomb interaction. Careful analysis using eight-band k·p theory together with the configuration interaction method [28], however, revealed that effect (i) constitutes the governing contribution, whereas (ii) and (iii) are only minor contributions, which are neglected in the following discussion. The achieved tuning of ${\Delta }E = 2.5$ meV corresponds to a change in the lattice constant of $c ={\pm} 1.2 \cdot {10^{ - 3}}$ for compressive (+) and tensile (-) strain. At the position of the QD the resulting sum of the relative hydrostatic and biaxial strain components in all three directions are calculated separately to $\Delta {\epsilon _\textrm{hy}}(c )={\pm} 8.1 \cdot {10^{ - 4}}$ and $\Delta {\epsilon _\textrm{biax}}(c )={\pm} 4.65 \cdot {10^{ - 3}}$, where the hydrostatic strain is responsible for band-shifts and the biaxial strain for the heavy-hole light-hole splitting [29]. The sum of both effects is driving the change in the luminescence energy. Combined with the deformation potentials in In0.7Ga0.3As, ${a_\textrm{g}} ={-} 6725.9$ meV for the hydrostatic strain and ${b_v} ={-} 1897.2$ meV for the biaxial strain, the energy shift can be calculated as $$\Delta E(c )= {a_\textrm{g}}({\Delta {\epsilon_\textrm{hy}}(c )} )- \; \frac{1}{2}{b_v}({\Delta {\epsilon_\textrm{biax}}(c )} ) = \textrm{ } \pm 1.25\textrm{ meV}.$$ Using the piezoelectric coefficient ${d_{31}} \approx 1500\; \textrm{pC}\;{\textrm{N}^{ - 1}}$ as published by the manufacturer (CTS Corporation), we can compare the theoretically evaluated strain with the experimentally applied value. The maximum strain that is induced in one lateral direction during the measurement can be estimated to $${\epsilon ^{\textrm{exp}}} = {d_{31}} \cdot \; {F_{\textrm{max}}} = 1500{\; pC}\;{\textrm{N}^{ - 1}} \cdot 20\; \textrm{kV}\;{\textrm{cm}^{ - 1}} = 3 \cdot {10^{ - 3}},$$ as compared to the theoretical value of $c = 1.2 \cdot {10^{ - 3}}$. Matching the calculation results with the achieved tuning, it can be estimated that a fraction of $ \frac{c}{{{\epsilon ^{\textrm{exp}}}}} = 40\; \%$ of the strain effect at the piezoelectric crystal is transferred to the position of the studied QD. 6. Closed-loop stabilization of emission energy A critical aspect of our target application in quantum communication networks is the long-term spectral stability of our energy-tunable SPSs. In this regard the well-known creep behavior of piezoelectric actuators is a severe issue [30]. To illustrate this point, the time-dependence of the emission energy of another strain-tunable QD-microlens device is presented in Fig. 6(a), where the electric field was changed up from zero to 12 kVcm−1 at time t = 0. In the first 30 minutes of the measurement series, the emission energy increased rather strongly by about 350 µeV. Subsequently, in the next 120 minutes a further linear blue-shift of about 30 µeV took place because of the typical creeping behavior of the piezo-materials, before the emission finally approaches a stable value. Thus, for applications requiring large tuning ranges, a stabilization time of approximately 3 hours needs to be considered before stable operation of the SPS in this open-loop scenario. 
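Returning briefly to the strain-transfer estimate of Sect. 5 above, the arithmetic is compact enough to restate as a minimal sketch (using only the d31 value, field range and lattice-constant change quoted there; not part of the original analysis):

```python
# Piezo-induced in-plane strain and inferred strain-transfer fraction.
d31 = 1500e-12           # m/V (= 1500 pC/N), manufacturer value for PIN-PMN-PT
F_max = 20e5             # V/m (= 20 kV/cm), maximum applied electric field
eps_exp = d31 * F_max    # in-plane strain generated at the piezo: 3.0e-3

c = 1.2e-3               # lattice-constant change needed for the 2.5 meV shift
print(f"eps_exp = {eps_exp:.1e}, transfer fraction = {c / eps_exp:.0%}")  # ~40 %
```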
Moreover, even in the 'stable state' creep related spectral shifts on the order of several µeV do occur, preventing the implementation of entanglement swapping which requires sub-µeV spectral stability. Fig. 6. (a) Time series of the excitonic emission energy of a QD-microlens after changing the piezo field from zero to $F = 12\; \textrm{kV}\;{\textrm{cm}^{ - 1}}$ at time t = 0 in open-loop configuration. (b) Time-dependence of emission energy in closed loop configuration using an active optical feedback control. The jump in intensity at t = 2 min was caused by an intentional perturbation of the system. (c) Zoom-in view of the emission energy relative to the set point (error bars in shaded grey). (d) Corresponding histogram of relative emission energies with a standard deviation of 0.5 µeV. To improve the strain-tuning behavior and the long-term spectral stability of our devices we implemented an active feedback loop with a proportional–integral–derivative (PID) controller. We use an experimental approach similar that reported in Ref. [20]. Essentially, in this rather straight-forward approach the signal emitted by the QD-microlens is coupled to a spectrometer at adjustable time intervals of typically a few ten seconds to monitor the emission wavelength with a spectral accuracy of 0.8-1.0 µeV with an integration time between few tens milli-seconds and few seconds depending on the signal strengths. We installed a short (1 m) single-mode fiber section in the detection path before focusing the optical signal to the input slit of the spectrometer. This fiber section is crucial to enhance the spectral accuracy of the implemented control loop, as small angle deviations of the detection beam path change the position of the emission line on the spectrometer's CCD, thus preventing a reliable detection of the emission energy with the required accuracy. Within a PID control loop with optical feedback, the center energy of the target emission line is determined in each iteration by Lorentzian fitting of the detected spectrum and is compared to the setpoint energy. In case of deviations from the target energy, the voltage output to the piezo actuator is readjusted to shift the emission line back to the setpoint via adapted strain. To ensure best performance of the control loop, the optimum PID parameters are determined by the pulse response of the system. To illustrate the functionality of the described control-loop we stabilized the emission energy of a QD-microlens to a setpoint of 1.3529485 eV. Figure 6(b) shows the corresponding time evolution of the feedback-controlled center energy for a time period of approximately 90 minutes. The jump at t = 2 min marks an intentional (mechanical) perturbation, to test the dynamic response of the control loop. Within a characteristic dynamic response time of about minutes, the emission energy returns to the setpoint. Subsequently, the emission energy is stabilized efficiently by the control-loop as can be seen in the zoom-in view of the emission energy relative to the setpoint. The data yields a standard deviation as low as 0.5 µeV (1.2 µeV FWHM) as shown by the corresponding histogram (obtained for the time range of 7 to 90 minutes) in Fig. 6(d). Importantly, this value compares well with the typical homogenous linewidth (approximately 1-2 µeV) of the InGaAs QDs under study and, thus, can pave the way for future entanglement swapping experiments between remote quantum light sources. 
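The feedback scheme described in this section can be summarised in a short schematic sketch. This is not the authors' implementation: the spectrometer read-out and the high-voltage source are abstracted into user-supplied callables, and the PID gains are generic placeholders that would have to be chosen from the measured tuning slope (roughly 2 µeV per volt for the 2.5 meV over the ±600 V range reported above).

```python
import time

def stabilize(setpoint_eV, read_center_eV, set_voltage, kp, ki=0.0, kd=0.0,
              v0=0.0, dt=30.0, v_limits=(-600.0, 600.0), n_steps=None):
    """Schematic closed-loop lock of the QD emission energy.

    read_center_eV() should return the Lorentzian-fitted line centre in eV
    (spectrometer wrapper); set_voltage(V) should apply V to the piezo.
    The PID gains map an energy error in eV to a voltage correction in V.
    """
    integral, prev_err, voltage = 0.0, 0.0, v0
    step = 0
    while n_steps is None or step < n_steps:
        err = setpoint_eV - read_center_eV()             # detuning from setpoint
        integral += err * dt
        derivative = (err - prev_err) / dt
        voltage += kp * err + ki * integral + kd * derivative
        voltage = min(max(voltage, v_limits[0]), v_limits[1])  # stay within range
        set_voltage(voltage)
        prev_err, step = err, step + 1
        time.sleep(dt)                                   # re-check interval
```

A proportional gain of the order of the inverse tuning slope (here a few times 10^5 V per eV) would remove most of a creep-induced detuning within a single iteration, while the integral term holds the line against the slow drift illustrated in Fig. 6(a).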
7.1 In-situ electron beam lithography With the in-situ EBL step, QDs are chosen by their cathodoluminescence (CL) signal and integrated into microlens structures. The samples are prepared by spin-coating with the electron-beam resist AR-P 6200 (CSAR 62) and mounted onto the cold finger of a He-flow cryostat of a customized scanning electron microscope for low-temperature operation at 10 K. The reaction of the resist during development depends on the applied electron dose during exposure. This resist has a positive-tone regime at low electron doses, which are used for mapping of the CL signal. The luminescence signal is focused into a monochromator and detected with a Si charge-coupled device camera. Based on that data, QDs are chosen and microstructures are written into the resist above them with a higher electron dose. Above a certain threshold value, the resist enters a negative tone regime, such that the structures remain after development. The transition range to the complete negative-tone regime is used to create quasi-3D designs (see Ref. [11] for details). Finally, dry etching is performed by inductively-coupled-plasma reactive-ion etching. 7.2 Optical measurements The sample is mounted in a helium-flow cryostat and cooled down to 10 K. It is optically excited using a Titan-Sapphire laser that can be operated in quasi-continuous wave (CW) or pulsed ($f = 80$ MHz) mode. The photoluminescence is collected using a microscope objective with an NA of 0.4 and spectrally dispersed by a grating monochromator, before it is detected using a Si charge-coupled device camera. The setup is also equipped with a fiber-coupled Hanbury-Brown and Twiss setup using single-photon counting modules based on Si avalanche photo diodes. To evaluate the extraction efficiency into the first lens of our experimental setup, the transmission of the complete setup was measured to be ${\eta _\textrm{Setup}} = ({1.1\; \pm 0.1} )\; \%$ following the procedure described in Ref. [11]. Using a laser with repetition rate f a detected count-rate ${n_\textrm{QD}}$ corresponds to a photon-extraction efficiency of ${\eta _{\textrm{Device}}} = \frac{{{n_{\textrm{QD}}}}}{{{\eta _{\textrm{Setup}}}\ast f}}$ . Furthermore, the photon extraction efficiency is defined as ${\eta _{\textrm{Device}}} = {\eta _{\textrm{geo}}}{\eta _{{\textrm{X}^ - }}}$, where ${\eta _{\textrm{geo}}}$ denotes the purely geometrical contribution to the photon extraction efficiency of the device, while ${\eta _{{\textrm{X}^ - }}}$ the probability of emitting a photon by the X− per excitation pulse. The latter includes the occupation probability and the quantum efficiency of this QD transition, which can be influenced by the applied mechanical strain under non resonant excitation. In conclusion, we presented a spectrally-tunable single-photon source with a maximum photon extraction efficiency of $\eta = ({20 \pm 2} )$ % and a total tuning range of ${\Delta }E = 2.5$ meV. This tuning range is reduced to about 1 meV when focusing on an operation regime of $\eta\;>\;15$% and g(2)(0) < 0.05. The emission energy of our device is pre-selected with an accuracy of about 1 meV by using in-situ EBL applied to a planar sample bonded onto a piezoelectric actuator via flip-chip gold thermocompression bonding. In addition, a feedback-loop is implemented which enables locking the emission energy with a standard deviation of 0.5 µeV (FWHM: 1.2 µeV). Thus, the achieved effective tuning can serve to adjust the emission to meet the exact transition energy required e.g. 
for entanglement distribution in multi-node quantum networks or for the interfacing of QD based single-photon sources with quantum memories. Bundesministerium für Bildung und Forschung (03V0630, 13N14876); Deutsche Forschungsgemeinschaft (Re2974/8-1, SFB787); Horizon 2020 Framework Programme (MIQC2, SIQUST). 1. C. H. Bennett and G. Brassard, "Quantum cryptography: Public key distribution and coin tossing," Proc. of IEEE International Conference on Computers, Systems and Signal Processing, 175 (1984). 2. A. K. Ekert, "Quantum cryptography based on Bell's theorem," Phys. Rev. Lett. 67(6), 661–663 (1991). [CrossRef] 3. N. Gisin and R. Thew, "Quantum communication," Nat. Photonics 1(3), 165–171 (2007). [CrossRef] 4. H.-J. Briegel, W. Dür, J. I. Cirac, and P. Zoller, "Quantum Repeaters: The Role of Imperfect Local Operations in Quantum Communication," Phys. Rev. Lett. 81(26), 5932–5935 (1998). [CrossRef] 5. F. Basso Basset, M. B. Rota, C. Schimpf, D. Tedeschi, K. D. Zeuner, S. F. C. da Silva, M. Reindl, V. Zwiller, K. D. Jöns, A. Rastelli, and R. Trotta, "Entanglement swapping with photons generated on demand by a quantum dot," arxiv:1901.06646 (2019). 6. M. Zopf, R. Keil, Y. Chen, J. Yang, D. Chen, F. Ding, and O. G. Schmidt, "Entanglement Swapping with Semiconductor-generated Photons," arxiv:1901.07833 (2019). 7. H. Wang, H. Hu, T.-H. Chung, J. Qin, X. Yang, J.-P. Li, R.-Z. Liu, H.-S. Zhong, Y.-M. He, X. Ding, Y.-H. Deng, Q. Dai, Y.-H. Huo, S. Höfling, C.-Y. Lu, and J.-W. Pan, "On-demand semiconductor source of entangled photons which simultaneously has high fidelity, efficiency, and indistinguishability," Phys. Rev. Lett. 122(11), 113602 (2019). [CrossRef] 8. J. Liu, R. Su, Y. Wei, B. Yao, S. F. C. da Silva, Y. Yu, J. Iles-Smith, K. Srinivasan, A. Rastelli, J. Li, and X. Wang, "A solid-state source of strongly entangled photon pairs with high brightness and indistinguishability," Nat. Nanotechnol. 14(6), 586–593 (2019). [CrossRef] 9. A. Surrente, M. Felici, P. Gallo, B. Dwir, A. Rudra, G. Biasiol, L. Sorba, and E. Kapon, "Ordered systems of site-controlled pyramidal quantum dots incorporated in photonic crystal cavities," Nanotechnology 22(46), 465203 (2011). [CrossRef] 10. A. Dousse, L. Lanco, J. Suczyński, E. Semenova, A. Miard, A. Lemaître, I. Sagnes, C. Roblin, J. Bloch, and P. Senellart, "Controlled Light-Matter Coupling for a Single Quantum Dot Embedded in a Pillar Microcavity Using Far-Field Optical Lithography," Phys. Rev. Lett. 101(26), 267404 (2008). [CrossRef] 11. M. Gschrey, A. Thoma, P. Schnauber, M. Seifried, R. Schmidt, B. Wohlfeil, L. Krüger, J.-H. Schulze, T. Heindel, S. Burger, F. Schmidt, A. Strittmatter, S. Rodt, and S. Reitzenstein, "Highly indistinguishable photons from deterministic quantum-dot microlenses utilizing three-dimensional in situ electron-beam lithography," Nat. Commun. 6(1), 7662 (2015). [CrossRef] 12. K. S. Choi, H. Deng, J. Laurat, and H. J. Kimble, "Mapping photonic entanglement into and out of a quantum memory," Nature 452(7183), 67–71 (2008). [CrossRef] 13. H. P. Specht, C. Nölleke, A. Reiserer, M. Uphoff, E. Figueroa, S. Ritter, and G. Rempe, "A single-atom quantum memory," Nature 473(7346), 190–193 (2011). [CrossRef] 14. A. Tiranov, J. Lavoie, A. Ferrier, P. Goldner, V. Verma, S. Nam, R. Mirin, A. Lita, F. Marsili, H. Herrmann, C. Silberhorn, N. Gisin, M. Afzelius, and F. Bussières, "Storage of hyperentanglement in a solid-state quantum memory," Optica 2(4), 279 (2015). [CrossRef] 15. T. Farrow, P. See, A. J. Bennett, M. B. Ward, P. Atkinson, K. 
Cooper, D. J. P. Ellis, D. C. Unitt, D. A. Ritchie, and A. J. Shields, "Single-photon emitting diode based on a quantum dot in a micro-pillar," Nanotechnology 19(34), 345401 (2008). [CrossRef] 16. A. Thoma, P. Schnauber, M. Gschrey, M. Seifried, J. Wolters, J.-H. Schulze, A. Strittmatter, S. Rodt, A. Carmele, A. Knorr, T. Heindel, and S. Reitzenstein, "Exploring Dephasing of a Solid-State Quantum Emitter via Time- and Temperature-Dependent Hong-Ou-Mandel Experiments," Phys. Rev. Lett. 116(3), 033601 (2016). [CrossRef] 17. A. J. Bennett, R. B. Patel, J. Skiba-Szymanska, C. A. Nicoll, I. Farrer, D. A. Ritchie, and A. J. Shields, "Giant Stark effect in the emission of single semiconductor quantum dots," Appl. Phys. Lett. 97(3), 031104 (2010). [CrossRef] 18. C. Kistner, T. Heindel, C. Schneider, A. Rahimi-Iman, S. Reitzenstein, S. Höfling, and A. Forchel, "Demonstration of strong coupling via electro-optical tuning in high-quality QD-micropillar systems," Opt. Express 16(19), 15006 (2008). [CrossRef] 19. F. Ding, R. Singh, J. D. Plumhof, T. Zander, V. Křápek, Y. H. Chen, M. Benyoucef, V. Zwiller, K. Dörr, G. Bester, A. Rastelli, and O. G. Schmidt, "Tuning the Exciton Binding Energies in Single Self-Assembled InGaAs/GaAs Quantum Dots by Piezoelectric-Induced Biaxial Stress," Phys. Rev. Lett. 104(6), 067405 (2010). [CrossRef] 20. R. Trotta, P. Atkinson, J. D. Plumhof, E. Zallo, R. O. Rezaev, S. Kumar, S. Baunack, J. R. Schröter, A. Rastelli, and O. G. Schmidt, "Nanomembrane quantum-light-emitting diodes integrated onto piezoelectric actuators," Adv. Mater. 24(20), 2668–2672 (2012). [CrossRef] 21. R. Trotta, J. Martín-Sánchez, J. S. Wildmann, G. Piredda, M. Reindl, C. Schimpf, E. Zallo, S. Stroj, J. Adlinger, and A. Rastelli, "Wavelength-tunable sources of entangled photons interfaced with atomic vapours," Nat. Commun. 7(1), 10375 (2016). [CrossRef] 22. P. E. Kremer, A. C. Dada, P. Kumar, Y. Ma, S. Kumar, E. Clarke, and B. D. Gerardot, "Strain-tunable quantum dot embedded in a nanowire antenna," Phys. Rev. B 90(20), 201408 (2014). [CrossRef] 23. M. Gschrey, R. Schmidt, J.-H. Schulze, A. Strittmatter, S. Rodt, and S. Reitzenstein, "Resolution and alignment accuracy of low-temperature in situ electron beam lithography for nanophotonic device fabrication," J. Vac. Sci. Technol., B: Nanotechnol. Microelectron.: Mater., Process., Meas., Phenom. 33(2), 021603 (2015). [CrossRef] 24. A. Kaganskiy, M. Gschrey, A. Schlehahn, R. Schmidt, J.-H. Schulze, T. Heindel, A. Strittmatter, S. Rodt, and S. Reitzenstein, "Advanced in-situ electron-beam lithography for deterministic nanophotonic device processing," Rev. Sci. Instrum. 86(7), 073903 (2015). [CrossRef] 25. J. Tian and P. Han, "Crystal growth and property characterization for PIN–PMN–PT ternary piezoelectric crystals," J. Adv. Dielectr. 04(01), 1350027 (2014). [CrossRef] 26. S. Fischbach, A. Kaganskiy, E. B. Y. Tauscher, F. Gericke, A. Thoma, R. Schmidt, A. Strittmatter, T. Heindel, S. Rodt, and S. Reitzenstein, "Efficient single-photon source based on a deterministically fabricated single quantum dot - microstructure with backside gold mirror," Appl. Phys. Lett. 111(1), 011106 (2017). [CrossRef] 27. P. Michler, A. Imamoglu, A. Kiraz, C. Becher, M. D. Mason, P. J. Carson, G. F. Strouse, S. K. Buratto, W. V. Schoenfeld, and P. M. Petroff, "Nonclassical radiation from a single quantum dot," Phys. Status Solidi B 229(1), 399–405 (2002). [CrossRef] 28. A. Schliwa, M. Winkelnkemper, and D. 
Bimberg, "Few-particle energies versus geometry and composition of InxGa1−xAs/GaAs self-organized quantum dots," Phys. Rev. B 79(7), 075443 (2009). [CrossRef] 29. A. Schliwa, M. Winkelnkemper, and D. Bimberg, "Impact of size, shape, and composition on piezoelectric effects and electronic properties of In(Ga)As∕GaAs quantum dots," Phys. Rev. B 76(20), 205324 (2007). [CrossRef] 30. S. Vieira, "The behavior and calibration of some piezoelectric ceramics used in the STM," IBM J. Res. Dev. 30(5), 553–556 (1986). [CrossRef]
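As a worked example of the calibration formula given in Sect. 7.2 above, using the quoted setup transmission and repetition rate together with a hypothetical detected count rate (not a value reported in the paper):

```python
# eta_Device = n_QD / (eta_Setup * f), as defined in Sect. 7.2 above.
eta_setup = 0.011        # measured setup transmission (1.1 %)
f = 80e6                 # laser repetition rate in Hz
n_qd = 176e3             # hypothetical detected count rate in counts/s

eta_device = n_qd / (eta_setup * f)
print(f"eta_Device = {eta_device:.2f}")   # 0.20, i.e. a 20 % extraction efficiency
```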
Bimberg, "Impact of size, shape, and composition on piezoelectric effects and electronic properties of In(Ga)As∕GaAs quantum dots," Phys. Rev. B 76(20), 205324 (2007). S. Vieira, "The behavior and calibration of some piezoelectric ceramics used in the STM," IBM J. Res. Dev. 30(5), 553–556 (1986). Adlinger, J. Afzelius, M. Atkinson, P. Basso Basset, F. Baunack, S. Becher, C. Bennett, A. J. Bennett, C. H. Benyoucef, M. Bester, G. Biasiol, G. Bimberg, D. Bloch, J. Brassard, G. Briegel, H.-J. Buratto, S. K. Burger, S. Bussières, F. Carmele, A. Carson, P. J. Chen, D. Chen, Y. H. Choi, K. S. Chung, T.-H. Cirac, J. I. Clarke, E. Cooper, K. da Silva, S. F. C. Dada, A. C. Dai, Q. Deng, H. Deng, Y.-H. Ding, F. Ding, X. Dörr, K. Dousse, A. Dür, W. Dwir, B. Ekert, A. K. Ellis, D. J. P. Farrer, I. Farrow, T. Felici, M. Ferrier, A. Figueroa, E. Fischbach, S. Forchel, A. Gallo, P. Gerardot, B. D. Gericke, F. Gisin, N. Goldner, P. Gschrey, M. Han, P. He, Y.-M. Heindel, T. Herrmann, H. Höfling, S. Hu, H. Huo, Y.-H. Iles-Smith, J. Imamoglu, A. Jöns, K. D. Kaganskiy, A. Kapon, E. Keil, R. Kimble, H. J. Kiraz, A. Kistner, C. Knorr, A. Krápek, V. Kremer, P. E. Krüger, L. Kumar, P. Lanco, L. Laurat, J. Lavoie, J. Lemaître, A. Li, J. Li, J.-P. Lita, A. Liu, J. Liu, R.-Z. Lu, C.-Y. Ma, Y. Marsili, F. Martín-Sánchez, J. Mason, M. D. Miard, A. Michler, P. Mirin, R. Nam, S. Nicoll, C. A. Nölleke, C. Pan, J.-W. Patel, R. B. Petroff, P. M. Piredda, G. Plumhof, J. D. Qin, J. Rahimi-Iman, A. Rastelli, A. Reindl, M. Reiserer, A. Reitzenstein, S. Rempe, G. Rezaev, R. O. Ritchie, D. A. Ritter, S. Roblin, C. Rodt, S. Rota, M. B. Rudra, A. Sagnes, I. Schimpf, C. Schlehahn, A. Schliwa, A. Schmidt, F. Schmidt, O. G. Schmidt, R. Schnauber, P. Schneider, C. Schoenfeld, W. V. Schröter, J. R. Schulze, J.-H. See, P. Seifried, M. Semenova, E. Senellart, P. Shields, A. J. Silberhorn, C. Singh, R. Skiba-Szymanska, J. Sorba, L. Specht, H. P. Srinivasan, K. Strittmatter, A. Stroj, S. Strouse, G. F. Su, R. Suczynski, J. Surrente, A. Tauscher, E. B. Y. Tedeschi, D. Thew, R. Thoma, A. Tian, J. Tiranov, A. Trotta, R. Unitt, D. C. Uphoff, M. Verma, V. Vieira, S. Wang, X. Ward, M. B. Wei, Y. Wildmann, J. S. Winkelnkemper, M. Wohlfeil, B. Wolters, J. Yang, J. Yang, X. Yao, B. Yu, Y. Zallo, E. Zander, T. Zeuner, K. D. Zhong, H.-S. Zoller, P. Zopf, M. Zwiller, V. Adv. Mater. (1) Appl. Phys. Lett. (2) IBM J. Res. Dev. (1) J. Adv. Dielectr. (1) J. Vac. Sci. Technol., B: Nanotechnol. Microelectron.: Mater., Process., Meas., Phenom. (1) Nat. Commun. (2) Nat. Nanotechnol. (1) Nat. Photonics (1) Opt. Express (1) Optica (1) Phys. Rev. B (3) Phys. Rev. Lett. (6) Phys. Status Solidi B (1) Rev. Sci. Instrum. (1) Equations on this page are rendered with MathJax. Learn more. (1) g ( 2 ) ( τ ) = ( p 0 e − | τ t d | + p t ∑ i = − 5 i ≠ 0 5 ⁡ e − | τ − ( i f ) t d | ) ⊗ G ( τ , σ res ) (2) Δ E ( c ) = a g ( Δ ϵ hy ( c ) ) − 1 2 b v ( Δ ϵ biax ( c ) ) = ± 1.25 meV . (3) ϵ exp = d 31 ⋅ F max = 1500 p C N − 1 ⋅ 20 kV cm − 1 = 3 ⋅ 10 − 3 ,
CommonCrawl
SIAM Journal on Control and Optimization, Vol. 47, Iss. 3 (2008). DOI: 10.1137/070694016
A Priori Error Estimates for Space-Time Finite Element Discretization of Parabolic Optimal Control Problems Part I: Problems Without Control Constraints
Dominik Meidner and Boris Vexler
In this paper we develop a priori error analysis for Galerkin finite element discretizations of optimal control problems governed by linear parabolic equations. The space discretization of the state variable is done using usual conforming finite elements, whereas the time discretization is based on discontinuous Galerkin methods. For different types of control discretizations we provide error estimates of optimal order with respect to both space and time discretization parameters. The paper is divided into two parts. In the first part we develop some stability and error estimates for space-time discretization of the state equation and provide error estimates for optimal control problems without control constraints. In the second part of the paper, the techniques and results of the first part are used to develop a priori error analysis for optimal control problems with pointwise inequality constraints on the control variable.
Keywords: optimal control, parabolic equations, error estimates, finite elements.
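To make the setting concrete, a generic linear-quadratic model problem of the kind treated in this line of work is sketched below (an illustrative formulation; the paper's exact functional, function spaces, and control discretizations may differ):

\[
\min_{q,\,u}\; J(q,u) = \frac{1}{2}\int_0^T\!\!\int_\Omega \big(u(x,t)-\hat u(x,t)\big)^2\,dx\,dt
  + \frac{\alpha}{2}\int_0^T\!\!\int_\Omega q(x,t)^2\,dx\,dt, \qquad \alpha>0,
\]

subject to the linear parabolic state equation

\[
\partial_t u - \Delta u = f + q \ \text{ in } \Omega\times(0,T), \qquad
u = 0 \ \text{ on } \partial\Omega\times(0,T), \qquad
u(\cdot,0) = u_0 \ \text{ in } \Omega .
\]

Here the state $u$ is discretized by conforming finite elements in space and by a discontinuous Galerkin method in time, and the error estimates referred to above are of optimal order in the spatial mesh size $h$ and the time step $k$.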
Volume 47, Issue 3 (2008), pp. 1150-1177. Submitted: 08 June 2007; accepted: 27 November 2007; published online: 19 March 2008. Copyright © 2008 Society for Industrial and Applied Mathematics. MSC codes: 49N10, 49M25. Article DOI: 10.1137/070694016. ISSN (print): 0363-0129; ISSN (online): 1095-7138. Publisher: Society for Industrial and Applied Mathematics.
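For orientation, the lowest-order member of the discontinuous Galerkin time-stepping family used above, dG(0) (piecewise-constant trial and test functions in time), reduces on each subinterval $I_m=(t_{m-1},t_m]$ of length $k_m$ to a backward-Euler-type step; a sketch for an abstract parabolic equation $\partial_t u + Au = f$ (a standard identity for this scheme family, not a formula quoted from the paper):

\[
\frac{U_m - U_{m-1}}{k_m} + A\,U_m = \frac{1}{k_m}\int_{I_m} f(t)\,dt, \qquad m=1,\dots,M,
\]

where $A$ denotes the spatially discretized elliptic operator and $U_m$ approximates the state at $t_m$. Replacing the averaged right-hand side by $f(t_m)$ gives the classical backward Euler method.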
CommonCrawl
Strichartz estimates and local regularity for the elastic wave equation with singular potentials
Seongyeon Kim 1, Yehyun Kwon 2 and Ihyeok Seo 1,*
1. Department of Mathematics, Sungkyunkwan University, Suwon 16419, Republic of Korea
2. School of Mathematics, Korea Institute for Advanced Study, Seoul 02455, Republic of Korea
* Corresponding author: Ihyeok Seo
Received July 2020; revised August 2020; published October 2020.
Fund Project: The second author is supported by a KIAS Individual Grant (MG073701) at Korea Institute for Advanced Study and NRF-2020R1F1A1A01073520. The third author is supported by NRF-2019R1F1A1061316.
We obtain weighted $L^2$ estimates for the elastic wave equation perturbed by singular potentials including the inverse-square potential. We then deduce the Strichartz estimates under the sole ellipticity condition for the Lamé operator $-\Delta^\ast$. This improves upon the previous result in [1] which relies on a stronger condition to guarantee the self-adjointness of $-\Delta^\ast$. Furthermore, by establishing local energy estimates for the elastic wave equation we also prove that the solution has local regularity.
Keywords: Strichartz estimates, regularity, elastic wave equation.
Mathematics Subject Classification: Primary: 35B45, 35B65; Secondary: 35L05.
Citation: Seongyeon Kim, Yehyun Kwon, Ihyeok Seo. Strichartz estimates and local regularity for the elastic wave equation with singular potentials. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2020344
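For readers outside elasticity, the objects named in the abstract have the following standard form (a textbook formulation given for orientation, not quoted from the paper; the authors' precise hypotheses may differ). The Lamé operator acts on vector fields $u:\mathbb{R}^n\to\mathbb{R}^n$ by

\[
\Delta^\ast u = \mu\,\Delta u + (\lambda+\mu)\,\nabla(\nabla\cdot u),
\]

with Lamé constants $\lambda,\mu$, and the ellipticity condition is $\mu>0$ and $\lambda+2\mu>0$. The inverse-square potential mentioned above is the scaling-critical potential $V(x)=a\,|x|^{-2}$, for which perturbative arguments based on integrability of $V$ are not available.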
References:
[1] J. A. Barceló, L. Fanelli, A. Ruiz, M. C. Vilela and N. Visciglia, Resolvent and Strichartz estimates for elastic wave equations, Appl. Math. Lett., 49 (2015), 33-41. doi: 10.1016/j.aml.2015.04.013.
[2] J. A. Barceló, M. Folch-Gabayet, S. Pérez-Esteva, A. Ruiz and M. C. Vilela, Limiting absorption principles for the Navier equation in elasticity, Ann. Sc. Norm. Super. Pisa Cl. Sci., 11 (2012), 817-842.
[3] M. Beals and W. Strauss, $L^p$ estimates for the wave equation with a potential, Comm. Partial Differential Equations, 18 (1993), 1365-1397. doi: 10.1080/03605309308820977.
[4] N. Burq, F. Planchon, J. G. Stalker and A. S. Tahvildar-Zadeh, Strichartz estimates for the wave and Schrödinger equations with the inverse-square potential, J. Funct. Anal., 203 (2003), 519-549. doi: 10.1016/S0022-1236(03)00238-6.
[5] N. Burq, F. Planchon, J. G. Stalker and A. S. Tahvildar-Zadeh, Strichartz estimates for the wave and Schrödinger equations with potentials of critical decay, Indiana Univ. Math. J., 53 (2004), 1665-1682. doi: 10.1512/iumj.2004.53.2541.
[6] F. Chiarenza and M. Frasca, A remark on a paper by C. Fefferman: "The uncertainty principle", Proc. Amer. Math. Soc., 108 (1990), 407-409. doi: 10.2307/2048289.
[7] M. Christ and A. Kiselev, Maximal functions associated to filtrations, J. Funct. Anal., 179 (2001), 409-425. doi: 10.1006/jfan.2000.3687.
[8] R. Coifman and R. Rochberg, Another characterization of BMO, Proc. Amer. Math. Soc., 79 (1980), 249-254. doi: 10.2307/2043245.
[9] L. Cossetti, Bounds on eigenvalues of perturbed Lamé operators with complex potentials, preprint, arXiv: 1904.08445.
[10] S. Cuccagna, On the wave equation with a potential, Comm. Partial Differential Equations, 25 (1999), 1549-1565. doi: 10.1080/03605300008821559.
[11] P. D'Ancona, On large potential perturbations of the Schrödinger, wave and Klein-Gordon equations, Commun. Pure Appl. Anal., 19 (2020), 609-640. doi: 10.3934/cpaa.2020029.
[12] V. Georgiev and N. Visciglia, Decay estimates for the wave equation with potential, Comm. Partial Differential Equations, 28 (2003), 1325-1369. doi: 10.1081/PDE-120024371.
[13] M. Goldberg, L. Vega and N. Visciglia, Counterexamples of Strichartz inequalities for Schrödinger equations with repulsive potentials, Int. Math. Res. Not., (2006), Art. ID 13927, 16 pp. doi: 10.1155/IMRN/2006/13927.
[14] M. Keel and T. Tao, Endpoint Strichartz estimates, Amer. J. Math., 120 (1998), 955-980. doi: 10.1353/ajm.1998.0039.
[15] S. Kim, I. Seo and J. Seok, Note on Strichartz inequalities for the wave equation with potential, Math. Inequal. Appl., 23 (2020), 377-382. doi: 10.7153/mia-2020-23-29.
[16] S. Klainerman and M. Machedon, Space-time estimates for null forms and the local existence theorem, Comm. Pure Appl. Math., 46 (1993), 1221-1268. doi: 10.1002/cpa.3160460902.
[17] L. D. Landau and E. M. Lifshitz, Theory of Elasticity, Pergamon, 1970.
[18] H. Lindblad and C. D. Sogge, On existence and scattering with minimal regularity for semilinear wave equations, J. Funct. Anal., 130 (1995), 357-426. doi: 10.1006/jfan.1995.1075.
[19] D. Maharani, J. Widjaja and M. Wono Setya Budhi, Boundedness of Mikhlin Operator in Morrey Space, J. Phys.: Conf. Ser., 1180 (2019), 012002. doi: 10.1088/1742-6596/1180/1/012002.
[20] J. E. Marsden and T. J. R. Hughes, Mathematical Foundations of Elasticity, Prentice Hall, 1983; reprinted by Dover Publications, N.Y., 1994.
[21] S. Petermichl, The sharp weighted bound for the Riesz transforms, Proc. Amer. Math. Soc., 136 (2008), 1237-1249. doi: 10.1090/S0002-9939-07-08934-4.
[22] F. Planchon, J. G. Stalker and A. S. Tahvildar-Zadeh, $L^p$ estimates for the wave equation with the inverse-square potential, Discrete Contin. Dyn. Syst., 9 (2003), 427-442. doi: 10.3934/dcds.2003.9.427.
[23] A. Ruiz and L. Vega, Local regularity of solutions to wave equations with time-dependent potentials, Duke Math. J., 76 (1994), 913-940. doi: 10.1215/S0012-7094-94-07636-9.
[24] H. Sohr, The Navier-Stokes Equations. An Elementary Functional Analytic Approach, Modern Birkhäuser Classics, Birkhäuser/Springer Basel AG, Basel, 2001. doi: 10.1007/978-3-0348-8255-2.
[25] E. Stein, Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals, Princeton University Press, Princeton, NJ, 1993.
[26] R. S. Strichartz, Restrictions of Fourier transforms to quadratic surfaces and decay of solutions of wave equations, Duke Math. J., 44 (1977), 705-714. doi: 10.1215/S0012-7094-77-04430-1.
[27] K. Yajima, The $W^{k, p}$-continuity of wave operators for Schrödinger operators, J. Math. Soc. Japan, 47 (1995), 551-581. doi: 10.2969/jmsj/04730551.
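For reference, the classical Strichartz estimates for the unperturbed wave equation, as in Strichartz [26] and Keel–Tao [14] above, can be stated in the following common form (a standard statement from the literature, not a result of this paper): if $\partial_t^2 u - \Delta u = 0$ on $\mathbb{R}\times\mathbb{R}^n$ with $u(0)=f$ and $\partial_t u(0)=g$, then

\[
\|u\|_{L^q_t L^r_x(\mathbb{R}\times\mathbb{R}^n)} \lesssim \|f\|_{\dot H^s} + \|g\|_{\dot H^{s-1}}
\]

for wave-admissible exponents

\[
q,\,r \ge 2, \qquad \frac{1}{q}+\frac{n-1}{2r} \le \frac{n-1}{4}, \qquad \frac{1}{q}+\frac{n}{r} = \frac{n}{2}-s,
\]

with the usual exclusion of the endpoint $(q,r,n)=(2,\infty,3)$. The paper above establishes estimates of this type for the elastic wave equation with singular potentials.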
CommonCrawl
Neutron Star Extreme Matter Observatory: A kilohertz-band gravitational-wave detector in the global network
Gravitational Wave Astronomy
K. Ackley, V. B. Adya, P. Agrawal, P. Altin, G. Ashton, M. Bailes, E. Baltinas, A. Barbuio, D. Beniwal, C. Blair, D. Blair, G. N. Bolingbroke, V. Bossilkov, S. Shachar Boublil, D. D. Brown, B. J. Burridge, J. Calderon Bustillo, J. Cameron, H. Tuong Cao, J. B. Carlin, S. Chang, P. Charlton, C. Chatterjee, D. Chattopadhyay, X. Chen, J. Chi, J. Chow, Q. Chu, A. Ciobanu, T. Clarke, P. Clearwater, J. Cooke, D. Coward, H. Crisp, R. J. Dattatri, A. T. Deller, D. A. Dobie, L. Dunn, P. J. Easter, J. Eichholz, R. Evans, C. Flynn, G. Foran, P. Forsyth, Y. Gai, S. Galaudage, D. K. Galloway, B. Gendre, B. Goncharov, S. Goode, D. Gozzard, B. Grace, A. W. Graham, A. Heger, F. Hernandez Vivanco, R. Hirai, N. A. Holland, Z. J. Holmes, E. Howard, E. Howell, G. Howitt, M. T. Hübner, J. Hurley, C. Ingram, V. Jaberian Hamedan, K. Jenner, L. Ju, D. P. Kapasi, T. Kaur, N. Kijbunchoo, M. Kovalam, R. Kumar Choudhary, P. D. Lasky, M. Y. M. Lau, J. Leung, J. Liu, K. Loh, A. Mailvagan, I. Mandel, J. J. McCann, D. E. McClelland, K. McKenzie, D. McManus, T. McRae, A. Melatos, P. Meyers, H. Middleton, M. T. Miles, M. Millhouse, Y. Lun Mong, B. Mueller, J. Munch, J. Musiov, S. Muusse, R. S. Nathan, Y. Naveh, C. Neijssel, B. Neil, S. W. S. Ng, V. Oloworaran, D. J. Ottaway, M. Page, J. Pan, M. Pathak, E. Payne, J. Powell, J. Pritchard, E. Puckridge, A. Raidani, V.
Rallabhandi, D. Reardon, J. A. Riley, L. Roberts, I. M. Romero-Shaw, T. J. Roocke, G. Rowell, N. Sahu, N. Sarin, L. Sarre, H. Sattari, M. Schiworski, S. M. Scott, R. Sengar, D. Shaddock, R. Shannon, J. SHI, P. Sibley, B. J. J. Slagmolen, T. Slaven-Blair, R. J. E. Smith, J. Spollard, L. Steed, L. Strang, H. Sun, A. Sunderland, S. Suvorova, C. Talbot, E. Thrane, D. Töyrä, P. Trahanas, A. Vajpeyi, J. V. van Heijningen, A. F. Vargas, P. J. Veitch, A. Vigna-Gomez, A. Wade, K. Walker, Z. Wang, R. L. Ward, K. Ward, S. Webb, L. Wen, K. Wette, R. Wilcox, J. Winterflood, C. Wolf, B. Wu, M. Jet Yap, Z. You, H. Yu, J. Zhang, J. Zhang, C. Zhao, X. Zhu Journal: Publications of the Astronomical Society of Australia / Volume 37 / 2020 Published online by Cambridge University Press: 05 November 2020, e047 Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible by laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves. These will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves contains most information in the 2–4 kHz frequency band, which is outside of the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high-circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to full third-generation detectors at a fraction of the cost. Such sensitivity changes expected event rates for detection of post-merger remnants from approximately one per few decades with two A+ detectors to a few per year and potentially allow for the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica. A Large Outbreak of Peritonitis Among Patients on Peritoneal Dialysis (PD) Following Transition in PD Equipment Sukarma Tanwar, Lauren Tanz, Ana Bardossy, Christine Szablewski, Nicole Gualandi, Matthew Brian Crist, Paige Gable, Molly Hoffman, Carolyn Herzig, Joann F Gruber, Kristina Lam, Valerie Stevens, Carries Sanders, Hollis R. Houston, Judith Noble-Wang, Zack Moore, Melissa Tobin-Dangelo, Jennifer MacFarquha, Priti Patel, Shannon Novosad Journal: Infection Control & Hospital Epidemiology / Volume 41 / Issue S1 / October 2020 Published online by Cambridge University Press: 02 November 2020, pp. s95-s96 Print publication: October 2020 Background: Peritoneal dialysis is a type of dialysis performed by patients in their homes; patients receive training from dialysis clinic staff. Peritonitis is a serious complication of peritoneal dialysis, most commonly caused by gram-positive organisms. During March‒April 2019, a dialysis provider organization transitioned ~400 patients to a different manufacturer of peritoneal dialysis equipment and supplies (from product A to B). Shortly thereafter, patients experienced an increase in peritonitis episodes, caused predominantly by gram-negative organisms. In May 2019, we initiated an investigation to determine the source. 
Methods: We conducted case finding, reviewed medical records, observed peritoneal dialysis procedures and trainings, and performed patient home visits and interviews. A 1:1 matched case–control study was performed in 1 state. A case had ≥2 of the following: (1) positive peritoneal fluid culture, (2) high peritoneal fluid white cell count with ≥50% polymorphonuclear cells, or (3) cloudy peritoneal fluid and/or abdominal pain. Controls were matched to cases by week of clinic visit. Conditional logistic regression was used to estimate univariate matched odds ratios (mOR) and 95% confidence intervals (CIs). We conducted microbiological testing of peritoneal dialysis fluid bags to rule out product contamination. Results: During March‒September 2019, we identified 157 cases of peritonitis across 15 clinics in 2 states (attack rate ≈ 39%). Staphylococcus spp (14%), Serratia spp (12%) and Klebsiella spp (6.3%) were the most common pathogens. Steps to perform peritoneal dialysis using product B differed from product A in several key areas; however, no common errors in practice were identified to explain the outbreak. Patient training on transitioning products was not standardized. Outcomes of the 73 cases in the case–control study included hospitalization (77%), peritoneal dialysis failure (40%), and death (7%). The median duration of training prior to product transition was 1 day for cases and controls (P = .86). Transitioning to product B (mOR, 18.00; 95% CI, 2.40‒134.83), using product B (mOR, 18.26; 95% CI, 3.86‒∞), drain-line reuse (mOR, 4.67; 95% CI, 1.34‒16.24) and performing daytime exchanges (mOR, 3.63; 95% CI, 1.71‒8.45) were associated with peritonitis. After several interventions, including transition of patients back to product A (Fig. 1), overall cases declined. Sterility testing of samples from 23 unopened product B peritoneal dialysis solution bags showed no contamination. Conclusions: Multiple factors may have contributed to this large outbreak, including a rapid transition in peritoneal dialysis products and potentially inadequate patient training. Efforts are needed to identify and incorporate best training practices, and product advances are desired to improve the safety of patient transitions between different types of peritoneal dialysis equipment. Funding: None Disclosures: None
Evaluating winter annual grass control and native species establishment following applications of indaziflam on rangeland
Shannon L. Clark, Derek J. Sebastian, Scott J. Nissen, James R. Sebastian Journal: Invasive Plant Science and Management / Volume 13 / Issue 3 / September 2020 Published online by Cambridge University Press: 14 August 2020, pp. 199-209 Print publication: September 2020 Indaziflam, a PRE herbicide option for weed management on rangeland and natural areas, provides long-term control of invasive winter annual grasses (IWAGs). Because indaziflam only provides PRE control of IWAGs, POST herbicides such as glyphosate can be mixed with indaziflam to control germinated IWAG seedlings. Field trials were conducted at three sites on the Colorado Front Range to evaluate glyphosate dose required to provide adequate POST IWAG control and compare long-term downy brome (Bromus tectorum L.), Japanese brome (Bromus arvensis L.), and feral rye (Secale cereale L.) control with indaziflam and imazapic.
Two of the three sites were void of desirable species, so species establishment through drill seeding was assessed, while the remnant native plant response was assessed at the third site. Herbicide applications were made March 2014 through April 2015, and two sites were drill seeded with native species 9 mo after herbicide application. Yearly visual control evaluations, biomass of all plant species, and drilled species stand counts were collected. Glyphosate at 474 g ae ha−1 reduced B. tectorum biomass to zero, while glyphosate at 631 g ae ha−1 was needed to reduce biomass to near zero at the S. cereale site. At all three sites, only indaziflam treatments had significant reductions in IWAG biomass compared with the nontreated check at 3 yr after treatment (YAT). By 3 YAT in the drill-seeded sites, cool-season grass frequency ranged from 37% to 69% within indaziflam treatments (73 and 102 g ai ha−1), while imazapic treatments ranged from 0% to 26% cool-season grass frequency. In the site with a remnant native plant community, indaziflam treatments resulted in a 3- to 4-fold increase in native grass biomass. These results indicate that the multiyear IWAG control provided by indaziflam can aid in desirable species reestablishment through drill seeding or response of the remnant plant community. The MeerKAT telescope as a pulsar facility: System verification and early science results from MeerTime M. Bailes, A. Jameson, F. Abbate, E. D. Barr, N. D. R. Bhat, L. Bondonneau, M. Burgay, S. J. Buchner, F. Camilo, D. J. Champion, I. Cognard, P. B. Demorest, P. C. C. Freire, T. Gautam, M. Geyer, J.-M. Griessmeier, L. Guillemot, H. Hu, F. Jankowski, S. Johnston, A. Karastergiou, R. Karuppusamy, D. Kaur, M. J. Keith, M. Kramer, J. van Leeuwen, M. E. Lower, Y. Maan, M. A. McLaughlin, B. W. Meyers, S. Osłowski, L. S. Oswald, A. Parthasarathy, T. Pennucci, B. Posselt, A. Possenti, S. M. Ransom, D. J. Reardon, A. Ridolfi, C. T. G. Schollar, M. Serylak, G. Shaifullah, M. Shamohammadi, R. M. Shannon, C. Sobey, X. Song, R. Spiewak, I. H. Stairs, B. W. Stappers, W. van Straten, A. Szary, G. Theureau, V. Venkatraman Krishnan, P. Weltevrede, N. Wex, T. D. Abbott, G. B. Adams, J. P. Burger, R. R. G. Gamatham, M. Gouws, D. M. Horn, B. Hugo, A. F. Joubert, J. R. Manley, K. McAlpine, S. S. Passmoor, A. Peens-Hough, Z. R Ramudzuli, A. Rust, S. Salie, L. C. Schwardt, R. Siebrits, G. Van Tonder, V. Van Tonder, M. G. Welz Published online by Cambridge University Press: 15 July 2020, e028 We describe system verification tests and early science results from the pulsar processor (PTUSE) developed for the newly commissioned 64-dish SARAO MeerKAT radio telescope in South Africa. MeerKAT is a high-gain ( ${\sim}2.8\,\mbox{K Jy}^{-1}$ ) low-system temperature ( ${\sim}18\,\mbox{K at }20\,\mbox{cm}$ ) radio array that currently operates at 580–1 670 MHz and can produce tied-array beams suitable for pulsar observations. This paper presents results from the MeerTime Large Survey Project and commissioning tests with PTUSE. Highlights include observations of the double pulsar $\mbox{J}0737{-}3039\mbox{A}$ , pulse profiles from 34 millisecond pulsars (MSPs) from a single 2.5-h observation of the Globular cluster Terzan 5, the rotation measure of Ter5O, a 420-sigma giant pulse from the Large Magellanic Cloud pulsar PSR $\mbox{J}0540{-}6919$ , and nulling identified in the slow pulsar PSR J0633–2015. One of the key design specifications for MeerKAT was absolute timing errors of less than 5 ns using their novel precise time system. 
Our timing of two bright MSPs confirm that MeerKAT delivers exceptional timing. PSR $\mbox{J}2241{-}5236$ exhibits a jitter limit of $<4\,\mbox{ns h}^{-1}$ whilst timing of PSR $\mbox{J}1909{-}3744$ over almost 11 months yields an rms residual of 66 ns with only 4 min integrations. Our results confirm that the MeerKAT is an exceptional pulsar telescope. The array can be split into four separate sub-arrays to time over 1 000 pulsars per day and the future deployment of S-band (1 750–3 500 MHz) receivers will further enhance its capabilities. Mental Disorders in Firefighters Following Large-Scale Disaster Shannon L. Wagner, Nicole White, Christine Randall, Cheryl Regehr, Marc White, Lynn E. Alden, Nicholas Buys, Mary G. Carey, Wayne Corneil, Trina Fyfe, Lynda R. Matthews, Alex Fraess-Phillips, Elyssa Krutop Journal: Disaster Medicine and Public Health Preparedness , First View Published online by Cambridge University Press: 27 May 2020, pp. 1-14 Firefighting service is known to involve high rates of exposure to potentially traumatic situations, and research on mental health in firefighting populations is of critical importance in understanding the impact of occupational exposure. To date, the literature concerning prevalence of trauma-related mental disorders such as posttraumatic stress disorder (PTSD) has not distinguished between symptomology associated routine duty-related exposure and exposure to large-scale disaster. The present systematic review synthesizes a heterogeneous cross-national literature on large-scale disaster exposure in firefighters and provides support for the hypothesis that the prevalence of PTSD, major depressive disorder, and anxiety disorders are elevated in firefighters compared with rates observed in the general population. In addition, we conducted narrative synthesis concerning several commonly assessed predictive factors for disorder and found that sociodemographic factors appear to bear a weak relationship to mental disorder, while incident-related factors, such as severity and duration of disaster exposure, bear a stronger and more consistent relationship to the development of PTSD and depression in cross-national samples. Future work should expand on these preliminary findings to better understand the impact of disaster exposure in firefighting personnel. An ultra-wide bandwidth (704 to 4 032 MHz) receiver for the Parkes radio telescope George Hobbs, Richard N. Manchester, Alex Dunning, Andrew Jameson, Paul Roberts, Daniel George, J. A. Green, John Tuthill, Lawrence Toomey, Jane F. Kaczmarek, Stacy Mader, Malte Marquarding, Azeem Ahmed, Shaun W. Amy, Matthew Bailes, Ron Beresford, N. D. R. Bhat, Douglas C.-J. Bock, Michael Bourne, Mark Bowen, Michael Brothers, Andrew D. Cameron, Ettore Carretti, Nick Carter, Santy Castillo, Raji Chekkala, Wan Cheng, Yoon Chung, Daniel A. 
Craig, Shi Dai, Joanne Dawson, James Dempsey, Paul Doherty, Bin Dong, Philip Edwards, Tuohutinuer Ergesh, Xuyang Gao, JinLin Han, Douglas Hayman, Balthasar Indermuehle, Kanapathippillai Jeganathan, Simon Johnston, Henry Kanoniuk, Michael Kesteven, Michael Kramer, Mark Leach, Vince Mcintyre, Vanessa Moss, Stefan Osłowski, Chris Phillips, Nathan Pope, Brett Preisig, Daniel Price, Ken Reeves, Les Reilly, John Reynolds, Tim Robishaw, Peter Roush, Tim Ruckley, Elaine Sadler, John Sarkissian, Sean Severs, Ryan Shannon, Ken Smart, Malcolm Smith, Stephanie Smith, Charlotte Sobey, Lister Staveley-Smith, Anastasios Tzioumis, Willem van Straten, Nina Wang, Linqing Wen, Matthew Whiting Published online by Cambridge University Press: 08 April 2020, e012 We describe an ultra-wide-bandwidth, low-frequency receiver recently installed on the Parkes radio telescope. The receiver system provides continuous frequency coverage from 704 to 4032 MHz. For much of the band ( ${\sim}60\%$ ), the system temperature is approximately 22 K and the receiver system remains in a linear regime even in the presence of strong mobile phone transmissions. We discuss the scientific and technical aspects of the new receiver, including its astronomical objectives, as well as the feed, receiver, digitiser, and signal processor design. We describe the pipeline routines that form the archive-ready data products and how those data files can be accessed from the archives. The system performance is quantified, including the system noise and linearity, beam shape, antenna efficiency, polarisation calibration, and timing stability. Pregnancy health in POWERMOM participants living in rural versus urban zip codes Jennifer M. Radin, Shaquille Peters, Lauren Ariniello, Shannon Wongvibulsin, Michael Galarnyk, Jill Waalen, Steven R. Steinhubl Journal: Journal of Clinical and Translational Science / Volume 4 / Issue 5 / October 2020 Published online by Cambridge University Press: 06 April 2020, pp. 457-462 Pregnant women living in rural locations in the USA have higher rates of maternal and infant mortality compared to their urban counterparts. One factor contributing to this disparity may be lack of representation of rural women in traditional clinical research studies of pregnancy. Barriers to participation often include transportation to research facilities, which are typically located in urban centers, childcare, and inability to participate during nonwork hours. POWERMOM is a digital research app which allows participants to share both survey and sensor data during their pregnancy. Through non-targeted, national outreach a study population of 3612 participants (591 from rural zip codes and 3021 from urban zip codes) have been enrolled so far in the study, beginning on March 16, 2017, through September 20, 2019. On average rural participants in our study were younger, had higher pre-pregnancy weights, were less racially diverse, and were more likely to plan a home birth compared to the urban participants. Both groups showed similar engagement in terms of week of pregnancy when they joined, percentage of surveys completed, and completion of the outcome survey after they delivered their baby. However, rural participants shared less HealthKit or sensor data compared to urban participants. Our study demonstrated the feasibility and effectiveness of enrolling pregnant women living in rural zip codes using a digital research study embedded within a popular pregnancy app. 
Future efforts to conduct remote digital research studies could help fill representation and knowledge gaps related to pregnant women. Evaluation of the National Healthcare Safety Network standardized infection ratio risk adjustment for healthcare-facility-onset Clostridioides difficile infection in intensive care, oncology, and hematopoietic cell transplant units in general acute-care hospitals Christopher R. Polage, Kathleen A. Quan, Keith Madey, Frank E. Myers, Debbra A. Wightman, Sneha Krishna, Jonathan D. Grein, Laurel Gibbs, Deborah Yokoe, Shannon C. Mabalot, Raymond Chinn, Amy Hallmark, Zachary Rubin, Michael Fontenot, Stuart Cohen, David Birnbaum, Susan S. Huang, Francesca J. Torriani Journal: Infection Control & Hospital Epidemiology / Volume 41 / Issue 4 / April 2020 Published online by Cambridge University Press: 13 February 2020, pp. 404-410 Print publication: April 2020 To evaluate the National Health Safety Network (NHSN) hospital-onset Clostridioides difficile infection (HO-CDI) standardized infection ratio (SIR) risk adjustment for general acute-care hospitals with large numbers of intensive care unit (ICU), oncology unit, and hematopoietic cell transplant (HCT) patients. Retrospective cohort study. Eight tertiary-care referral general hospitals in California. We used FY 2016 data and the published 2015 rebaseline NHSN HO-CDI SIR. We compared facility-wide inpatient HO-CDI events and SIRs, with and without ICU data, oncology and/or HCT unit data, and ICU bed adjustment. For these hospitals, the median unmodified HO-CDI SIR was 1.24 (interquartile range [IQR], 1.15–1.34); 7 hospitals qualified for the highest ICU bed adjustment; 1 hospital received the second highest ICU bed adjustment; and all had oncology-HCT units with no additional adjustment per the NHSN. Removal of ICU data and the ICU bed adjustment decreased HO-CDI events (median, −25%; IQR, −20% to −29%) but increased the SIR at all hospitals (median, 104%; IQR, 90%–105%). Removal of oncology-HCT unit data decreased HO-CDI events (median, −15%; IQR, −14% to −21%) and decreased the SIR at all hospitals (median, −8%; IQR, −4% to −11%). For tertiary-care referral hospitals with specialized ICUs and a large number of ICU beds, the ICU bed adjustor functions as a global adjustment in the SIR calculation, accounting for the increased complexity of patients in ICUs and non-ICUs at these facilities. However, the SIR decrease with removal of oncology and HCT unit data, even with the ICU bed adjustment, suggests that an additional adjustment should be considered for oncology and HCT units within general hospitals, perhaps similar to what is done for ICU beds in the current SIR. Pathogens causing central-line–associated bloodstream infections in acute-care hospitals—United States, 2011–2017 Shannon A. Novosad, Lucy Fike, Margaret A. Dudeck, Katherine Allen-Bridson, Jonathan R. Edwards, Chris Edens, Ronda Sinkowitz-Cochran, Krista Powell, David Kuhar Journal: Infection Control & Hospital Epidemiology / Volume 41 / Issue 3 / March 2020 Published online by Cambridge University Press: 09 January 2020, pp. 313-319 To describe pathogen distribution and rates for central-line–associated bloodstream infections (CLABSIs) from different acute-care locations during 2011–2017 to inform prevention efforts. CLABSI data from the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) were analyzed. 
Percentages and pooled mean incidence density rates were calculated for a variety of pathogens and stratified by acute-care location groups (adult intensive care units [ICUs], pediatric ICUs [PICUs], adult wards, pediatric wards, and oncology wards). From 2011 to 2017, 136,264 CLABSIs were reported to the NHSN by adult and pediatric acute-care locations; adult ICUs and wards reported the most CLABSIs: 59,461 (44%) and 40,763 (30%), respectively. In 2017, the most common pathogens were Candida spp/yeast in adult ICUs (27%) and Enterobacteriaceae in adult wards, pediatric wards, oncology wards, and PICUs (23%–31%). Most pathogen-specific CLABSI rates decreased over time, excepting Candida spp/yeast in adult ICUs and Enterobacteriaceae in oncology wards, which increased, and Staphylococcus aureus rates in pediatric locations, which did not change. The pathogens associated with CLABSIs differ across acute-care location groups. Learning how pathogen-targeted prevention efforts could augment current prevention strategies, such as strategies aimed at preventing Candida spp/yeast and Enterobacteriaceae CLABSIs, might further reduce national rates. The emission and scintillation properties of RRAT J2325−0530 at 154 MHz and 1.4 GHz B. W. Meyers, S. E. Tremblay, N. D. R. Bhat, R. M. Shannon, S. M. Ord, C. Sobey, M. Johnston-Hollitt, M. Walker, R. B. Wayth Published online by Cambridge University Press: 04 September 2019, e034 Rotating Radio Transients (RRATs) represent a relatively new class of pulsar, primarily characterised by their sporadic bursting emission of single pulses on time scales of minutes to hours. In addition to the difficulty involved in detecting these objects, low-frequency ( $ \lt 300\,\text{MHz}$ ) observations of RRATs are sparse, which makes understanding their broadband emission properties in the context of the normal pulsar population problematic. Here, we present the simultaneous detection of RRAT J2325−0530 using the Murchison Widefield Array (154 MHz) and Parkes radio telescope ( $1.4\,\text{GHz}$ ). On a single-pulse basis, we produce the first polarimetric profile of this pulsar, measure the spectral index ( $\alpha={-2.2\pm 0.1}$ ), pulse energy distributions, and present the pulse rates in the context of detections in previous epochs. We find that the distribution of time between subsequent pulses is consistent with a Poisson process and find no evidence of clustering over the $\sim\!1.5\,\text{h}$ observations. Finally, we are able to quantify the scintillation properties of RRAT J2325−0530 at 1.4 GHz, where the single pulses are modulated substantially across the observing bandwidth, and show that this characterisation is feasible even with irregular time sampling as a consequence of the sporadic emission behaviour. Effect of indaziflam on native species in natural areas and rangeland Journal: Invasive Plant Science and Management / Volume 12 / Issue 1 / March 2019 Published online by Cambridge University Press: 01 May 2019, pp. 60-67 Minimizing the negative ecological impacts of exotic plant invasions is one goal of land management. Using selective herbicides is one strategy to achieve this goal; however, the unintended consequences of this strategy are not always fully understood. The recently introduced herbicide indaziflam has a mode of action not previously used in non-crop weed management. Thus, there is limited information about the impacts of this active ingredient when applied alone or in combination with other non-crop herbicides. 
The objective of this research was to evaluate native species tolerance to indaziflam and imazapic applied alone and with other broadleaf herbicides. Replicated field plots were established at two locations in Colorado with a diverse mix of native forbs and grasses. Species richness and abundance were compared between the nontreated control plots and plots where indaziflam and imazapic were applied alone and in combination with picloram and aminocyclopyrachlor. Species richness and abundance did not decrease when indaziflam or imazapic were applied alone; however, species abundance was reduced by treatments containing picloram and aminocyclopyrachlor. Species richness was only impacted at one site 1 yr after treatment (YAT) by these broadleaf herbicides. Decreases in abundance were mainly due to reductions in forbs that resulted in a corresponding increase in grass cover. Our data suggest that indaziflam will control downy brome (Bromus tectorum L.) for multiple years without reduction in perennial species richness or abundance. If B. tectorum is present with perennial broadleaf weeds requiring the addition of herbicides like picloram or aminocyclopyrachlor, forb abundance could be reduced, and in some cases there could be a temporary reduction in perennial species richness. The performance and calibration of the CRAFT fly's eye fast radio burst survey C. W. James, K. W. Bannister, J.-P. Macquart, R. D. Ekers, S. Oslowski, R. M. Shannon, J. R. Allison, A. P. Chippendale, J. D. Collier, T. Franzen, A. W. Hotan, M. Leach, D. McConnell, M. A. Pilawa, M. A. Voronkov, M. T. Whiting Published online by Cambridge University Press: 22 February 2019, e009 The Commensal Real-time Australian Square Kilometre Array Pathfinder Fast Transients survey is the first extensive astronomical survey using phased array feeds. Since January 2017, it has been searching for fast radio bursts in fly's eye mode. Here, we present a calculation of the sensitivity and total exposure of the survey that detected the first 20 of these bursts, using the pulsars B1641-45 and B0833-45 as calibrators. The beamshape, antenna-dependent system noise, and the effects of radio-frequency interference and fluctuations during commissioning are quantified. Effective survey exposures and sensitivities are calculated as a function of the source counts distribution. Statistical 'stat' and systematics 'sys' effects are treated separately. The implied fast radio burst rate is significantly lower than the 37 sky−1 day−1 calculated using nominal exposures and sensitivities for this same sample by Shannon et al. (2018). At the Euclidean (best-fit) power-law index of −1.5 (−2.2), the rate is $12.7_{-2.2}^{+3.3}$ (sys) ± 3.6 (stat) sky−1 day−1 ( $20.7_{-1.7}^{+2.1}$ (sys) ± 2.8 (stat) sky−1 day−1) above a threshold of 56.6 ± 6.6(sys) Jy ms (40.4 ± 1.2(sys) Jy ms). This strongly suggests that these calculations be performed for other FRB-hunting experiments, allowing meaningful comparisons to be made between them. Academic Physical Medicine and Rehabilitation Acute Care Consultations Shannon L. MacDonald, Lawrence R. Robinson Journal: Canadian Journal of Neurological Sciences / Volume 45 / Issue 4 / July 2018 Published online by Cambridge University Press: 08 May 2018, pp. 470-473 Print publication: July 2018 The objective of this study was to describe the provision of Physical Medicine and Rehabilitation acute care consultations in the United States and Canada. 
Physical Medicine and Rehabilitation department chairs/division directors at academic centers in Canada and the United States were mailed an 18-item questionnaire. Seven of 13 (54%) Canadian and 26/78 (33%) American surveys were returned. A majority of Canadian and American academic institutions provide acute care consultations; however, there were some national differences. American institutions see larger volumes of patients, and more American respondents indicated using a dedicated acute care consultation service model compared with Canadians. Faculty mentorship during residency and professional development among practising emergency physicians Shannon M. Fernando, Warren J. Cheung, Stephen B. Choi, Lisa Thurgur, Jason R. Frank Journal: Canadian Journal of Emergency Medicine / Volume 20 / Issue 6 / November 2018 Print publication: November 2018 Mentorship is perceived to be an important component of residency education. However, evidence of the impact of mentorship on professional development in Emergency Medicine (EM) is lacking. Online survey distributed to attending physician members of the Canadian Association of Emergency Physicians (CAEP), using a modified Dillman method. Survey contained questions about mentorship during residency training, and perceptions of the impact of mentorship on career development. The response rate was 23.5% (309/1314). 63.6% reported having at least one mentor during residency. The proportion of participants with a formal mentorship component during residency was higher among those with mentors (44.5%) compared to those without any formal mentorship component during residency (8.0%, p<0.001). The most common topics discussed with mentors were career planning and work-life balance. The least common topics included research and finances. While many participants consulted their mentor regarding their first job (56.5%), fewer consulted their mentor regarding subspecialty training (45.1%) and research (41.1%). 71.8% chose to work in a similar centre as their mentor, but few completed the same subspecialty (24.8%), or performed similar research (30.4%). 94.1% stated that mentorship was important to success during residency. Participants in a formal mentorship program did not rate their experience of mentorship higher than those without a formal program. Among academic EM physicians with an interest in mentorship, mentorship during EM residency may have a greater association with location of practice than academic scholarship or subspecialty choice. Formal mentorship programs increase the likelihood of obtaining a mentor, but do not appear to improve reported mentorship experiences. Resident Physician Knowledge of Urine Testing and Treatment Over Four Years Shannon L. Andrews, Lilian M. Abbo, James R. Johnson, Michael A. Kuskowski, Bhavarth S. Shukla, Dimitri M. Drekonja Journal: Infection Control & Hospital Epidemiology / Volume 39 / Issue 5 / May 2018 Print publication: May 2018 We surveyed resident physicians at 2 academic medical centers regarding urinary testing and treatment as they progressed through training. Demographics and self-reported confidence were compared to overall knowledge using clinical vignette-based questions. Overall knowledge was 40% in 2011 and increased to 48%, 55%, and 63% in subsequent years (P<.001). Infect Control Hosp Epidemiol 2018;39:616–618 Genetic and Environmental Contributions of Negative Valence Systems to Internalizing Pathways Jennifer L. Cecilione, Lance M. Rappaport, Shannon E. Hahn, Audrey E. Anderson, Laura E. 
Hazlett, Jason R. Burchett, Ashlee A. Moore, Jeanne E. Savage, John M. Hettema, Roxann Roberson-Nay Journal: Twin Research and Human Genetics / Volume 21 / Issue 1 / February 2018 Published online by Cambridge University Press: 25 January 2018, pp. 12-23 Print publication: February 2018 The genetic and environmental contributions of negative valence systems (NVS) to internalizing pathways study (also referred to as the Adolescent and Young Adult Twin Study) was designed to examine varying constructs of the NVS as they relate to the development of internalizing disorders from a genetically informed perspective. The goal of this study was to evaluate genetic and environmental contributions to potential psychiatric endophenotypes that contribute to internalizing psychopathology by studying adolescent and young adult twins longitudinally over a 2-year period. This report details the sample characteristics, study design, and methodology of this study. The first wave of data collection (i.e., time 1) is complete; the 2-year follow-up (i.e., time 2) is currently underway. A total of 430 twin pairs (N = 860 individual twins; 166 monozygotic pairs; 57.2% female) and 422 parents or legal guardians participated at time 1. Twin participants completed self-report surveys and participated in experimental paradigms to assess processes within the NVS. Additionally, parents completed surveys to report on themselves and their twin children. Findings from this study will help clarify the genetic and environmental influences of the NVS and their association with internalizing risk. The goal of this line of research is to develop methods for early internalizing disorder risk detection. Follow Up of GW170817 and Its Electromagnetic Counterpart by Australian-Led Observing Programmes I. Andreoni, K. Ackley, J. Cooke, A. Acharyya, J. R. Allison, G. E. Anderson, M. C. B. Ashley, D. Baade, M. Bailes, K. Bannister, A. Beardsley, M. S. Bessell, F. Bian, P. A. Bland, M. Boer, T. Booler, A. Brandeker, I. S. Brown, D. A. H. Buckley, S.-W. Chang, D. M. Coward, S. Crawford, H. Crisp, B. Crosse, A. Cucchiara, M. Cupák, J. S. de Gois, A. Deller, H. A. R. Devillepoix, D. Dobie, E. Elmer, D. Emrich, W. Farah, T. J. Farrell, T. Franzen, B. M. Gaensler, D. K. Galloway, B. Gendre, T. Giblin, A. Goobar, J. Green, P. J. Hancock, B. A. D. Hartig, E. J. Howell, L. Horsley, A. Hotan, R. M. Howie, L. Hu, Y. Hu, C. W. James, S. Johnston, M. Johnston-Hollitt, D. L. Kaplan, M. Kasliwal, E. F. Keane, D. Kenney, A. Klotz, R. Lau, R. Laugier, E. Lenc, X. Li, E. Liang, C. Lidman, L. C. Luvaul, C. Lynch, B. Ma, D. Macpherson, J. Mao, D. E. McClelland, C. McCully, A. Möller, M. F. Morales, D. Morris, T. Murphy, K. Noysena, C. A. Onken, N. B. Orange, S. Osłowski, D. Pallot, J. Paxman, S. B. Potter, T. Pritchard, W. Raja, R. Ridden-Harper, E. Romero-Colmenero, E. M. Sadler, E. K. Sansom, R. A. Scalzo, B. P. Schmidt, S. M. Scott, N. Seghouani, Z. Shang, R. M. Shannon, L. Shao, M. M. Shara, R. Sharp, M. Sokolowski, J. Sollerman, J. Staff, K. Steele, T. Sun, N. B. Suntzeff, C. Tao, S. Tingay, M. C. Towner, P. Thierry, C. Trott, B. E. Tucker, P. Väisänen, V. Venkatraman Krishnan, M. Walker, L. Wang, X. Wang, R. Wayth, M. Whiting, A. Williams, T. Williams, C. Wolf, C. Wu, X. Wu, J. Yang, X. Yuan, H. Zhang, J. Zhou, H. 
Zovaro Published online by Cambridge University Press: 20 December 2017, e069 The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor. 2307: Resting state network profiles of Alzheimer disease and frontotemporal dementia: A preliminary examination Joey Annette Contreras, Shannon L. Risacher, Mario Dzemidzic, John D. West, Brenna C. McDonald, Martin R. Farlow, Brandy R. Matthews, Liana G. Apostolova, Jared Brosch, Bernard Ghetti, Joaquin GoÑi Journal: Journal of Clinical and Translational Science / Volume 1 / Issue S1 / September 2017 Published online by Cambridge University Press: 10 May 2018, p. 6 OBJECTIVES/SPECIFIC AIMS: Recent evidence from resting-state fMRI studies have shown that brain network connectivity is altered in patients with neurodegenerative disorders. However, few studies have examined the complete connectivity patterns of these well-reported RSNs using a whole brain approach and how they compare between dementias. Here, we used advanced connectomic approaches to examine the connectivity of RSNs in Alzheimer disease (AD), Frontotemporal dementia (FTD), and age-matched control participants. METHODS/STUDY POPULATION: In total, 44 participants [27 controls (66.4±7.6 years), 13 AD (68.5.63±13.9 years), 4 FTD (59.575±12.2 years)] from an ongoing study at Indiana University School of Medicine were used. Resting-state fMRI data was processed using an in-house pipeline modeled after Power et al. (2014). Images were parcellated into 278 regions of interest (ROI) based on Shen et al. (2013). Connectivity between each ROI pair was described by Pearson correlation coefficient. Brain regions were grouped into 7 canonical RSNs as described by Yeo et al. (2015). Pearson correlation values were then averaged across pairs of ROIs in each network and averaged across individuals in each group. These values were used to determine relative expression of FC in each RSN (intranetwork) and create RSN profiles for each group. 
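The averaging step described in the methods above (grouping ROI-pair Pearson correlations by canonical network and averaging within each group to get an intranetwork FC value per network) can be sketched in a few lines of NumPy. This is only an illustrative sketch: the array shapes, random data, and network labels below are assumptions for the example, not taken from the study's actual pipeline.

```python
import numpy as np

# Illustrative stand-ins: ts is a (timepoints x n_roi) matrix of ROI time series,
# and labels assigns each ROI to one of 7 canonical networks. Neither comes from
# the study; both are assumptions for this example.
rng = np.random.default_rng(0)
n_time, n_roi, n_net = 200, 278, 7
ts = rng.standard_normal((n_time, n_roi))
labels = rng.integers(0, n_net, size=n_roi)

fc = np.corrcoef(ts, rowvar=False)          # ROI-by-ROI Pearson correlation matrix

# Average the pairwise correlations within each network (intranetwork FC),
# excluding the diagonal self-correlations.
profile = np.zeros(n_net)
for net in range(n_net):
    idx = np.where(labels == net)[0]
    block = fc[np.ix_(idx, idx)]
    off_diag = block[~np.eye(len(idx), dtype=bool)]
    profile[net] = off_diag.mean()

print(np.round(profile, 3))                 # one mean FC value per network
```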
RESULTS/ANTICIPATED RESULTS: Our findings support previous literature which shows that limbic networks are disrupted in FTLD participants compared with AD and age-matched controls. In addition, interactions between different RSNs was also examined and a significant difference between controls and AD subjects was found between FP and DMN RSNs. Similarly, previous literature has reported a disruption between executive (frontoparietal) network and default mode network in AD compared with controls. DISCUSSION/SIGNIFICANCE OF IMPACT: Our approach allows us to create profiles that could help compare intranetwork FC in different neurodegenerative diseases. Future work with expanded samples will help us to draw more substantial conclusions regarding differences, if any, in the connectivity patterns between RSNs in various neurodegenerative diseases. Randomized controlled trial of emergency department initiated smoking cessation counselling and referral to a community counselling service Ka Wai Cheung, Ian WH. Wong, Warren Fingrut, Amy Po Yu Tsai, Sally R. Ke, Shayan Shojaie, Jeffrey R. Brubacher, Lauren C. Stewart, Shannon Erdelyi Journal: Canadian Journal of Emergency Medicine / Volume 20 / Issue 4 / July 2018 Published online by Cambridge University Press: 11 July 2017, pp. 556-564 Worldwide, tobacco smoke is still the leading cause of preventable morbidity and mortality. Many smokers develop chronic smoking-related conditions that require emergency department (ED) visits. However, best practices for ED smoking cessation counselling are still unclear. A randomized controlled trial was conducted to determine whether an "ask, advise, and refer" approach increases 12-month, 30-day quit rates in the stable adult ED smoking population compared to usual care. Patients in the intervention group were referred to a community counselling service that offers a quitline, a text-based program, and a Web-based program. Longitudinal intention-to-treat analyses were performed. From November 2011 to March 2013, 1,295 patients were enrolled from one academic tertiary care ED. Six hundred thirty-five were allocated to usual care, and 660 were allocated to intervention. Follow-up data were available for 70% of all patients at 12 months. There was no statistically significant difference in 12-month, 30-day quit rates between the two groups. However, there was a trend towards higher 7-day quit attempts, 7-day quit rates, and 30-day quit rates at 3, 6, and 12 months in the intervention group. In this study, there was a trend towards increased smoking cessation following referral to a community counselling service. There was no statistically significant difference. However, if ED smoking cessation efforts were to provide even a small positive effect, such an intervention may have a significant public health impact given the extensive reach of emergency physicians. Impact of Hurricane Exposure on Reproductive Health Outcomes, Florida, 2004 Shannon C. Grabich, Whitney R. Robinson, Charles E. Konrad, Jennifer A. Horney Journal: Disaster Medicine and Public Health Preparedness / Volume 11 / Issue 4 / August 2017 Print publication: August 2017 Prenatal hurricane exposure may be an increasingly important contributor to poor reproductive health outcomes. In the current literature, mixed associations have been suggested between hurricane exposure and reproductive health outcomes. This may be due, in part, to residual confounding. 
We assessed the association between hurricane exposure and reproductive health outcomes by using a difference-in-difference analysis technique to control for confounding in a cohort of Florida pregnancies. We implemented a difference-in-difference analysis to evaluate hurricane weather and reproductive health outcomes including low birth weight, fetal death, and birth rate. The study population for analysis included all Florida pregnancies conceived before or during the 2003 and 2004 hurricane season. Reproductive health data were extracted from vital statistics records from the Florida Department of Health. In 2004, 4 hurricanes (Charley, Frances, Ivan, and Jeanne) made landfall in rapid succession; whereas in 2003, no hurricanes made landfall in Florida. Overall models using the difference-in-difference analysis showed no association between exposure to hurricane weather and reproductive health. The inconsistency of the literature on hurricane exposure and reproductive health may be in part due to biases inherent in pre-post or regression-based county-level comparisons. We found no associations between hurricane exposure and reproductive health. (Disaster Med Public Health Preparedness. 2017;11:407–411)
CommonCrawl
AutoCAD 2018 22.0 Crack Free Download For Windows Latest Download ✑ https://tiurll.com/2pwmhm AutoCAD Crack + Below is an introduction to the various parts of AutoCAD Cracked Version, including features for the novice user, common uses, and more advanced topics. Parts of AutoCAD Torrent Download Cracked AutoCAD With Keygen is an integrated suite of CAD and drafting applications, consisting of several integrated programs that work together to allow you to create plans, mechanical drawings, and images such as drawings of furniture, buildings, and more. AutoCAD Cracked 2022 Latest Version AutoCAD Cracked 2022 Latest Version is the base application and is the first program you will run when installing AutoCAD Cracked 2022 Latest Version. It is the core of the suite and comes with a lot of features that enable you to create complex drawings quickly and accurately. Many of the functions of Cracked AutoCAD With Keygen overlap with those of the other programs. For example, a drawing created in AutoCAD Activation Code can be displayed in the 3D Modeling tool and rendered in AutoCAD Cracked 2022 Latest Version Architecture, while architectural drawings created in AutoCAD Cracked 2022 Latest Version Architecture can be saved and shared as PDF drawings or other images. The other programs in the suite of AutoCAD Full Crack, such as AutoCAD Full Crack LT, Dimension, and others, provide additional functionality. AutoCAD Free Download Architecture AutoCAD Torrent Download Architecture allows you to build 3D models of buildings and other three-dimensional objects from 2D drawings. The objects can be viewed in different ways, such as looking down through the building, or looking at it from the side. AutoCAD Activation Code Architecture is a 3D modeling application that uses dynamic geometry to create accurate architectural models quickly and easily. It enables you to take multiple 2D plans and "translate" them into an accurate 3D model of the building. You can then add dimensions, materials, textures, lights, and more to the model. You can also combine 2D plans with AutoCAD Free Download drawings to create a larger 3D model. AutoCAD Torrent Download Architecture is available as a stand-alone application, or as an add-on for the larger AutoCAD Crack Free Download suite of applications. AutoCAD Crack For Windows LT AutoCAD Serial Key LT (Linear Technology) is designed to provide a user interface (UI) for the smaller enterprises. AutoCAD Crack LT includes several of the AutoCAD Product Key features, but provides only a subset of them. For example, the Features toolbar has been replaced with a 3D Navigation toolbar, but the drawing area is limited to 10,000 square feet. AutoCAD Crack Keygen LT is available as a stand-alone application or as a standalone program or add- AutoCAD Crack + (LifeTime) Activation Code [March-2022] "Design in CAD software is essentially automated if designers are willing to invest time in preparing the required input data." 
Gartenfeld Comparison of CAD editors for CAE Comparison of CAD editors for architecture and drafting AutoCAD Download With Full Crack homepage (not active since March 1, 2016) AutoCAD Full Version Open the Autocad and then select "Tools" > "File" > "Startup Repair". Bringing sketches into a design project is a common practice. But instead of picking up a pen, pencil, or finger, CAD users can leverage the built-in ability to import digital sketches into drawings. This new AutoCAD feature, available in AutoCAD 2023, will add a new interactive marker on the Status Bar. When a user creates a new shape, AutoCAD will automatically detect that the new shape is a sketch. Once the sketch is selected, the Status Bar Marker will display the name of the shape selected (sketch) and the AutoCAD sketch options will appear. After selecting AutoCAD sketch options, the user can select the AutoCAD sketch export and import options from the Sketch and Dimension menu. This feature will enable a user to import the sketch into the drawing to quickly capture the design intent. This is an experimental feature. As with all new features in AutoCAD, the feature may change during the development process and not be available in the final release of the software. You can now import models from ParaStation directly into drawings in AutoCAD. So you can use AutoCAD to make a drawing and then send it to ParaStation to get the correct dimensions. The primary benefit of this feature is to deliver a consistent, repeatable, dimensional quality for shop drawings with ease and speed. Quick Edit and Search: Find or adjust the edges, faces, or dimensions of a selection quickly and easily. The Quick Edit and Search feature replaces the old Edit Edge, Find & Select, and Find & Draw tools. This new Edit Feature is designed to help you make and adjust drawings faster by making it easier to search for or select features and to create or edit objects. It is available in the Standard toolbar as well as the ribbon interface. Quick Edit and Search allows you to perform several tasks faster by selecting and acting on objects, and by using the new AutoCAD Highlight Feature. In addition, you can now search for specific features by entering any word or text in the Search Field. The Search field will display objects that match the text you enter.
To open the Quick Edit and Search dialog box, choose Data tab | Select Objects, from the ribbon or choose Edit Objects (Shift+T) to open the dialog box. After the user selects a shape to edit, an arrow appears in the search field indicating the selection direction. When you type in the Search Field System Requirements For AutoCAD: General System Requirements: – Operating Systems: Windows 7 or newer – Processor: Intel Core 2 Duo – Memory: 2 GB RAM – Graphics: DirectX 11 graphics device – For best performance, play with a minimum of 16GB of RAM – The game does not support multi-core CPUs – The game runs on Steamworks and requires a Steam Account to play Processing and Decoding Requirements: – Processor: Intel Core 2 Duo or AMD Athlon 64 X2
CommonCrawl
Free printable math worksheets CogAT Test Math Workbooks Interesting math Convex sets Convex sets play an important role in geometry. In order to understand what convex sets are, we need to define and observe convex combinations. Convex combinations Assume we have points $P_{0}, P_{1}, \cdots, P_{n}$. Furthermore, if we choose $\alpha_{0}, \alpha_{1}, \cdots, \alpha_{n}$ such that $0 \leq \alpha_{i} \leq 1, i =0, 1, \cdots, n$ and $\alpha_{0} + \alpha_{1} + \alpha_{2} + \cdots + \alpha_{n} = 1$, we form a new point $$P = \alpha_{0} P_{0} + \alpha_{1}P_{1} + \cdots + \alpha_{n}P_{n}.$$ Each obtained point $P$ is called a convex combination of points $P_{0}, P_{1}, \cdots, P_{n}$. For instance, imagine you have two points $P_{0}$, $P_{1}$ and line $P_{0}P_{1}$. Any point $P$ which lies on line $P_{0}P_{1}$ can be written as: $$P = (1 – \alpha) P_{0} + \alpha P_{1},$$ which is an affine combination of points $P_{0}$ and $P_{1}$. The point $P$ is a convex combination of points $P_{0}$ and $P_{1}$ because $0 \leq \alpha \leq 1$ and $0 \leq (1 – \alpha) \leq 1$. Moreover, any point on the line segment $\overline{P_{0}P_{1}}$ can be written in this way. What is a convex set? Intuitively, if we can draw a straight line between any two arbitrary points of a set which is not entirely in a set, then the set is non – convex. We say that the set is convex if any convex combination of two arbitrary points of that set is also an element of that set. More precisely, let $V$ denote a real vector space. Definition: Let $a, b \in V$. Then the set of all convex combinations of $a, b$ is the set of points (*) : $$ \{c \in V: c = (1 – \alpha)a + \alpha b, 0 \leq \alpha \leq 1 \}. $$ Definition: Let $S \subset V$. We say that $S$ is a convex set if for two points $a, b \in S$ the set (*) is a subset of $S$. In other words, a convex set is a set which contains all possible convex combinations of its points. Examples of convex sets Example 1: The empty set $\emptyset$, the singleton set $\{x\}$ and space $\mathbb{R^n}$ are convex sets. Example 2: Is an interval $[a, b] \subset \mathbb{R}$ a convex set? Let $c, d \in [a, b]$ and assume, without loss of generality, that $c < d$. Furthermore, let $\alpha \in [0, 1]$. Then $$a \leq c = (1 – \alpha)c + \alpha c < (1 – \alpha)c + \alpha d$$ $$< (1 – \alpha)d + \alpha d = d$$ $$\leq b.$$ In conclusion, an interval $[a, b] \subset \mathbb{R}$ a convex set. Example 3: Any line or a ray is a convex set, as it contains the line segment between any two of its points. Example 4: Some polygons are convex, and some are concave. Any triangle is a convex set. Also, a regular pentagon is a convex set. Convex sets in $\mathbb{R^2}$ include interiors of triangles, squares, circles, ellipses etc. Example 5: The ball $K(x, r)$ is a convex set. More precisely, if we have two vectors $y, z$ within this ball, we can use triangle inequality to show that the line segment $[y, z]$ is also contained within the ball. Example 6: All regular polyhedra (i.e.) Platonic solids are convex. Properties of convex sets Suppose you have two convex sets $A$ and $B$. Is their intersection a convex set also? Furthermore, is their union a convex set? Intersection of two convex sets is a convex set. More precisely, consider two points in the intersection $A \cap B$. Obviously, those points are elements of individual sets $A$ and $B$. Therefore, the line segment which connects them is contained in both $A$ and $B$ and hence in the set $A \cap B$. 
Similarly, intersection of finite number of sets (even infinite) is also a convex set. But, the same property isn't true for unions. In other words, the union of two convex sets is not necessarily a convex set (picture below). Example 7: Let $C_{1}, C_{2}$ be convex sets in $\mathbb{R^n}$. Prove that $$C_{1} + C_{2} := \{z \in \mathbb{R^n} : z = x_{1} + x_{2}, x_{1} \in C_{1}, x_{2} \in C_{2}\}$$ is convex. Let $z_{1}, z_{2} \in C_{1} + C_{2}$ and take $0 \leq \alpha \leq 1$. Furthermore, let $z_{1} = x_{1} + x_{2}$, where $x_{1} \in C_{1}, x_{2} \in C_{2}$ and $z_{2} = y_{1} + y_{2}$. Then $$(1 – \alpha) z_{1} + \alpha z_{2} = (1 – \alpha)[x_{1} + x_{2}] + \alpha [y_{1} + y_{2}]$$ $$= [(1 – \alpha) x_{1} + \alpha y_{1}] + [(1 – \alpha) x_{2} + \alpha y_{2}] \in C_{1} + C_{2},$$ because sets $C_{1}$ and $C_{2}$ are convex. The sum from Example 7 is called the Minkowski sum of convex sets. When $C_{2} = \{c\}$ is a singleton, we write the set $C_{1} + \{c\}$ as $C_{1} + c$ and call it the translation of $C_{1}$. Example 8: Let $A \subseteq \mathbb{R^n}$ be a set and $\alpha \in \mathbb{R}$. We define the scaling $\alpha A \subseteq \mathbb{R^n}$ as $$\alpha A = \{\alpha x : x \in A\}.$$ When $\alpha > 0$, the set $\alpha A$ is called a dilation of $A$. Try to prove that if $A$ is convex, then for any $\alpha \in \mathbb{R}$ the set $\alpha A$ is convex. Definition: Let $S \subseteq \mathbb{R^n}$. A set of all convex combinations of points from $S$, denoted by $conv S$ is called the convex hull of $S$. Obviously, a convex hull is a convex set. Moreover, the convex hull of a set $S$ is the smallest convex set which includes $S$. Some other properties of a convex hull: $S \subset conv S$ $\forall S', S'$ convex, $S \subset S' \rightarrow$ conv$S \subset S'$ Furthermore, any closed convex set $S$ can be written as the convex hull of a possibly infinite set of points $P$: $S =$hull$(T)$. Indeed, if $S$ is a closed convex set, it is the convex hull of itself. 
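The definitions above are easy to probe numerically. The short sketch below, which assumes NumPy and SciPy are available, checks that convex combinations $(1-\alpha)a + \alpha b$ of two points in the closed unit disk stay inside the disk, and that random points lie inside the convex hull built from them (the hull is the smallest convex set containing them). It is an illustration of the definitions, not part of the original article.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(1)

# Convex combinations of two points in the unit disk remain in the disk,
# since the disk is a convex set.
a, b = np.array([0.3, -0.5]), np.array([-0.6, 0.4])
alphas = np.linspace(0.0, 1.0, 101)
combos = (1 - alphas)[:, None] * a + alphas[:, None] * b
print(np.all(np.linalg.norm(combos, axis=1) <= 1.0))   # True

# conv S contains S: points pulled slightly toward the centroid (so they are
# strictly interior) must lie inside the hull of the original point set.
pts = rng.standard_normal((50, 2))
hull = ConvexHull(pts)
centroid = pts.mean(axis=0)
shrunk = centroid + 0.99 * (pts - centroid)
inside = Delaunay(pts[hull.vertices]).find_simplex(shrunk) >= 0
print(inside.all())                                     # True
```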
CommonCrawl
Speed Strength Percentages From 1980 to 2010, the percentage of steel used in vehicles relative to other materials has grown (by weight ) from approximately 53-55 percent in the early 1980s to approximately 60 percent today for North American light vehicles [1]; this reflects the ability of AHSS to meet performance demands. Modulus of Elasticity - is a measure of stiffness of an elastic material. Converting Signal Strength Percentage to dBm Values Joe Bardwell, VP of Professional Services Executive Summary WildPackets' 802. 11 wireless LAN packet analyzers, AiroPeek and AiroPeek NX, provide a measurement of RF signal strength represented by a percentage value. How to Show Android's Battery Percentage in the Menu Bar Whitson Gordon @WhitsonGordon Updated July 5, 2017, 5:46pm EDT In this day and age of weak battery life, it's incredibly important to keep an eye on your usage. Table 2: YS and UTS of SS304 and Al6061 measured using indentation and tensile tests. 5 % = 165/10 % = 165/10 × 1/100 = 165/1000 = 33/200 = 33 : 200 (ii) 0. This will build muscle mass, but the training percentage is too low to build strength. We hope you have found our workout percentage charts helpful. Because work equals force times distance, you can write the equation for power the following way, assuming that the force acts along the direction of travel: where s is the distance traveled. Why Add Speed Workouts to Marathon Training? Speedwork is actually one of the most crucial parts of marathon training (in addition to endurance and strength training, of course). Design shells to be a multiple of nozzle diameter. \(Percentage\space\ elongation=\frac{\delta l\times\ 100}{l}\) Percentage Elongation Example. Furthermore, these parts must be joined in a robust, high-speed, cost-effective manner. IMPLEMENT TODAY, FOR FREE!. Strength training specifically to improve volleyball skills leads to a faster strength adaptation and improved sports performance. Victorem Speed and Agility Leg Resistance Bands - Ultimate Speed Bands Set - Physical Fitness Workout Strength Training - Increase Muscle Endurance - Football, Basketball, Soccer, Track and Field BUILD STRENGTH and SPEED - Run faster, jump higher and dig harder with tethered resistance training that builds speed, strength, agility and flexibility. The various steels have different combinations of these characteristics based on their intended applications. Its uses three-second gusts estimated at the point of damage based on a judgment of 8 levels of damage to the 28 indicators listed below. 5 Reasons to Use Speed Deadlifts in Your Strength Training Programs Written on November 18, 2012 at 7:42 am, by Eric Cressey When my first book was published back in 2008, a lot of people were surprised that I included speed deadlifts, either because they felt too easy, or because they didn't think that deadlifting that wasn't "heavy. Top Five Themes of Weakness. WiFi signal strength is tricky. They find that in the first half of the 1990s, true technology grew at an annual rate of 1. Ninety seven percent of the variance in strength between equally trained men and women is accounted for by the difference in muscle size (Bishop, 1987). Ideally, all four types of exercise would be included in a healthy workout routine and AHA provides easy to follow guidelines for endurance and strength-training in its Recommendations for Physical Activity in Adults. A TORNADO WATCH means tornadoes are expected to develop. 
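The power relation alluded to above ("work equals force times distance ... where s is the distance traveled") appears to have lost its equation in extraction. Under that standard assumption (the force acting along the direction of travel), the formula being described is

$$P = \frac{W}{t} = \frac{F\,s}{t},$$

where $W$ is the work done, $F$ the applied force, $s$ the distance traveled, and $t$ the elapsed time.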
Okay, so it's a little bit of new training jargon for some people to wrap their heads around. Although the bar speed is fast, the weight is too light so little force is being developed. Think of it like cooking without using measuring cups and spoons. Everything we offer helps students bridge the gap between the classroom and clinical practice, while supporting health care. It turns out that roughly 68% of the universe is dark energy. For strength speed or slow strength, used with maximal weights, 65% of the total weight should come from band tension and 35% should be barbell weight. At 100 percent maximum effort, however, the percentage of slow-twitch fibers involved is only 5%, while fast-twitch fatigue resistant is 15 percent, and fast-twitch fatigable is 80 percent. Increases in shell thickness and infill percentage increase strength but also time to print and print cost. To convert a percentage to a ratio, write out the percentage number as a fraction, reduce the fraction to its simplest form and convert the new fraction to a ratio by replacing the slash mark with a colon. The most accurate way to express it is with milliwatts (mW), but you end up with tons of decimal places due to WiFi's super-low transmit power, making it difficult to read. , in the job specification. That's what the 25 percent standard is now doing. The only way to calm your inner chaos is with 100 percent. The implications for athletic-type strength training are clear. Use your best raw squat or a projected max using this calculator. To calculate the percentages for other distances (350, 400, 450, 600, etc. Therefore, slow-speed training will result in greater gains at slow movement speeds, while fast-speed training will realize the improvements in strength at faster movement speeds. The signal strength as reported by the Wi-Fi card seemed to be sufficient, yet there was a lot of jumping up and down, and there was an apparent correlation between the Wi-Fi signal strength drops and the speed of my downloading. Users wanting high-speed Internet service wherever they go find mobile broadband to be a perfect solution. The flexural strength for various composites is shown in figure 6. 6 Because these young athletes skated several days a week, their strength training was limited to 10 exercises, one or two days per week. It depicts the optimum number and range of reps given a certain percentage to increase strength. • Timing appears to play an important factor in muscle hypertrophy. For example, -40 dBm is 0. Return to Strength Tech. Share with your friends and compete with them. As the trainee progresses through the program, Days A and B are slightly modified to take into account the adaptations in the body of the lifter. Thus, 10 RM corresponds to approximately 75 percent of F M. Compared to lifting weights at a slow and controlled speed, which maximizes strength, explosive training maximizes movement economy, motor unit recruitment, and even lactate threshold (12). Free psychological tests. 1 Lean (muscle) weight (lb) +3. Both options are wrong. The higher the level of Speed, the faster your movement. 1 mph That's a difference of barely over one mile per hour. Wind Speed: The wind speed of an area is measured by using a wind meter (as shown in the picture to the left). com I think anytime you are working on a specific exercise and trying to increase your bench strength or max you need to look at what exactly you think is keeping you from increasing your lifts. 
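The percent-to-ratio rule stated above (write the percentage as a fraction, reduce it, then swap the slash for a colon) is mechanical enough to express in a few lines. This is only an illustrative sketch using Python's standard fractions module; the two test values match the worked examples elsewhere on the page.

```python
from fractions import Fraction

def percent_to_ratio(p):
    """Convert a percentage (e.g. 16.5) to a reduced ratio string (e.g. '33:200')."""
    frac = Fraction(str(p)) / 100      # Fraction reduces to lowest terms automatically
    return f"{frac.numerator}:{frac.denominator}"

print(percent_to_ratio(16.5))   # 33:200
print(percent_to_ratio(0.4))    # 1:250
```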
Else, if you wish to do it by the book follow these steps: – Plug-in your device and wait until the percentage reaches 100% – Open Geekbench 3 and tap on Run Battery Benchmark. Are you getting all that you can from your wireless router? How do you know? We'll show you how to measure your wireless router's performance to make sure that it's running at its best. Do You Need To Do Speed Work For A Big Bench Press? by Jared Bachmeier for CriticalBench. Regardless of your experience level. For this reason, let's continue what actually works. Upon completion of the 3rd week, you simply start the wave over again, recalculating the percentages according to your new 1RM. Learn more at CertifiedFSC. At 100 percent maximum effort, however, the percentage of slow-twitch fibers involved is only 5%, while fast-twitch fatigue resistant is 15 percent, and fast-twitch fatigable is 80 percent. Five percent pleasure, fifty percent pain. x (Windows®) If the device is Mobile Broadband capable and is located outside of the Mobile Broadband Coverage Area, then the device reverts to a NationalAccess connection, if available. Our Instructors specialize in safe and effective kettlebell, barbell, and bodyweight training. How does RSSI (dBm) relate to signal quality (percent) ? Tags: RSSI , SNR , dBm , Wi-Fi Depending on your OS and application, WiFi signal strength is represented either as quality in percentage, or an RSSI value in dBm, i. Thus, 10 RM corresponds to approximately 75 percent of F M. bicycle e-tools - % grade, average speed calculator, gear-inch chart, touring calculations bicycle e-tools: percent grade, average speed calculator, gear-inch chart create a route elevation hill grade tools home. Since 2007 in the U. How much of a drop in speed is normal wired vs. 5 mm], the preferred minimum core diameter. Read here to find out how to optimize results using percentage based training Training with Percentages: Programming for Optimal Results. WiFi signal strength is tricky. Strength, power and endurance are all forms of muscular ability. Wind statistics and the Weibull distribution. Better performances can be the product of a number of factors. This page details the requirements for reaching 112% completion in the game. Traditional approach, as I love to call percent-based approach involves prescribing strength training using percentages and known (or estimated) 1RM of the lifter. At 85%, the optimal number of reps is 12, with the rep range being 2-4 reps. Speed bonus is affected by Ability Strength. speeds or of cutting hard and scaly mate-. Almost all of the testing was performed using a leg extension machine, which was set to 110 degrees ROM, with the speed of movement set to 25 degrees per second, for both the concentric and eccentric phases of the movement. The signal strength as reported by the Wi-Fi card seemed to be sufficient, yet there was a lot of jumping up and down, and there was an apparent correlation between the Wi-Fi signal strength drops and the speed of my downloading. ] Explosive Strength Explosive strength is trained at high velocity. How to Develop Mental Endurance and Strength. The Australian Strength and Conditioning Association (ASCA) is an incorporated non-profit organisation and is the peak national body for Strength and Conditioning (S&C) Professionals in Australia. Correlations of maximal speed with strength (except ankle dorsiflexion) were all significant at p <. Slight but significant decrease in cadence (3 revolutions per minute, 3. 
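On the RSSI question raised above: dBm is a logarithmic power unit ($P_{mW} = 10^{\mathrm{dBm}/10}$), while the "quality percent" shown by most operating systems is a vendor-specific remapping of RSSI. The sketch below converts dBm to milliwatts and to one common linear approximation of quality, clamped between an assumed -90 dBm floor and -30 dBm ceiling; the exact mapping varies by driver, so treat the percentage formula as an assumption rather than a standard.

```python
def dbm_to_mw(dbm):
    """Convert a dBm reading to milliwatts."""
    return 10 ** (dbm / 10.0)

def dbm_to_quality(dbm, floor=-90.0, ceiling=-30.0):
    """Map RSSI in dBm to a 0-100% 'quality' figure.

    The linear mapping and the -90/-30 dBm endpoints are assumptions for
    illustration; real drivers use their own curves.
    """
    clamped = max(floor, min(ceiling, dbm))
    return round(100 * (clamped - floor) / (ceiling - floor))

print(dbm_to_mw(-40))        # 0.0001 mW
print(dbm_to_quality(-40))   # 83
```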
Especially for concrete, compressive strength is an important parameter to determine the performance of the material during service conditions. Two simple conclusions: It's easier to hold a lower percentage of one's maximal ability, and more efficient running is better than inefficient running. It depends not only upon the rate of application of the load. Your PT may test your muscle strength during your physical therapy evaluation and assessment and at regular intervals during your rehab to determine your progress in therapy. Join the ETD Membership and get Exclusive Access! Become a member today and gain exclusive access to Member Free Templates. Tests of the 1. Electric field strength is a quantitative expression of the intensity of an electric field at a particular location. Speed is calculated additively (not multiplicatively), and most movement speed buffs will stack. speeds or of cutting hard and scaly mate-. Speed work for raw powerlifting gets a bad rep sometimes, and I can understand why. Below are the first three days of the first week of Smolov. You can develop this trait by training at lower percentages of 1RM and moving at 1 to 1. To use this meter, you must turn on the device and wait for the screen to display a clear reading of zero. Extensive Tempo "B" targets the development of Aerobic Capacity through use of reps that range from 200 meters to 600 meters at intensities of 60-69%. The whole process goes like this: athlete knows his 1RM in particular exercise or he tests it either using 1RM test or reps-to-failure test and estimate 1RM using reconversion. Strength Standards. Your strength-to-weight ratio is simply your strength divided by your body weight. 5 percent, because of the temporary damping effect of higher investment on productivity. They should record their times so that they can chart their progress. To convert a percentage to a ratio, write out the percentage number as a fraction, reduce the fraction to its simplest form and convert the new fraction to a ratio by replacing the slash mark with a colon. Gains between the 50% and 75% infills were more modest, but still significant. Research led by Associate Professor Emmanuel Stamatakis found people who participated in strength-based exercise had a 23 percent reduction in risk of premature death by any means. Influence of closed skill and open skill warm-ups on the performance of speed, change of direction speed, vertical jump, and reactive agility in team sports athletes. Example exercises include: Second pull variations of the Clean and Snatch, Jump Squats, and Bench Press Throw @ 30-80% of 1RM. 60% max), and copper (0. Everything we offer helps students bridge the gap between the classroom and clinical practice, while supporting health care. Speed-strength has two components, starting strength and explosive strength. The percent reduction can range from 15% to 25%. Endurance vs. Its useful in finding good areas of WiFi connectivity in your WiFi network. Looking for sports training to improve your speed/agility? BreakAway Speed offers professional training for sports like football, softball and soccer. If you want to run faster, then your strength training needs to be specific to your goals. At 100 percent maximum effort, however, the percentage of slow-twitch fibers involved is only 5%, while fast-twitch fatigue resistant is 15 percent, and fast-twitch fatigable is 80 percent. It is the ability for any creature to be stronger than normally possible given their proportions. 
For example, a $5,000 salary cut wouldn't be a big deal for a Fortune 500 executive, but it'd be a big deal for someone making $25,000 a year because it represents 20 percent of their entire salary. 100% muscle usage is different from peak human strength in that it has little relation to actual muscle mass. Know Your Body Fat Percentage. The rationale is that strength and aerobic endurance require training methods that use a different energy system than high-speed exercises. Home » Deficit Deadlifts - Training Percentages and Carryover to Deadlift. 9*I) S2 Sample # Load (Lbs) Tensile Strength (TS) Avg. The Fujita scale was adopted in most areas outside of Great Britain. Other Factors to Consider - While the bench press is primarily considered a chest exercise, strengthening some of the secondary muscles involved is crucial to. Competitive athletes' goals for the strength-to-weight ratio differ from those for the average gym-goer. The two aspects to speed strength are starting strength and explosive strength. Many of the activities were developed by students and for students and really engage the adolescent brain. There is a total margin of victory that the individual state margins sum to, but some margins are positive and some are negative. , peak torque and the rate of torque development and walking speed in adults with stroke. In concrete practice, it is accepted that after 28 days concrete usually gains most of its strength. It is often used in science to report the difference between experimental values and expected values. MAS was developed for the purpose of increasing the specificity of training and to enable coaches to monitor training loads more accurately. Below are the first three days of the first week of Smolov. And while it does help provide an approximation of how strong your signal is, swapping it out to display an actual numeric value is a lot more precise and can change up the look of your iPhone (or at least the status bar). Composite materials are one such. Strength training uses resistance, like free weights, weight machines, resistance bands, or a person's own weight, to build muscles and strength. 4 % = 4/10 % = 4/10 × 1/100 = 4/1000 = 1/250 = 1 : 250. This is the list of item attributes as described to the user, so some attributes will be duplicated if they can both appear as positive or negative (such as a damage bonus/penalty) or if they have several different descriptions ("set_weapon_mode" is used to represent alternate. See the upper screenshot to the right: the red line is my downloading, and the violet line is the Wi-Fi signal strength. Finding a strength and conditioning coach that currently works in the field and asking for an opportunity to work as an assistant, an intern, or as a volunteer, is a great way to gain first-hand experience in strength and conditioning coaching while you are still a student. You can also try increasing your incline to two to five percent and running at that grade for one to three minutes before lowering back to flat ground for the same amount of time. Regardless, the high school 800-meter race is still 70 to 80 percent aerobic strength and 20 to 30 percent speed. Our strength standards are based on over 21,103,000 lifts entered by Strength Level users. Strength is the property of being physically strong (you can do, say, 100 push-ups) or mentally strong (you can calculate percentages in your head while people are shouting at you). Strength can be broken down to concentric, eccentric and isometric strength. 
Speed is simply how fast a character can move in a given amount of time. Bending Springback Calculator After a bending operation, residual stresses will cause the sheet metal to spring back slightly. Influence of recycled coarse aggregate replacement percentages on compressive strength of concrete Article in Jianzhu Cailiao Xuebao/Journal of Building Materials 9(3):297-301 · June 2006 with 32. The gel contains twice as much of the pain killer as the original Ibuleve formula, and can be used to. High school coaches and athletes know us the go-to resource for building championship quality programs. You can manage this and all other alerts in My Account. Composite materials are one such. This method has been popularized by powerlifting coach Louie Simmons, who recommends using a load between 50-60% percent of your 1RM in lifts such as the squat, deadlift, and bench press while lifting. A greater percentage of Type II muscle fibres will enhance quicker movements and therefore increase the overall running speed. Get unstuck. Strength, power and endurance are all forms of muscular ability. The increase in speed, strength, agility and muscular endurance will benefit athletes of every sport. function of time and/or speed of movement. Building run-specific strength and speed is the key to improving efficiency and gaining fitness, and this session includes just the right combination of both to help you do that. The Expression of Strength, Part 2 – Speed Strength | Breaking Muscle. 4 Treadmill Workouts to Increase Speed, Build Strength, Burn Fat, and Crush Hills Turn a tired routine into an exciting part of your training. Define tensile strength. – Leave the Dim Screen selection ON and unplug. We Thought Female Athletes Were Catching Up to Men, but They're Not. Building muscle isn't just for individuals into fitness as a hobby. Hastelloy X is a nickel base alloy that possesses exceptional strength and oxidation resistance up to 2200°F. If an athlete failed to keep their speed of movement above our arbitrary minimum velocity for any of their reps, that load would be repeated in the next session. We touched on starting strength previously, but now need to be more specific to understand what is happening. One Rep Max Calculator. Weisberger on what is normal heart function percentage: normal EF is 50-65%. High-speed steel (HSS or HS) is a subset of tool steels, commonly used as cutting tool material. At 26 feet, the office speed dropped to 305 mbps, 33 percent less than the suburban test result from the same distance. Benefits of incline treadmill walking. Having a purpose in life may help older adults maintain physical strength, function and mobility as they age, according to a new study. Short athletes can compensate lack of speed with a lower height necessary for the lift. The changes to the load combinations highlight how the design pressures calculated from the new wind speed maps relate to the design pressures using the wind speed maps in the 2007 FBCB. overnight into Friday. Rabies is an infectious viral disease that is almost always fatal following the onset of clinical symptoms. Ermakov and N. UTS = = psi (2-12) maximum load area of original cross section P max A o If the complete engineering stress-strain curve is available, as shown in Figure 3, the ultimate tensile strength appears as the stress coordinate value of the highest point on the curve. The effect of voluntary effort to influence speed of contraction on strength, muscular power, and hypertrophy development. 
In other words, it's how fast you can get to top speed from a standing position. Volume - Volume can constitute the number of sets per workout, the number of reps for a specific exercise at a given weight, or the total reps multiplied by the weight used. Jordan is a strength training and nutritional consultant based out of Boston Massachusetts. One must develop speed strength, which is the ability to accelerate with light to medium loads, creating explosive force. The speed at which an attack moves. com are designed to help you find serious answers to your questions about IQ, personality, or career assessment. The base dissociation constant, K b, is a measure of basicity—the base's general strength. That's what the 25 percent standard is now doing. Decreased mean power output during TT, 24 W (7 percent) per decade. Nicassio developed an innovative system during his playing days to improve his speed, strength and explosion for game time. Intensity interpretation. To support free math by tecmath onPatreon (thankyou): https://. Adjust these numbers downward by 5 percent to determine your lactate threshold numbers. "It gets you. He says that when the outfit began its testing protocol, they didn't have any direct way to measure the strength of that cell. ASAP~Athletic Strength And Power has attended over 100 strength training clinics over the years from coast to coast. Due to the higher carbon contant compared to low and medium carbon steels, the high carbon steel has higher hardness b. Unfortunately, in iOS 11, you can only view your 4G LTE reception strength to the nearest cell tower if you have an iPhone with an Intel wireless modem, not a Qualcomm one. Percent Composition (by mass) We can consider percent by mass (or weight percent, as it is sometimes called) in two ways: The parts of solute per 100 parts of solution. Think of it like cooking without using measuring cups and spoons. Bank's Executives about how their hiring strategy has changed their business from the top down. So how do training percentages relate to each of these and other strength qualities?. However, the most important thing to bear in mind when performing dynamic effort squats is not to focus on the amount of weight you are lifting, but the speed and explosive power at which you perform each rep. Athletes such as sprinters and football players benefit from these exercises. - Acceleration, Absolute Speed, or Speed Endurance Work - Moderate to Intense Multiple Jumping and Throwing - Intense Technical Work Bodybuilding Theme • Done on days with - General Strength or Medicine Ball Work - Tempo Running - Low Intensity Technical Work Transition Theme • Done on Speed/Power days in Special Situations. Like every other program that works off percentages, you have to plugin your one rep max in order to find the weight you should be using. For over 40 Years BFS has been providing the very best in strength and athletic training to athletes from all walks of life. Land-Based Strength and Conditioning for Swimming by Chat Williams, MS, CSCS*D, NSCA-CPT*D June 01, 2017. Converting a percentage to a ratio takes only a few minutes and requires paper and a pencil. View website. Rugby Warfare's 3 Step Guide To Increase Sprint Speed Step 1: Improve squatting strength Step 2: 12 week heavy resistance training ( strength training ) followed by a mini rest and then a hybrid training programme which trains speed, strength and power equally. The world's leading strength and conditioning professionals turn to us. high-speed tool steels. 
Building run-specific strength and speed is the key to improving efficiency and gaining fitness, and this session includes just the right combination of both to help you do that. If you have an Feedback, Questions, or Comments about our Weight Lifting Percentage Charts or our Weight Room Percentage Chart (the larger chart), please email them to us. It involves short, intense periods of cycling, from five to 30 seconds or so in duration, with heart rate reaching 95 to 100 percent of maximum during some of the longer sprints. The plate normally crushes at a rate of about 1/8 inch per second. Building an aerobic engine is a part of fitness, and running mechanics and percentage of speed reserve can extend in games that last for more than an hour. Tensile test results include ultimate tensile strength, yield strength, Young's modulus, ductility, and the strain hardening exponent. 65% max), silicon (0. Universal testing machine (tensile testing machine) with these minimum specifications: A. There is a total margin of victory that the individual state margins sum to, but some margins are positive and some are negative. Optimal stride length has an impact on other areas such as speed, reaction and recovery time, and the rate of acceleration. The entire body is worked each session. Intensity interpretation. For soils with varying percentages Of rock particles, standard procedures for developing shear strength parameters are complicated and are not universally accepted in practice. It is important to develop a general base strength, and then. The fraction of a solute in a solution multiplied by 100. Read the Beaufort Wind Force Scale, which is arranged from the numbers 0 to 12 to indicate the strength of the wind from calm to hurricane. Think of it like cooking without using measuring cups and spoons. Speed Strength: Is characterized by the ability to move at high speeds with relatively low external resistance. Wireless signal strength is. At its base, strength-speed means strength in conditions of speed. Find out how smart you are, what you like to do, and what makes you happy with our free IQ tests, career tests, and personality tests. Online converter for units of speed. Spectra has several difficult issues. , 1991), and possibly the ratio of type IIa to type IIb fibres (type IIa are a more fatigue resistant form of fast twitch fibre). com are designed to help you find serious answers to your questions about IQ, personality, or career assessment. Cricket Strength Training and Exercises Cricket is a game that would appear to require little muscular strength. Are you getting all that you can from your wireless router? How do you know? We'll show you how to measure your wireless router's performance to make sure that it's running at its best. Well-suited for athletes without well-developed classic lifts. 11 wireless LAN packet analyzers, AiroPeek and AiroPeek NX, provide a measurement of RF signal strength represented by a percentage value. Short athletes can compensate lack of speed with a lower height necessary for the lift. Learn more at CertifiedFSC. Thus, 10 RM corresponds to approximately 75 percent of F M. [citation needed] On February 1st, 2007, the Fujita scale was decommissioned, and the Enhanced Fujita Scale was introduced in the United States. Nicole Nazzaro and Jordan Smith. The FDA is encouraging dental professionals to make a simple and economic switch to "faster" X-ray film to further reduce your radiation exposure. 
These percentages and velocities are nearly perfect with the Soviets', who did their research on squat. For soils with varying percentages Of rock particles, standard procedures for developing shear strength parameters are complicated and are not universally accepted in practice. Understanding Signal Strength. , peak torque and the rate of torque development and walking speed in adults with stroke. There are many reasons why swimmers want to improve speed, whether it's for a faster 100-meter or better open-water time at their next triathlon. Percent to ppm converter How to convert ppm to percent. Your ability to move weight, move it with speed and continue moving it for extended periods of time will help. For concrete with nominal maximum size of aggregate greater than or equal to 1. That's what the 25 percent standard is now doing. LTE modems can be a little boring, but the high quality of Qualcomm's modems is why it's one of the largest chip makers right now. All certified DBE and ACDBE firms listed in this directory have been approved under the eligibility standards and guidelines set forth in the Title 49 Code of Federal Regulations Parts 23 and 26. Design shells to be a multiple of nozzle diameter. Combines relatively high strength, good workability, and high resistance to corrosion; widely available. Strength training results in the hypertrophy of both type I and type II muscle fibers. The above formula, step by step calculation & solved example problem may be useful. Icon and Particle Effects. As the trainee progresses through the program, Days A and B are slightly modified to take into account the adaptations in the body of the lifter. All tests at 123test. 30GB on $60 plan. It depicts the optimum number and range of reps given a certain percentage to increase strength. Status effects are various conditions, which can be either helpful or harmful, that affect an entity. 45 percent of his body weight), whereas a hyena's heart is close to 1 percent of its body weight. Given data is; Diameter of rod = D = 30 mm. Independent t-tests (p ≤ 0. The purpose of this blog is to describe this conversion process in WiFi Explorer. To put it another way, reducing infill from 100% decreases resistance less and less. "It gets you. Superhuman Strength, also called super strength or enhanced strength, is an ability commonly utilized in fiction. Our strength standards are based on over 21,103,000 lifts entered by Strength Level users. The Expression of Strength, Part 2 – Speed Strength | Breaking Muscle. Do You Need To Do Speed Work For A Big Bench Press? by Jared Bachmeier for CriticalBench. The only other elements allowed in plain-carbon steel are: manganese (1. Practical High School Strength and Conditioning Dan Giuliani, MSAL, CSCS – Volt's strength coaches are all CSCS-certified and rely on Speed 50%-65% 4-5. Power = Strength/Time. The first term, Strength-Speed refers to moving relatively heavy loads as fast as you can , The typical speed targets to develop this trait with Beast sensor should be from 0,8 to 1,0 m/s. 1% = 1/100. The ability to quickly read and comprehend books, articles and other written materials would be life-changing for a lot of us. Proponents of the percentage system consider "low" intensity to mean percentages around 60% of the 1RM and "high" intensity to mean around 90% of the 1RM. Train your math skills and test them with our math tests. Training peripheral vision to register more effectively can increase reading speed over 300 percent. 
Tensile Property Testing of Plastics Ultimate Tensile Strength. Pipe Burst Working Pressure Calculator Barlow's Formula. He also relies on a squat test to measure pure strength in the glutes, quads, and core—the most. Prior to Warlords, increased movement speed as a property existed in the form of enchants like [Pandaren's Step], but now appears directly on gear. requirements of a structure. Sport performance is highly dependent on the health- and skill-related components of fitness (power, speed, agility, reaction time, balance, and Body Composition coordination) in addition to the athlete's technique and level of competency in sport-specific motor skills. In the third, the input RPM is the speed of the driver pulley (usually known from the motor speed) while the output RPM is the speed of the driven pulley. High Speed Steel (HSS) refers to any of a variety of steel alloys that engineers primarily use to construct machine tool bits and blades and drill bits for industrial power tools. Your strength-to-weight ratio is simply your strength divided by your body weight. Calculating percentages can be an easy task. 8 Linux Commands: To Find Out Wireless Network Speed, Signal Strength And Other Information last updated October 21, 2019 in Categories Linux , Linux desktop , Linux laptop L inux operating systems come with a various set of tools allowing you to manipulate the Wireless Extensions and monitor wireless networks. Find out how smart you are, what you like to do, and what makes you happy with our free IQ tests, career tests, and personality tests. 2 Percent fat-2. Its useful in finding good areas of WiFi connectivity in your WiFi network. A short Power vs Strength difference Wrap-up. increase in number of muscle fibres and increase in size/ volume of muscle. 863 Likes, 43 Comments - Eddie Alvarez (@ealvarezfight) on Instagram: "Thank You Rich Pohler for all the massive gains we've made in the last three months in strength…". When the aim is to increase maximum strength by stimulating muscle hypertrophy, at least 80% of 1RM should be lifted 5 to 8 times or until failure (Zatsiorsky, 1995). These welds even though more difficult than flat must be of the same strength and quality. The speed bench is a variation of the bench press designed to increase the explosiveness in your upper body pushing lifts. Statistics Calculator will compare two percentages to determine whether there is a statistically significant difference between them. National Strength and Conditioning Association Journal, Vol. There is, however, a way for you to see your signal strength in dBm still, you just won't get the convenience of it sticking around in your status bar. Speed is calculated additively (not multiplicatively), and most movement speed buffs will stack. We have observed that an increase in the tension of a string causes an increase in the velocity that waves travel on the string. If you press the Home button to exit Field Test mode normally, the dots will return. Jordan is a strength training and nutritional consultant based out of Boston Massachusetts. I honestly can't think of a more worthwhile cause to give a shout out to. The most fundamental components of a strength training program are the amount of weight lifted (overload) and the speed of the repetition (tempo). Dealing With Excess Saliva using the Plus White 5 minute Speed Whitening System. How to calculate stretch percentage (with FREE print at home stretch percentage guide!) 08. 
Traditional approach, as I love to call percent-based approach involves prescribing strength training using percentages and known (or estimated) 1RM of the lifter. 1 mph That's a difference of barely over one mile per hour. 5 percent, because of the temporary damping effect of higher investment on productivity. He first appeared as a bodyguard hired by Zeniru. Rep Fitness carries equipment designed to take your fitness to the next level. ASAP~Athletic Strength And Power has attended over 100 strength training clinics over the years from coast to coast. The various steels have different combinations of these characteristics based on their intended applications. The muscle strength grading scale is often used by your physical therapist to determine how a muscle or group of muscles is working. A short Power vs Strength difference Wrap-up. SCCC Certification The Value of Accreditation. The rate at which a sample is pulled apart in the test can range from 0. We estimate that the country has accounted for 86 percent of global growth in this market since then. A common misconception about the speed of a Wi-Fi connection is that the peak speed it reaches is the speed of the connection.
Trends in Demographic and Health Survey data quality: an analysis of age heaping over time in 34 countries in Sub Saharan Africa between 1987 and 2015 Mark Lyons-Amos1 & Tara Stones1 BMC Research Notes volume 10, Article number: 760 (2017)

This paper evaluates one aspect of data quality within DHS surveys, the accuracy of age reporting as measured by age heaping. Other literature has explored this phenomenon, and this analysis builds on previous work, expanding the analysis of the extent of age heaping across multiple countries and across time. This paper makes a comparison of the magnitude of Whipple's index of age heaping across all Demographic and Health Surveys from 1986 to 2015 in Sub-Saharan Africa. A random slope multilevel model is used to evaluate the trend in the proportion of respondents within each survey rounding their age to the nearest age with terminal digit 0 or 5. The trend in the proportion of misreported ages has remained flat, in the region of 5% of respondents misreporting their age. We find that Nigeria and Ghana have demonstrated considerable improvements in age reporting quality, but that a number of countries show considerable increases in the proportion of ages misreported, most notably Mali and Ethiopia, which demonstrate increases in excess of 10% points.

Much attention has been paid to ensuring that basic data within Demographic and Health Surveys is correctly measured. Age heaping is frequently encountered and presents significant problems for accurate collection of data. Age heaping, or age preference, is the tendency for people to incorrectly report their age or date of birth. Individuals' heaping behaviours favour certain ages, commonly those ending in '0' or '5' [1], although there is some evidence of minor heaping at eight [2]. At the most basic level, inclusion of women aged 15–49 in DHS depends on accurate reports of the ages of women near the boundaries of that age interval in the survey. The inclusion of children under five (or another specified age) for the questions about child health, immunizations, and nutrition also depends on accurate reports of their birth dates. Many measures are age-specific, such as estimates of age-specific fertility rates and infant and child mortality rates. Estimates of levels and trends in such rates may be affected by misreporting of ages and dates of birth for a woman and her children, or dates of death for her children. Age displacement of children can seriously distort estimates of current levels and recent trends in fertility and mortality and is by no means unique to DHS surveys: evaluations of censuses and community surveys have revealed severe age misreporting [2,3,4]. Additionally, age heaping can have implications for the quality of analyses into other phenomena, such as cause-specific death rates [5]. This has led to a plethora of studies evaluating the quality of basic demographic data in the DHS in a variety of contexts [6,7,8,9]. Our analysis provides an evaluation of how the excess proportion varies over time and between countries. This analysis expands on previous works [7, 10], increasing the range of countries evaluated as well as capturing trends across time, to account for potential structural change which may improve the quality of retrospective data [8] as well as better data collection techniques [11, 12]. Our working hypothesis is that there should be a falling trend in the proportion of ages showing digit preference across time.
As such, this paper addresses two major research aims: (1) capturing the overall trend in the quality of age recall data across multiple waves of DHS surveys; and (2) evaluating the extent of cross-national variation in the extent of age heaping.

DHS are nationally representative, cross-sectional household surveys with multi-stage cluster sampling designs. Respondents are women of reproductive age (which is defined by DHS as between 15 and 49 years) and only women between these ages are interviewed. While a male dataset is available, and digit preference is also exhibited, albeit to a lower extent, for males [4], collection is much less consistent (especially for early surveys) and so the analysis is limited to females only. Exact details of the sampling designs are available on a country-by-country basis, and data sets can be downloaded on request from the provider. We restrict our analysis to the Sub-Saharan Africa region to minimize the extent to which cross-cultural variation in age heaping may play a role [13].

Whipple's index of age heaping

This analysis uses Whipple's index of age heaping to measure age data quality [4, 13]. Whipple's index measures the excess proportion of ages ending in either 0 or 5. Where no ages are heaped, we expect this index to take the value 0.2. Deviation from this number indicates some degree of terminal digit preference, for example 0.25 indicating that 5% of ages have been heaped at either a zero or five terminal digit.

Regression model

We specify the dependent variable in our model as the excess proportion of ages ending in 0 or 5 (Whipple's index of heaping), denoted as $y_{tj}$, where $y$ is the proportion of respondents with heaped ages, indexed by year of survey $t$ and country $j$. Survey years are hierarchically nested within countries. We specify a multilevel model in the form of Eq. 1, where the logit of the index of heaping is a function of the year of the survey, with intercountry variation captured by a random effect parameter at the country level, $\nu_{0j}$:
$$ \text{logit}\left( y_{tj} \right) = \beta_{0} + \beta_{1} t + \nu_{0j}, \qquad \nu_{0j} \sim N\left( 0, \sigma^{2} \right) \tag{1} $$
To overcome the non-linearity of the proportion of ages heaped at zero, we use a logit link to allow the specification of the model in the linear form of Eq. 1. We explored different specifications of the year of survey parameter by introducing square and cubic terms for the effect of year to account for non-linearity, but neither of these specifications improved model fit on -2LogLikelihood significance tests. We performed tests for differences in the trend in the proportion of ages heaped over time by introducing a random slope parameter at the country level. This model is described in Eq. 2:
$$ \text{logit}\left( y_{tj} \right) = \beta_{0} + \beta_{1} t + \nu_{0j} + \nu_{1j} t, \qquad \nu_{0j} \sim N\left( 0, \sigma^{2} \right), \; \nu_{1j} \sim N\left( 0, \sigma^{2} \right) \tag{2} $$
In Eq. 2, the random effect parameter $\nu_{1j}$ allows deviation from the overall trend in Whipple's index of heaping over time according to indexation by country $j$. This parameter is allowed to correlate with $\nu_{0j}$. Model estimation is conducted by taking the logit of Whipple's index of heaping and using this as the response variable in a linear multilevel analysis.
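Because the heaping measure and the two models in Eqs. 1 and 2 are fully specified here, a small illustration may help make them concrete. The sketch below is not the authors' MLwiN/runmlwin code: it uses a different estimator (REML via Python's statsmodels rather than RIGLS with 2nd-order PQL), and the `surveys` data frame and its columns are hypothetical.

```python
# A minimal sketch of the measure and models above (not the authors' MLwiN/runmlwin
# code): compute the share of reported ages ending in 0 or 5 for each survey,
# logit-transform it, and fit the random-intercept (Eq. 1) and random-slope (Eq. 2)
# models. `surveys` is a hypothetical data frame with columns country, year, ages.
import numpy as np
import pandas as pd
from scipy.special import logit, expit
import statsmodels.formula.api as smf

def heaped_share(ages):
    """Proportion of reported ages with terminal digit 0 or 5."""
    return np.isin(np.asarray(ages) % 10, [0, 5]).mean()

rows = []
for _, s in surveys.iterrows():
    share = heaped_share(s["ages"])
    rows.append({"country": s["country"],
                 "t": s["year"] - 1987,       # time measured from the first survey year
                 "excess": share - 0.2,        # excess over the 20% expected with no heaping
                 "y": logit(share)})           # response on the logit scale
df = pd.DataFrame(rows)

# Eq. 1: random intercept by country; Eq. 2: add a country-level random slope on t.
m1 = smf.mixedlm("y ~ t", df, groups=df["country"]).fit(reml=True)
m2 = smf.mixedlm("y ~ t", df, groups=df["country"], re_formula="~t").fit(reml=True)

# Fitted values are on the logit scale; the inverse logit puts country-level
# predictions back on the proportion scale, from which percentage-point changes
# between survey years can be read off.
df["predicted_share"] = expit(m2.fittedvalues)
print(m2.summary())
```

The final back-transformation step is also how predicted values of the index (and the percentage-point differences reported later) would be recovered from a model fitted on the logit scale.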
Models are estimated using MLwiN 2.36 [14], with Restricted Iterative Generalised Least Squares (2nd order Penalised Quasi Likelihood) estimation used to account for the low number of observations per country. The countries included, the years of survey and the proportion of 0 and 5 terminal digits are presented in Table 1. The overwhelming majority of surveys exhibit proportions.

Table 1 Countries for analysis and years of survey with proportion of ages with terminal digit 0 or 5

Results from the modelling are presented in Table 2. We find no evidence of a trend toward an improvement in the proportion of ages heaped, with the coefficient from both Model I and Model II being statistically non-significant and substantively small.

Table 2 Estimated multilevel model for proportion of ages heaped

The introduction of the random slope parameter proved to significantly improve model fit based on a likelihood test. The predicted values by country from Model 2 are presented in Fig. 1. The overall trend in the proportion of ages heaped is denoted by the red line, with individual country trajectories denoted by the blue lines. In general, there is a reasonable degree of clustering around the population line: the majority of countries have a proportion of ages heaped which is consistent over time, and in the range of between 2 and 6%.

Fig. 1 Estimated median predicted proportion of ages heaped by country across survey year

Based on the predicted values of Whipple's index of heaping, we identify countries with substantial differences between survey years 1987 and 2015 based on the residuals from Model 2. We identify two countries with large predicted decreases in the proportion of ages heaped, where we define a large decrease as being 4% points or more. Nigeria exhibits the largest decrease in the proportion of respondents reporting a heaped age, with a decline in the predicted value of Whipple's index of 6.22% points; the only other country exhibiting a large substantive decrease in the proportion of respondents with a heaped age is Ghana, with a fall of 4.28% points. A number of countries exhibit substantive increases in the proportion of respondents reporting a heaped age, again defined as an increase of 4% points or more between the predicted values of Whipple's index between 1987 and 2015. Sierra Leone, Chad and Ethiopia demonstrate increases of 4.46% points, 7.38% points and 7.58% points respectively. We also note exceptionally large increases in the proportion of respondents with a heaped age in excess of 10% points between 1987 and 2015: Mali exhibits an increase of 11.78% points and Benin increases by 13.87% points.

Data quality from retrospective sample surveys continues to be of major importance in social science, and basic demographic data is no exception. This paper therefore provides an assessment of the quality of age-reported data within the DHS. We use all available DHS for the Sub-Saharan Africa region to assess trends over time in the proportion of reported ages which are heaped on terminal digits 0 and 5. Our initial research hypothesis was that there may be a secular trend toward lower proportions of ages heaped. However, in our analysis, we find no evidence of a significant decline in the proportion of ages heaped. That said, the predicted probabilities are at a relatively low level for most countries, and are not a substantial concern.
We do, however, identify some major outliers: Nigeria and Ghana have considerable falls in the proportion of ages heaped, while there have been dramatic increases in Sierra Leone, Ethiopia and Chad.

DHS data have provided detailed insight into developing countries, but over time their methods have evolved. Research models can better cope with attitudes and behaviours in the field, and the process, in recent years, allows for improved cultural translations. This has indeed reduced heaping in some areas, and analyses from Sub-Saharan Africa show some improvement in data accuracy, along with increased levels of development. There appears to have been an adjustment for temporality being socially, culturally and economically defined, which indicates that age heaping remains a mutable phenomenon. This has been noted for other basic demographic information [8, 11, 12], where improvements in data collection procedures, provision of written information to increasingly literate populations [10] and better collection techniques [8, 11, 12] have been means of improving the accuracy of recalled data, for example birth weight [7, 8]. Potential explanations for improving data quality largely fall into the realms of better quality information being provided by respondents, and better collection techniques. Considering the effect of respondents, increasing utilisation of written demographic information, made possible by greater levels of numeracy [10], has led to improvements over time in demographic data quality. Low numeracy and vague ideas about date of birth were potentially down to low degrees of schooling [15]. Additionally, falling rates of malnutrition may be a potential explanation, as infant protein malnutrition syndrome was, and is in the poorest economies, a limiting factor in an adult's cognitive abilities (which can cause misreports in age) [16]. Turning to the use of new techniques to reduce inaccuracies, more recent versions of the DHS record additional variables using calendars; a similar technique of alternate measures of time-paths uses 'local calendars' that reference local events and festivals which correspond to the individual's personal life [12]. This method is relatively successful in that respondents' memory is triggered, resulting in less duration heaping. These advancements framed the motivation for our research hypothesis that the prevalence of age heaping would fall across time. We find little evidence of this, however: there is no significant year effect in our models, indicating no movement toward secular improvement in the quality of age data. That said, our initial expectation of severe bias in certain contexts, based on historic census information [2, 3], was also misplaced. While the lack of improvement in age data quality in the DHS is disappointing, this should be tempered by the fact that the level of distortion is low to begin with. We do note some heterogeneity when taking country context into account, with some countries showing somewhat large changes in the degree of age misreporting. Tentatively, these changes can be explained by economic performance: relatively high growth rates in Nigeria and Ghana compared to moribund economic growth in Ethiopia and Chad, exacerbated by internal conflict and violence, which may have disrupted vital registration procedures. In any case, this study highlights the need to take country context into account when analysing data quality, even for standardised datasets such as the DHS.
This analysis is only able to identify the proportion of ages in a population with digit preference, not whether individuals are misreporting their age. National level averages are produced: the likelihood of heaping is likely to vary between sub national groups e.g. better educated women are less likely to misreport their age than women with low educational attainment due to better numeracy [10]. A'Hearn B, Baten J, Crayen D. Quantifying quantitative literacy: age heaping and the history of human capital. J Econ Hist. 2009;69(3):783–808. Bailey M, Makannah TJ. An evaluation of age and sex data of the population censuses of Sierra Leone: 1963–1985. Genus. 1996;52(1–2):191–9. Mukherjee BN, Mukhopadhyay BK. A study of digit preference and quality of age data in Turkish censuses. Genus. 1988;44(1–2):201–27. Pardeshi GS. Age heaping and accuracy of age data collected during a community survey in the Yavatmal District, Maharashtra. Indian J Community Med. 2010;35(3):391–5. https://doi.org/10.4103/0970-0218.69256. al-Haddad BJ, Jedy-Agba E, Oga E, Adebamowo C. Age heaping and cancer rate estimation in Nigeria. Working Paper 2013–03 Minnesota Population Centre; 2013. Johnson K, Grant M, Khan S, Moore Z, Armstrong A, Sa Z. Fieldwork-related factors and data quality in the demographic and health surveys program. DHS analytical studies No. 19. Calverton, Maryland, USA: ICF Macro. 2009. http://dhsprogram.com/pubs/pdf/AS19/AS19.pdf. Accessed 3 Nov 2017. Pullum TW. An assessment of the quality of data on health and nutrition in the DHS surveys, 1993–2003. DHS Methodological Reports 6 Calverton, Maryland, USA: Macro International. 2008. http://dhsprogram.com/pubs/pdf/MR6/MR6.pdf. Accessed 3 Nov 2017. Channon AAR, Padmadas SS, McDonald JW. Measuring birth weight in developing countries: does the method of reporting in retrospective surveys matter? Matern Child Health J. 2011;15(1):12–8. https://doi.org/10.1007/s10995-009-0553-3. Cleland J. Demographic data collection in less developed countries 1946–1996. Popul Stud. 1996;50(3):433–50. Pullum TW. An assessment of age and date reporting in the DHS Surveys 1985–2003 DHS Methodological Reports No. 5. Calverton, Maryland, USA: Macro International. 2006. http://dhsprogram.com/pubs/pdf/MR5/MR5.pdf. Accessed 3 Nov 2017. Becker S, Diop-Sidibé N. Does use of the calendar in surveys reduce heaping? Stud Fam Plann. 2003;34(2):127–32. Haandrikman K, Rajeswari NV, Hutter I, Ramesh BM. Coping with time: using a local time-path calendar to reduce heaping in durations. Time Soc. 2004;13(2–3):339–62. Shryock HS, Siegel JS. Methods and materials of demography. New York: Academic Press; 1976. Leckie G, Charlton C. runmlwin—A program to run the MLwiN multilevel modelling software from within Stata. J Stat Softw. 2013;52(11):1–40. Crayen D, Baten J. Global trends in numeracy 1820–1949 and its implications for long-term growth. Explor Econ Hist. 2010;47(1):82–99. Barbieri M, Hertrich V, Grieve M. Age difference between spouses and contraceptive practice in Sub-Saharan Africa. Population. 2005;60(5/6):617–54. https://doi.org/10.2307/4148187 (English Edition, 2002-). MJLA: data analysis, conceptualisation of study. TS: data preparation and cleaning, review of literature. Both authors read and approved the final manuscript. Data are available on request from https://dhsprogram.com/Data/. Consent to publish DHS receive government authorization, use informed consent and assurance of confidentiality for ethical use of data by third parties. 
Ethical approval provided by University of Portsmouth Faculty of Health Science and Social Work Ethics Committee. School of Health Sciences and Social Work, University of Portsmouth, Portsmouth, UK Mark Lyons-Amos & Tara Stones Correspondence to Mark Lyons-Amos. Lyons-Amos, M., Stones, T. Trends in Demographic and Health Survey data quality: an analysis of age heaping over time in 34 countries in Sub Saharan Africa between 1987 and 2015. BMC Res Notes 10, 760 (2017). https://doi.org/10.1186/s13104-017-3091-x Accepted: 13 December 2017 Demographic and Health Survey
7.7 Applications of Dot and Cross Product Calculus and Vectors Nelson

Lectures (3 videos): Finding Area between Two Vectors; Work as an Application of Dot Product; Torque Application of Cross Product

Solutions (19 videos):

Calculate $|\vec{a} \times \vec{b}|$, where $\vec{a} = (1, 2, 1)$ and $\vec{b} = (2, 4, 2)$. If $\vec{a}$ and $\vec{b}$ represent the sides of a parallelogram, explain why your answer for part a. makes sense, in terms of the formula for the area of a parallelogram.

Calculate the amount of work done in each situation. A stove is slid 3 m across the floor against a frictional force of 150 N. A 40 kg rock falls 40 m down a slope at an angle of $50^\circ$ to the vertical. A wagon is pulled a distance of 250 m by a force of 140 N applied at an angle of $20^\circ$ to the road. A lawnmower is pushed 500 m by a force of 100 N applied at an angle of $45^\circ$ to the horizontal.

Determine each of the following: $\vec{i} \times \vec{j}$, $-\vec{i} \times \vec{j}$, $\vec{i} \times \vec{k}$, $-\vec{i} \times \vec{k}$.

Calculate the area of the parallelogram formed by the following pairs of vectors: $\vec{a} = (1, 1, 0)$ and $\vec{b} = (1, 0, 1)$; $\vec{a} = (1, -2, 3)$ and $\vec{b} = (1, 2, 4)$.

The area of the parallelogram formed by the vectors $\vec{p} = (a, 1, -1)$ and $\vec{q} = (1, 1, 2)$ is $\sqrt{35}$. Determine the value(s) of $a$ for which this is true.

In $\mathbb{R}^3$, points $A(-2, 1, 3)$, $B(1, 0, 1)$ and $C(2, 3, 2)$ form the vertices of $\triangle ABC$. By constructing position vectors $\vec{AB}$ and $\vec{AC}$, determine the area of the triangle. By constructing position vectors $\vec{BC}$ and $\vec{CA}$, determine the area of the triangle.

A 10 N force is applied at the end of a wrench that is 14 cm long. The force makes an angle of $45^\circ$ with the wrench. Determine the magnitude of the torque of this force about the other end of the wrench.

Parallelogram OBCA has its sides determined by $\vec{OA} = \vec{a} = (4, 2, 4)$ and $\vec{OB} = \vec{b} = (3, 1, 4)$. Its fourth vertex is point C. A line is drawn from B perpendicular to side AC of the parallelogram to intersect AC at N. Determine the length of BN.

For the vectors $\vec{p} = (1, -2, 3)$, $\vec{q} = (2, 1, 3)$, and $\vec{r} = (1, 1, 0)$, show the following to be true: the vector $(\vec{p} \times \vec{q}) \times \vec{r}$ can be written as a linear combination of $\vec{p}$ and $\vec{q}$; $(\vec{p} \times \vec{q}) \times \vec{r} = (\vec{p}\cdot \vec{r})\vec{q} - (\vec{q}\cdot \vec{r})\vec{p}$.
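These exercises all reduce to three formulas: area $= |\vec{a} \times \vec{b}|$, work $W = |\vec{F}||\vec{d}|\cos\theta$, and torque magnitude $|\vec{\tau}| = |\vec{r}||\vec{F}|\sin\theta$. The short sketch below is illustrative only (not part of the Nelson material) and numerically checks a few of the exercises with NumPy.

```python
# Illustrative numeric checks (not part of the Nelson material) of the three formulas
# behind these exercises: area = |a x b|, work W = |F||d| cos(theta), and
# torque magnitude |tau| = |r||F| sin(theta).
import numpy as np

# Area of the parallelogram on a = (1, 1, 0) and b = (1, 0, 1): |a x b| = sqrt(3).
a = np.array([1.0, 1.0, 0.0])
b = np.array([1.0, 0.0, 1.0])
print("parallelogram area:", np.linalg.norm(np.cross(a, b)))          # ~1.732

# Work sliding the stove 3 m against a 150 N frictional force (force along the motion).
print("work on stove:", 150 * 3 * np.cos(0.0), "J")                    # 450 J

# Work pulling the wagon 250 m with 140 N applied at 20 degrees to the road.
print("work on wagon:", 140 * 250 * np.cos(np.radians(20)), "J")

# Torque of a 10 N force at 45 degrees on a 0.14 m wrench.
print("torque:", 0.14 * 10 * np.sin(np.radians(45)), "N*m")            # ~0.99 N*m
```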
A note on generalization of bent Boolean functions

Bimal Mandal 1, and Aditi Kar Gangopadhyay 2, CARAMBA, INRIA Nancy-Grand Est., 54600, France; Department of Mathematics, Indian Institute of Technology Roorkee, 247667, India. * Corresponding author: Aditi Kar Gangopadhyay. Received April 2019, Revised February 2020, Published May 2021, Early access April 2020. Advances in Mathematics of Communications, May 2021, 15(2): 329-346. doi: 10.3934/amc.2020069

Suppose that $ \mu_p $ is a probability measure defined on the input space of Boolean functions. We consider a generalization of Walsh–Hadamard transform on Boolean functions to $ \mu_p $-Walsh–Hadamard transforms. In this paper, first, we derive the properties of $ \mu_p $-Walsh–Hadamard transformation for some classes of Boolean functions and specify a class of nonsingular affine transformations that preserve the $ \mu_p $-bent property. We further derive the results on $ \mu_p $-Walsh–Hadamard transform of concatenation of Boolean functions and provide some secondary constructions of $ \mu_p $-bent functions. Finally, we discuss the $ \mu_p $-bentness for Maiorana–McFarland class of bent functions.

Keywords: Boolean function, $ \mu_p $-Walsh–Hadamard transform, $ \mu_p $-bent function. Mathematics Subject Classification: Primary: 06E30, 94C10; Secondary: 05A05. Citation: Bimal Mandal, Aditi Kar Gangopadhyay. A note on generalization of bent Boolean functions. Advances in Mathematics of Communications, 2021, 15 (2) : 329-346. doi: 10.3934/amc.2020069
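The abstract does not spell out the transform explicitly, but in the related literature on Boolean functions with biased inputs a $\mu_p$-weighted Walsh–Hadamard-type transform is commonly taken to be $\sum_{x} \mu_p(x)\,(-1)^{f(x)\oplus a\cdot x}$, with $\mu_p$ the product measure under which each input bit equals 1 independently with probability $p$. The toy sketch below assumes that definition (it may differ from this paper's exact normalisation); at $p = 1/2$ it reduces to the usual normalised Walsh–Hadamard coefficients.

```python
# Toy sketch of a biased ("mu_p") Walsh-Hadamard-type transform of a Boolean function.
# Assumption (not stated in the abstract above): mu_p is the product measure under
# which each input bit equals 1 independently with probability p, and the transform
# is sum_x mu_p(x) * (-1)^(f(x) XOR a.x).  At p = 0.5 this reduces to the usual
# normalised Walsh-Hadamard coefficients.
from itertools import product

def mu_p(x, p):
    """Probability of the input vector x under the i.i.d. biased measure."""
    w = sum(x)                                    # Hamming weight of x
    return (p ** w) * ((1 - p) ** (len(x) - w))

def biased_wht(f, n, a, p):
    """mu_p-weighted Walsh-Hadamard coefficient of f at the point a."""
    total = 0.0
    for x in product((0, 1), repeat=n):
        dot = sum(ai * xi for ai, xi in zip(a, x)) % 2    # inner product a.x mod 2
        total += mu_p(x, p) * (-1) ** (f(x) ^ dot)
    return total

# Example: the 2-variable AND function, under the unbiased and a biased measure.
f_and = lambda x: x[0] & x[1]
for a in product((0, 1), repeat=2):
    print(a, round(biased_wht(f_and, 2, a, 0.5), 4), round(biased_wht(f_and, 2, a, 0.3), 4))
```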
What's the difference between average velocity and instantaneous velocity?

Suppose the distance $x$ varies with time as: $$x = 490t^2.$$ We have to calculate the velocity at $t = 10\ \mathrm s$. My question is: why can't we just put $t = 10$ in the equation $$x = 490t^2$$ which gives us the total distance covered by the body, and then divide it by 10 (since $t = 10\ \mathrm s$), which will give us the velocity, like this: $$v~=~\frac{490 \times 10 \times 10}{10} ~=~ 4900\ \frac{\mathrm{m}}{\mathrm{s}}$$ Why should we use differentiation, like this: $$ \begin{array}{rl} x & = 490t^2 \\ v & = \mathrm dx/\mathrm dt \\ & = \mathrm d(490t^2)/\mathrm dt \\ & = 490 \times 2 \times t \\ & = 490 \times 2 \times 10 \\ & = 9800\, \frac{\mathrm{m}}{\mathrm{s}} \end{array} $$ This not only creates confusion but also gives a different answer. Any help is highly appreciated.

kinematics velocity differentiation calculus

AccidentalFourierTransform The Mathemagician

This feels like a math.SE question. See for example the nearly 200 results of the search math.stackexchange.com/search?q=average+instantaneous – AccidentalFourierTransform May 8 '18 at 18:05

Related: physics.stackexchange.com/q/100331/2451 and links therein. – Qmechanic♦ May 8 '18 at 18:47

The "Be nice." policy applies at all times. In particular casting aspersion on other user because you disagree with the way they voted is not an acceptable use of the site. – dmckee --- ex-moderator kitten May 8 '18 at 18:59

Here's a question I think is very relevant to this type of question, I won't link my answer directly, but as someone who has had the same kind of confusion in the past but eventually a much clearer understanding I offered my own explanation: math.stackexchange.com/q/1321769/2812 – AaronLS May 8 '18 at 20:18

Your computation starts at 0 and ends at 10. Why is the ten seconds before the time in question more relevant than the ten seconds after, which you ignore? Can you explain why you chose to treat the behaviour before the time in question as relevant, but the behaviour after as irrelevant? – Eric Lippert May 9 '18 at 14:04

Your question is legitimate and I don't understand why it got downvoted. The confusion arises in the difference between average and instantaneous velocity. Consider this example: a car moves at 10 m/s for 5 seconds, then stops at a light for another five seconds. What is the velocity of the car after 7 seconds? According to your calculation, it would be $\frac{5 \,\textrm{s}\cdot10\,\textrm{m/s}}{7 \, \textrm{s}}\approx 7.14$ m/s, which is obviously wrong because the car is completely at rest after 7 seconds. What you just computed is the average velocity of the car during those 7 seconds. Asking for the velocity of a body at a given point in time is equivalent to asking "how much will the position change after an infinitesimal amount of time?", which is, in non-rigorous terms, like taking an infinitesimal amount of space $dx$ and dividing it by an infinitesimal amount of time $dt$ (this is not how derivatives actually mathematically are defined, but it works at an intuitive level). The average velocity during an infinitesimal amount of time becomes the instantaneous velocity and is computed using the derivative. In our previous example we would obtain $0$, because at 7 seconds, and just before and just after 7 seconds, the car is at rest.

Nice answer.
Interesting aside to your comment "this is not how derivatives actually mathematically are defined;" you can actually rigorously derive calculus with infinitesimals rather than via limit-based analysis by extending the real numbers to the so-called hyperreal numbers, which contain the reals plus infinitesimals like $dx$ and infinity ($=1/dx$). This alternative form of analysis is sometimes called non-standard analysis $\endgroup$ – Punk_Physicist May 8 '18 at 18:02 $\begingroup$ "Is $\frac{\mathrm{d}y}{\mathrm{d}x}$ not a ratio?" explains it. The tl;dr is that classical mathematics ("real analysis") prohibited the ratio definition based on infinitely small differences because it literally refused to allow infinitely small differences to be defined. The absurdity of it is on par with saying that we can't talk about quantum wave functions because they're not defined in classical mechanics. I mean, the derivative is a ratio, even if real analysis fails to provide an adequate framework. $\endgroup$ – Nat May 8 '18 at 19:06 $\begingroup$ This question was no doubt downvoted because elementary research on the web or in any basic physics text would have provided an answer. $\endgroup$ – ZeroTheHero May 8 '18 at 19:42 $\begingroup$ @Nat: to talk about infinitesimals to someone who doesn't distinguish clearly between calculating the average of something over distinct intervals and doesn't understand the notion of derivative is kind of ridiculous. And, by the way, you'll be hard-pressed to find infinitesimals used in mainstream mathematics. $\endgroup$ – Martin Argerami May 9 '18 at 0:06 $\begingroup$ Or for another car example, as my high school physics teacher put it, to estimate the total time of a trip including back roads, stop lights, and a highway stretch, you care about your average speed. If a cop pulls you over on the highway part, the officer cares about your instantaneous speed. $\endgroup$ – aschepler May 9 '18 at 11:16 One thing you should notice about your method is that you get a different result depending on what time range you're averaging over. You're averaging from time t = 0 to t = 10, but what's so special about t = 0? If you do the same thing, but start from t = 5, you get: $$v = \frac{490 \times 10 \times 10 - 490 \times 5 \times 5}{10 - 5} = 7350$$ Since the goal is to determine the instantaneous velocity at a particular time, the fact that the result depends on some other time that you include in the equation should be a strong indication that your result is not just about the desired time, it's about the range of time as a whole. When you compute a derivative, you're calculating the limit of the result of this as the size of this time range gets smaller and smaller, which approaches the infinitesimal period that we call "instantaneous". Another way to think about this intuitively is that the instantaneous velocity is what you would see if you had a speedometer and you looked at it at time t = 10. The speedometer reading at that time is not an average since you started the car, it's just that momentary velocity (this is a simplification, since the internal mechanism of the speedometer is necessarily averaging over a short period of time, but it gets the point across). Barmar $\begingroup$ Apart from being correct (easy), it's also very clear, and short (not so easy) $\endgroup$ – Neil_UK May 9 '18 at 12:34 $\begingroup$ Good point about the number of arguments.
The lay use of "speed" clearly treats it as being a property of an object at a particular time, not a property of an interval. $\endgroup$ – Acccumulation May 9 '18 at 15:08 $\begingroup$ @Acccumulation Thanks. That prompted me to add another paragraph that relates this to the way we view speed in everyday life. $\endgroup$ – Barmar May 9 '18 at 15:56 You're calculating the difference quotient,$$ \frac{f\left(b\right)-f\left(a\right)}{b-a} \,,$$where $x\left(t\right)=490t^2$, $b=10$, and $a=0$ such that$$ \frac{f\left(b\right)-f\left(a\right)}{b-a}~=~ \frac{490 \times {10}^2-490 \times 0^2}{10-0}~=~ 4900. $$ The difference quotient converges on the derivative as the endpoints become infinitely close. Your calculation approach is pretty much like how computers perform parts of the finite difference method, if you make that interval smaller. For example, let's use your method where $x_b=10+0.001$ and $x_a=10-0.001$; then, asking WolframAlpha to do this math for us, we've got$$ \frac{f\left(b\right)-f\left(a\right)}{b-a}~=~ \frac{490 \times {\left(10+0.001\right)}^2-490 \times {\left(10-0.001\right)}^2}{\left({10+0.001}\right)-\left({10+0.001}\right)}~\approx~ 9800. $$This is approximately the same value as we get from the analytical calculus approach. That said, if an equation needs the instantaneous rate of change, $\frac{\mathrm{d}f\left(x\right)}{\mathrm{d}x}$, then that's what it needs. As you've noted, the difference quotient can be a very different number when the difference between $x_b$ and $x_a$ isn't negligibly small. NatNat Here you can see the position $x$ changing with respect to $t$ as $x=490t^2$. You can see that the position changes faster on the right side. The velocity increases and the end velocity is obviously different from the initial velocity or the average one. I hope the visualisation helps to reinforce what the other answers explain in words. DžurisDžuris As others have said, when you compute $$\frac{x(10)-x(0)}{10-0}$$ you are computing the average velocity over the 10 second interval from $t=0$ to $t=10$. Graphically, this corresponds to finding the slope of the secant line (line crossing the graph at 2 points) shown in the graph below: The question, however, is asking for the instantaneous velocity at $t=10$, which corresponds graphically to the slope of the tangent line at $(10, 4900)$, as shown in the graph below: If you plot both lines on the same graph you can see that they do not have the same slope; in fact the tangent line is exactly twice as steep as the secant line. This demonstrates visually that the instantaneous velocity is not the same as the average velocity. Now here is a curious coincidence: the instantaneous velocity at $t=0$ is exactly $0$, the instantaneous velocity at $t=10$ is $9800$, so the average velocity on the interval $0 \le t \le 10$ happens to be equal to the average of the instantaneous velocities at the two endpoints of the interval! This does not happen in general -- in most cases, "average velocity over $[a,b]$" means something different from "average of the instantaneous velocities at $a$ and $b$" -- but it does always happen when the position function is quadratic (figuring out why this is true is a fun exercise). mweissmweiss You are mixing up the difference between instantaneous velocity and average velocity. Your method looks at average velocity, which is the change in position divided by the time it takes to travel that distance. This does not tell us the velocity at t = 10 seconds though. 
In general, the object could be moving very fast, slowly, or even at rest at t = 10 but still have the same average velocity. The time derivative of position gives us the instantaneous velocity at some time. So either method you described gives a velocity; they just do not describe the same things. BioPhysicist If the speed is constant, average velocity and instantaneous velocity are the same, regardless of which interval you use to calculate average velocity or which point you use to calculate instantaneous velocity. In your example, the velocity is changing - you can tell because the distance is proportional to $t^2$, not to $t$ - therefore, both average and instantaneous velocities will be different for different time intervals or different points in time, respectively. V.F.
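To make the contrast in these answers concrete, here is a minimal numerical sketch (the helper names are mine, and it assumes the questioner's $x = 490t^2$ in SI units): the average velocity over $[0, 10]$ stays at 4900 m/s, while averages over shorter and shorter intervals ending at $t = 10\ \mathrm s$ approach the instantaneous value 9800 m/s.

```python
# Numerical sketch (hypothetical helper names) for x(t) = 490 t^2 in SI units.
def x(t):
    return 490 * t ** 2

def average_velocity(t1, t2):
    """Average velocity of the body over the interval [t1, t2]."""
    return (x(t2) - x(t1)) / (t2 - t1)

print(average_velocity(0, 10))       # 4900.0 -- the value computed in the question

# Averages over shrinking intervals ending at t = 10 s approach 9800 m/s, the
# derivative's value (for this x(t) each result equals exactly 9800 - 490*h).
for h in (1.0, 0.1, 0.001, 1e-6):
    print(h, average_velocity(10 - h, 10))   # 9310.0, 9751.0, 9799.51, ~9800.0
```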
Intelligent hyperspectral target detection for reliable IoV applications Zixu Wang ORCID: orcid.org/0000-0002-1322-63111, Lizuo Jin1 & Kaixiang Yi2 EURASIP Journal on Wireless Communications and Networking volume 2022, Article number: 79 (2022) Cite this article In recent years, hyperspectral imagery has played a significant role in IoV (Internet of Vehicles) vision areas such as target acquisition. Researchers are focusing on integrating detection sensors, detection computing units, and communication units into vehicles to expand the scope of target detection technology with hyperspectral imagery. As imaging spectroscopy technology gradually matures, the spectral resolution of captured hyperspectral images is increasing. At the same time, the volume of data is also increasing. As a result, the reliability of IoV applications is challenged. In this paper, an intelligent hyperspectral target detection method based on deep learning network is proposed. It is based on the residual network structure with the addition of an attention mechanism. The trained network model requires few computational resources and can provide the results in a short time. Our method improves the value of mAP50 by an average of 3.57% for all categories and by up to 5% for a single category on the public dataset. As we all know, the accuracy of the object detection result is very important for the reliability of IoV applications. However, road conditions change rapidly, and the diversity and complexity of pedestrians and obstacles increase the difficulty of target detection. The traditional images have been unable to meet the requirements of target detection tasks. Sometimes it can easily lead to the intelligent driving system getting the wrong environmental information. Therefore, hyperspectral images have been introduced into IoV in recent years. However, object recognition from hyperspectral images is computationally complex. Traditional hyperspectral target recognition algorithms are not only slow and less robust, but also cannot be applied to hyperspectral target recognition systems. To solve the above problems, one of these research areas is to optimize the communication mechanism and a dynamic task offloading mechanism that can handle the massive amount of data that flows among vehicles and edge servers, to meet the real-time requirements in IoV [1, 2]. Another research area is to optimize the detection algorithms. The feature compression algorithm which is based on the traditional method is proposed to process large dimensionality of hyperspectral image data, such as the projection method. The dimensionality of hyperspectral image data is reduced through feature compression to reduce the computational difficulty. But feature compression method is much more difficult to process, it is complex to solve the feature projection, and the spectral information of the target is reduced after data compression. Nowadays, with the rapid development of neural network technology, it has been widely used in the field of deep learning [3,4,5]. Neural networks can simulate the mechanism of feature extraction at the level of the human brain. The more layers the neural network has, the larger its parameters and the stronger its feature extraction capability. Therefore, more and more excellent neural networks have been proposed, achieving better results and less processing time than traditional methods. 
Meanwhile, their powerful feature extraction capabilities have attracted the attention of scholars in the hyperspectral field, resulting in many methods that use neural networks to extract spectral properties for the classification and detection of targets in hyperspectral images [6, 7]. It is therefore of great interest to use neural networks for hyperspectral target detection. So in this paper, a neural network-based model for hyperspectral target detection (Squeeze and Excitation for Hyperspectral Target Detection: SEHyp) is proposed by analyzing the features of hyperspectral images. The model adopts the first-order target detection YOLO model structure and proposes a new feature extraction module with an attention mechanism for the spectral features of hyperspectral images. The attention mechanism adaptively weights the feature information to highlight the important feature information in the channel and attenuate the irrelevant feature information, thus improving the network accuracy. In addition, the model output module is modified to adopt a dual output, with separate outputs for target coordinates and categories. After experimental verification, good detection results are obtained. The trained network model achieves high accuracy and robustness and can provide results in a short time, which is crucial for meeting the reliability requirements of intelligent IoV applications. Related work Imaging spectroscopy is an important detection method for acquiring spatial and spectral information from materials; it can be used to obtain hyperspectral images and plays a crucial role in the field of visual perception, such as target identification, classification, and recognition. It has been used since an early stage to extract the spectral features of ground objects. The following is the current state of research in hyperspectral target detection technology [8, 9]. JX Yu and others propose a novel workflow performance prediction model (DAG-transformer) that fully exploits the sequential and graphical relationships between workflows to improve the embedding representation and perceptual ability of the deep neural network. Their study provides a new way to facilitate workflow planning [10]. Harsanyi proposed the constrained energy minimization (CEM) detection method [11]. The main idea of the CEM method is to extract information in a certain area by reducing the information interference from other areas; CEM has achieved good results in small target detection and is widely used. However, the CEM method requires prior knowledge of the ideal targets in hyperspectral images, so CEM algorithms can only be used to detect ideal targets. Jimenez applied genetic algorithms and projection methods to the extraction and categorization of hyperspectral image features [12]. The projection pursuit data processing method proposed by Prof. Friedman [13] was specifically designed for linear dimensionality reduction of high-dimensional data; the projection method reduces the dimensionality of the data to reduce the difficulty of subsequent processing.
However, as mentioned in the background of the study, the feature compression method is difficult to handle and the projection method is complicated to find the eigenprojections, so this method is only suitable for the target detection of pure point image elements. Li Wang proposed the SSSERN algorithm [14]. The attention mechanism is mainly introduced by adding the SE module to the residual network. The accuracy of the network is improved. There are also detection algorithms based on sparse representation [15, 16], which represent the image background as a more representative basis vector or spectrum and use the product of spectral prior knowledge and related parameters to represent the original hyperspectral data. Li et al. proposed the BJSR (background joint sparse representation) algorithm [17], an anomaly detection algorithm for hyperspectral images using background joint sparse representation by estimating an adaptive orthogonal background complementary subspace by BJSR, which adaptively selects the most representative background basis vectors for local regions, and then proposed an unsupervised adaptive subspace detection method to suppress the background and highlight the anomalous components at the same time. Although the sparse representation method can detect anomalous pixel points, it may receive the influence of sensor noise and multiple reflections of electromagnetic waves during hyperspectral acquisition, which leads to the spectral variation of the substance, resulting in poor detection of the algorithm, and the method can only identify the anomalous targets and cannot classify the targets. Hyperspectral image detection Hyperspectral images, which are composed of tens to hundreds of wavelength images, are three-dimensional structured images with both spatial and spectral information. The features extracted from hyperspectral images can be roughly divided into two categories: spatial features and spectral features, which are essentially two very different kinds of features. Spatial features are a reflection of the target's position, shape, size, and other information in two-dimensional space, while spectral features are a reflection of the target's ability to reflect light at different wavelengths, which is known as the "spectral fingerprint" and is one of the important optical properties of matter [18,19,20,21,22]. Hyperspectral images are always used as input data for target detection IoV applications. Hyperspectral images are the product of a combination of imaging and spectroscopic techniques. It can store both spatial and spectral information in the range photographed in a kind of data cube. This data cube can detect and identify objects with similar appearance but different materials. Hyperspectral images are not RGB images that only simply integrate the R, G, and B bands. The higher the spectral resolution of the hyperspectral image, the higher the number of spectral channels. For different targets with different representation capabilities, objects with different morphology have rich spatial information, while the spectral information may be weak, when using spatial features to identify and classify the targets can achieve higher accuracy. When the objects are similar in shape, their spatial information is weak, but the spectral information is rich, so the objects can be distinguished by the spectral information of a certain wavelength to obtain higher accuracy. Figure 1 shows the spectral image of the hyperspectral image. 
Each spectral image is represented as a grayscale image. Hyperspectral image display map Although hyperspectral stores a large amount of information, it also brings problems such as large amount of data and redundancy. The target detection application of the IoV needs to get target feature information from these data. Then, the specified target and its coordinates and categories are detected from the feature information. Most of the traditional hyperspectral target detection algorithms only consider the difference of spectra between different objects and utilize the spectral information of hyperspectral images, while the spatial information of objects is less used. And their spectral information extraction methods generally use feature compression methods to reduce the computational effort, or manually extract the feature spectral bands for difference analysis. The design of feature compression methods is difficult, such as the complicated feature projection in the projection method. The manual extraction of the feature spectrum is designed only for a single target, which is less adaptable to other targets. With the rapid development of computer hardware, deep learning technology has been improved unprecedentedly. Neural networks, as the most widely used theory in deep learning, has demonstrated powerful high-level (more abstract and semantic) feature representation and learning capabilities. It is able to extract nonlinear correlation features between data. This makes a qualitative leap in the detection accuracy and speed of target detection technology [23]. However, models for target detection on hyperspectral images are relatively rare, and most of them are color images or grayscale images, which lack spectral information compared to hyperspectral images. Therefore, the feature extraction module of the traditional neural network target detection model pays less attention to the spectral information and lacks feature extraction for feature extraction between channels. Target detection network analysis The target detection task can be divided into two operations, which are target localization and target classification. Target localization is to detect the location of the target in the image and output the coordinates of the target box, which are continuous data and belong to the regression task. Target classification is to classify the targets in the target frame, and the predicted values belong to discrete data, and only a specified number of categories are predicted for the targets. However, in the neural network target detection model, the target coordinate frame and the target category prediction values are often output only in the last convolution operation at the same time, and the two different task operations increase the difficulty of convolution prediction, so it is also necessary to improve the output module of the target detection model by using two convolution operations, respectively. Deep learning-based algorithms for target detection can be broadly divided into two-stage and single-stage methods. They have similar frameworks, and both the two-stage and single-stage algorithms can be divided into the structure shown in Fig. 2. Only for different types of detection models, the designed improvements focus on different aspects. The first-order model is an end-to-end model, where the input data is passed through the neural network and the prediction results are directly output. Target detection architecture The backbone module is used to extract features from the input data. 
Common backbone networks include VGG16, ResNet50, CSPResNeXt50, and CSPDarknet53-[24,25,26,27]. Usually, some functional layers are inserted in the middle of the backbone and head modules, which are used to collect feature mappings from different stages for fusion. These functional layers are called neck modules. For single-stage algorithms, the DensePrediction module completes the regression prediction of the prediction frame and categories, while for two-stage algorithms, further regression operations are required on the preselected frame [28]. As Sparse Prediction is shown in Fig. 2, the detection time of the second-order model therefore increases. However, all these models only process target detection on color images, which have only three channels, while hyperspectral images have tens or even hundreds of channels, and a complete and continuous spectral curve can be extracted from each pixel point of hyperspectral images. In order to extract hyperspectral image features effectively, it is necessary to improve the structure of traditional target detection models. In the target detection model structure, the backbone module is used to obtain the feature map by convolution operation on the input data with convolution kernels. However, the convolution operation only performs feature extraction on the image space dimension. For hyperspectral images, the spectral information of matter is also an important basis for judging the target, so it is necessary to operate on the data channel dimension in the feature extraction module of the model. In the hybrid-CNN network model [29], a joint null-spectrum operation is used for feature extraction of the spatial dimension and spectral segment dimension in the classification operation of the target. It employs an attention mechanism that allows the network to adaptively weight the feature information to highlight a certain important channel feature information. Meanwhile, attenuating irrelevant channel characteristic information. The network accuracy improves a lot. Although what we do in this paper is target detection, its method can also be borrowed by introducing the attention mechanism in the feature extraction module of target detection to improve the feature extraction capability for the channel dimension of the input data. In summary, in order to solve the problems of complex feature compression and poor adaptability of traditional target detection, this paper will research on how to adopt neural network methods for target detection of hyperspectral images. For the neural network method, most of the target detection models are for color images. In order to be able to utilize the unique spectral features of hyperspectral images, we consider introducing an attention mechanism in the feature extraction stage to improve the feature extraction ability of the network between channels. We also decouple the target localization task and the target classification task in target detection and try to use two convolutional operations in the output module to predict the two tasks separately. Proposed methodology In this paper, we propose a neural network-based hyperspectral target detection model, whose network structure is shown in Fig. 3. The overall network structure of SEHyp consists of a convolutional neural network with a first-order target detection framework, which contains three major network modules: backbone module, neck module, and head module. 
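To make the three-module decomposition easier to follow, here is a schematic PyTorch-style skeleton of how such a first-order detector is typically wired together; the class and attribute names are placeholders of mine rather than the authors' code, and the three feature stages and output scales (52, 26, 13) are taken from the sizes reported later in the paper. The concrete SEHyp blocks are described in the subsections that follow.

```python
import torch.nn as nn

class SEHypSketch(nn.Module):
    """Illustrative wiring only: backbone -> neck -> twin-branch heads."""
    def __init__(self, backbone, neck, heads):
        super().__init__()
        self.backbone = backbone   # stack of ResSEblocks (feature extraction)
        self.neck = neck           # SPP + pyramid fusion of multi-stage features
        self.heads = heads         # one head per output scale (52 / 26 / 13)

    def forward(self, x):                      # x: (N, 31, 416, 416) hyperspectral cube
        c5, c6, c7 = self.backbone(x)          # features from three backbone stages
        s_list = self.neck(c5, c6, c7)         # fused features, one per scale
        # each head returns (class scores, box coordinates, confidence)
        return [head(s) for head, s in zip(self.heads, s_list)]
```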
The backbone module consists of six ResSEblock blocks to extract the feature information from the input data. The neck module consists of SPP blocks and a pyramidal convolutional structure, which uses the feature information output from different stages of the feature extraction module as input and then fuses the different feature information to achieve detection of targets of different sizes. Finally, the head module is the output module of the network model; it outputs the prediction values, including the coordinates of the target box, the target category, and the confidence level, which is used to determine whether there is a target in the target box. SEHyp network structure SEHyp backbone As mentioned in the introduction, modifications are needed in the feature extraction module to account for the spectral characteristics of hyperspectral images, so the attention residual module (ResSEblock) is proposed in this paper; its network structure is shown in Fig. 4. The overall structure consists of a residual network, and an attention mechanism is added to the final output stage in order to extract feature information between channels. ResSEblock module structure ResSEblock contains N Resblocks, where N is the multiplier shown next to Resblock in Fig. 4. The number of channels is halved by a convolution of the input feature map, and the two sides are then fused. This reduces both the parameters and the computation, while the number of channels remains the same. Finally, the output of the ResSEblock also goes through the SE block, which selectively emphasizes informative features through an attention mechanism. By selectively emphasizing informative features, nonlinear relationships between spectral bands are extracted effectively. Resblock mainly consists of 1×1 and 3×3 convolution kernels. It uses 1×1 convolution kernels to compress the number of channels, which reduces both the number of parameters and the amount of computation of the model. The residual structure can effectively prevent gradient explosion and vanishing gradients as the number of network layers increases. It operates through skip connections, adding the input feature map to the output feature map and then passing the result to the next layer. Finally, nonlinearity is introduced with the Mish activation function. The Mish formula is shown in formula (1): $$\mathrm{Mish}(x)=x\cdot \tanh\left(\ln\left(1 + e^{x}\right)\right)$$ The SE block (Squeeze-and-Excitation block) was proposed by Jie Hu et al. [30]. It is an implementation of the attention mechanism that can improve the response of channel features. The SE module adaptively recalibrates the representation of the feature channels: it learns to use global information to selectively enhance channel feature representations and suppress useless parts. The structure of the SE module is shown in Fig. 5. The whole SE module can be divided into three steps. First, global average pooling is performed on the input feature map U, which outputs channel feature values of size 1 × 1 × C. The result is then passed through two 1 × 1 convolution operations: the first convolution compresses the C channels into C/r channels, where r is the compression ratio, and uses the ReLU activation function to add nonlinearity; the second convolution uses the sigmoid activation function and restores the number of channels to C. The resulting 1 × 1 × C weights are multiplied channel-wise with the input feature map to form the input feature map of the next stage.
SE module structure The mathematical formula is as follows: $$\mathrm{Out}=X\cdot \mathrm{Sigmoid}\left(F_{2}\left(\mathrm{ReLU}\left(F_{1}\left(F_{sq}\left(X\right),W_{1}\right)\right),W_{2}\right)\right)$$ W1 and W2 are the parameters of the two convolution operations, F1(⋅, ⋅) and F2(⋅, ⋅) denote those convolutions, and Fsq(⋅) is the global average pooling (squeeze) step. By learning adaptively weighted features, the SE module selectively emphasizes informative channels and weakens unimportant ones, improving the feature representation. Adding such an attention mechanism to the feature extraction network therefore allows the network to adaptively weight the feature information and highlight important features, which improves the accuracy of the network.
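As a concrete illustration of the two building blocks just described, the following is a minimal PyTorch sketch of the Mish activation from formula (1) and an SE-style channel recalibration block following the description of Fig. 5. The class names, the default reduction ratio, and the toy tensor shapes are my own choices; the actual SEHyp implementation may differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation from formula (1): x * tanh(ln(1 + e^x))."""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))  # softplus(x) = ln(1 + e^x)

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention, as described for Fig. 5."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # global average pooling -> 1 x 1 x C
        self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)  # C -> C/r
        self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)  # C/r -> C

    def forward(self, x):
        w = self.squeeze(x)               # (N, C, 1, 1) channel descriptor
        w = F.relu(self.fc1(w))           # excitation, step 1
        w = torch.sigmoid(self.fc2(w))    # excitation, step 2 -> per-channel weights in (0, 1)
        return x * w                      # reweight the input feature map channel-wise

# Quick checks on toy data.
print(Mish()(torch.tensor([-1.0, 0.0, 1.0])))       # ≈ tensor([-0.3034, 0.0000, 0.8651])
feats = torch.randn(2, 31, 52, 52)                  # a batch of 31-band feature maps
print(SEBlock(31, reduction=4)(feats).shape)        # torch.Size([2, 31, 52, 52])
```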
SEHyp neck The neck of the SEHyp model consists of an SPP module and pyramid-structured convolutional layers. The feature information is fused by different pooling operations and by upsampling or downsampling, and the module finally outputs the fused features Si. The SPP module uses 1 × 1, 3 × 3, 5 × 5 and 13 × 13 pooling kernels for max pooling. Pooling at different sizes gives the feature values different receptive fields, which allows different feature information to be obtained and therefore improves the network's ability to detect small targets as well as its localization accuracy. The role of the neck module of the SEHyp model is to fuse different feature information, enabling the network to improve the detection of targets of different sizes. The main work of this module is to fuse the feature information extracted by the backbone module. As shown in Fig. 6, the ith ResSEblock block in the backbone module is represented by μi(xi|wi), where xi denotes the input data of the ith block and wi denotes the network parameters of that block. The neck module is denoted by Q(a|wQ), where a is the input data of the neck module and wQ is its network parameters. The output feature values μi(x5|w5), μi(x6|w6) and μi(x7|w7) of the backbone module are used as the neck module's input a. Different stages of the convolutional layers obtain different feature information: the receptive field of each pixel differs and increases with network depth, so feeding in feature information with different receptive fields improves the efficiency of feature fusion and enhances detection accuracy for targets of different sizes. SEHyp neck module structure SEHyp head The SEHyp head module is the output module of the model. To address the coupling problem in the output module, two branches are used to predict the target class and coordinates, as shown in Fig. 7. SEHyp head module structure Object classification is a classification problem, while object location prediction is a regression problem. If both types of prediction share a single convolution operation, the information is mixed together, which can make the regression more difficult. Moreover, the spectral information unique to hyperspectral images plays a crucial role in detection, so two convolutional branches are used here for classification and coordinate prediction, respectively. Using two parallel branches to predict the two tasks separately lets each head focus on its own task and reduces the difficulty of the prediction regression. The head module is denoted by H(Si|wHi), where Si, the input data of the head module, is the feature information output by the neck module. The head finally outputs the results Classi, Boxi and Predictioni, where Classi is the probability of each category, Boxi contains the coordinates of the center point of the target box together with the box width and height, and Predictioni is the confidence level, which is used to determine whether a target exists in the box. For the target classification branch, the output sizes are (52, 52, ClassNum × 3), (26, 26, ClassNum × 3) and (13, 13, ClassNum × 3), where ClassNum is the number of categories the model can predict and 3 corresponds to the three prediction boxes predicted for each grid cell. The target coordinate branch outputs sizes of (52, 52, 15), (26, 26, 15) and (13, 13, 15); the 15 comes from the three boxes, each with four coordinate values and a confidence score, where the confidence level is used to determine whether there is a target in the box. In order for the model to perform backpropagation, a loss function is also required to calculate the difference between the predicted and true values. The size of the final predicted output is K × K × ((ClassNum + 5) × 3), where K is the side length of the grid, so there are K × K grid cells in total. Each grid cell predicts three boxes, and each predicted box requires the predicted class, the coordinates of the box, and the confidence. The loss function is shown in formula (3).
$$\begin{aligned} \mathrm{loss}\left(\mathrm{object}\right) & = \lambda_{\mathrm{coord}} \sum_{i = 0}^{K \times K} \sum_{j = 0}^{M} I_{ij}^{\mathrm{obj}} \left( 2 - w_{i} \times h_{i} \right)\left[ 1 - \mathrm{CIOU} \right] \\ & \quad - \sum_{i = 0}^{K \times K} \sum_{j = 0}^{M} I_{ij}^{\mathrm{obj}} \left[ \hat{C}_{i} \log \left( C_{i} \right) + \left( 1 - \hat{C}_{i} \right)\log\left( 1 - C_{i} \right) \right] \\ & \quad - \lambda_{\mathrm{noobj}} \sum_{i = 0}^{K \times K} \sum_{j = 0}^{M} I_{ij}^{\mathrm{noobj}} \left[ \hat{C}_{i} \log \left( C_{i} \right) + \left( 1 - \hat{C}_{i} \right)\log\left( 1 - C_{i} \right) \right] \\ & \quad - \sum_{i = 0}^{K \times K} \sum_{j = 0}^{M} I_{ij}^{\mathrm{obj}} \sum_{c \in \mathrm{classes}} \left[ \hat{p}_{i}\left( c \right)\log\left( p_{i}\left( c \right) \right) + \left( 1 - \hat{p}_{i}\left( c \right) \right)\log\left( 1 - p_{i}\left( c \right) \right) \right] \end{aligned}$$
The indicator $I_{ij}^{\mathrm{obj}}$ is used to determine whether there is a target in the jth prediction box of the ith grid cell: it is 1 when a target is present and 0 otherwise, and $I_{ij}^{\mathrm{noobj}}$ is its complement. Therefore, when there is no target in the grid cell, only the third row of the formula is calculated, which means that only the confidence loss is calculated. The first row of the loss function is the loss for the predicted box. The CIOU algorithm [15, 31] was used here. With the traditional IOU loss function [32], if the two boxes do not intersect, the loss is 0, so the distance between the two boxes is not reflected, and the loss does not accurately reflect how the two boxes overlap. The CIOU loss solves the problem of zero loss when the two boxes do not overlap by calculating the Euclidean distance between the centroids of the two boxes, and it also adds a scale (aspect-ratio) term for the predicted box, which improves the regression accuracy. In the formula, λcoord is a weight coefficient, K is the side length of the grid, M is the number of predicted boxes per grid cell, w and h are the width and height of the predicted box, and x and y are the coordinates of the center point of the predicted box. The confidence loss is given by the second and third rows and is calculated using cross-entropy; it is still computed when there are no objects in the grid cell, but its share of the loss is controlled by the λnoobj weight. The last row of the equation is the loss for the class, which also uses cross-entropy, but the class loss is only calculated when there are targets in the grid cell.
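Since the CIOU term above is the part that differs most from a plain IOU loss, here is a small, self-contained sketch of how it can be computed for boxes given as (cx, cy, w, h). It follows the published CIoU definition [31] rather than the authors' exact code, so the function name, box format, and numerical details are illustrative only.

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """CIoU loss for boxes in (cx, cy, w, h) format; pred/target have shape (N, 4)."""
    # Corner coordinates
    px1, py1 = pred[:, 0] - pred[:, 2] / 2, pred[:, 1] - pred[:, 3] / 2
    px2, py2 = pred[:, 0] + pred[:, 2] / 2, pred[:, 1] + pred[:, 3] / 2
    tx1, ty1 = target[:, 0] - target[:, 2] / 2, target[:, 1] - target[:, 3] / 2
    tx2, ty2 = target[:, 0] + target[:, 2] / 2, target[:, 1] + target[:, 3] / 2

    # IoU of the two boxes
    inter_w = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(min=0)
    inter_h = (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(min=0)
    inter = inter_w * inter_h
    union = pred[:, 2] * pred[:, 3] + target[:, 2] * target[:, 3] - inter
    iou = inter / (union + eps)

    # Squared center distance over squared diagonal of the smallest enclosing box
    center_dist2 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    enclose_w = torch.max(px2, tx2) - torch.min(px1, tx1)
    enclose_h = torch.max(py2, ty2) - torch.min(py1, ty1)
    diag2 = enclose_w ** 2 + enclose_h ** 2 + eps

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (torch.atan(target[:, 2] / (target[:, 3] + eps))
                              - torch.atan(pred[:, 2] / (pred[:, 3] + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    return 1 - iou + center_dist2 / diag2 + alpha * v

# Identical boxes give (numerically) zero loss; a shifted box gives a positive loss.
b = torch.tensor([[5.0, 5.0, 2.0, 4.0]])
print(ciou_loss(b, b))                                     # ≈ tensor([0.])
print(ciou_loss(torch.tensor([[6.0, 5.0, 2.0, 4.0]]), b))  # ≈ tensor([0.7067])
```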
In summary, the learning process of the hyperspectral target detection network model is as follows:
(1) Perform data processing on the training data.
(2) Input the data into the network model.
(3) Extract data features with the backbone module.
(4) Fuse the feature values extracted by the previous module in the neck module.
(5) Input the fused features into the head module to produce the final output prediction.
(6) Calculate the loss function from the corresponding image labels and predicted values, and update the network parameters.
(7) Repeat steps (2)–(6) until the network converges or the training count is completed.
Experimental results and analysis This section describes the training regime and experiments for our models. Experimental setting In this section, we conduct simulation experiments to verify the deep learning-based hyperspectral target detection algorithm. The data categories used in the experiments are six different categories of shoes. The object coordinates and category labels in the images are stored in an XML file with the same format as the COCO dataset labels. The data come from two sources: real hyperspectral images acquired by the hyperspectral image acquisition system we built in the optical laboratory, and hyperspectral images generated by the AWAN algorithm. The hyperspectral acquisition system is an image acquisition system based on the principle of single-slit push-broom spectral imaging. The initial model of the system is shown in Fig. 8. The AWAN algorithm was proposed by Li Yunsong et al. [33]. It performs spectral reconstruction from RGB images based on deep learning. That paper proposes the AWCA module, an adaptive weighted attention mechanism that improves the reconstruction accuracy of the network and thus the accuracy of simulating hyperspectral images from RGB images. The AWAN algorithm is used here to convert the acquired RGB images into hyperspectral images, which expands the training dataset. The process of building a hyperspectral image acquisition system There are 10236 hyperspectral images, divided into 6 categories. The spectral bands range from 400 to 700 nm, separated by 10 nm, for a total of 31 spectral bands, and the images are stored in MAT file format. Eighty percent of the dataset is used as the training set, 10% as the validation set, and 10% as the test set. The training set is used to train the neural network model, the validation set is used to verify the effect of network training, and the test set is used to test the actual learning ability of the network.
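The 80/10/10 split can be reproduced with a few lines of PyTorch. This is only an illustrative sketch: the toy TensorDataset stands in for the real 31-band image cubes (which are not loaded here), and only the sample count matches the paper.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in for the real dataset: 10236 cubes of shape (31, 416, 416) would not fit
# comfortably here, so tiny tensors mimic the sample count only.
full = TensorDataset(torch.zeros(10236, 1), torch.zeros(10236, dtype=torch.long))
n_train = int(0.8 * len(full))          # 8188
n_val = int(0.1 * len(full))            # 1023
n_test = len(full) - n_train - n_val    # 1025
splits = random_split(full, [n_train, n_val, n_test],
                      generator=torch.Generator().manual_seed(0))  # reproducible split
print([len(s) for s in splits])         # [8188, 1023, 1025]
```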
Experimental evaluation index The evaluation metrics used in this experiment mainly include Precision, Recall, Average Precision (AP), and Mean Average Precision (mAP). (1) IOU: The IOU formula is used to determine the similarity of two rectangular boxes, as shown in formula (4). When calculating AP, it is usually necessary to state at what IOU value the average correct rate is computed. For example, AP50 means that the prediction box is counted as having selected the target when the IOU is greater than or equal to 0.5. $$\mathrm{IOU}=\frac{\text{area of the intersection of the two rectangular boxes}}{\text{area of the union of the two rectangular boxes}}$$ (2) Precision, Recall, and F1: As shown in Fig. 9, the True class is the true value and the Hypothesized class is the predicted value; Y is the positive class and N is the negative class. TP is the case where the model predicts that the target exists and the true target also exists. TN is the case where the model predicts that the target does not exist and the true target does not exist either. FP is the case where the model predicts that the target exists but the real target does not. FN is the case where the model predicts that the target does not exist but the true target does. Precision is the proportion of true positives among the positives predicted by the model, as shown in Eq. (5). Recall is the ratio of the positives detected by the model to all true positives, i.e., the percentage of true targets detected by the model, as shown in Eq. (6). The F1 score balances recall and precision and can distinguish well between the advantages and disadvantages of algorithms; it is shown in Eq. (7). Schematic diagram of evaluation indicators $$\mathrm{Precision}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}$$ $$\mathrm{Recall}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}$$ $$F1=2\times \frac{\mathrm{Precision}\times \mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}$$ (3) AP and mAP: AP (Average Precision) is the average precision. Precision and Recall tend to be mutually exclusive: when Precision is small, Recall tends to be large, and when Precision is large, Recall tends to be small. AP balances the two by using the area under the Precision-Recall curve; the area under the curve is the AP value for that category, and the larger the area, the more accurate the model is for that class. mAP (mean Average Precision) is the mean average precision. It is used to measure the performance of the model over all categories. Object detection systems usually detect multiple kinds of objects, and AP is an evaluation index for a single category, so the mAP obtained by averaging the APs of all categories can effectively measure the quality of the model across all categories. Experimental environment and procedure This experiment was performed on Ubuntu 20.04. PyTorch is used to build the deep learning models. The main software components used are: torch version 1.2, torchvision version 0.4, tqdm version 4.60, opencv_python version 4.1.2.30, h5py version 2.10, etc. The server hardware configuration is: Intel Xeon 6226R processor, 256 GB RAM, 11 TB hard drive capacity, and two GeForce RTX 3090 GPUs. For model training, the backbone network uses pretrained weights: the backbone network is first trained for classification on RGB images, so that its weights are not completely random.
This helps the training weights of the later target detection model converge better. The input size of the hyperspectral object detection model is 416 × 416 × 31. The training method is freeze training, which is divided into two stages: a freezing period and a thawing period. During the freezing period the backbone network is frozen; that is, the parameters of the feature extraction network do not change, and only the parameters of the neck and head are fine-tuned. The number of epochs is 50 and the learning rate is 0.001, adjusted by the cosine annealing method. During the thawing period, the backbone network is unfrozen and the entire model is trained for another 50 epochs with a learning rate of 0.0001, which decreases as the number of training cycles increases. The optimizer used for model training is adaptive moment estimation (Adam) with parameters β1 = 0.9, β2 = 0.999, weight_decay = 0.0005. In this experiment, the detection performance of three models is tested: the YHyp network with only the inputs modified, the YSE network with the attention mechanism added, and the SEHyp network designed in this chapter. Figure 10 illustrates the change of the loss function values on the training set and the test set during the training process. It can be seen that the loss function decreases quickly at first, because transfer learning is used during training: the feature extraction module is first trained on a classification task and then migrated into the model, and freeze training is performed first, freezing the parameters of the feature extraction module and training only the parameters of the neck and head modules. Therefore, after several training iterations, the network can quickly adapt to the target detection task and the loss function decreases quickly. When training reaches the 50th epoch, the backbone is unfrozen and the parameters of the feature extraction module begin to change, so the figure shows a large drop in the loss after the 50th epoch. Network model training process The detection results are shown in Figs. 11, 12, 13, and 14, which are part of the results detected by the SEHyp model. Figures 12, 13, and 14 show pseudo-color images: because the hyperspectral image has 31 spectral bands covering visible light from violet to red, it cannot be displayed directly, so the 9th, 19th, and 29th spectral bands are extracted from the hyperspectral image to form the R, G, and B channels of a color image for display. In Fig. 11, the separate results are also presented in the grayscale image of the 19th spectral band. The detection results are displayed in the 19th band, which has better light intensity in the middle of the range, because the appearance of the target is not well displayed in the front part of the visible spectrum in hyperspectral images. SEHyp target detection results (19th spectrum) Pseudo-RGB images (29th, 19th, and 9th spectral bands) As shown in Figs. 15, 16, and 17, three metrics of target detection effectiveness are illustrated: Fig. 15 shows the F1 value for target detection, Fig. 16 shows the Precision value for target detection, and Fig. 17 shows the Recall value for target detection.
When the IOU threshold value is taken as 0.5, F1 and recall rate can be found by the figure that the present method has been improved considerably, which indicates that the overall effectiveness of the model with the ability to detect the presence of targets has been increased. For the precision value, the method is lower than the detection precision of the YSE model, but its value still reaches 86.71%, and the accuracy of the detected targets is high. Value of F1 in each model Value of Precision in each model Value of Recall in each model The results of the three network tests are shown in Table 1. APss, mAP50, and mAP75 are also shown here as evaluation metrics. APss is the value of AP50 in the Skating and Skiing category of the detection dataset. mAP50 and mAP75 are the average value of AP when IOU is set to 0.5 and 0.75. We also count the detection time on the GPU to judge the model performance. As the attention mechanism SE is added to the backbone network, the value of AP for skating and skiing category is shown in Fig 18. Compared with YHyp, the overall method has improved, especially in the Recall value greater than 0.8, the Precision has improved more. Compared with YSE, the area of AP decreases when Recall is less than 0.7, but the area of AP is improved. Overall, the value of APss is increased by 5%, the mAP value is significantly improved. mAP50 value is increased by 3.57%. But the detection time increases with the increase of backbone network parameters. The overall time is within 1 second, which can meet the requirements of the IoV system. In the case of the improved head module for binary branch prediction, the value of mAP50 is improved. But the time did not add much. Because the number of channels is reduced by using a 1 × 1 convolution operation before entering branch prediction. And the subsequent computations are also reduced. It can be seen that the hyperspectral target detection model designed in this paper can effectively perform the hyperspectral target detection task of IoV applications. Although the time-consuming has increased, it is still within the acceptable range for the IoV system. Table 1 Model test results The value of AP in the Skating and Skiing shoes class It can be seen that the hyperspectral target recognition model based on deep learning developed in this paper can effectively perform the task of hyperspectral target recognition. Compared with other traditional detection algorithms, the algorithm not only can be processed for the collected hyperspectral images, thus greatly improving the detection accuracy in the process of IoV applications. The introduction of the attention mechanism makes the cost of time small, which is crucial and can meet the reliable requirement of intelligent applications of IoV. A two-order target detection architecture model can be added in the future. Although the two-order target detection architecture model is computationally time-consuming, the detection accuracy is generally higher than that of the first-order target detection model. The data used to support the findings of this study are available from the corresponding author upon request. IoV: Internet of vehicle RGB: Red, green, and blue SE: Sequeeze-and-excitation SEHyp: Sequeeze and excitation for hyperspectral target detection Mean average precision L. Liu, M. Zhao, M. Yu, M.A. Jan, D. Lan, A. Taherkordi, Mobility-aware multi-hop task offloading for autonomous driving in vehicular edge computing and networks. IEEE Trans. Intell. Transp. Syst. (2022). 
Space elevator
Proposed type of space transportation system
A space elevator is conceived as a cable fixed to the equator and reaching into space. A counterweight at the upper end keeps the center of mass well above geostationary orbit level. This produces enough upward centrifugal force from Earth's rotation to fully counter the downward gravity, keeping the cable upright and taut. Climbers carry cargo up and down the cable. Space elevator in motion rotating with Earth, viewed from above North Pole. A free-flying satellite (green dot) is shown in geostationary orbit slightly behind the cable. A space elevator, also referred to as a space bridge, star ladder, and orbital lift, is a proposed type of planet-to-space transportation system,[1] often depicted in science fiction. The main component would be a cable (also called a tether) anchored to the surface and extending into space. The design would permit vehicles to travel up the cable from a planetary surface, such as the Earth's, directly into orbit, without the use of large rockets. An Earth-based space elevator cannot be constructed with a tall tower supported from below due to the immense weight - instead, it would consist of a cable with one end attached to the surface near the equator and the other end attached to a counterweight in space beyond geostationary orbit (35,786 km altitude). The competing forces of gravity, which is stronger at the lower end, and the upward centrifugal force, which is stronger at the upper end, would result in the cable being held up, under tension, and stationary over a single position on Earth. With the tether deployed, climbers could repeatedly climb up and down the tether by mechanical means, releasing their cargo to and from orbit.[2] The concept of a tower reaching geosynchronous orbit was first published in 1895 by Konstantin Tsiolkovsky.[3] His proposal was for a free-standing tower reaching from the surface of Earth to the height of geostationary orbit. Like all buildings, Tsiolkovsky's structure would be under compression, supporting its weight from below. Since 1959, most ideas for space elevators have focused on purely tensile structures, with the weight of the system held up from above by centrifugal forces. In the tensile concepts, a space tether reaches from a large mass (the counterweight) beyond geostationary orbit to the ground. This structure is held in tension between Earth and the counterweight like an upside-down plumb bob. The cable thickness is adjusted based on tension; it has its maximum at a geostationary orbit and the minimum on the ground. Available materials are not strong and light enough to make an Earth space elevator practical.[4][5][6] Some sources expect that future advances in carbon nanotubes (CNTs) could lead to a practical design.[2][7][8] Other sources believe that CNTs will never be strong enough.[9][10][11] Possible future alternatives include boron nitride nanotubes, diamond nanothreads[12][13] and macro-scale single crystal graphene.[14] The concept is applicable to other planets and celestial bodies.
For locations in the solar system with weaker gravity than Earth's (such as the Moon or Mars), the strength-to-density requirements for tether materials are not as problematic. Currently available materials (such as Kevlar) are strong and light enough that they could be practical as the tether material for elevators there.[15] Early concepts[edit] Konstantin Tsiolkovsky The key concept of the space elevator appeared in 1895 when Russian scientist Konstantin Tsiolkovsky was inspired by the Eiffel Tower in Paris. He considered a similar tower that reached all the way into space and was built from the ground up to the altitude of 35,786 kilometers, the height of geostationary orbit.[16] He noted that the top of such a tower would be circling Earth as in a geostationary orbit. Objects would acquire horizontal velocity due to the Earth's rotation as they rode up the tower, and an object released at the tower's top would have enough horizontal velocity to remain there in geostationary orbit. Tsiolkovsky's conceptual tower was a compression structure, while modern concepts call for a tensile structure (or "tether"). 20th century[edit] Building a compression structure from the ground up proved an unrealistic task as there was no material in existence with enough compressive strength to support its own weight under such conditions.[17] In 1959, the Russian engineer Yuri N. Artsutanov suggested a more feasible proposal. Artsutanov suggested using a geostationary satellite as the base from which to deploy the structure downward. By using a counterweight, a cable would be lowered from geostationary orbit to the surface of Earth, while the counterweight was extended from the satellite away from Earth, keeping the cable constantly over the same spot on the surface of the Earth. Artsutanov's idea was introduced to the Russian-speaking public in an interview published in the Sunday supplement of Komsomolskaya Pravda in 1960,[18] but was not available in English until much later. He also proposed tapering the cable thickness in order for the stress in the cable to remain constant. This gave a thinner cable at ground level that became thickest at the level of geostationary orbit. Both the tower and cable ideas were proposed in David E. H. Jones' quasi-humorous Ariadne column in New Scientist, December 24, 1964. In 1966, Isaacs, Vine, Bradner and Bachus, four American engineers, reinvented the concept, naming it a "Sky-Hook", and published their analysis in the journal Science.[19] They decided to determine what type of material would be required to build a space elevator, assuming it would be a straight cable with no variations in its cross section area, and found that the strength required would be twice that of any then-existing material including graphite, quartz, and diamond. In 1975, an American scientist, Jerome Pearson, reinvented the concept, publishing his analysis in the journal Acta Astronautica. He designed[20] a cross-section-area altitude profile that tapered and would be better suited to building the elevator. The completed cable would be thickest at the geostationary orbit, where the tension was greatest, and would be narrowest at the tips to reduce the amount of weight per unit area of cross section that any point on the cable would have to bear. He suggested using a counterweight that would be slowly extended out to 144,000 kilometers (89,000 miles) (almost half the distance to the Moon) as the lower sections of the elevator were built. 
Without a large counterweight, the upper portion of the cable would have to be longer than the lower due to the way gravitational and centrifugal forces change with distance from Earth. His analysis included disturbances such as the gravitation of the Moon, wind and moving payloads up and down the cable. The weight of the material needed to build the elevator would have required thousands of Space Shuttle trips, although part of the material could be transported up the elevator when a minimum strength strand reached the ground or be manufactured in space from asteroidal or lunar ore. After the development of carbon nanotubes in the 1990s, engineer David Smitherman of NASA/Marshall's Advanced Projects Office realized that the high strength of these materials might make the concept of a space elevator feasible, and put together a workshop at the Marshall Space Flight Center, inviting many scientists and engineers to discuss concepts and compile plans for an elevator to turn the concept into a reality. In 2000, another American scientist, Bradley C. Edwards, suggested creating a 100,000 km (62,000 mi) long paper-thin ribbon using a carbon nanotube composite material.[21] He chose the wide-thin ribbon-like cross-section shape rather than earlier circular cross-section concepts because that shape would stand a greater chance of surviving impacts by meteoroids. The ribbon cross-section shape also provided large surface area for climbers to climb with simple rollers. Supported by the NASA Institute for Advanced Concepts, Edwards' work was expanded to cover the deployment scenario, climber design, power delivery system, orbital debris avoidance, anchor system, surviving atomic oxygen, avoiding lightning and hurricanes by locating the anchor in the western equatorial Pacific, construction costs, construction schedule, and environmental hazards.[2][7][22] 21st century[edit] To speed space elevator development, proponents have organized several competitions, similar to the Ansari X Prize, for relevant technologies.[23][24] Among them are Elevator:2010, which organized annual competitions for climbers, ribbons and power-beaming systems from 2005 to 2009, the Robogames Space Elevator Ribbon Climbing competition,[25] as well as NASA's Centennial Challenges program, which, in March 2005, announced a partnership with the Spaceward Foundation (the operator of Elevator:2010), raising the total value of prizes to US$400,000.[26][27] The first European Space Elevator Challenge (EuSEC) to establish a climber structure took place in August 2011.[28] In 2005, "the LiftPort Group of space elevator companies announced that it will be building a carbon nanotube manufacturing plant in Millville, New Jersey, to supply various glass, plastic and metal companies with these strong materials. Although LiftPort hopes to eventually use carbon nanotubes in the construction of a 100,000 km (62,000 mi) space elevator, this move will allow it to make money in the short term and conduct research and development into new production methods."[8] Their announced goal was a space elevator launch in 2010. On February 13, 2006, the LiftPort Group announced that, earlier the same month, they had tested a mile of "space-elevator tether" made of carbon-fiber composite strings and fiberglass tape measuring 5 cm (2.0 in) wide and 1 mm (approx. 
13 sheets of paper) thick, lifted with balloons.[29] In April 2019, Liftport CEO Michael Laine admitted little progress has been made on the company's lofty space elevator ambitions, even after receiving more than $200,000 in seed funding. The carbon nanotube manufacturing facility that Liftport announced in 2005 was never built.[30] In 2007, Elevator:2010 held the 2007 Space Elevator games, which featured US$500,000 awards for each of the two competitions, ($1,000,000 total) as well as an additional $4,000,000 to be awarded over the next five years for space elevator related technologies.[31] No teams won the competition, but a team from MIT entered the first 2-gram (0.07 oz), 100-percent carbon nanotube entry into the competition.[32] Japan held an international conference in November 2008 to draw up a timetable for building the elevator.[33] In 2012, the Obayashi Corporation announced that it could build a space elevator by 2050 using carbon nanotube technology.[34] The design's passenger climber would be able to reach the GEO level after an 8-day trip.[35] Further details have been published in 2016.[36] In 2013, the International Academy of Astronautics published a technological feasibility assessment which concluded that the critical capability improvement needed was the tether material, which was projected to achieve the necessary specific strength within 20 years. The four-year long study looked into many facets of space elevator development including missions, development schedules, financial investments, revenue flow, and benefits. It was reported that it would be possible to operationally survive smaller impacts and avoid larger impacts, with meteors and space debris, and that the estimated cost of lifting a kilogram of payload to GEO and beyond would be $500.[37][38][self-published source?] In 2014, Google X's Rapid Evaluation R&D team began the design of a Space Elevator, eventually finding that no one had yet manufactured a perfectly formed carbon nanotube strand longer than a meter. They thus decided to put the project in "deep freeze" and also keep tabs on any advances in the carbon nanotube field.[39] In 2018, researchers at Japan's Shizuoka University launched STARS-Me, two CubeSats connected by a tether, which a mini-elevator will travel on.[40][41] The experiment was launched as a test bed for a larger structure.[42] In 2019, the International Academy of Astronautics published "Road to the Space Elevator Era",[43] a study report summarizing the assessment of the space elevator as of summer 2018. The essence is that a broad group of space professionals gathered and assessed the status of the space elevator development, each contributing their expertise and coming to similar conclusions: (a) Earth Space Elevators seem feasible, reinforcing the IAA 2013 study conclusion (b) Space Elevator development initiation is nearer than most think. This last conclusion is based on a potential process for manufacturing macro-scale single crystal graphene[14] with higher specific strength than carbon nanotubes. In fiction[edit] Main article: Space elevators in fiction In 1979, space elevators were introduced to a broader audience with the simultaneous publication of Arthur C. 
Clarke's novel, The Fountains of Paradise, in which engineers construct a space elevator on top of a mountain peak in the fictional island country of "Taprobane" (loosely based on Sri Lanka, albeit moved south to the Equator), and Charles Sheffield's first novel, The Web Between the Worlds, also featuring the building of a space elevator. Three years later, in Robert A. Heinlein's 1982 novel Friday, the principal character mentions a disaster at the "Quito Sky Hook" and makes use of the "Nairobi Beanstalk" in the course of her travels. In Kim Stanley Robinson's 1993 novel Red Mars, colonists build a space elevator on Mars that allows both for more colonists to arrive and also for natural resources mined there to be able to leave for Earth. In Larry Niven's book Rainbow Mars, Niven describes a space elevator built on Mars. In David Gerrold's 2000 novel, Jumping Off The Planet, a family excursion up the Ecuador "beanstalk" is actually a child-custody kidnapping. Gerrold's book also examines some of the industrial applications of a mature elevator technology. The concept of a space elevator, called the Beanstalk, is also depicted in John Scalzi's 2005 novel, Old Man's War. In a biological version, Joan Slonczewski's 2011 novel The Highest Frontier depicts a college student ascending a space elevator constructed of self-healing cables of anthrax bacilli. The engineered bacteria can regrow the cables when severed by space debris. Physics[edit] Apparent gravitational field[edit] An Earth space elevator cable rotates along with the rotation of the Earth. Therefore, the cable, and objects attached to it, would experience upward centrifugal force in the direction opposing the downward gravitational force. The higher up the cable the object is located, the less the gravitational pull of the Earth, and the stronger the upward centrifugal force due to the rotation, so that more centrifugal force opposes less gravity. The centrifugal force and the gravity are balanced at geosynchronous equatorial orbit (GEO). Above GEO, the centrifugal force is stronger than gravity, causing objects attached to the cable there to pull upward on it. The net force for objects attached to the cable is called the apparent gravitational field. The apparent gravitational field for attached objects is the (downward) gravity minus the (upward) centrifugal force. The apparent gravity experienced by an object on the cable is zero at GEO, downward below GEO, and upward above GEO. The apparent gravitational field can be represented this way:[44]
The downward force of actual gravity decreases with height: $g_r = -GM/r^2$
The upward centrifugal force due to the planet's rotation increases with height: $a = \omega^2 r$
Together, the apparent gravitational field is the sum of the two: $g = -\frac{GM}{r^2} + \omega^2 r$
where
g is the acceleration of apparent gravity, pointing down (negative) or up (positive) along the vertical cable (m s−2),
$g_r$ is the gravitational acceleration due to Earth's pull, pointing down (negative) (m s−2),
a is the centrifugal acceleration, pointing up (positive) along the vertical cable (m s−2),
G is the gravitational constant (m3 s−2 kg−1),
M is the mass of the Earth (kg),
r is the distance from that point to Earth's center (m),
ω is Earth's rotation speed (radian/s).
At some point up the cable, the two terms (downward gravity and upward centrifugal force) are equal and opposite.
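As an illustrative numerical check (not part of the cited sources), the short Python sketch below evaluates the apparent gravitational field $g = -GM/r^2 + \omega^2 r$ at a few altitudes and locates the radius where it changes sign; the constants GM, OMEGA and R_EARTH are standard textbook values assumed here rather than figures taken from the article.

```python
# Apparent gravity along an Earth space elevator cable: g = -GM/r^2 + w^2 * r
# Assumed standard constants (not quoted in the article text):
GM = 3.986004418e14       # Earth's gravitational parameter, m^3 s^-2
OMEGA = 7.2921159e-5      # Earth's sidereal rotation rate, rad/s
R_EARTH = 6.378e6         # equatorial radius, m

def apparent_gravity(r):
    """Apparent gravity (m/s^2) at distance r (m) from Earth's center.
    Negative means a net downward pull, positive a net upward pull."""
    return -GM / r**2 + OMEGA**2 * r

for altitude_km in (0, 10_000, 35_786, 60_000, 100_000):
    r = R_EARTH + altitude_km * 1e3
    print(f"{altitude_km:>7,} km altitude: g_apparent = {apparent_gravity(r):+.4f} m/s^2")

# Radius at which the two terms cancel (derived in closed form in the next paragraph):
r_balance = (GM / OMEGA**2) ** (1 / 3)
print(f"balance at r = {r_balance/1e3:,.0f} km, i.e. {(r_balance - R_EARTH)/1e3:,.0f} km altitude")
```

The sign change at roughly 35,786 km of altitude corresponds to the geostationary level discussed next.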
Objects fixed to the cable at that point put no weight on the cable. This altitude (r1) depends on the mass of the planet and its rotation rate. Setting actual gravity equal to centrifugal acceleration gives:[44]
$r_1 = \left(\frac{GM}{\omega^2}\right)^{1/3}$
This is 35,786 km (22,236 mi) above Earth's surface, the altitude of geostationary orbit.[44] On the cable below geostationary orbit, downward gravity would be greater than the upward centrifugal force, so the apparent gravity would pull objects attached to the cable downward. Any object released from the cable below that level would initially accelerate downward along the cable. Then gradually it would deflect eastward from the cable. On the cable above the level of stationary orbit, upward centrifugal force would be greater than downward gravity, so the apparent gravity would pull objects attached to the cable upward. Any object released from the cable above the geosynchronous level would initially accelerate upward along the cable. Then gradually it would deflect westward from the cable. Cable section[edit] Historically, the main technical problem has been considered the ability of the cable to hold up, with tension, the weight of itself below any given point. The greatest tension on a space elevator cable is at the point of geostationary orbit, 35,786 km (22,236 mi) above the Earth's equator. This means that the cable material, combined with its design, must be strong enough to hold up its own weight from the surface up to 35,786 km (22,236 mi). A cable which is thicker in cross section area at that height than at the surface could better hold up its own weight over a longer length. How the cross section area tapers from the maximum at 35,786 km (22,236 mi) to the minimum at the surface is therefore an important design factor for a space elevator cable. To maximize the usable excess strength for a given amount of cable material, the cable's cross section area would need to be designed for the most part in such a way that the stress (i.e., the tension per unit of cross sectional area) is constant along the length of the cable.[44][45] The constant-stress criterion is a starting point in the design of the cable cross section area as it changes with altitude. Other factors considered in more detailed designs include thickening at altitudes where more space junk is present, consideration of the point stresses imposed by climbers, and the use of varied materials.[46] To account for these and other factors, modern detailed designs seek to achieve the largest safety margin possible, with as little variation over altitude and time as possible.[46] In simple starting-point designs, that equates to constant-stress.
For a constant-stress cable with no safety margin, the cross-section-area as a function of distance from Earth's center is given by the following equation:[44]
(Figure: several taper profiles with different material parameters.)
$A(r) = A_s \exp\left[\frac{\rho g R^2}{T}\left(\frac{1}{R} + \frac{R^2}{2R_g^3} - \frac{1}{r} - \frac{r^2}{2R_g^3}\right)\right]$
where
g is the gravitational acceleration at Earth's surface (m·s−2),
$A_s$ is the cross-section area of the cable at Earth's surface (m2),
ρ is the density of the material used for the cable (kg·m−3),
R is the Earth's equatorial radius,
$R_g$ is the radius of geosynchronous orbit,
T is the stress the cross-section area can bear without yielding (N·m−2), its elastic limit.
Safety margin can be accounted for by dividing T by the desired safety factor.[44] Cable materials[edit] Using the above formula we can calculate the ratio between the cross-section at geostationary orbit and the cross-section at Earth's surface, known as taper ratio:[note 1]
$A(R_g)/A_s = \exp\left[\frac{\rho}{T} \times 4.85 \times 10^{7}\right]$
(Figure: taper ratio as a function of specific strength.)
Taper ratio for some materials[44]
Material | Tensile strength (MPa) | Density (kg/m3) | Specific strength (MPa)/(kg/m3) | Taper ratio
Steel | 5,000 | 7,900 | 0.63 | 1.6×10^33
Kevlar | 3,600 | 1,440 | 2.5 | 2.5×10^8
Single wall carbon nanotube | 130,000 | 1,300 | 100 | 1.6
The taper ratio becomes very large unless the specific strength of the material used approaches 48 (MPa)/(kg/m3). Low specific strength materials require very large taper ratios which equates to large (or astronomical) total mass of the cable with associated large or impossible costs. Structure[edit] One concept for the space elevator has it tethered to a mobile seagoing platform. There are a variety of space elevator designs proposed for many planetary bodies. Almost every design includes a base station, a cable, climbers, and a counterweight. For an Earth Space Elevator the Earth's rotation creates upward centrifugal force on the counterweight. The counterweight is held down by the cable while the cable is held up and taut by the counterweight. The base station anchors the whole system to the surface of the Earth. Climbers climb up and down the cable with cargo. Base station[edit] Modern concepts for the base station/anchor are typically mobile stations, large oceangoing vessels or other mobile platforms. Mobile base stations would have the advantage over the earlier stationary concepts (with land-based anchors) by being able to maneuver to avoid high winds, storms, and space debris. Oceanic anchor points are also typically in international waters, simplifying and reducing the cost of negotiating territory use for the base station.[2] Stationary land-based platforms would have simpler and less costly logistical access to the base. They also would have the advantage of being able to be at high altitudes, such as on top of mountains.
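As an illustrative cross-check of the taper-ratio expression and table in the Cable materials subsection above (not taken from the cited sources), the Python sketch below evaluates the same formula for the three listed materials; T is converted to pascals, and the factor 4.85×10^7 m2/s2 is the one derived in note 1. The printed ratios agree with the table to within the rounding of the input values.

```python
# Taper ratio A(Rg)/As = exp[(rho / T) * 4.85e7], using the material values from the table above.
# T is the tensile strength in pascals; 4.85e7 (m^2/s^2) is the factor derived in note 1.
from math import exp

materials = {
    # name: (tensile strength T in Pa, density rho in kg/m^3)
    "Steel":                       (5_000e6,   7_900),
    "Kevlar":                      (3_600e6,   1_440),
    "Single wall carbon nanotube": (130_000e6, 1_300),
}

for name, (T, rho) in materials.items():
    specific_strength = (T / 1e6) / rho          # (MPa)/(kg/m^3)
    taper_ratio = exp(rho / T * 4.85e7)
    print(f"{name:<28} specific strength {specific_strength:6.2f} (MPa)/(kg/m^3), "
          f"taper ratio ~ {taper_ratio:.2g}")
```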
In an alternate concept, the base station could be a tower, forming a space elevator which comprises both a compression tower close to the surface, and a tether structure at higher altitudes.[17] Combining a compression structure with a tension structure would reduce loads from the atmosphere at the Earth end of the tether, and reduce the distance into the Earth's gravity field the cable needs to extend, and thus reduce the critical strength-to-density requirements for the cable material, all other design factors being equal. Cable[edit] Carbon nanotubes are one of the candidates for a cable material[34] A seagoing anchor station would also act as a deep-water seaport. A space elevator cable would need to carry its own weight as well as the additional weight of climbers. The required strength of the cable would vary along its length. This is because at various points it would have to carry the weight of the cable below, or provide a downward force to retain the cable and counterweight above. Maximum tension on a space elevator cable would be at geosynchronous altitude so the cable would have to be thickest there and taper as it approaches Earth. Any potential cable design may be characterized by the taper factor – the ratio between the cable's radius at geosynchronous altitude and at the Earth's surface.[47] The cable would need to be made of a material with a high tensile strength/density ratio. For example, the Edwards space elevator design assumes a cable material with a tensile strength of at least 100 gigapascals.[2] Since Edwards consistently assumed the density of his carbon nanotube cable to be 1300 kg/m3,[21] that implies a specific strength of 77 megapascal/(kg/m3). This value takes into consideration the entire weight of the space elevator. An untapered space elevator cable would need a material capable of sustaining a length of 4,960 kilometers (3,080 mi) of its own weight at sea level to reach a geostationary altitude of 35,786 km (22,236 mi) without yielding.[48] Therefore, a material with very high strength and lightness is needed. For comparison, metals like titanium, steel or aluminium alloys have breaking lengths of only 20–30 km (0.2–0.3 MPa/(kg/m3)). Modern fiber materials such as kevlar, fiberglass and carbon/graphite fiber have breaking lengths of 100–400 km (1.0–4.0 MPa/(kg/m3)). Nanoengineered materials such as carbon nanotubes and, more recently discovered, graphene ribbons (perfect two-dimensional sheets of carbon) are expected to have breaking lengths of 5000–6000 km (50–60 MPa/(kg/m3)), and also are able to conduct electrical power.[citation needed] For a space elevator on Earth, with its comparatively high gravity, the cable material would need to be stronger and lighter than currently available materials.[49] For this reason, there has been a focus on the development of new materials that meet the demanding specific strength requirement. For high specific strength, carbon has advantages because it is only the sixth element in the periodic table. Carbon has comparatively few of the protons and neutrons which contribute most of the dead weight of any material. Most of the interatomic bonding forces of any element are contributed by only the outer few electrons. For carbon, the strength and stability of those bonds is high compared to the mass of the atom. 
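The breaking lengths quoted above are simply specific strength divided by surface gravity, so the two sets of figures can be converted into one another. A minimal sketch follows, assuming g = 9.81 m/s2 and taking representative values inside the quoted ranges (the chosen sample values are assumptions for illustration, not figures from the sources).

```python
# Self-support ("breaking") length under a constant 1 g field: L = (T / rho) / g,
# i.e. breaking length is specific strength divided by surface gravity.
# g and the representative specific strengths below are assumptions, not article data.
G0 = 9.81  # m/s^2

def breaking_length_km(specific_strength):
    # specific strength in (MPa)/(kg/m^3); 1 MPa/(kg/m^3) = 1e6 m^2/s^2
    return specific_strength * 1e6 / G0 / 1e3

for name, s in [("titanium / steel / aluminium alloys", 0.25),
                ("kevlar / fiberglass / carbon fibre",   2.5),
                ("idealized carbon nanotube / graphene", 55.0)]:
    print(f"{name}: ~{breaking_length_km(s):,.0f} km self-support length")
```

Run the other way, the 4,960 km figure quoted above corresponds to a specific strength of roughly 4,960 km × 9.81 m/s2 ≈ 49 (MPa)/(kg/m3), consistent with the ~48 (MPa)/(kg/m3) threshold mentioned in the cable-materials discussion.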
The challenge in using carbon nanotubes remains to extend to macroscopic sizes the production of such material that are still perfect on the microscopic scale (as microscopic defects are most responsible for material weakness).[49][50][51] As of 2014, carbon nanotube technology allowed growing tubes up to a few tenths of meters.[52] In 2014, diamond nanothreads were first synthesized.[12] Since they have strength properties similar to carbon nanotubes, diamond nanothreads were quickly seen as candidate cable material as well.[13] Climbers[edit] A conceptual drawing of a space elevator climber ascending through the clouds. A space elevator cannot be an elevator in the typical sense (with moving cables) due to the need for the cable to be significantly wider at the center than at the tips. While various designs employing moving cables have been proposed, most cable designs call for the "elevator" to climb up a stationary cable. Climbers cover a wide range of designs. On elevator designs whose cables are planar ribbons, most propose to use pairs of rollers to hold the cable with friction. Climbers would need to be paced at optimal timings so as to minimize cable stress and oscillations and to maximize throughput. Lighter climbers could be sent up more often, with several going up at the same time. This would increase throughput somewhat, but would lower the mass of each individual payload.[53] As the car climbs, the cable takes on a slight lean due to the Coriolis force. The top of the cable travels faster than the bottom. The climber is accelerated horizontally as it ascends by the Coriolis force which is imparted by angles of the cable. The lean-angle shown is exaggerated. The horizontal speed, i.e. due to orbital rotation, of each part of the cable increases with altitude, proportional to distance from the center of the Earth, reaching low orbital speed at a point approximately 66 percent of the height between the surface and geostationary orbit, or a height of about 23,400 km. A payload released at this point would go into a highly eccentric elliptical orbit, staying just barely clear from atmospheric reentry, with the periapsis at the same altitude as LEO and the apoapsis at the release height. With increasing release height the orbit would become less eccentric as both periapsis and apoapsis increase, becoming circular at geostationary level.[54][55] When the payload has reached GEO, the horizontal speed is exactly the speed of a circular orbit at that level, so that if released, it would remain adjacent to that point on the cable. The payload can also continue climbing further up the cable beyond GEO, allowing it to obtain higher speed at jettison. If released from 100,000 km, the payload would have enough speed to reach the asteroid belt.[46] As a payload is lifted up a space elevator, it would gain not only altitude, but horizontal speed (angular momentum) as well. The angular momentum is taken from the Earth's rotation. As the climber ascends, it is initially moving slower than each successive part of cable it is moving on to. This is the Coriolis force: the climber "drags" (westward) on the cable, as it climbs, and slightly decreases the Earth's rotation speed. The opposite process would occur for descending payloads: the cable is tilted eastward, thus slightly increasing Earth's rotation speed. 
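As a numerical illustration of the release orbits described above (a sketch, not a reproduction of the cited analyses), the Python code below uses the vis-viva relation with standard Earth constants; because a released payload starts with purely horizontal speed ωr, the release point is one apsis of the resulting ellipse.

```python
# Orbit obtained by letting go of the cable at a given altitude.
# The payload starts with horizontal speed v = w*r, so the release point is an apsis;
# vis-viva then fixes the other apsis. Standard constants assumed (not from the article).
GM = 3.986004418e14       # m^3 s^-2
OMEGA = 7.2921159e-5      # rad/s
R_EARTH = 6.378e6         # m

def release_orbit(altitude_km):
    r = R_EARTH + altitude_km * 1e3
    v = OMEGA * r                       # horizontal speed imparted by the rotating cable
    a = 1.0 / (2.0 / r - v**2 / GM)     # semi-major axis from the vis-viva equation
    other_apsis = 2.0 * a - r           # release point is one apsis of the ellipse
    lo, hi = sorted((r, other_apsis))
    ecc = (hi - lo) / (hi + lo)
    print(f"release at {altitude_km:>6,} km: perigee alt {(lo - R_EARTH)/1e3:8,.0f} km, "
          f"apogee alt {(hi - R_EARTH)/1e3:8,.0f} km, e = {ecc:.2f}")

for h in (25_000, 30_000, 35_786):    # eccentricity shrinks toward zero approaching GEO
    release_orbit(h)
```

With lower release points the perigee falls toward the atmosphere, consistent with the ~23,400 km threshold mentioned above, and the orbit becomes circular at the geostationary level.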
The overall effect of the centrifugal force acting on the cable would cause it to constantly try to return to the energetically favorable vertical orientation, so after an object has been lifted on the cable, the counterweight would swing back toward the vertical, a bit like a pendulum.[53] Space elevators and their loads would be designed so that the center of mass is always well enough above the level of geostationary orbit[56] to hold up the whole system. Lift and descent operations would need to be carefully planned so as to keep the pendulum-like motion of the counterweight around the tether point under control.[57] Climber speed would be limited by the Coriolis force, available power, and by the need to ensure the climber's accelerating force does not break the cable. Climbers would also need to maintain a minimum average speed in order to move material up and down economically and expeditiously.[58] At the speed of a very fast car or train of 300 km/h (190 mph), it would take about 5 days to climb to geosynchronous orbit.[59] Powering climbers[edit] Both power and energy are significant issues for climbers – the climbers would need to gain a large amount of potential energy as quickly as possible to clear the cable for the next payload. Various methods have been proposed to get that energy to the climber: transferring the energy to the climber through wireless energy transfer while it is climbing; transferring the energy to the climber through some material structure while it is climbing; storing the energy in the climber before it starts, which requires an extremely high specific energy source such as nuclear energy; or, after the first 40 km, using solar energy to power the climber.[60] Wireless energy transfer such as laser power beaming is currently considered the most likely method, using megawatt-powered free electron or solid state lasers in combination with adaptive mirrors approximately 10 m (33 ft) wide and a photovoltaic array on the climber tuned to the laser frequency for efficiency.[2] For climber designs powered by power beaming, this efficiency is an important design goal. Unused energy would need to be re-radiated away with heat-dissipation systems, which add to weight. Yoshio Aoki, a professor of precision machinery engineering at Nihon University and director of the Japan Space Elevator Association, suggested including a second cable and using the conductivity of carbon nanotubes to provide power.[33] Counterweight[edit] Space Elevator with Space Station Several solutions have been proposed to act as a counterweight: a heavy, captured asteroid;[16][61] a space dock, space station or spaceport positioned past geostationary orbit; a further upward extension of the cable itself so that the net upward pull would be the same as an equivalent counterweight; or parked spent climbers that had been used to thicken the cable during construction, other junk, and material lifted up the cable for the purpose of increasing the counterweight.[46] Extending the cable has the advantage of some simplicity of the task and the fact that a payload that went to the end of the counterweight-cable would acquire considerable velocity relative to the Earth, allowing it to be launched into interplanetary space. Its disadvantage is the need to produce greater amounts of cable material as opposed to using just anything available that has mass.
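As a back-of-the-envelope illustration of the climb times and energies discussed above (the constant 300 km/h climb, the standard Earth constants, and the lossless assumption are assumptions made here, not figures from the cited designs), the sketch below converts the quoted numbers.

```python
# Rough climber figures: transit time at a constant 300 km/h, and the mechanical work
# per kilogram done against apparent gravity between the surface and GEO (losses ignored).
GM = 3.986004418e14       # m^3 s^-2 (standard value)
OMEGA = 7.2921159e-5      # rad/s
R = 6.378e6               # m, Earth's equatorial radius
R_GEO = 4.2164e7          # m, geostationary radius

climb_time_h = 35_786 / 300
print(f"transit time at 300 km/h: {climb_time_h:.0f} h (~{climb_time_h / 24:.1f} days)")

# Integral of (GM/r^2 - w^2 r) dr from R to R_GEO:
work_per_kg = GM * (1 / R - 1 / R_GEO) - 0.5 * OMEGA**2 * (R_GEO**2 - R**2)
print(f"work against apparent gravity: ~{work_per_kg / 3.6e6:.1f} kWh per kg lifted to GEO")
```

The roughly five-day transit matches the figure above; the resulting ~13 kWh per kilogram is only a lower bound on the energy that must be delivered, since beaming, conversion and thermal losses would add to it.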
Applications[edit] Launching into deep space[edit] An object attached to a space elevator at a radius of approximately 53,100 km would be at escape velocity when released. Transfer orbits to the L1 and L2 Lagrangian points could be attained by release at 50,630 and 51,240 km, respectively, and transfer to lunar orbit from 50,960 km.[62] At the end of Pearson's 144,000 km (89,000 mi) cable, the tangential velocity is 10.93 kilometers per second (6.79 mi/s). That is more than enough to escape Earth's gravitational field and send probes at least as far out as Jupiter. Once at Jupiter, a gravitational assist maneuver could permit solar escape velocity to be reached.[44] Extraterrestrial elevators[edit] A space elevator could also be constructed on other planets, asteroids and moons. A Martian tether could be much shorter than one on Earth. Mars' surface gravity is 38 percent of Earth's, while it rotates around its axis in about the same time as Earth. Because of this, Martian stationary orbit is much closer to the surface, and hence the elevator could be much shorter. Current materials are already sufficiently strong to construct such an elevator.[63] Building a Martian elevator would be complicated by the Martian moon Phobos, which is in a low orbit and intersects the Equator regularly (twice every orbital period of 11 h 6 min). Phobos and Deimos may get in the way of an areostationary space elevator; however, they may also contribute useful resources to the project. Phobos is projected to contain high amounts of carbon. If carbon nanotubes become feasible for a tether material, there will be an abundance of carbon in the local region of Mars, which could provide readily available resources for the future colonization of Mars. (Figures: space elevator at Phobos; Earth vs Mars vs Moon gravity at elevation.) Phobos orbits Mars synchronously, with the same face toward the planet, at about 6,028 km above the Martian surface. A space elevator could extend down from Phobos toward Mars for about 6,000 km, ending roughly 28 kilometers above the surface and just outside the atmosphere of Mars. A similar cable could extend outward 6,000 km in the opposite direction to counterbalance Phobos. In total the space elevator would extend over 12,000 km, which would still be below the areostationary orbit of Mars (17,032 km). A rocket launch would still be needed to get the rocket and cargo to the beginning of the space elevator, 28 km above the surface. The surface of Mars rotates at 0.25 km/s at the equator and the bottom of the space elevator would be rotating around Mars at 0.77 km/s, so only 0.52 km/s of delta-v would be needed to reach the space elevator. Phobos orbits at 2.15 km/s and the outermost part of the space elevator would rotate around Mars at 3.52 km/s.[64][65] The Earth's Moon is a potential location for a Lunar space elevator, especially as the specific strength required for the tether is low enough to use currently available materials. The Moon does not rotate fast enough for an elevator to be supported by centrifugal force (the proximity of the Earth means there is no effective lunar-stationary orbit), but differential gravity forces mean that an elevator could be constructed through Lagrangian points.
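Before continuing with the lunar case, the escape-release radius quoted at the start of this section can be recovered in a couple of lines; a minimal sketch, again assuming standard values for Earth's gravitational parameter and rotation rate.

```python
# Radius on the cable at which simple release reaches escape velocity:
# release speed w*r equals escape speed sqrt(2*GM/r)  =>  r = (2*GM / w^2)^(1/3)
GM = 3.986004418e14       # m^3 s^-2 (standard value)
OMEGA = 7.2921159e-5      # rad/s

r_escape = (2 * GM / OMEGA**2) ** (1 / 3)
print(f"escape-release radius: ~{r_escape / 1e3:,.0f} km from Earth's center")  # ~53,100 km
print(f"release speed there:   ~{OMEGA * r_escape / 1e3:.2f} km/s")
```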
A near-side elevator would extend through the Earth-Moon L1 point from an anchor point near the center of the visible part of Earth's Moon: the length of such an elevator must exceed the maximum L1 altitude of 59,548 km, and would be considerably longer to reduce the mass of the required apex counterweight.[66] A far-side lunar elevator would pass through the L2 Lagrangian point and would need to be longer than on the near-side: again, the tether length depends on the chosen apex anchor mass, but it could also be made of existing engineering materials.[66] Rapidly spinning asteroids or moons could use cables to eject materials to convenient points, such as Earth orbits;[67] or conversely, to eject materials to send a portion of the mass of the asteroid or moon to Earth orbit or a Lagrangian point. Freeman Dyson, a physicist and mathematician, has suggested[citation needed] using such smaller systems as power generators at points distant from the Sun where solar power is uneconomical. A space elevator using presently available engineering materials could be constructed between mutually tidally locked worlds, such as Pluto and Charon or the components of binary asteroid 90 Antiope, with no terminus disconnect, according to Francis Graham of Kent State University.[68] However, spooled variable lengths of cable must be used due to ellipticity of the orbits. Construction[edit] Main article: Space elevator construction The construction of a space elevator would need reduction of some technical risk. Some advances in engineering, manufacturing and physical technology are required.[2] Once a first space elevator is built, the second one and all others would have the use of the previous ones to assist in construction, making their costs considerably lower. Such follow-on space elevators would also benefit from the great reduction in technical risk achieved by the construction of the first space elevator.[2] Prior to the work of Edwards in 2000,[21] most concepts for constructing a space elevator had the cable manufactured in space. That was thought to be necessary for such a large and long object and for such a large counterweight. Manufacturing the cable in space would be done in principle by using an asteroid or Near-Earth object for source material.[69][70] These earlier concepts for construction require a large preexisting space-faring infrastructure to maneuver an asteroid into its needed orbit around Earth. They also required the development of technologies for manufacture in space of large quantities of exacting materials.[71] Since 2001, most work has focused on simpler methods of construction requiring much smaller space infrastructures. They conceive the launch of a long cable on a large spool, followed by deployment of it in space.[2][21][71] The spool would be initially parked in a geostationary orbit above the planned anchor point. A long cable would be dropped "downward" (toward Earth) and would be balanced by a mass being dropped "upward" (away from Earth) for the whole system to remain on the geosynchronous orbit. Earlier designs imagined the balancing mass to be another cable (with counterweight) extending upward, with the main spool remaining at the original geosynchronous orbit level. Most current designs elevate the spool itself as the main cable is paid out, a simpler process. When the lower end of the cable is long enough to reach the surface of the Earth (at the equator), it would be anchored. 
Once anchored, the center of mass would be elevated more (by adding mass at the upper end or by paying out more cable). This would add more tension to the whole cable, which could then be used as an elevator cable. One plan for construction uses conventional rockets to place a "minimum size" initial seed cable of only 19,800 kg.[2] This first very small ribbon would be adequate to support the first 619 kg climber. The first 207 climbers would carry up and attach more cable to the original, increasing its cross section area and widening the initial ribbon to about 160 mm wide at its widest point. The result would be a 750-ton cable with a lift capacity of 20 tons per climber. Safety issues and construction challenges[edit] Main article: Space elevator safety For early systems, transit times from the surface to the level of geosynchronous orbit would be about five days. On these early systems, the time spent moving through the Van Allen radiation belts would be enough that passengers would need to be protected from radiation by shielding, which would add mass to the climber and decrease payload.[72] A space elevator would present a navigational hazard, both to aircraft and spacecraft. Aircraft could be diverted by air-traffic control restrictions. All objects in stable orbits that have perigee below the maximum altitude of the cable that are not synchronous with the cable would impact the cable eventually, unless avoiding action is taken. One potential solution proposed by Edwards is to use a movable anchor (a sea anchor) to allow the tether to "dodge" any space debris large enough to track.[2] Impacts by space objects such as meteoroids, micrometeorites and orbiting man-made debris pose another design constraint on the cable. A cable would need to be designed to maneuver out of the way of debris, or absorb impacts of small debris without breaking.[citation needed] Economics[edit] Main article: Space elevator economics With a space elevator, materials might be sent into orbit at a fraction of the current cost. As of 2000, conventional rocket designs cost about US$25,000 per kilogram (US$11,000 per pound) for transfer to geostationary orbit.[73] Current space elevator proposals envision payload prices starting as low as $220 per kilogram ($100 per pound),[74] similar to the $5–$300/kg estimates of the Launch loop, but higher than the $310/ton to 500 km orbit quoted[75] to Dr. Jerry Pournelle for an orbital airship system. 
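The cost and scaling figures quoted above can be put side by side with simple arithmetic; the sketch below only re-uses numbers already given in the text, and is purely illustrative.

```python
# Simple arithmetic on the figures quoted above (values from the text, year-2000 dollars).
rocket_cost_per_kg = 25_000       # US$/kg to geostationary transfer, conventional rockets
elevator_cost_per_kg = 220        # US$/kg, low end of space elevator proposals
print(f"cost ratio: ~{rocket_cost_per_kg / elevator_cost_per_kg:.0f}x cheaper per kilogram")

seed_cable_kg, first_climber_kg = 19_800, 619     # seed cable and first climber (Edwards plan)
final_cable_t, lift_capacity_t = 750, 20          # cable after 207 climbers, payload per climber
print(f"seed cable:  ~{seed_cable_kg / first_climber_kg:.0f} kg of cable per kg of climber")
print(f"final cable: ~{final_cable_t / lift_capacity_t:.1f} kg of cable per kg of payload")
```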
Philip Ragan, co-author of the book Leaving the Planet by Space Elevator, states that "The first country to deploy a space elevator will have a 95 percent cost advantage and could potentially control all space activities."[76] International Space Elevator Consortium (ISEC)[edit] The International Space Elevator Consortium (ISEC) is a US Non-Profit 501(c)(3) Corporation[77] formed to promote the development, construction, and operation of a space elevator as "a revolutionary and efficient way to space for all humanity".[78] It was formed after the Space Elevator Conference in Redmond, Washington in July 2008 and became an affiliate organization with the National Space Society[79] in August 2013.[78] ISEC hosts an annual Space Elevator conference at the Seattle Museum of Flight.[80][81][82] ISEC coordinates with the two other major societies focusing on space elevators: the Japanese Space Elevator Association[83] and EuroSpaceward.[84] ISEC supports symposia and presentations at the International Academy of Astronautics[85] and the International Astronautical Federation Congress[86] each year. Related concepts[edit] The conventional current concept of a "Space Elevator" has evolved from a static compressive structure reaching to the level of GEO, to the modern baseline idea of a static tensile structure anchored to the ground and extending to well above the level of GEO. In the current usage by practitioners (and in this article), a "Space Elevator" means the Tsiolkovsky-Artsutanov-Pearson type as considered by the International Space Elevator Consortium. This conventional type is a static structure fixed to the ground and extending into space high enough that cargo can climb the structure up from the ground to a level where simple release will put the cargo into an orbit.[87] Some concepts related to this modern baseline are not usually termed a "Space Elevator", but are similar in some way and are sometimes termed "Space Elevator" by their proponents. For example, Hans Moravec published an article in 1977 called "A Non-Synchronous Orbital Skyhook" describing a concept using a rotating cable.[88] The rotation speed would exactly match the orbital speed in such a way that the tip velocity at the lowest point was zero compared to the object to be "elevated". It would dynamically grapple and then "elevate" high flying objects to orbit or low orbiting objects to higher orbit. The original concept envisioned by Tsiolkovsky was a compression structure, a concept similar to an aerial mast. While such structures might reach space (100 km, 62 mi), they are unlikely to reach geostationary orbit. The concept of a Tsiolkovsky tower combined with a classic space elevator cable (reaching above the level of GEO) has been suggested.[17] Other ideas use very tall compressive towers to reduce the demands on launch vehicles.[89] The vehicle is "elevated" up the tower, which may extend as high as above the atmosphere, and is launched from the top. 
Such a tall tower to access near-space altitudes of 20 km (12 mi) has been proposed by various researchers.[89][90][91] Other concepts for non-rocket spacelaunch related to a space elevator (or parts of a space elevator) include an orbital ring, a pneumatic space tower,[92] a space fountain, a launch loop, a skyhook, a space tether, and a buoyant "SpaceShaft".[93]
^ Specific substitutions used to produce the factor 4.85×10^7: $A(R_g)/A_s = \exp\left[\frac{\rho \times 9.81 \times (6.378\times 10^{6})^{2}}{T}\left(\frac{1}{6.378\times 10^{6}} + \frac{(6.378\times 10^{6})^{2}}{2(4.2164\times 10^{7})^{3}} - \frac{1}{4.2164\times 10^{7}} - \frac{(4.2164\times 10^{7})^{2}}{2(4.2164\times 10^{7})^{3}}\right)\right]$
^ "What is a Space Elevator?". The International Space Elevator Consortium. 2014. Retrieved August 22, 2020. ^ a b c d e f g h i j k Edwards, Bradley Carl. "The NIAC Space Elevator Program". NASA Institute for Advanced Concepts ^ Hirschfeld, Bob (January 31, 2002). "Space Elevator Gets Lift". TechTV. Archived from the original on June 8, 2005. Retrieved September 13, 2007. The concept was first described in 1895 by Russian author K. E. Tsiolkovsky in his 'Speculations about Earth and Sky and on Vesta.' ^ Fleming, Nic (February 15, 2015). "Should We give up on the dream of space elevators?". BBC. Retrieved January 4, 2021. 'This is extremely complicated. I don't think it's really realistic to have a space elevator,' said Elon Musk during a conference at MIT, adding that it would be easier to 'have a bridge from LA to Tokyo' than an elevator that could take material into space. ^ Donahue, Michelle Z. (January 21, 2016). "People Are Still Trying to Build a Space Elevator". Smithsonian Magazine. Retrieved January 4, 2020. 'We understand it's a difficult project,' Yoji Ishikawa says. 'Our technology is very low. If we need to be at 100 to get an elevator built – right now we are around a 1 or 2. But we cannot say this project is not possible.' ^ "Why the world still awaits its first space elevator". The Economist. January 30, 2018. Retrieved January 4, 2020. The chief obstacle is that no known material has the necessary combination of lightness and strength needed for the cable, which has to be able to support its own weight. Carbon nanotubes are often touted as a possibility, but they have only about a tenth of the necessary strength-to-weight ratio and cannot be made into filaments more than a few centimetres long, let alone thousands of kilometres. Diamond nanothreads, another exotic form of carbon, might be stronger, but their properties are still poorly understood. ^ a b "Space Elevators: An Advanced Earth-Space Infrastructure for the New Millennium", NASA/CP-2000-210429, Marshall Space Flight Center, Huntsville, Alabama, 2000 (archived) ^ a b Cain, Fraser (April 27, 2005). "Space Elevator Group to Manufacture Nanotubes". Universe Today. Retrieved March 5, 2006. ^ Aron, Jacob (June 13, 2016). "Carbon nanotubes too weak to get a space elevator off the ground". New Scientist. Retrieved January 3, 2020. Feng Ding of the Hong Kong Polytechnic University and his colleagues simulated CNTs with a single atom out of place, turning two of the hexagons into a pentagon and heptagon, and creating a kink in the tube.
They found this simple change was enough to cut the ideal strength of a CNT to 40 GPa, with the effect being even more severe when they increased the number of misaligned atoms... That's bad news for people who want to build a space elevator, a cable between the Earth and an orbiting satellite that would provide easy access to space. Estimates suggest such a cable would need a tensile strength of 50 GPa, so CNTs were a promising solution, but Ding's research suggests they won't work. ^ Christensen, Billn (June 2, 2006). "Nanotubes Might Not Have the Right Stuff". Space.com. Retrieved January 3, 2020. recent calculations by Nicola Pugno of the Polytechnic of Turin, Italy, suggest that carbon nanotube cables will not work... According to their calculations, the cable would need to be twice as strong as that of any existing material including graphite, quartz, and diamond. ^ Whittaker, Clay (June 15, 2016). "Carbon Nanotubes Can't Handle a Space Elevator". Popular Science. Retrieved January 3, 2020. Alright, space elevator plans are back to square one, people. Carbon nanotubes probably aren't going to be our material solution for a space elevator, because apparently even a minuscule (read: atomic) flaw in the design drastically decreases strength. ^ a b Calderone, Julia (September 26, 2014). "Liquid Benzene Squeezed to Form Diamond Nanothreads". Scientific American. Retrieved July 22, 2018. ^ a b Anthony, Sebastian (September 23, 2014). "New diamond nanothreads could be the key material for building a space elevator". Extremetech. Zeff Davis, LLC. Retrieved July 22, 2018. ^ a b "Space Elevator Technology and Graphene: An Interview with Adrian Nixon". July 23, 2018. ^ Moravec, Hans (1978). Non-Synchronous Orbital Skyhooks for the Moon and Mars with Conventional Materials. Carnegie Mellon University. frc.ri.cmu.edu ^ a b "The Audacious Space Elevator". NASA Science News. Archived from the original on September 19, 2008. Retrieved September 27, 2008. ^ a b c Landis, Geoffrey A. & Cafarelli, Craig (1999). Presented as paper IAF-95-V.4.07, 46th International Astronautics Federation Congress, Oslo Norway, October 2–6, 1995. "The Tsiolkovski Tower Reexamined". Journal of the British Interplanetary Society. 52: 175–180. Bibcode:1999JBIS...52..175L. ^ Artsutanov, Yu (1960). "To the Cosmos by Electric Train" (PDF). liftport.com. Young Person's Pravda. Archived from the original (PDF) on May 6, 2006. Retrieved March 5, 2006. ^ Isaacs, J. D.; A. C. Vine, H. Bradner and G. E. Bachus; Bradner; Bachus (1966). "Satellite Elongation into a True 'Sky-Hook'". Science. 151 (3711): 682–3. Bibcode:1966Sci...151..682I. doi:10.1126/science.151.3711.682. PMID 17813792. S2CID 32226322. ^ Pearson, J. (1975). "The orbital tower: a spacecraft launcher using the Earth's rotational energy" (PDF). Acta Astronautica. 2 (9–10): 785–799. Bibcode:1975AcAau...2..785P. CiteSeerX 10.1.1.530.3120. doi:10.1016/0094-5765(75)90021-1. ^ a b c d Bradley C. Edwards, "The Space Elevator" ^ Science @ NASA, "Audacious & Outrageous: Space Elevators" Archived September 19, 2008, at the Wayback Machine, September 2000 ^ Boyle, Alan (August 27, 2004). "Space elevator contest proposed". NBC News. ^ "The Space Elevator – Elevator:2010". Archived from the original on January 6, 2007. Retrieved March 5, 2006. ^ "Space Elevator Ribbon Climbing Robot Competition Rules". Archived from the original on February 6, 2005. Retrieved March 5, 2006. ^ "NASA Announces First Centennial Challenges' Prizes". 2005. Retrieved March 5, 2006. 
Introduction to Projective Geometry Solutions 5.11 Similarity Transformations
A 12 minute read, posted on 24 Sep 2019. Last modified on 24 Sep 2019.
Tags: computer vision, projective geometry, problem solution
Please read this introduction first before looking through the solutions.

1. Are there similarity transformations of all six types? Give an example of each possible type.

Type I ($a_{11} \ne a_{33}$): $\begin{pmatrix} a_{11} & 0 & 0 \\ 0 & -a_{11} & 0 \\ 0 & 0 & a_{33} \end{pmatrix}$

Type II ($a_{23} \ne 0$): $\begin{pmatrix} a_{11} & 0 & a_{13} \\ 0 & -a_{11} & a_{23} \\ 0 & 0 & -a_{11} \end{pmatrix}$

Type III: $\begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{11} & 0 \\ 0 & 0 & a_{33} \end{pmatrix}$

Type IV: If we try out the possible configurations for collineations of type IV with required fixed line $x_3 = 0$ and fixed point either $(1, 0, 0)$ or $(0, 1, 0)$, we can see that neither of these can fulfill the similarity constraints. So, there are no similarities of type IV.

Type V ($a_{13} \ne 0, a_{23} \ne 0$): $\begin{pmatrix} a_{11} & 0 & a_{13} \\ 0 & a_{11} & a_{23} \\ 0 & 0 & a_{11} \end{pmatrix}$

Type VI: $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

2. Except in the case of triangles, equality of the measures of the corresponding angles is not a sufficient condition for the similarity of the two polygons. In view of this fact, what is the justification of calling the angle-preserving transformations of this section similarity transformations?

Two polygons are considered similar if their corresponding angles are congruent and the measures of their corresponding sides are proportional [1]. We already know that similarity transformations preserve angles. Let's find out if they preserve ratios of line segments independent of their direction. As similarity transformations are affine, the square of the scaling factor for a segment in the direction of slope $m$ will be $$\frac{(a_{11}^2 + a_{12}^2) + 2(a_{11}a_{12} - k^2a_{12}a_{11})m + (a_{11}^2 + a_{12}^2)m^2}{1 + m^2}$$ $$= \frac{(a_{11}^2 + a_{12}^2)(1 + m^2) + 2(a_{11}a_{12} - a_{12}a_{11})m}{1 + m^2}$$ $$= \frac{(a_{11}^2 + a_{12}^2)(1 + m^2)}{1 + m^2}$$ As $m$ is real-valued, $1 + m^2 \ne 0$, and so the square of the scaling factor is $$a_{11}^2 + a_{12}^2$$ This quantity is independent of the direction. Hence similarity transformations ensure that the measures of the corresponding sides of the transformed polygon are proportional to those of the original polygon. This is probably the rationale behind calling this type of transformation a similarity transformation.

3. A similarity transformation for which $k=1$ is said to be direct; one for which $k=-1$ is said to be indirect. Show that the set of all direct similarity transformations is a subgroup of the group of all similarity transformations. Do the indirect similarity transformations form a group?

A direct (also called orientation preserving) similarity has the form $$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ -a_{12} & a_{11} & a_{23} \\ 0 & 0 & a_{33} \end{pmatrix}$$ Composing two direct similarities $A$ and $B$ we get $$\begin{pmatrix} a_{11}b_{11} - a_{12}b_{12} & a_{11}b_{12} + a_{12}b_{11} & a_{13}b_{33} + a_{12}b_{23} + a_{11}b_{13} \\ -a_{11}b_{12} - a_{12}b_{11} & a_{11}b_{11} - a_{12}b_{12} & a_{23}b_{33} + a_{11}b_{23} - a_{12}b_{13} \\ 0 & 0 & a_{33}b_{33} \end{pmatrix}$$ which is clearly a direct similarity transformation.
The inverse of a direct similarity transformation $A$ is of the form $$\frac{1}{a_{12}^2a_{33} + a_{11}^2a_{33}}\begin{pmatrix} a_{11}a_{33} & -a_{12}a_{33} & a_{12}a_{23} - a_{11}a_{13} \\ a_{12}a_{33} & a_{11}a_{33} & -a_{12}a_{13} - a_{11}a_{23} \\ 0 & 0 & a_{12}^2 + a_{11}^2 \end{pmatrix}$$ which is also a direct similarity transformation. Associativity follows from the rules of matrix multiplication and the identity transformation is the identity of the group as it is also a direct similarity transformation. Hence the set of all direct similarity transformations is a subgroup of the group of similarity transformations. As the identity transformation is not an indirect similarity transformation, the set of indirect similarity transformations does not have an identity and hence does not form a group. 4. Show that every direct similarity transformation leaves invariant the circular points at infinity, $I:(1,i,0)$ and $J:(1,-i,0)$. Show that every indirect similarity transformation interchanges $I$ and $J$. By solving the characteristic equation of a general direct similarity transformation, it is easy to see that the circular points at infinity are the characteristic vectors of every direct similarity transformation. Multiplying the circular points by a general indirect transformation, we can see that it interchanges $I$ and $J$. 5. Show that the image of any circle under an arbitrary similarity transformation is a circle. As a similarity transformation is an affine transformation, a circle can only be transformed into an ellipse. Under a similarity transformation, the scaling factor only depends on the parameters of the transformation and is independent of direction. Hence the circle will be scaled equally in all directions resulting in the image being a circle. 6. If $A$ and $A'$ are distinct points, and if $B$ and $B'$ are distinct points, show that there are exactly two similarity transformations, one direct and one indirect, which map $A$ and $A'$ and $B$ onto $B'$. We know from #4 that the circular points at infinity are invariant under a direct similarity transformation. Hence, as we have four points and their images in total, we can uniquely determine a direct similarity transformation that maps $A$ to $A'$ and $B$ to $B'$. We also know from #4 that the circular points at infinity are interchanged by an indirect similarity transformation. Hence, as we have four points and their images, we can uniquely determine an indirect similarity transformation that maps $A$ to $A'$ and $B$ to $B'$. As a similarity transformation must either be direct or indirect, there can be no other similarity transformation, other than the two determined above, that achieves the given mapping. 7. (a) Find a similarity transformation which will map the triangle whose vertices are $(0,0,1)$, $(2,0,1)$, $(0,2,1)$ onto the triangle whose vertices are $(-1,0,1), (-1,1,1), (0,0,1)$. (b) Is there a similarity transformation which will map the triangle whose vertices are $(0,0,1)$, $(2,0,1)$, $(0,1,1)$ onto the triangle whose vertices are $(-1,0,1), (-1,2,1), (-4,0,1)$? To uniquely determine a similarity transformation, we only need to find the values of 6 parameters ($a_{11}, a_{12}, k_1a_{12}, k_1a_{11}, a_{13}, a_{23}$) for the process described in the solution to Exercise #1, Sec 5.8, and we only need 6 equations from 3 points and their images to do this. (a) The solution to this following system of linear equations will give us the required paramaters. 
$$\begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 2 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 1 \\ 0 & 2 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 1 \end{pmatrix}\begin{pmatrix} a_{11} \\ a_{12} \\ a_{13} \\ -ka_{12} \\ ka_{11} \\ a_{23} \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \\ -1 \\ 1 \\ 0 \\ 0 \end{pmatrix}$$ Solving this, we get the matrix of similarity to be $$\begin{pmatrix} 0 & \frac{1}{2} & -1 \\ \frac{1}{2} & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ (b) The linear system for this exercise is $$\begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 2 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix}\begin{pmatrix} a_{11} \\ a_{12} \\ a_{13} \\ -ka_{12} \\ ka_{11} \\ a_{23} \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \\ -1 \\ 2 \\ -4 \\ 0 \end{pmatrix}$$ Solving this we get values for $a_{12}:-3$ and $ka_{12}: -1$ that are not compatible for a similarity transformation. Hence, no similarity can achieve this mapping. 8. (a) Is there a similarity transformation which will map the square whose vertices are $(0,0,1)$, $(1,0,1)$, $(1,1,1)$, $(0,1,1)$ onto the square whose vertices are $(1,0,1)$, $(0,1,1)$, $(-1,0,1)$, $(0,-1,1)$? If so, find its equations. (b) Is there a similarity transformation which will map the circle whose center is the origin and whose radius is $1$ onto the circle whose center is the point $(4, 3, 1)$ and whose radius is $5$? If so, find its equations. (a) As a square is transformed into another square, the transformation is a similarity. Taking the first three points, the linear system for this exercise is $$\begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} a_{11} \\ a_{12} \\ a_{13} \\ -ka_{12} \\ ka_{11} \\ a_{23} \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \\ -1 \\ 0 \end{pmatrix}$$ Solving this, we get the matrix of similarity to be $$\begin{pmatrix} -1 & -1 & 1 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ Applying this similarity transform to the fourth point $(0, 1, 1)$, we get its image to be $(0, -1, 1)$ which is consistent with the given mapping. (b) As a circle is transformed into another circle, the transformation is a similarity. Taking two points on the circle with the center and their respective images, the linear system for this exercise is $$\begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ -1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 1 \end{pmatrix}\begin{pmatrix} a_{11} \\ a_{12} \\ a_{13} \\ -ka_{12} \\ ka_{11} \\ a_{23} \end{pmatrix} = \begin{pmatrix} 4 \\ 3 \\ 9 \\ 3 \\ -1 \\ 3 \end{pmatrix}$$ Solving this, we get the matrix of similarity to be $$\begin{pmatrix} 5 & 0 & 4 \\ 0 & 5 & 3 \\ 0 & 0 & 1 \end{pmatrix}$$ This can be interpreted as a scaling by a factor of 5 combined with a translation of $(4, 3)$. 9. By what factor is the length of a general segment altered by an arbitrary similarity transformation? As we discovered in #2, the scaling factor of a segment under an arbitrary similarity transformation is $\sqrt{a_{11}^2 + a_{12}^2}$. 10. What conditions must be satisfied in order that a similarity transformation shall be of period 2? Give an example of such a transformation. A similarity transformation can be interpreted as a composition of scaling, translation, rotation and reflection. 
Thus, for a similarity transformation to be an involution, it must either be a pure rotation about an arbitrary center by an angle of $n\pi$ where $n$ is an integer, or a pure reflection. Let's verify these claims.

Multiplying a general similarity matrix by itself we get $$\begin{pmatrix} a_{11}^2 - a_{12}^2k & a_{11}a_{12}(1 + k) & a_{12}a_{23} + a_{11}a_{13} + a_{13} \\ -a_{11}a_{12}(1 + k) & a_{11}^2 - a_{12}^2k & k(a_{11}a_{23} - a_{12}a_{13}) + a_{23} \\ 0 & 0 & 1 \end{pmatrix}$$ For this to be an involution, the following conditions must hold:

$a_{11}^2 - a_{12}^2k = 1$

$a_{11}a_{12}(1 + k) = 0$

$a_{12}a_{23} + a_{11}a_{13} + a_{13} = 0$

$k(a_{11}a_{23} - a_{12}a_{13}) + a_{23} = 0$

From these conditions, we find that $a_{11} \ne 0$ for $a_{12}$ to be real. If $a_{12} = 0$, then either both $a_{23}$ and $a_{13}$ must be zero with $a_{11} = 1$, leading to the identity transform, or $a_{11} = -1$ with $a_{13}$ and $a_{23}$ being allowed to take on any arbitrary value. This gives us a matrix of the form $$\begin{pmatrix} -1 & 0 & a_{13} \\ 0 & -1 & a_{23} \\ 0 & 0 & 1 \end{pmatrix}$$ which is clearly the matrix of a rotation about an arbitrary center by $n\pi$. The image below shows a polygon that has undergone this transformation.

If $k = -1$, then $a_{11}^2 + a_{12}^2 = 1$ and $a_{13} = -(a_{12}a_{23})/(1 + a_{11})$. Taking $a_{11}$ as $\cos(\theta)$ and $a_{12}$ as $\sin(\theta)$ we get the matrix $$\begin{pmatrix} \cos(\theta) & \sin(\theta) & -\frac{\sin(\theta)a_{23}}{1 + \cos(\theta)} \\ \sin(\theta) & -\cos(\theta) & a_{23} \\ 0 & 0 & 1 \end{pmatrix}$$ which is clearly the matrix of a reflection about an arbitrary axis. The image below shows a polygon that has undergone this transformation.

11. If $l_1 m_1 + l_2 m_2 = 0$, $\tan \theta$ is undefined and the proof of Theorem 1 appears to break down. Investigate this case and show that right angles are also preserved by the general similarity transformation.

Well, he's given us the punch line right there in the question! Two lines will be perpendicular if $l_1m_1 + l_2m_2 = 0$. Using the equations of a general transformation and expressing this equation in terms of image coordinates we get $$(a_{12}a_{22} + a_{11}a_{21})(l'_1m'_2 + l'_2m'_1) + \\ (a_{12}a_{32} + a_{11}a_{31})(l'_1m'_3 + l'_3m'_1) + \\ (a_{22}a_{32} + a_{21}a_{31})(l'_2m'_3 + l'_3m'_2) + \\ (a_{31}^2 + a_{32}^2)l'_3m'_3 + (a_{22}^2 + a_{21}^2)l'_2m'_2 + (a_{12}^2 + a_{11}^2)l'_1m'_1 = 0$$ If we want the transformation to preserve the $90^\circ$ angle then we expect $l'_1m'_1 + l'_2m'_2 = 0 \implies l'_1m'_1 = -l'_2m'_2$. Substituting this into the previous equation we get $$(a_{12}a_{22} + a_{11}a_{21})(l'_1m'_2 + l'_2m'_1) + \\ (a_{12}a_{32} + a_{11}a_{31})(l'_1m'_3 + l'_3m'_1) + \\ (a_{22}a_{32} + a_{21}a_{31})(l'_2m'_3 + l'_3m'_2) + \\ (a_{31}^2 + a_{32}^2)l'_3m'_3 + (a_{22}^2 + a_{21}^2 - a_{12}^2 - a_{11}^2)l'_2m'_2 = 0$$ For this to be true for any two lines, the following must hold:

$a_{12}a_{22} + a_{11}a_{21} = 0$

$a_{31}^2 + a_{32}^2 = 0$

$a_{22}^2 + a_{21}^2 - a_{12}^2 - a_{11}^2 = 0$

For $a_{31}^2 + a_{32}^2 = 0$ to be true, each of $a_{31}$ and $a_{32}$ must be zero, as squares of real numbers cannot be negative. $a_{12}a_{22} + a_{11}a_{21} = 0$ implies that $a_{12} = ka_{21}, a_{11} = -ka_{22}$ for some real-valued scalar $k$. Substituting this in $a_{22}^2 + a_{21}^2 - a_{12}^2 - a_{11}^2 = 0$ we get $(1 - k^2)(a_{21}^2 + a_{22}^2) = 0$. If $a_{21}$ and $a_{22}$ were simultaneously 0 then the matrix would become singular, hence $k = \pm 1$.
These are the same constraints we obtained in the derivation of the matrix of a similarity transformation for an arbitrary angle. Hence right angles are also preserved by the general similarity transformation.

[1] Math Planet. Similarity, Polygons. https://www.mathplanet.com/education/geometry/similarity/polygons.
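As a quick numerical sanity check (not part of the original solutions), the short numpy sketch below verifies that the matrix found in 7(a) maps the given triangle onto its image, and that it scales segments in different directions by the same factor $\sqrt{a_{11}^2 + a_{12}^2}$, as derived in problem 2. The matrix and points come from the solutions above; the helper function is my own.

```python
import numpy as np

# Matrix of similarity found in exercise 7(a): an indirect similarity with
# a11 = 0 and a12 = 1/2, so the expected scale factor is sqrt(0^2 + 0.5^2) = 0.5.
M = np.array([[0.0, 0.5, -1.0],
              [0.5, 0.0,  0.0],
              [0.0, 0.0,  1.0]])

# Homogeneous coordinates of the source triangle and its intended image.
src = np.array([[0, 0, 1], [2, 0, 1], [0, 2, 1]], dtype=float)
dst = np.array([[-1, 0, 1], [-1, 1, 1], [0, 0, 1]], dtype=float)

# Apply M to each point and normalize the homogeneous coordinate.
img = (M @ src.T).T
img = img / img[:, 2:3]
print(np.allclose(img, dst))        # True: the 7(a) mapping is reproduced

def seg_len(p, q):
    """Euclidean length of the segment between two homogeneous points."""
    return np.linalg.norm((p / p[2])[:2] - (q / q[2])[:2])

# Segments in three different directions are all scaled by the same ratio.
for a, b in [(src[0], src[1]), (src[0], src[2]), (src[1], src[2])]:
    ratio = seg_len(M @ a, M @ b) / seg_len(a, b)
    print(round(ratio, 6))          # 0.5 each time, i.e. sqrt(a11^2 + a12^2)
```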
How much thrust is needed by an aircraft to have vertical takeoff?

In some airshows, I have seen some big aircraft (like a 737) take off almost vertically. Generally, fully-loaded passenger/cargo planes don't do that, I guess due to safety issues, but I think a lot of extra thrust is needed to have an almost vertical takeoff. So is a vertical takeoff possible in every kind of airplane (without excessive weight and giving full thrust), or can only some specific ones do it?

Tags: aerodynamics, lift. Asked by NitinG.

To have vertical lift you need as much thrust as the weight of aircraft. Most figter airraft can do this but not commercial plane. – vasin1987 Jan 13 '15 at 11:50

@vasin1987 figter airraft? Is that a kind of raft that floats in the air upon which fig trees grows? Or did you mean Fighter Aircraft? – haneefmubarak Jan 14 '15 at 5:33

The important thing to note is that the aircraft isn't gaining altitude any faster than usual: it just stays on the runway longer before pulling up sharply because it looks more impressive. If anything, this usually results in a lower overall climb rate (measured as time-to-altitude). – Jon Story Jan 14 '15 at 11:46

First, let's agree on terminology: What you saw in airshows is a vertical flight path. Flying horizontally first, the airplane pitched up until the nose was pointing straight into the sky. Surprisingly, no thrust is needed to perform this maneuver. Even gliders can do it. What happens is that kinetic energy is converted to potential energy, the rate of potential energy increase being proportional to flight speed and aircraft mass. If you start fast enough, this vertical flying can be maintained for several seconds, until the aircraft runs out of speed and stops in midair, followed by an uncontrolled drop. Skilled pilots orient the aircraft in the right direction by starting a rotation around the vertical axis at the top of the climb, so the following drop lets them pick up speed again with the correct nose-down attitude. Now potential energy is converted back into kinetic energy until speed is sufficient for a pullout. In aerobatics, this maneuver is called a stall turn or a hammerhead stall. A few conditions apply, however. The airplane must be able to fly fast enough to have the needed potential energy to sustain the maneuver through the pitch-up phase. This is helped if its engines add energy, so the kinetic energy bleeds off more slowly. Also, at the top of the maneuver it is flying at zero g, and this requires at least that all items on board are securely fastened. Lastly, the pitch-up needs a load factor bigger than 1 g, and the higher the maximum load factor is, the tighter this pitch-up can be flown. Now the question has been changed: The vertical flight path is flown right after take-off. This limits the entry speed for the maneuver, and gliders will not be able to do this. If we take the 737 from the question and fly it with no payload and little fuel, the flight mass $m$ of a 737-700 is 40 tons, and the installed thrust is about 200 kN (sea level static).
Let's assume that the pilot accelerates after takeoff to a horizontal speed $v$ = 100 m/s (194 KTAS) while retracting the flaps. The kinetic energy ($0.5\cdot m\cdot v^2$) is then equivalent to the potential energy ($m \cdot g \cdot h$) of an altitude gain $h$ of $$h = \frac{v^2}{2\cdot g} = 510\ \text{m}$$ The engines deliver less thrust with increasing speed; maybe 40% of the weight, so the airplane will still accelerate for the first 18°–20° of the 90° flight path change. This will delay the point when speed has been bled off and add maybe 150 m to $h$. At 100 m/s a pull-up with a radius of 500 m will add a load factor of 2 g. The pilot needs to pull less first and harder at the end of the maneuver in order to stay within the maximum load factor of 2.5. When speed bleeds off, so will wing lift, and in the second half of the pull-up the wing will not create enough lift to change the flight path enough in order to reach the desired vertical attitude. Also, the aircraft will be very low for a safe recovery. This makes it rather doubtful that an airliner can be pulled up to a vertical climb after takeoff. If the maneuver is started at a higher speed and with a little more distance from the ground, I see no reason why it should not safely be possible. – Peter Kämpf

From a physics standpoint, you're right, and this is how it works for sufficiently-maneuverable aircraft, but are airliners actually sufficiently maneuverable for a truly vertical flight path without stalling before they get to 90 degrees? Personally, I'd tend to guess that what the OP really saw was something more like 30-45 degrees nose-up attitude, not really 90. If anyone could shed some light on whether any airliners are really capable of safely accomplishing a (briefly) vertical flight path, though, it would be appreciated. – reirab Jan 13 '15 at 16:34

Nitpick: the kinetic energy will not bleed off more slowly. It will bleed off (i.e. be converted into PE) at the same rate; the engines will just "add" (i.e. convert from fuel) more kinetic energy into the system, so converting all of it takes longer. – imallett Jan 13 '15 at 17:49

@imallett: This is very nitpicky! I was thinking about "speed will bleed off more slowly", but that would have required to translate speed to kinetic energy in the mind of the reader. So I settled for the easier to understand (and still not wrong) term. – Peter Kämpf Jan 13 '15 at 20:27

For a craft weighing x kg you need g*x newtons of thrust, minimum, for sustained vertical flight (ref.: high-school physics). In other words, for each metric ton of weight you need around 9.81 kN of thrust. The A380 has an operational empty weight of 276 t, so it would need 2707 kN of thrust to sustain a vertical climb. Its 4 engines, each producing 320 kN, don't even come halfway. However, pitching up to vertical and continuing on while shedding speed is possible without thrust. – ratchet freak

I couldn't easily find any references so don't want to make the edit willy-nilly, but I believe it would be more accurate to say that the engines produce 320 kN each? – user Jan 13 '15 at 13:32

@MichaelKjörling Yes, you're right. A380 has 288,000 lb of max thrust and an empty operating weight of 610,000 lb. Its MTOW is 1,268,000 lb. So, at absolute best (really better than the best, since you can't take off with no fuel), it would have a thrust/weight ratio of 0.47 and, at MTOW, it would be 0.23.
So, no sustained vertical flight for the A380. :) What would be fun would be to get a 737 (with re-enforced wings and greatly lengthened landing gear) outfitted with GE90-115b's. Each one of them would be capable of accelerating the aircraft in a vertical climb by itself. :) – reirab Jan 13 '15 at 16:47

An empty Concorde would be at 0.85, it should be able to fly vertical for quite some time :D – Antzi Jan 14 '15 at 2:41

Despite the hype that some news articles like to add to their stories, with talk of "near-vertical" takeoffs and "terrifying" turns, those maneuvers are less extreme than they look. While certainly higher than normal, 30 degrees of pitch after takeoff or 60 degrees of bank are not vertical. Yet the air show still said that was too much, so typical displays of non-aerobatic planes are going to be even less than that. To answer the question in the title, take a look at the AV-8B Harrier II attack jet. It has a max vertical takeoff weight of 20,755 lb, and its engine produces 23,500 lbf of thrust. This corresponds to a thrust-to-weight ratio of about 1.13:1, since the thrust must exceed the force of gravity in order to accelerate the plane upwards. In contrast, the 787-9 from the above example has 142,000 lbf of thrust, and weighs between 304,000 lb and 557,000 lb. If the plane weighed 400,000 lb on that takeoff, the thrust-to-weight ratio was about 0.36:1. Although converting kinetic energy to potential energy will allow a plane to climb faster, the amount of kinetic energy available to an airliner at takeoff is more limited.
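For readers who want to redo the arithmetic in the answers above, here is a small illustrative Python sketch. The input numbers (A380 empty weight and per-engine thrust, Harrier II weight and thrust, and the 100 m/s zoom-climb example) are the ones quoted in the answers; the helper function names and unit conversions are my own.

```python
G = 9.81  # m/s^2

def thrust_to_weight(thrust_n, mass_kg):
    """Sustained vertical flight needs T/W > 1, i.e. thrust > m * g."""
    return thrust_n / (mass_kg * G)

def zoom_climb_height(speed_ms):
    """Height gained by trading kinetic for potential energy: h = v^2 / (2 g)."""
    return speed_ms ** 2 / (2 * G)

# A380: 276 t operational empty weight, 4 engines of about 320 kN each.
print(round(thrust_to_weight(4 * 320e3, 276e3), 2))   # ~0.47, so no sustained vertical climb

# AV-8B Harrier II: 20,755 lb max vertical takeoff weight, 23,500 lbf of thrust.
LB_TO_KG, LBF_TO_N = 0.4536, 4.448
print(round(thrust_to_weight(23500 * LBF_TO_N, 20755 * LB_TO_KG), 2))  # ~1.13

# Zoom climb from 100 m/s, as in the first answer.
print(round(zoom_climb_height(100.0)))                 # ~510 m
```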
Comparative clinical evaluation of atlas and deep-learning-based auto-segmentation of organ structures in liver cancer Sang Hee Ahn1, Adam Unjin Yeo2, Kwang Hyeon Kim1, Chankyu Kim1, Youngmoon Goh3, Shinhaeng Cho4, Se Byeong Lee1, Young Kyung Lim1, Haksoo Kim1, Dongho Shin1, Taeyoon Kim1, Tae Hyun Kim1, Sang Hee Youn1, Eun Sang Oh1 & Jong Hwi Jeong1 Accurate and standardized descriptions of organs at risk (OARs) are essential in radiation therapy for treatment planning and evaluation. Traditionally, physicians have contoured patient images manually, which, is time-consuming and subject to inter-observer variability. This study aims to a) investigate whether customized, deep-learning-based auto-segmentation could overcome the limitations of manual contouring and b) compare its performance against a typical, atlas-based auto-segmentation method organ structures in liver cancer. On-contrast computer tomography image sets of 70 liver cancer patients were used, and four OARs (heart, liver, kidney, and stomach) were manually delineated by three experienced physicians as reference structures. Atlas and deep learning auto-segmentations were respectively performed with MIM Maestro 6.5 (MIM Software Inc., Cleveland, OH) and, with a deep convolution neural network (DCNN). The Hausdorff distance (HD) and, dice similarity coefficient (DSC), volume overlap error (VOE), and relative volume difference (RVD) were used to quantitatively evaluate the four different methods in the case of the reference set of the four OAR structures. The atlas-based method yielded the following average DSC and standard deviation values (SD) for the heart, liver, right kidney, left kidney, and stomach: 0.92 ± 0.04 (DSC ± SD), 0.93 ± 0.02, 0.86 ± 0.07, 0.85 ± 0.11, and 0.60 ± 0.13 respectively. The deep-learning-based method yielded corresponding values for the OARs of 0.94 ± 0.01, 0.93 ± 0.01, 0.88 ± 0.03, 0.86 ± 0.03, and 0.73 ± 0.09. The segmentation results show that the deep learning framework is superior to the atlas-based framwork except in the case of the liver. Specifically, in the case of the stomach, the DSC, VOE, and RVD showed a maximum difference of 21.67, 25.11, 28.80% respectively. In this study, we demonstrated that a deep learning framework could be used more effectively and efficiently compared to atlas-based auto-segmentation for most OARs in human liver cancer. Extended use of the deep-learning-based framework is anticipated for auto-segmentations of other body sites. Accuracy and precision of the delineated target volumes and surrounding organs at risk (OARs) is critical in radiotherapy treatment processing. However, to-this-date, these segmentation-based delineations are completed manually by physicians in the majority of clinical cases, which is a time-consuming task associated with an increased workload. Consequently, the reproducibility of this process is not always guaranteed, and ultimately depends on the physician's experience [1]. In addition, manual re-segmentation is often necessary owing to anatomical changes and/or tumor responses over the course of the radiotherapy. As such, model-based [2, 3] and atlas-based [4,5,6,7] auto-segmentation methods have been developed to maximize the efficiency gain, and concurrently minimize inter-observer variation. Various model-based methods have been published. Specifically, Qazi et al. [3] demonstrated use of adaptive model-based auto-segmentation of the normal and target structures for the head and neck, and Chen et al. 
[2] showed that active shape model-based segmentation could yield accuracy improvements of the order of 10.7% over atlas-based segmentation for lymph node regions. In the last few years, machine learning technology has been actively applied to various medical fields, such as for cancer diagnosis [8,9,10], medical imaging [11], radiation treatment [11, 12], and pharmacokinetics [13]. The application of one of the deep learning models [14], the convolutional neural network (CNN) [15], has recently yielded remarkable results in medical image segmentation [16,17,18,19,20]. The main advantage of deep learning methods is that they automatically generate the most suitable model from given training datasets. Therefore, a comparative study of the accuracy of each model is required to use auto-segmentation in clinical practice. Recently, Lustberg et al. [21] compared the auto contouring results in five organ structures with the use of the prototype of a commercial deep-learning contouring program (Mirada DLC Expert, Mirada Medical Ltd., Oxford, United Kingdom) with those obtained from an atlas-based contouring program (Mirada Medical Ltd., Oxford, United Kingdom). In this study, we used the open source deep learning library, Keras (where the model can be loaded into the Tensorflow backend) instead of the commercial program. In addition, our neural network is based on Fusion net, an extension of the U-net suitable for medical image segmentation. This study aims to evaluate the clinical feasibility of an open source deep learning framework, using 70 liver cancer patients by comparing its performance against a commercially available atlas-based auto-segmentation framework. Clinical datasets Seventy patients with liver cancer diagnosed at the National Cancer Center in South Korea between the year of 2016–2017 were included in this study. All patients were treated with proton therapy, using 10 fractions of 660 or 700 cGy, with respective total doses of 6600 cGy and 7000 cGy. The characteristics of the patients are listed in Table 1. All computer tomography (CT) images were acquired using a General Electric (GE) Light speed radiotherapy (RT) system (GE Medical Systems, Milwaukee, WI). We used abdominal CT images with, the following dimensions for each axial slice: image matrix = 512 × 512, slice numbers = 80–128, pixel spacing = 1.00–1.04 mm, and slice thickness = 2.50 mm. Manually segmented contours for each organ were delineated by three senior expert physicians, and included segmentations of the heart, liver, kidney (left, right), and stomach. Manually segmented contours included the organ contours of the heart, liver, kidney (left, right), and stomach, which were mutually accepted by the three senior physicians following a joint discussion. Table 1 Patient characteristics in this study The study protocol conformed to the ethical guidelines of the Declaration of Helsinki as revised in 1983, and was approved by institutional review board (IRB) of National Cancer Center without IRB number. All patient data has been fully anonymized, and all methods were performed in accordance with the relevant guidelines and regulations outlined by our institution. Deep convolutional neural network The network used was based on the open-source library Keras (version 2.2.4) [22] and the reference implementation of Fusion Net [23]. This network is a deep neural network which was developed based on the application of a residual CNN as an extension of U-net [24] to enable more accurate end-to-end image segmentation. 
It consists of a down-sampling (encoding) path and an up-sampling (decoding) path, as shown in Fig. 1. On the encoding path, we used a residual block layer (three convolution layers and one skip connection) between the two 3 × 3 convolution layers. Each of these layers was followed by a rectified linear unit (ReLu) [25], and one maximum pooling. On the decoding path, we used a 2 × 2 transposed convolution and a residual block layer between the two 3 × 3 convolution layers followed by a ReLu activation function. To avoid overfitting during the training stage, batch normalization [26] and dropout [27] were added to the layers. In the final layer, we used a 1 × 1 convolution network with a sigmoid activation function and a dice similarity coefficient loss function [28]. We used Adam [29] as an optimizer with the following training parameters: a learning rate of 1.0E-05, a mini-batch size of twelve images, and weight decay. A more detailed specification of our deep neural network, such as the number of feature maps, their sizes, and ingredients, is listed in Table 2. The experiments were conducted on a computer workstation with an Intel i7 central processing unit (CPU) with a 24 GB main memory, and a compute unified device architecture (CUDA) library on the graphics processing unit (GPU) (NVIDIA GeForce TITAN-Xp with 12 GB of memory). Network training of the deep convolutional neural network (DCNN) took approximately 48 h to run 2000 epochs on the training and validation datasets.

Fig. 1 Segmentation of a two-dimensional computer tomography (2D-CT) slice image using (a) the Fusion-Net-based deep convolutional neural network and (b) the atlas segmentation of MIM software. (conv: convolutional layer, res: residual layer, drop: dropout layer, batchnorm: batch normalization, max: maximum pooling layer, deconv: deconvolutional layer, merge: addition with the feature map from the encoding path by using a skip connection)

Table 2 Architecture of the proposed convolutional neural network

Segmentation image preprocessing

CT planning images from patients and the required contouring information used for training of the DCNN were obtained using the Eclipse planning software (version 13.6, Varian Oncology Systems, Palo Alto, CA, USA). All CT images were converted to grayscale images, and the contouring points were converted to segmented label images in a binary format, as shown in Fig. 2. Hounsfield unit (HU) values were windowed in the range of −100 to 600 to exclude irrelevant organs. All images were downsampled from the conventional size of 512 × 512 pixels to the size of 256 × 256 pixels owing to graphics card memory resource limitations and DCNN training time constraints.

Fig. 2 Grayscale CT and segmented label images of the (a) heart (H), (b) liver (L), (c) right kidney (RK), (d) left kidney (LK), and (e) stomach (S) used for DCNN model learning

Deep-learning-based segmentation

The deep-learning-based segmentation process consisted of three steps. The first was the random separation into training and validation sets consisting of 45 and 15 patient datasets, respectively, and the preprocessing and preparation of 10 independent test dataset images for the deep convolutional neural network. In the second step, we trained the DCNN using the training datasets for each of the organs. In the final step, the test image set was segmented into a test dataset with DCNN (Fig. 3).
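To make the description above concrete, here is a minimal Keras sketch of the stated building blocks: a residual block of three 3 × 3 convolutions with a skip connection, a dice-coefficient loss, a 1 × 1 sigmoid output, and the Adam optimizer with a learning rate of 1.0E-05. It is only an illustration assembled from this section's description; the number of encoder/decoder levels, the feature-map width, and the dropout rate are placeholders (the full specification is in Table 2, not reproduced here), and this is not the authors' released code.

```python
from tensorflow.keras import layers, models, optimizers
import tensorflow.keras.backend as K

def residual_block(x, filters):
    # Three 3x3 convolutions with one skip connection, as described in the text.
    shortcut = x
    for _ in range(3):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
    return layers.Add()([x, shortcut])

def dice_coef(y_true, y_pred, smooth=1.0):
    y_true_f, y_pred_f = K.flatten(y_true), K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coef(y_true, y_pred)

def build_model(filters=64):                # filter width is illustrative
    inputs = layers.Input(shape=(256, 256, 1))
    # Encoding: 3x3 conv, residual block, 3x3 conv, then max pooling.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(inputs)
    x = residual_block(x, filters)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.5)(x)              # dropout rate is illustrative
    # Decoding: 2x2 transposed convolution back to full resolution.
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = residual_block(x, filters)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    # Final 1x1 convolution with sigmoid for the binary organ mask.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-5), loss=dice_loss)
    return model

model = build_model()
# Illustrative training call mirroring the stated mini-batch size and epoch count;
# train_images/train_masks etc. are placeholder variable names.
# model.fit(train_images, train_masks, batch_size=12, epochs=2000,
#           validation_data=(val_images, val_masks))
```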
Fig. 3 Work flowchart for deep convolution neural network (DCNN) training and testing

Atlas-based segmentation

Atlas-based segmentation is a method used to locate the interface between the test image and the optimally matched organs from labeled, segmented reference image data [30]. The commercial atlas-based contouring software MIM Maestro 6.5 (MIM Software Inc., Cleveland, OH, USA) was used to automatically generate the contours of the OARs for the ten test-dataset patients. Segmentation processing was performed on a single-organ basis instead of multiple-organ segmentation, and the outcomes were compared with those of the deep-learning-based segmentation conducted under the same conditions. We used the MIM-supported label fusion algorithm based on the majority vote (MV) algorithm. For segmentation, a training set with data from 60 patients was registered to the MIM Maestro 6.5 atlas library with CT planning images alongside the respective manual contours of the heart, liver, kidney, and stomach. The slice thicknesses of the CT images were not changed during their registration with the atlas library.

Quantitative evaluations of auto-segmentation

To quantitatively evaluate the accuracy of the deep-learning and atlas-based auto-segmentations, the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used [31]. The DSC method calculates the overlap of two different volumes according to the equation $$ DSC\ \left( dice\ similarity\ coefficient\right)=\frac{2\left|A\cap B\right|}{\left|A\right|+\left|B\right|}, $$ where A is the manual contouring volume, and B is the auto-segmentation volume (deep learning and atlas segmentation results). DSC takes values between zero and one. When the DSC value approaches zero, the manual and auto-segmentation outcomes differ significantly. However, as the DSC value approaches unity, the two volumes exhibit increased similarity. The second method is the HD. After calculating the Euclidean distances between the surface (contour) points of A and B, the similarity of A and B is determined by the largest of the nearest-point distances. HD is thus defined as $$ Hausdorff\ distance\ (HD)=\mathit{\max}\ \left(h\left(A,B\right),h\left(B,A\right)\right), $$ where h(A, B) is the directed HD from A to B and is given by $$ h\left(A,B\right)=\underset{a\in A}{\mathit{\max}}\ \underset{b\in B}{\mathit{\min}}\left(\left\Vert a-b\right\Vert \right) $$ As the HD approaches zero, the difference between the manual contouring and auto contouring becomes smaller. By contrast, larger values indicate decreasing similarity between the two volumes. The third method is the volume overlap error (VOE) [32]. VOE, which measures the dissimilarity between the two volumes, can be calculated by subtracting the Jaccard coefficient from unity. $$ \mathrm{VOE}\ \left(\mathrm{volume}\ \mathrm{overlap}\ \mathrm{error}\right)=1-\frac{\left|A\cap B\right|}{\left|A\cup B\right|}, $$ The last method is the relative volume difference (RVD) [32]. RVD compares the sizes of the two volumes. $$ \mathrm{RVD}\ \left(\mathrm{relative}\ \mathrm{volume}\ \mathrm{difference}\right)=\frac{\left|B\right|-\left|A\right|}{\left|A\right|}, $$ Contrary to DSC, as the VOE and RVD approach zero, the manual and auto-contouring volumes differ only slightly, whereas values farther from zero indicate reduced similarity between the two volumes.
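For readers who want to reproduce the four metrics defined above on binary masks, a compact, illustrative implementation could look as follows. The boundary extraction used for HD and the function names are my own choices and are not taken from the study's evaluation code; A is the manual mask and B the automatic one, matching the definitions above.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def dsc(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def voe(a, b):
    """Volume overlap error: 1 minus the Jaccard coefficient."""
    a, b = a.astype(bool), b.astype(bool)
    return 1.0 - np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def rvd(a, b):
    """Relative volume difference: (|B| - |A|) / |A|, with A = manual, B = auto."""
    return (b.astype(bool).sum() - a.astype(bool).sum()) / a.astype(bool).sum()

def surface_points(mask, spacing):
    """Coordinates (scaled by pixel spacing) of the mask's boundary voxels."""
    mask = mask.astype(bool)
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary) * np.asarray(spacing)

def hausdorff(a, b, spacing=(1.0, 1.0)):
    """Symmetric Hausdorff distance between the contour points of two masks."""
    pa, pb = surface_points(a, spacing), surface_points(b, spacing)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    h_ab = d.min(axis=1).max()   # directed HD from A to B
    h_ba = d.min(axis=0).max()   # directed HD from B to A
    return max(h_ab, h_ba)

# Tiny example: two overlapping squares on a 2D slice.
manual = np.zeros((64, 64), dtype=bool); manual[10:40, 10:40] = True
auto = np.zeros((64, 64), dtype=bool); auto[12:42, 12:42] = True
print(dsc(manual, auto), voe(manual, auto), rvd(manual, auto), hausdorff(manual, auto))
```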
For quantitative evaluations, the DSC and HD were calculated for each test dataset, and the results are shown in Tables 3 and 4. For qualitative visual assessment, Fig. 4 shows, a specific patient case where the three delineation methods are compared, i.e., the atlas-based (Catlas), deep-learning-based (Cdeep), and manual contouring methods (Cmanual). In all the organ cases studied herein (i.e., heart, liver, kidney, and stomach), the Cdeep results more accurate matched to the Cmanual compared to the Catlas results. However, both Catlas and Cdeep were not excluded in the hepatic artery region (Fig. 4, red arrow). For the kidney case, neither the Catlas nor the Cdeep outcomes differed significantly from the evoked Cmanual outcomes from DSC. Table 3 Comparison of dice similarity coefficients (DSC) obtained from atlas and deep-learning-based segmentations in the cases of the four tested organs (heart, liver, kidney, stomach). Averages and standard deviations are listed for all the ten tested cases Table 4 Comparison of Hausdorff distances (HD) for atlas against deep-learning-based segmentation for the with four organs (heart, liver, kidney, stomach). Averages and standard deviations are listed for ten tested cases Selected CT slices of one of the studied patients with a manual contour (green), atlas-based contour (red), and deep-learning-based contour (blue) for the (a) heart, (b) liver, (c) right kidney, (d) left kidney, and (e) stomach The methods of auto-segmentation were quantitatively compared using DSC, HD, VOE and RVD metrics against manual contours (i.e., the reference), and are presented in Tables 3, 4, 5, and 6, respectively. Table 5 Comparison of volume overlap error (VOE) for atlas-based segmentation against deep-learning-based segmentation with four organs (heart, liver, kidney, stomach). Averages and standard deviations are listed for ten test cases Table 6 Comparison of relative volume difference (RVD) for atlas-based segmentation against deep-learning-based segmentation with four organs (heart, liver, kidney, stomach). Averages and standard deviations are listed for ten test cases The average DSC values (± SD) of Catlas are 0.92 (±0.04), 0.93 (±0.02), 0.86 (±0.07), 0.85 (±0.11), and 0.60 (±0.13) for the heart, liver, right kidney, left kidney, and stomach, respectively. The respective outcomes for the same DSC analyses for Cdeep are 0.94 (±0.01), 0.93 (±0.01), 0.88 (±0.03), 0.86 (±0.03), and 0.73 (±0.09), for the heart, liver, right kidney, left kidney, and stomach, respectively. The HD values (± SD) for Catlas are 2.16 (±1.52) mm, 2.23 (±0.81) mm, 1.78 (±1.34) mm, 1.90 (±1.24) mm, and 6.76 (±2.31) mm, for the heart, liver, right kidney, left kidney, and stomach, respectively. The respective outcomes for the HD values based on the same analysis for Cdeep are 1.61 (±0.28) mm, 2.17 (±0.39) mm, 1.61 (±0.52) mm, 1.88 (±0.31) mm, and 4.86 (±1.57) mm, for the heart, liver, right kidney, left kidney, and stomach, respectively, as shown in Fig. 6. The average DSC outcomes for Cdeep are higher in all the cases except for the liver. Specifically, there was a maximum difference of 21.67% in the stomach case, as shown in Table 8. It is important to note that the standard deviations of the DSC values for Catlas were higher than those of Cdeep for all the studied structures, i.e., Catlas exhibits broader interquartile ranges than Cdeep in the boxplot, as shown in Fig. 5. 
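As an aside on how such percentage differences relate to the reported means (my own check, not stated in the paper): the 21.67% quoted for the stomach is consistent with expressing the DSC gain relative to the atlas-based mean.

```python
dsc_atlas, dsc_deep = 0.60, 0.73   # mean stomach DSC values reported above
print(round((dsc_deep - dsc_atlas) / dsc_atlas * 100, 2))   # 21.67
```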
Comparison of Hausdorff distances (HD) for deep learning contour (Cdeep) and atlas-based contour (Catlas) segmentations for the heart (H), liver (L), right kidney (RK), left kidney (LK), and stomach (S) Table 7 Differences between HD mean values associated with the deep learning and atlas-based contouring methods Comparison of Dice similarity coefficient (DSC) value of deep learning contour (Cdeep) and atlas-based contour (Catlas) segmentations for the heart (H), liver (L), right kidney (RK), left kidney (LK), and stomach (S) The VOE and RVD results showed significant differences between Catlas and Cdeep compared to DSC, as shown in Figs. 7 and 8. In Table 8, average of DSC results in the liver case were not different, but the VOE and RVD showed a more accurate difference of ~ 3%, and the heart, kidney (left, right) and stomach also showed significantly differences than DSC, as shown in Tables 9 and 10. In addition, Christ et al. [32] have also published a liver case auto segmentation study, whereby VOE and RVD yielded more sensitive differences compared to the DSC results. Comparison of volume overlap error (VOE) for deep learning contour (Cdeep) and atlas-based contour (Catlas) segmentations for the heart (H), liver (L), right kidney (RK), left kidney (LK), and stomach (S) Comparison of relative volume difference (RVD) for deep learning contour (Cdeep) and atlas-based contour (Catlas) segmentations for the heart (H), liver (L), right kidney (RK), left kidney (LK), and stomach (S) Table 8 Differences between DSC mean values associated with the deep-learning and atlas-based contouring methods Table 9 Differences between VOE mean values associated with the deep-learning-based and atlas-based contouring methods Table 10 Differences between RVD mean values associated with the deep-learning-based and atlas-based contouring methods In this study, 70 CT patient datasets (45 for training, 15 for validation, and 10 for testing) were used to compare the performances of the atlas-and deep-learning-based auto-segmentation frameworks. In the study of La Macchia et al. [33], the DSC results obtained from the auto-segmentation analyses for the heart, liver, left kidney and right kidney, with the use of the three commercially available systems (ABAS 2.0, MIM 5.1.1, and Velocity AI 2.6.2) were in the ranges of 0.87–0.88, 0.90–0.93, 0.81–0.89, and 0.83–0.89, respectively. The heart yielded lower DSC scores than our reported results, whereas the other organ cases were similar to our segmented results. However, poorer performance outcomes were evoked in the case of the stomach compared to the other organs in terms of DSC owing to the fact that the performance of our method depended on the presence of gas bubbles and on the variation of the stomach shapes among the studied patient cases (Table 8). Nevertheless, as shown in Tables 7 and 8, it is important to note that the deep learning method yielded more accurate results both in terms of the DSC (by 21.67%) and HD (− 1.90 mm) compared to the atlas-based method. The time-efficiency was based on the average times required by the atlas and deep-learning-based segmentation methods for the four organs, which were 75 s and 76 s, respectively (i.e., there was no statistically significant difference because p-values were larger than 0.05 when a ranked Wilcoxon test was performed). However, in the case of the atlas-based segmentation, the time required for multi-organ segmentation can be reduced. A recent study by Gibson et al. 
[34] demonstrated a multi-organ segmentation approach using the deep learning framework. Our future studies will be undertaken based on the implementation of multi-organ segmentation using DCNN to investigate the impact of discrepancies among different segmentation methods in radiation treatment planning. It is also important to note that this study is associated with some limitations. First, to compare the segmentation performances of the two methods using the same conditions, we did not use the image datasets which were obtained by cropping the relevant regions-of-interest [16]. Secondly, we did not perform post-image processing. Third, the number of test sets was only ten. Finally, the limitation associated with the use of our deep learning network, was based on the fact that the CT image was a three-dimensional (3D)-volume matrix, and each two-dimensional (2D) image was structurally connected to the previous image. However, DCNN does not take into account this structural connectivity because it uses a 2D convolution filter. All these factors may affect the performance of the auto-segmentation process. In post-image processing, Kim et al. [35] showed that the accuracy of the predicted contouring may vary differs according to the smoothing level of the contouring boundary surface. However, it would be difficult to represent statistically significant data for all clinical cases using such a small test dataset. In addition, recent studies have used 3D convolution filters to perform medical image segmentation. Milletari et al. [36] performed volumetric segmentation of magnetic resonance (MR) prostate images with a 3D volumetric CNN, an average dice score of 0.87 ± 0.03 and an average HD of 5.71 ± 1.02 mm. The HD exhibited a difference in accuracy which depended on the image size. The size of the CT and segmented labeled images were reduced to half the original sizes (i.e., to 256 × 256) because of the limitations of the graphic card memory and training time constraints. The standard deviations (SD) of the HD results after image interpolation to the matrix sizes of 64 × 64 pixels, 128 × 128 pixels, and 512 × 512 pixels, compared to the current pixels array size 256 × 256 pixels were ± 0.63 mm, ± 0.58 mm, ± 0.97 mm, ± 0.90 mm, and ± 1.03 mm for the heart, liver, right kidney, left kidney, and stomach, respectively. Accordingly, when the segmentation image size is changed, the HD result may yield a difference up to approximately 1 mm. However, comparison of the SD of the HD results of the current pixel array size (256 × 256) and the original CT pixel array size (512 × 512 pixels) yielded differences which were equal to ±0.02 mm, ± 0.04 mm, ± 0.04 mm, ± 0.07 mm, and ± 0.08, in the cases of the heart, liver, right kidney, left kidney, and stomach, respectively. Despite the aforementioned limitations, in this study, we compared the auto segmentation outcomes obtained with the use of the atlas, which is the auto segmentation tool currently used in clinical practice, with the use of an open source-based tool [21] rather than the commercial program [20]. In particular, HD is a sensitive index which indicates whether segmentation yields localized disagreements. Therefore, it is an important indicator for assessing the accuracy of the segmented boundaries. 
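To illustrate the resolution effect discussed above, the toy sketch below resamples an arbitrary binary mask to smaller matrix sizes while keeping the physical extent fixed, and recomputes the Hausdorff distance against the original-resolution boundary. It is only an illustration of the idea, with a synthetic mask of my own, not the interpolation analysis performed in the study.

```python
import numpy as np
from scipy.ndimage import zoom, binary_erosion

def boundary_points(mask, spacing):
    # Boundary voxels of a binary mask, scaled to physical coordinates (mm).
    edge = mask & ~binary_erosion(mask)
    return np.argwhere(edge) * spacing

def hausdorff_mm(mask_a, spacing_a, mask_b, spacing_b):
    pa = boundary_points(mask_a, spacing_a)
    pb = boundary_points(mask_b, spacing_b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy "organ": a disc on a 512 x 512 grid with 1 mm pixels.
yy, xx = np.mgrid[:512, :512]
organ = (yy - 256) ** 2 + (xx - 256) ** 2 < 120 ** 2

for size in (256, 128, 64):
    factor = size / 512
    # Nearest-neighbour resampling keeps the mask binary; spacing grows as 1/factor.
    resampled = zoom(organ.astype(float), factor, order=0) > 0.5
    print(size, round(hausdorff_mm(organ, 1.0, resampled, 1.0 / factor), 2))
```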
Considering the limitation of the SD differences based on pixel array size differences (comparison of the array sizes of 256 × 256 and 512 × 512) mentioned above, the deep-learning-based contouring is superior to the atlas-based contouring method regarding the HD results. The segmentation results of the heart, liver, kidney, and stomach, based on the use of the auto-segmentation with deep-learning-based contouring showed good performance outcomes both in terms of DSC and HD compared to the atlas-based contouring. Loi et al. [37] proposed a sufficient DSC threshold > 0.85 for volumes greater than 30 ml for auto-segmentations. In this study, the vast majority met this criterion except in the case of the stomach, whereby only one of the test sets yielded DSC values greater than 0.85 in the case where, the deep learning method was used (Table 3). Recent technological developments in diagnostic imaging modalities have led to frequent fusions of images, including the paradigms of MR–Linac, PET–CT, and MR–CT image fusions. To apply this to adaptive RT, efficient OAR delineation is necessary in the daily adaptive treatment protocol to minimize the total treatment time. There is one important issue that needs to be considered to contour the OARs correctly, which pertains to the motion artifacts attributed to the respiratory motion of the patients. The movement of the organ increases the contour uncertainty of the OARs. Combining the auto-segmentation with the reduction of motion artifacts [38] will enable more accurate delineation of the organs affected by respiration. Therefore, application of deep-learning-based auto-segmentation possesses tremendous potential, and is expected to have a greater impact in the near future in achieving effective and efficient radiotherapy workflow. In summary, we applied an open-source, deep learning framework to an auto-segmentation application in liver cancer and demonstrated its performance improvements compared to the atlas-based approach. Deep-learning-based auto-segmentation is considered to yield an acceptable accuracy as well as good reproducibility for clinical use. Additionally, it can significantly reduce the contouring time in OARs destined to undergo radiation treatment planning. We envisage that deep learning-based auto-segmentation will become clinically useful, especially when it is applied in the daily adaptive plans which are based on multi-imaging modality-guided treatments. The data are not available for public access because of patient privacy concerns, but are available from the corresponding author on reasonable request. Computer tomography DCNN: Deep convolution neural network DSC: Dice similarity coefficient Hausdorff distance Majority vote OARs: Organs at risk ReLu: Rectified linear unit RVD: Relative volume difference SD: VOE: Volume overlap error Vinod SK, Jameson MG, Min M, Holloway LC. Uncertainties in volume delineation in radiation oncology: a systematic review and recommendations for future studies. Radiother Oncol. 2016;121(2):169–79. Chen A, Deeley MA, Niermann KJ, Moretti L, Dawant BM. Combining registration and active shape models for the automatic segmentation of the lymph node regions in head and neck CT images. Med Phys. 2010;37(12):6338–46. Qazi AA, Pekar V, Kim J, Xie J, Breen SL, Jaffray DA. Auto-segmentation of normal and target structures in head and neck CT images: a feature-driven model-based approach. Med Phys. 2011;38(11):6160–70. Xu Y, Xu C, Kuang X, Wang H, Chang EI, Huang W, et al. 
3D-SIFT-flow for atlas-based CT liver image segmentation. Med Phys. 2016;43(5):2229–41. Daisne J-F, Blumhofer A. Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation. Radiat Oncol. 2013;8(1):154. Sjöberg C, Lundmark M, Granberg C, Johansson S, Ahnesjö A, Montelius A. Clinical evaluation of multi-atlas based segmentation of lymph node regions in head and neck and prostate cancer patients. Radiat Oncol. 2013;8(1):229. Thomson D, Boylan C, Liptrot T, Aitkenhead A, Lee L, Yap B, et al. Evaluation of an automatic segmentation algorithm for definition of head and neck organs at risk. Radiat Oncol. 2014;9(1):173. Karabatak M, Ince MC. An expert system for detection of breast cancer based on association rules and neural network. Expert Syst Appl. 2009;36(2):3465–9. Bejnordi BE, Veta M, Van Diest PJ, Van Ginneken B, Karssemeijer N, Litjens G, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA. 2017;318(22):2199–210. Übeyli ED. Implementing automated diagnostic systems for breast cancer detection. Expert Syst Appl. 2007;33(4):1054–62. Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, et al. Deep learning in medical imaging and radiation therapy. Med Phys. 2019;46(1):e1–e36. Kang J, Schwartz R, Flickinger J, Beriwal S. Machine learning approaches for predicting radiation therapy outcomes: a clinician's perspective. Int J Radiat Oncol Biol Phys. 2015;93(5):1127–35. Poynton M, Choi B, Kim Y, Park I, Noh G, Hong S, et al. Machine learning methods applied to pharmacokinetic modelling of remifentanil in healthy volunteers: a multi-method comparison. J Int Med Res. 2009;37(6):1680–91. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems; 2012. p. 1097–105. Hu P, Wu F, Peng J, Liang P, Kong D. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution. Phys Med Biol. 2016;61(24):8676. Dong H, Yang G, Liu F, Mo Y, Guo Y. Automatic brain tumor detection and segmentation using U-Net based fully convolutional networks. In: Annual conference on medical image understanding and analysis. Cham: Springer; 2017. p. 506–17. Zhou X, Takayama R, Wang S, Hara T, Fujita H. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method. Med Phys. 2017;44(10):5221–33. Tong N, Gou S, Yang S, Ruan D, Sheng K. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med Phys. 2018;45(10):4558–67. Yoon HJ, Jeong YJ, Kang H, Jeong JE, Kang DY. Medical image analysis using artificial intelligence. Progress Med Phys. 2019;30(2):49–58. Lustberg T, van Soest J, Gooding M, Peressutti D, Aljabar P, van der Stoep J, et al. Clinical evaluation of atlas and deep learning based automatic contouring for lung cancer. Radiother Oncol. 2018;126(2):312–7. Keras CF. The Python deep learning library. In: Astrophysics Source Code Library; 2018. Quan TM, Hildebrand DG, Jeong W-K. Fusionnet: A deep fully residual convolutional neural network for image segmentation in connectomics. 2016. arXiv preprint arXiv:1612.05360. https://arxiv.org/abs/1612.05360. Ronneberger O, Fischer P, Brox T. 
U-net: Convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Cham: Springer; 2015. p. 234–41. Nair V, Hinton GE. Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th international conference on machine learning (ICML–10); 2010. p. 807–14. Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. 2015. http://arxiv.org/abs/1502.03167. Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov RR. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580. 2012. https://arxiv.org/abs/1207.0580. Roth HR, Shen C, Oda H, Oda M, Hayashi Y, Misawa K, Mori K. Deep learning and its application to medical image segmentation. Medical Imaging Technology. 2018;36(2):63–71. Kingma DP, Ba J. Adam: A method for stochastic optimization. In: Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015); 2015. Taha AA, Hanbury A. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Med Imaging. 2015;15:29. Sharp G, Fritscher KD, Pekar V, Peroni M, Shusharina N, Veeraraghavan H, et al. Vision 20/20: perspectives on automated image segmentation for radiotherapy. Med Phys. 2014;41(5):050902. Christ PF, Ettlinger F, Grün F, Elshaera MEA, Lipkova J, Schlecht S, Rempfler M. Automatic liver and tumor segmentation of CT and MRI volumes using cascaded fully convolutional neural networks. arXiv preprint arXiv:1702.05970; 2017. La Macchia M, Fellin F, Amichetti M, Cianchetti M, Gianolini S, Paola V, et al. Systematic evaluation of three different commercial software solutions for automatic segmentation for adaptive therapy in head-and-neck, prostate and pleural cancer. Radiat Oncol. 2012;7(1):160. Gibson E, Giganti F, Hu Y, Bonmati E, Bandula S, Gurusamy K, et al. Automatic multi-organ segmentation on abdominal CT with dense v-networks. IEEE Trans Med Imaging. 2018;37(8):1822–34. Kim H, Monroe JI, Lo S, Yao M, Harari PM, Machtay M, et al. Quantitative evaluation of image segmentation incorporating medical consideration functions. Med Phys. 2015;42(6 Part 1):3013–23. Milletari F, Navab N, Ahmadi SA. V-net: fully convolutional neural networks for volumetric medical image segmentation. In: 2016 fourth international conference on 3D vision (3DV); 2016. p. 565–71. IEEE. Loi G, Fusella M, Lanzi E, Cagni E, Garibaldi C, Iacoviello G, et al. Performance of commercially available deformable image registration platforms for contour propagation using patient-based computational phantoms: a multi-institutional study. Med Phys. 2018;45(2):748–57. Jiang W, Liu Z, Lee KH, Chen S, Ng YL, Dou Q, et al. Respiratory motion correction in abdominal MRI using a densely connected U-Net with GAN-guided training. arXiv preprint arXiv:1906.09745. 2019.

This study was supported by the National Cancer Center Grant (1810273).
Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
Sang Hee Ahn, Kwang Hyeon Kim, Chankyu Kim, Se Byeong Lee, Young Kyung Lim, Haksoo Kim, Dongho Shin, Taeyoon Kim, Tae Hyun Kim, Sang Hee Youn, Eun Sang Oh & Jong Hwi Jeong
Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
Adam Unjin Yeo
Department of Radiation Oncology, Asan Medical Center, Seoul, South Korea
Youngmoon Goh
Department of Radiation Oncology, Chonnam National University Medical School, Gwangju, South Korea
Shinhaeng Cho
SHA, AUY, and JHJ conceived the study, participated in its design and coordination and helped draft the manuscript. THK, SHY, and ESO, generated the manual contours. SHA, KHK, CK, YG, SC, SBL, YKL, HK, DS, and TK, analyzed parts of the data, and interpreted the data. SHA developed and designed the software and wrote the technical part. All authors read and approved the final manuscript. Correspondence to Jong Hwi Jeong. This study was approved by our institutional review board and conducted in accordance with the ethical standards of the Declaration of Helsinki. The authors declare that they have no competing interest.
Ahn, S.H., Yeo, A.U., Kim, K.H. et al. Comparative clinical evaluation of atlas and deep-learning-based auto-segmentation of organ structures in liver cancer. Radiat Oncol 14, 213 (2019). DOI: https://doi.org/10.1186/s13014-019-1392-z
Atlas-based auto-segmentation Deep-learning-based auto-segmentation Deep convolution neural network (DCNN)
Amit Solomon

I have recently started my Ph.D. studies at the mathematics department at the Hebrew University of Jerusalem, Israel. My advisor is the one and only Jake Solomon, with whom I (try to) work on problems in differential and symplectic geometry, related to mathematical physics. For more information, please see the research page. If for some reason you feel like looking at my curriculum vitae, you are probably ill and should see a doctor (a real doctor!). Oh, and if you are in Jerusalem on a Wednesday morning, you should come to the topology and geometry seminar. It's fun!

Einstein Institute of Mathematics, Edmond J. Safra Campus, Givat Ram, The Hebrew University of Jerusalem, Jerusalem, 91904, Israel.
Office: Ross 37. Phone: +972-2-5494025. Email: $\mathtt{amit}@\mathtt{math}{.}\mathtt{huji}{.}\mathtt{ac}{.}\mathtt{il}$
Office hours: Wednesdays, 17:00-18:00

Apologies for the low quality of the picture, it was cloudy that day.
21 Tevet 5775: Hebrew Language Day

Generally speaking, I am interested in geometry. More particularly, I am mostly interested in differential and symplectic geometry. Currently, I am working on the construction of a Morse-Witten complex for non-Morse functions, with the hope of applying the methods we develop to FJRW theory. In short, given a smooth function with a degenerate Hessian on a closed manifold which behaves "like a homogeneous function" near its critical points, we try to construct a chain complex whose homology recovers the singular homology of the manifold. This in turn will enable us to define the invariants of FJRW theory (a mathematical theory related to high energy physics) without the need for perturbation. We also hope to be able to use our methods to prove mirror-symmetry results in this context. Click here if you would like to read a more detailed (user-friendly!) description.

Morse Theory
Morse theory illustrates the intimate relationship between the critical points of a smooth function on a manifold and the topology of the manifold: Given a generic function with non-degenerate critical points, one can construct a chain complex, known as the Morse-Witten complex, whose homology equals the singular homology of the manifold. Namely, the graded abelian group is the free abelian group generated by the critical points, and the boundary operator is given by counting (with sign) the isolated trajectories of the gradient flow connecting pairs of critical points.

What goes wrong in the degenerate case
Unfortunately, when the critical points of the function are degenerate, the Morse complex construction fails miserably: A single degenerate critical point may contribute more than one generator to the homology of the manifold, so we can no longer use a chain complex generated by critical points to calculate the homology of the manifold. More importantly, the boundary operator is not well defined -- while the space of unparametrized connecting orbits is usually still compact, it might not be finite since it may fail to be a manifold. Moreover, stable trajectories do not "glue together" in a unique way with unstable trajectories when they meet at a common critical point. Hence, a challenge lies in showing that the square of the boundary map is zero.

So what can we do?
We consider a special class of functions with degenerate singularities, which behave like a homogeneous function near each critical point.
These functions are called "semihomogeneous" (I should note that we hope to also apply our methods to semiquasihomogeneous functions). Jake and I were able to take the first step towards extending the Morse complex to such functions. Namely, we endow the stable set of a degenerate critical point with a natural stratification generalizing the concept of the stable manifold. We are now working on the next step, which is to develop a gluing theory for the situation at hand, and then to define the chain complex and boundary operator. Finally, we will have to show that the homology of the chain complex so obtained equals the singular homology of the manifold.

Why should we consider semi(quasi)homogeneous functions, and how do they relate to FJRW theory?
First, let's say a few words about FJRW theory: It is the A-model for Landau-Ginzburg theory, which is a quantum field theory with a unique classical vacuum state and a potential energy with a degenerate critical point. So that's where physics comes in. FJRW theory was developed recently by Fan, Jarvis and Ruan (obviously, the "W" stands for Witten), and is conjecturally dual to Gromov-Witten theory, i.e., the A-model for the Calabi-Yau theory (moreover, this duality interacts with mirror symmetry). This is a rapidly growing field with many fascinating directions, and I refer the reader to the original papers by Fan, Jarvis and Ruan: [1], [2]. As you probably imagined, the problem lies in the degeneracy of the singularity (of the potential energy) involved in the Landau-Ginzburg theory. At the moment, in order to define the virtual fundamental class (and hence the associated cohomological field theory, from which one extracts the invariants of FJRW theory), one needs to use a Morse deformation of the degenerate singularity. However, there's a bright side to the story: the singularity is semiquasihomogeneous. Hence, if we successfully construct a Morse-Witten complex for such functions, we will be able to construct the virtual fundamental class without the need for a deformation. This is important for several reasons: First, a Morse deformation bifurcates a degenerate singularity into possibly many non-degenerate ones, which makes computations practically impossible unless special conditions are satisfied (e.g., contributions from solutions to the W-spin equation are trivial). Furthermore, a Morsification cannot always be done equivariantly, while in many cases (the orbifold case) it is important to be able to preserve the action of a finite group of diffeomorphisms.

I enjoy teaching and find it very important to try and explain the nuances and formalism to students on the one hand, while unravelling definitions and technicalities to see the ideas on the other hand. I have had the pleasure of TA-ing unfortunate students in the following courses:
Spring 2014: Linear Algebra (2).
Fall 2015: Mathematical Methods (1).
Spring 2015: Mathematical Methods (2).
Fall 2016: Advanced Infinitesimal Calculus (1).
Spring 2016: Introduction to Topology.

Picture taken by Konstantin Golubev, the first of his name.

Together with Tsachik Gelander from the Weizmann Institute of Science, I am writing a book on the subject of lattices in locally compact groups. This is an ongoing long-term project, so please do not expect frequent updates or rapid progress. A draft of the first two chapters will be available here soon.
The aim of these chapters is to introduce the reader to the basic ideas and results in the theory:

Chapter 1: Basic definitions and results
Space of closed subgroups: The Chabauty topology. The space of closed subgroups.
Measures on homogeneous spaces: Invariant measures on groups. Invariant measures on quotients. Cofinite and cocompact subgroups.

Chapter 2: Lattices — a first encounter
Definition. Basic properties of lattices: Intersection of a lattice with other subgroups. Lattices inside the space of closed subgroups. Fixed point properties of lattices vs. the ambient group.
The inclusion of $\text{SL}(n,\mathbb{Z})$ in $\text{SL}(n,\mathbb{R})$.
A simple group with no lattices: Some notations. The proof of the proposition.

In the next chapter we delve into the rich theory of lattices in solvable and nilpotent Lie groups, so stay tuned!

Together with Sara Tukachinsky, and with the encouragement of Jake, we translate geometric terms to Hebrew, with the hope that soon discussions on geometry in Israel will be in Hebrew, without having to suddenly use English terms. We will be delighted to hear any suggestions and ideas, as well as terms in need of translation. Please feel free to email me. For the most up-to-date version of the dictionary, click here. Please note the hyper-links in the dictionary! Woo-hoo! ☺

A Dictionary of Geometric Terms
With Jake's encouragement, Sara Tukachinsky and I are working on writing a Hebrew dictionary of geometric terms, in the hope that it will help sustain a rich and fluent Hebrew discourse in geometry. All the more so, we hope this dictionary will make it easier to study geometry (and mathematics in general) in Hebrew. We will be glad to receive any comment or suggestion, and will welcome any request for the translation of a term in need of one. Please feel free to email me about it. The most up-to-date version of the dictionary is available at this link. Note the links embedded in the dictionary itself. Hooray! ☺

Wise words by Dr. Seuss (Drawn by yours truly.) Also, this comic.
Bootstrapped personal page theme by Boaz Arad, get yours here
Self Organizing Maps for the Extraction of Deep Inelastic Scattering Observables
Askanazi, Evan, Physics - Graduate School of Arts and Sciences, University of Virginia
Liuti, Simonetta, Department of Physics, University of Virginia

The most fundamental composite particles in nuclear physics are hadrons, which can be composed of two or three quarks. Hadrons which consist of three quarks are baryons; two notable types of baryons are protons and neutrons, which are the fundamental particles that comprise atomic nuclei and therefore are the fundamental building blocks of matter. Hadrons which consist of two quarks are mesons; this type of hadron forms from interactions in matter occurring at very high energies. Currently, nuclear scattering experiments are used to probe the structure of hadrons. The experiments consist of beams of leptons fired at designated target hadrons; leptons are spin-$\frac{1}{2}$ particles that, like quarks and gluons, have no known substructure. The leptons used in the scattering experiments of interest for this analysis are electrons and muons. Deep inelastic scattering (DIS) collisions are a critical example of scattering experiments in which leptons are fired at the target hadrons with energies high enough to determine the structure of these hadrons; the goal of these computations is to create theoretical models based on DIS data. The DIS between leptons and target hadrons can be probed using Quantum Chromodynamics, or QCD. QCD is a field theory used to describe and analyze strong interactions which occur among partons within the hadron. QCD provides a framework for separating the cross section of DIS into components that can be computed by expansions in the strong coupling and components that can only be computed by experiment, or the "soft" parts. Artificial neural networks (ANNs) provide a novel method for modeling the "soft" parts of DIS that eliminates user bias in making these models fit the experimental data. ANNs are sets of nodes, referred to as neurons, that take input data models and use layers of neurons containing computational algorithms to transform them into a final set of output neurons. Previous attempts to use ANNs to model DIS data have used supervised networks, where the final data set is used as a guidance step each time the ANN algorithm is run; this has led to success in eliminating bias in theoretical models but has not made it possible to visualize and classify these models. A new type of neural network, capable of dimensional reduction of data without the supervising process of the previous networks, is needed to effectively model functions describing nuclear scattering for a range of kinematics and to enable us to analyze the models formed during the ANN algorithm based on their behaviors and quality of fit to experimental data sets. The Self Organizing Map (SOM) is an ANN using unsupervised learning that was successfully used to create such desired, unbiased theoretical models of the Parton Distribution Functions, or PDFs. In addition, the SOM successfully showed the relationship between how well the generated models fit data sets and the models' behavior by making it possible to observe how the PDFs cluster on two-dimensional maps.
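To make the clustering idea concrete, the core of a SOM is a small update loop in which each map cell holds a weight vector that is pulled toward the training samples, with a shrinking neighborhood so that nearby cells end up holding similar models. The sketch below is only illustrative: the map size, decay schedules, and toy data are arbitrary choices and not the configuration used in this thesis.

```python
import numpy as np

def train_som(data, map_shape=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Self Organizing Map trained on the rows of `data`."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = map_shape
    dim = data.shape[1]
    weights = rng.random((n_rows, n_cols, dim))
    # Grid coordinates of every cell, used by the neighborhood function.
    grid = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the cell whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Exponentially decaying learning rate and neighborhood radius.
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
        # Pull every cell toward the sample, weighted by closeness to the BMU.
        weights += lr * h[..., None] * (x - weights)
    return weights

# Toy usage: organize 500 random 3-dimensional "models" on a 10 x 10 map.
som = train_som(np.random.rand(500, 3))
print(som.shape)  # (10, 10, 3)
```

After training, each candidate model is assigned to its best-matching cell, and inspecting which cells (and which neighborhoods of cells) the well-fitting models occupy is what allows the models to be visualized and classified on a two-dimensional map.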
The SOM was particularly useful in probing DIS models because this procedure made it possible to analyze various conditions placed upon the models, in terms of qualitative and quantitative analysis of the resulting cluster formation, and to determine errors in model formation based on these clusters.
PhD (Doctor of Philosophy)
https://doi.org/10.18130/V38876
FINthesisJan13.pdf Uploaded: January 18, 2016
Mathematical difference between white and black notes in a piano

The division of the chromatic scale into $7$ natural notes (white keys in a piano) and $5$ accidental ones (black) seems a bit arbitrary to me. Apparently, adjacent notes in a piano (including white or black) are always separated by a semitone. Why the distinction, then? Why not just have scales with $12$ notes? (apparently there's a musical scale called Swara that does just that) I've asked several musician friends, but they lack the math preparation for giving me a valid answer. "Notes are like that because they are like that". I need some mathematician with musical knowledge (or a musician with mathematical knowledge) to help me out with this. Mathematically, is there any difference between white and black notes, or do we make the distinction just for historical reasons? music-theory egarcia

$\begingroup$ I guess I forgot to actually answer your question about the white and black keys. The short answer is that we make an arbitrary choice when we choose to privilege things that are close to C on the circle of fifths, but this arbitrary choice happens to sound fairly nice. (Or does it? Maybe we're just too used to it.) $\endgroup$ – Qiaochu Yuan Nov 24 '10 at 10:28 $\begingroup$ @J.M.: many non-Western scales; see e.g. en.wikipedia.org/wiki/Musical_scale#Non-Western_scales . $\endgroup$ – Qiaochu Yuan Nov 24 '10 at 11:33 $\begingroup$ @Jefromi The white keys also form the dorian scale on D, the phrygian on E, the lydian on F, the mixolydian on G, the aeolian on A, and the locrian on B! This little collection indicates there are deep historical reasons for the patterns of white and black keys that are not directly related to equal temperament or the chromatic ("12 tone") scale. They are best understood in terms of music that emphasized the consonances afforded by octaves, fifths, thirds, and sixths (integral frequency ratios of 2:1, 3:2, 5:4, and 6:5--but not 4:3!). Originally, black keys interpolated notes. $\endgroup$ – whuber Nov 24 '10 at 21:39 $\begingroup$ Obviously, there are plenty of differences between C and C#. For instance, C# relies on a garbage collector to manage memory automatically whereas C requires you to do it manually... Oh, sorry; wrong site. $\endgroup$ – xmm0 Nov 25 '10 at 3:05 $\begingroup$ Thanks a lot! However, what makes this question great are its answers. $\endgroup$ – egarcia Jan 2 '11 at 0:35

The first thing you have to understand is that notes are not uniquely defined. Everything depends on what tuning you use. I'll assume we're talking about equal temperament here. In equal temperament, a half-step is the same as a frequency ratio of $\sqrt[12]{2}$; that way, twelve half-steps make up an octave. Why twelve? At the end of the day, what we want out of our musical frequencies are nice ratios of small integers. For example, a perfect fifth is supposed to correspond to a frequency ratio of $3 : 2$, or $1.5 : 1$, but in equal temperament it doesn't; instead, it corresponds to a ratio of $2^{ \frac{7}{12} } : 1 \approx 1.498 : 1$. As you can see, this is not a fifth; however, it is quite close. Similarly, a perfect fourth is supposed to correspond to a frequency ratio of $4 : 3$, or $1.333... : 1$, but in equal temperament it corresponds to a ratio of $2^{ \frac{5}{12} } : 1 \approx 1.335 : 1$. Again, this is not a perfect fourth, but is quite close. And so on.
What's going on here is a massively convenient mathematical coincidence: several of the powers of $\sqrt[12]{2}$ happen to be good approximations to ratios of small integers, and there are enough of these to play Western music. Here's how this coincidence works. You get the white keys from $C$ using (part of) the circle of fifths. Start with $C$ and go up a fifth to get $G$, then $D$, then $A$, then $E$, then $B$. Then go down a fifth to get $F$. These are the "neighbors" of $C$ in the circle of fifths. You get the black keys from here using the rest of the circle of fifths. After you've gone up a "perfect" perfect fifth twelve times, you get a frequency ratio of $3^{12} : 2^{12} \approx 129.7 : 1$. This happens to be rather close to $2^7 : 1$, or seven octaves! And if we replace $3 : 2$ by $2^{ \frac{7}{12} } : 1$, then we get exactly seven octaves. In other words, the reason you can afford to identify these intervals is because $3^{12}$ happens to be rather close to $2^{19}$. Said another way, $$\log_2 3 \approx \frac{19}{12}$$ happens to be a good rational approximation, and this is the main basis of equal temperament. (The other main coincidence here is that $\log_2 \frac{5}{4} \approx \frac{4}{12}$; this is what allows us to squeeze major thirds into equal temperament as well.) It is a fundamental fact of mathematics that $\log_2 3$ is irrational, so it is impossible for any kind of equal temperament to have "perfect" perfect fifths regardless of how many notes you use. However, you can write down good rational approximations by looking at the continued fraction of $\log_2 3$ and writing down convergents, and these will correspond to equal-tempered scales with more notes. Of course, you can use other types of temperament, such as well temperament; if you stick to $12$ notes (which not everybody does!), you will be forced to make some intervals sound better and some intervals sound worse. In particular, if you don't use equal temperament then different keys sound different. This is a major reason many Western composers composed in different keys; during their time, this actually made a difference. As a result when you're playing certain sufficiently old pieces you aren't actually playing them as they were intended to be heard - you're using the wrong tuning. Edit: I suppose it is also good to say something about why we care about frequency ratios which are ratios of small integers. This has to do with the physics of sound, and I'm not particularly knowledgeable here, but this is my understanding of the situation. You probably know that sound is a wave. More precisely, sound is a longitudinal wave carried by air molecules. You might think that there is a simple equation for the sound created by a single note, perhaps $\sin 2\pi f t$ if the corresponding tone has frequency $f$. Actually this only occurs for tones which are produced electronically; any tone you produce in nature carries with it overtones and has a Fourier series $$\sum \left( a_n \sin 2 \pi n f t + b_n \cos 2 \pi n f t \right)$$ where the coefficients $a_n, b_n$ determine the timbre of the sound; this is why different instruments sound different even when they play the same notes, and has to do with the physics of vibration, which I don't understand too well. So any tone which you hear at frequency $f$ almost certainly also has components at frequency $2f, 3f, 4f, ...$. If you play two notes of frequencies $f, f'$ together, then the resulting sound corresponds to what you get when you add their Fourier series. 
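A tiny numerical sketch may make that last sentence concrete: build each tone as a short Fourier series (a fundamental plus a few overtones with arbitrarily chosen amplitudes) and literally add the two arrays. The 3:2 ratio, the 220 Hz fundamental, and the sample rate below are illustrative choices only.

```python
import numpy as np

def tone(f, t, amps=(1.0, 0.5, 0.3, 0.2)):
    """A crude 'natural' tone: fundamental f plus overtones at 2f, 3f, 4f."""
    return sum(a * np.sin(2 * np.pi * (n + 1) * f * t)
               for n, a in enumerate(amps))

t = np.linspace(0.0, 1.0, 44100, endpoint=False)     # one second at 44.1 kHz
f = 220.0                                             # arbitrary fundamental
consonant = tone(f, t) + tone(1.5 * f, t)             # 3:2 apart (a just fifth)
dissonant = tone(f, t) + tone(f * 2 ** (1 / 12), t)   # one semitone apart

# Overtones of 220 Hz: 220, 440, 660, 880; of 330 Hz: 330, 660, 990, 1320.
# The shared 660 Hz component is part of why the 3:2 pair blends so well,
# while the semitone pair has many close-but-not-equal components.
print(consonant.shape, dissonant.shape)
```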
Now it's not hard to see that if $\frac{f}{f'}$ is a ratio of small integers, then many (but not all) of the overtones will match in frequency with each other; the result sounds a more complex note with certain overtones. Otherwise, you get dissonance as you hear both types of overtones simultaneously and their frequencies will be similar, but not similar enough. Edit: You should probably check out David Benson's "Music: A Mathematical Offering", the book Rahul Narain recommended in the comments for the full story. There was a lot I didn't know, and I'm only in the introduction! Will Orrick Qiaochu YuanQiaochu Yuan $\begingroup$ Before the invention of well and equal temperament, harpsichords had to be retuned each time one wanted to play in a different key, right? So I guess the black/white key distinction got stuck from that period. Just found this. $\endgroup$ – Raskolnikov Nov 24 '10 at 11:42 $\begingroup$ Just to add one sentence that is implicit in Qiaochu's answer: in equal temperament one may as well just have 12 identical keys. But in pythagorean tuning and well-temperament, there's quite a difference between the different keys; hence it is less odd to have the keys not be completely symmetrical. The white keys correspond to the "preferred" diatonic scale of C-major. If you train your ears a bit, you maybe able to hear the difference of, say, Bach's Well-tempered Klavier played on modernly tuned instrument versus a well-tempered one. $\endgroup$ – Willie Wong Nov 24 '10 at 11:49 $\begingroup$ And if you listen to Eastern music (Greek/Turkish music), as long as you have any musical training at all, you will immediately hear the difference of a comma between the major third you are used to and their interval between "C" and "E". But this is getting far from mathematics. $\endgroup$ – Willie Wong Nov 24 '10 at 11:53 $\begingroup$ "The origin of the consonance of the octave turns out to be the instruments we play. Stringed and wind instruments naturally produce a sound that consists of exact integer multiples of a fundamental frequency. If our instruments were different, our musical scale would no longer be appropriate. For example, in the Indonesian gamelan, the instruments are all percussive. Percussive instruments do not produce exact integer multiples of a fundamental... So the western scale is inappropriate, and indeed not used, for gamelan music." -- maths.abdn.ac.uk/~bensondj/html/maths-music.html $\endgroup$ – Rahul Nov 24 '10 at 15:17 $\begingroup$ Excellent answer, Qiaochu; my comment above (which I had to make really curt to have it fit in the length limit) is in regards to your wondering about the physics of vibration. Just to spell it out, most string and wind instruments work by vibrating a practically one-dimensional thing (a string, or a column of air); the modes of vibration in one dimension are just sinusoids, and you can only fit a whole number of them to meet the boundary conditions. (Also, the book I linked above is excellent, and free.) $\endgroup$ – Rahul Nov 24 '10 at 15:21 The first answer is great, so I'll try to approach the question from another angle. First, there are several different scales, and different cultures use different ones. It depends on the mathematics of the instruments as much as on cultural factors. Our scale has a very long history that can be traced to the ancient Greeks and Pythagoras in particular. 
They noticed (by hearing) that stringed instruments could produce different notes by adjusting the length of the string, and that some combinations sounded better. The Greeks had a lot of interest in mathemathics, and it seemed "right" for them to search for "perfect" combinations—perfect meaning that they should be expressed in terms of fractions of small integer numbers. They noticed that if you double or halve the string length, you get the same note (the concept of an octave); other fractions, such as $2/3$, $3/4$, also produced "harmonic" combinations. That's also the reason why some combinations sound better, as it can be explained by physics. When you combine several sine waves, you hear several different notes that are the result of the interference between the original waves. Some combinations sound better while others produce what we call "dissonance". So, in theory, you can start from an arbitrary frequency (or note) and build a scale of "harmonic" notes using these ratios (I'm using quotes because the term harmonic has a very specific meaning in music, and I'm talking in broad and imprecise terms). The major and minor scales of Western music can be approximately derived from this scheme. Both scales (major and minor) have $7$ notes. The white keys in the piano correspond to the major scale, starting from the C note. Now, if you get the C note and use the "perfect" fractions, you'll get the "true" C major scale. And that's where the fun begins. If you take any note in the C major scale, you can treat that note as the start of another scale. Take for instance the fifth of C (it's the G), and build a new major scale, now starting from G instead of C. You'll get another seven notes. Some of them are also on the scale of C; others are very close, but not exactly equal; and some fall in the middle of the notes in the scale of C. If you repeat this exercise with all notes, you'll end up building $12$ different scales. The problem is that the interval is not regular, and there are some imprecisions. You need to retune the instrument if you want to have the perfect scale. The concept of "chromatic" scale (with $12$ notes, equally spaced) was invented to solve this "problem". The chromatic scale is a mathematical approximation, that is close enough for MOST people (but not all). People with "perfect" ear can listen the imperfections. In the chromatic scale, notes are evenly spaced using the twelfth root of two. It's a geometric progression, that matches with good precision all possible major and minor scales. The invention of the chromatic scale allows players to play music in arbitrary scales without retuning the instrument—you only need to adjust the scale by "offsetting" a fixed number of positions, or semitones, from the base one of the original scale. All in all, that's just convention, and a bit of luck. The white keys are an "historical accident", being the keys of the major scale of C. The other ones are needed to allow for transposition. Also bear in mind that (1) the keys need to have a minimum width to allow for a single finger, and (2) if you didn't have the black keys, the octave would be too wide for "normal" hands to play. So the scheme with a few intermediate keys is needed anyway, and the chromatic scale that we use is at least as good (or better) as any other possible scale. Carlos RibeiroCarlos Ribeiro $\begingroup$ Hi Carlos! Your answer was very good, and it actually helped me understand Qiaochu Yuan's better. 
I'm giving him the correct response because he was faster, but yours was a close second - so +1 to you, and thanks! $\endgroup$ – egarcia Nov 26 '10 at 14:21 $\begingroup$ I'd say this is the better answer. I know a lot about temperament / tuning, but I find it hard to imagine someone who was new to the subject being able to understand much of what Qiaochu was saying. $\endgroup$ – John Gowers Mar 5 '12 at 0:55 $\begingroup$ @Donkey_2009 how is that relevant? The correct answer need not be understood to be correct. Anyways, the question wasn't about tuning. It was about the white vs. black keys, really. $\endgroup$ – sehe Jun 26 '12 at 8:31 $\begingroup$ There is no "correct" answer here, there are just explicative and enlightening ones. Both these answers are so, in different ways. $\endgroup$ – Noldorin Sep 6 '13 at 2:46

The answers given are pretty good from a musical, mathematical, and sociological / historical perspective. But they miss the fundamental reason why there are $12$ notes in a western scale (or $5$ notes in an eastern pentatonic, etc.), and why it's those particular $12$ notes (or $5$). Qiaochu almost nailed it by pointing out that we like notes which are simple integer ratios. But why? The fundamental reason stems from the physics of common early instruments -- flutes (including the human voice) and plucked strings -- and from the physics of the tympanum in the ear. As Qiaochu noted, sound is not composed of a single sine wave frequency but rather a sum of many sine waves. The "note" we hear is the frequency of the primary (loudest) wave coming from these instruments. But other frequencies exist in that wave as well, albeit largely masked by the primary. These are known informally as harmonics or overtones. The first several harmonics of flutes and plucked strings are similar and very straightforward: If the primary is normalized to frequency $1$, then the second loudest harmonic typically sits at $2$ times that frequency (an octave above), the third usually at $3$ times (an octave and a fifth above), the fourth usually at $4$ times (two octaves), the fifth usually at $5$ times (two octaves and a major third), and the sixth usually at $6$ times (two octaves and a fifth). If the primary note is C1, these translate roughly into C2, G2, C3, E3, and G3. If the harmonics continued in this way -- and they don't always -- various other notes appear. This matters because if you want to play TWO instruments together, you'd like their harmonics to coincide even if they're playing different notes. Otherwise the excess of harmonics sounds bad to the ear. In the worst case, very close but not entirely overlapping harmonics create "beats" -- seeming alternating loud and soft periods of time -- which are irritating to listen to and tough on the ear. To get harmonics to coincide in multiple instruments or even successive notes, you have to pick notes for them to play where their harmonics have a strong overlap. For example, this is also why the major fourth is useful even though it doesn't often appear early: if one instrument is playing C and the other instrument is playing the major fourth but lower by an octave, they'll overlap nicely. I believe these note selections (guaranteeing harmonics in harmony, so to speak) influenced the evolution of scale choices -- especially the pentatonic (that is, the black notes) -- and the division of the octave into $12$ pieces. One early instrument which is totally out of whack from this is the bell.
Bells and gongs can be tuned to have a variety of harmonics, but the most common ones -- foundry bells -- have a very loud, unusual third harmonic: minor third or E flat. It is so loud and incongruous that they sound terrible, even disturbing, when played along with strings, flutes, voices, etc. In fact, entire musical pieces have to be written specially for carillons (large multibell instruments) in order to guarantee proper overlap of harmonics. Generally this means that the entire piece has to be written in fully diminished chords. Major chords sound among the worst because of the clash between the major third in the chord and the minor third coming from the root's loud third harmonic. FooFoo $\begingroup$ The minor third harmonic on tubular bells is truly the worst. The Claude T. Smith wind band arrangement of the hymn Eternal Father, Strong to Save has a very loud major chord that diminuendos into nothing, and at the end of the diminuendo a tubular bell of the tonic note is struck, and it just plain sounds like the percussionist hit a second, wrong note. $\endgroup$ – 75th Trombone Apr 26 '18 at 19:42

The math of frequency relationships here is sound (pun intended) but it doesn't help explain the white vs black key piano layout. Here's the historical imperative that led to this layout for "Western Music". First consider the major triad: root + third + fifth notes of the "diatonic scale". They follow the harmonic series:
1 - root
2 - octave (doubling of root frequency)
3 - fifth (triple of the root - 3:2 relationship to the octave)
4 - double octave (4x)
5 - 10th (double octave of a third)
6 - octave fifth
These are notes a static length of tubing can produce by blowing into it: the bugle. Combinations of these notes create frequencies that make choirs sound heavenly. The frequencies align and blend into pure complex vibrations that are the sums and the differences (harmonic overtones) of these relationships. Choirs can tune themselves dynamically to create these frequency alignments that are perceived as being perfectly consonant. Upbeat western music focuses on the 3 major chords found in the diatonic scale:
1+3+5 root major chord - white keys C - E - G
4+6+1 4th chord - white keys F - A - C
5+7+2 5th chord - white keys G - B - D
The basics of western folk music are the 1 - 4 - 5 sequences of chords. Learn C, F and G on a guitar and you can play the bulk of the classic Country song book. Put the notes of these chords into a scale and you get that row of 7 white keys: C - D - E - F - G - A - B (repeat until you can't hear it). So, the western scale is based upon frequency relationships that make combinations of notes "ring" in consonance in its purest form... like the Gregorian Chants of the Roman Church. So, a basic "western keyboard" could be made from just these 7 notes repeated across the frequency spectrum. Look at the layout of a Greek Lyre (a harp) and that's what you will find: a sequence following the diatonic scale which sounds pleasant if you just strum across the strings due to the tuning of even multiples (adjusted by octaves). OK... now adding the black keys is a compromise of tuning specific notes so that you can build these 1+3+5 chords from any starting point and thus play a song adjusted up or down to any starting point. The piano will never achieve that sonic mathematical glimpse into the "music of the spheres" that the self-adjusting choir can achieve by making a chord mathematically perfect in alignment, but it's the "keyboard" for the modern composer...
the effective "musical qwerty" that a composer or a pianist begins to visualize chord "shapes" as hand positions. With a lot of practice a pianist can pre-visualize sound in terms of finger and hand movements much like a solid touch typist starts to set words and sentences as a sequence of movements. The addition of the black keys was called a "Well Tempered" tuning and Bach was one of the first composers to create whole bodies of compositions that worked through the Major and Minor keys of the 12 scales that you noticed intially when inspecting the keyboard. If you look into other musical cultures you will find a different approaches to standardizing sound relationships that do not focus on the 1 - 4 - 5 chords. This music to a culturally trained western ear is less predicatable in nature and that lack of predictablility can make the music frustrating or exciting... music "speaks" to us in terms of pure sensory inputs that can move, excite, bore or confuse us. So, the piano keyboard is designed to be the perfect delivery system for an individual to produce the range of complexity that western music has achieved. The modern keyboard synthesizers are now able to produce the full range of the western orchestra in terms of "instruments" and I'm hoping someone create one that micro-adjusts notes based upon the surrounding context... shifting a note up or down slightly from the "well tempered" compromise to the pitch that makes a chord "ring" and produce the upper harmonic overtones that make a great orchestra truly "heavenly". Maybe it's already been done. mcdtracymcdtracy $\begingroup$ Thank you. Your answer provided historical and some poetical background, as well as purely mathematical. The math part, however, is very similar to Qiaochu's, so I'm giving the answer to him. Yours was quite enjoyable, so +1 to you. You should write more answers here :). $\endgroup$ – egarcia Nov 26 '10 at 14:28 $\begingroup$ Your mention of a dynamically self adjusting context based synthesizer strikes me as rather interesting. $\endgroup$ – Iiridayn Nov 29 '10 at 19:41 $\begingroup$ @michaelc I know someone that did just that. I've not heard it, so I'm not sure whether it would really be heavenly. $\endgroup$ – sehe Jun 26 '12 at 8:33 $\begingroup$ @michaelc, mcdtracy, and sehe check this out: justonic.com and youtube.com/watch?v=BhZpvGSPx6w $\endgroup$ – Ulf Åkerstedt Sep 1 '12 at 22:03 The math in this thread is awesome, but I'm not sure it addresses the original question about the "difference between white and black notes". The other responses in this thread provide enough math to understand that each octave can be more-or-less naturally divided into twelve semitones. The Western music tradition further evolved to be based around what's called the "diatonic scale". A musical scale is a sequence of pitches within one octave; scales can be defined by the number of semitones between each successive note. For example, the Whole Tone Scale consists entirely of whole tones; it has six distinct pitches, each of which is two semitones higher than the last. So you might represent it with the string '222222' — that is, take a note, then the note 2 semitones higher, then the note 2 semitones higher, etc., until the last "2" takes you to the note an octave above where you started. The Diatonic Scale that Western music is based around could likewise be represented by the string '2212221'. If you start with a C on a keyboard and go up, you'll see that the white keys conform to that pattern of semitones. 
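A few lines of code can check this directly; the pitch-class names and the '2212221' step pattern below are the standard ones, and the snippet is just an illustration of the claim.

```python
# Walk a step pattern through the 12 semitone pitch classes.
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # the diatonic '2212221' pattern

def scale(root, steps=MAJOR_STEPS):
    idx = NOTES.index(root)
    notes = [NOTES[idx]]
    for step in steps:
        idx = (idx + step) % 12
        notes.append(NOTES[idx])
    return notes

print(scale('C'))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C'] -- only white keys
print(scale('D'))  # D major picks up F# and C#, i.e. two black keys
```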
That, generally, is why the black keys are in that particular pattern. Of course, you can start a scale on any pitch, not just C. That's why the "same" diatonic scale in a different key will involve a unique set of sharps and flats. Now, the Diatonic Scale can also be represented by '2212221' shifted to the left or right any number of times. For example, '2122212', '1222122', etc. are also Diatonic; these are called the "modes" of the Diatonic scale. Each Diatonic mode can be played on only the white keys of the piano by starting on a different pitch. 2212221 is called the Ionian mode (this is also generically called the Major scale), and can be played on the white keys starting with C. 2122212 is the Dorian mode and can be played on the white keys starting with D. 1222122 is the Phrygian mode, starting on E. 2221221 is the Lydian mode, starting on F. 2212212 is the Mixolydian mode, starting on G. 2122122 is the Aeolian mode (the Minor scale), starting on A. 1221222 is the (awesome) Locrian mode, starting on B. Each mode has its own unique "sound", which (in my opinion, at least) derives precisely from the different placement of the semitones within each scale. And of course there are scrillions of non-Diatonic scales that have nothing whatsoever to do with how the modern keyboard came to be. EDIT to add a shorter, less implicit answer: The white keys alone can be used to play the set of diatonic scales listed above; the black keys are "different" because they are the remaining chromatic pitches not used in that set of diatonic scales. 75th Trombone75th Trombone $\begingroup$ This was very interesting. I actually thought that the Diatonic Scale was '222222'. So +1 for you. Two things though: a) why do you think Locrian mode is awesome? b) You actually didn't speak about black and white keys at all... maybe you forgot a closing paragraph? $\endgroup$ – egarcia Nov 26 '10 at 14:40 $\begingroup$ I think Locrian is awesome because it's the weirdest sounding mode; it begins with a semitone and doesn't include the perfect fifth above the root. It might not actually be my favorite one to listen to, but it's definitely the weirdest and most unique. As for your point B, I guess I did leave it somewhat implicit; the white keys by themselves play a certain set of diatonic scales, and the black keys are the remaining five pitches not included in that set. $\endgroup$ – 75th Trombone Nov 26 '10 at 16:59 $\begingroup$ @egarcia: Yes he did. The white keys are the major ("Diatonic") scale with the root-note C. Now, "why is '2212221' the major scale" is another question I am still confused about... $\endgroup$ – BlueRaja - Danny Pflughoeft May 19 '11 at 22:28 $\begingroup$ @BlueRaja - Danny Pflughoeft: There are many ways of expressing how the Major scale is constructed. Here's my favorite: If the fundamental pitch of an instrument is a C1, the first overtone is C2; then G2, C3, E3, G3. So the first 3 unique pitches in this harmonic series are C, E, and G; a pleasant-sounding chord called a major triad. All harmonic series yield major triads this way. Take a pitch, a perfect 5th above it, and a perfect 5th below it, and construct major triads off each note. The unique pitches in that set are the major scale. 
$\endgroup$ – 75th Trombone Jul 7 '11 at 19:07 $\begingroup$ An offtopic aside: the observation that each scale/mode has its own "feel" and "mood" is the starting point of Indian classical music, which develops the idea to a much greater extent than Western classical music (while entirely ignoring polyphony/harmony, so central in Western music). Each "raga" (or "raag") in Indian classical music is sort of like a scale/mode (with constraints on order of notes, emphasis, etc.); but instead of ≈ a dozen modes, there are about 50 common ragas and any competent musician knows 200–300. Each raga is meant to induce a specific mood in a (good) listener. $\endgroup$ – ShreevatsaR Jul 31 '11 at 15:24 There is no mathematical difference between the white and black notes. Adjacent notes on modern piano keyboards are typically tuned 1/12 of an octave apart. Quiaochu explains this most completely, but what it boils down to is that there is no difference. We haven't always and don't today always use equal temperament on keyboard instruments but even then the difference between white and black notes would be arbitrary. The distinction is for historical and cultural reasons. There is a cool picture here showing Nicholas Farber's Organ (1361), which used an 8 + 4 layout rather than the modern 7 + 5 layout we see today. http://en.wikipedia.org/wiki/Musical_keyboard#Size_and_historical_variation There are examples of instruments in use today that use a chromatic keyboard with no differentiation between the "white" and the "black" notes. See the Bayan and the Bandoneon accordion type instruments. At the New England Conservatory in the classroom where they teach a class on quarter tones, they keep two pianos tuned a quarter tone off from each other. In that case, a full 24-note chromatic quarter tone octave must be played alternating notes on the two pianos. This is only the beginning of this particular rabbit hole. $\begingroup$ Well actually I think I didn't know enough to ask my question properly. My musical knowledge is so limited that my first impulse was asking "What are the maths behind music". But I thought that wasn't specific enough so I tried something more concrete. The answers I've received to my second answer actually helped me with that first question I didn't know how to ask. I appreciate the point you made with the different keyboard layouts and the classroom. Thanks, +1! $\endgroup$ – egarcia Nov 26 '10 at 14:51 Note also that many cultures use a pentatonic scale. This would correspond to playing only the notes CDEGA. As explained in Qiaochu's answer, we want notes that are in small rational intervals, and particularly notes that are in small rational intervals from the tonic. Exactly which set of notes is chosen varies from culture to culture, with Western music using the 7 white keys, but many other cultures only using the 5 pentatonics. $\begingroup$ True, and another great thing about the piano keyboard is that you have the diatonic scale on the white keys and the pentatonic scale on the black keys. $\endgroup$ – Michael Nov 25 '10 at 19:01 $\begingroup$ The pentatonic is familiar to most people from honky-tonk and blues music. The blues scale is very close to a pentatonic. 
$\endgroup$ – isomorphismes Mar 19 '11 at 17:22 To add to this thread, you can understand why certain notes sound good/bad together by looking at trig sum/product formulas, e.g.: cos(a) + cos(b) = 2 * cos(a - b) * cos(a + b) What this means is that when you add two tones/frequencies 'a' and 'b', it is equivalent to taking one wave of frequency 'a + b' and modulating its amplitude with another of frequency 'a - b'. The frequency 'a + b' will be a faster vibration, and the frequency 'a - b' will be a slower vibration. When the two original frequencies are close (e.g. A = 440Hz and A# = 466Hz), the 'a - b' component will be heard as an unpleasant low frequency beating (here, 26Hz). When the two original frequencies are integer ratios of each other (e.g. 3/2, 4/3) as in chords, then the resulting 'a + b' and 'a - b' frequencies will also be integer ratios of each other. The resulting wave will be simple and sounds harmonious. This is why integer ratios of notes are so important in music. It helps to plot sums of sines graphically to see this in action. unconedunconed $\begingroup$ This seems to be an "appealing and popular, but incorrect explanation", dating to Galileo. See the introduction of the book mentioned above "Music: A Mathematical Offering". $\endgroup$ – ShreevatsaR Nov 26 '10 at 6:04 $\begingroup$ To reiterate @ShreevatsaR's comment, the notion popularized by Helmholtz that perception of dissonance derives from beats can be shown inadequate: http://arxiv.org/html/1202.4212v1/#sec_5_0_0 $\endgroup$ – user253804 Aug 4 '15 at 4:48 Start at F and go up a fifth (to C). (In a keyboard with 12-key octaves, that's 7 steps.) Repeat that process (through the circle of fifths). You'll hit all the white keys and then all the black keys -- F, C, G, D, A, E, B, F#, C#, G#, D#, A# -- note, these keys are usually represented with flats). So it turns out if you're splitting tones on 3 : 2 (fifth) or 4 : 3 (fourth), the least common multiple is twelve. In practice, the 3 : 2 is similar enough that it gives a sort of 'secure' or content feeling. The 4 : 3 gives a slightly edgier feeling but one which is somewhat counterposed perfectly against this secure feeling. So a fourth + a fifth will give you an octave. So why we want all twelve keys is that we're saying that we want the fifth (dominant) and the fourth (subdominant) to come together and make a whole. This is sort of mirrored with the fact that the first 7/12 of the circle of fifths form the basis and the second 5/12 are 'overlayed' on top (with black keys). J. M. is a poor mathematician Tom HaradaTom Harada Just to let you see that other tunings are possible and thus other keyboards: http://www.kylegann.com/tuning.html RaskolnikovRaskolnikov $\begingroup$ +1 The material of the link and its siblings gives good explanations, and also, I believe in a sense although not spelled out, basis for white and black keys along with a mathematical background in terms of wave length integer ratios. $\endgroup$ – Ulf Åkerstedt Sep 1 '12 at 22:04 I'm no musician, but as far as I know, audio waves are felt only then "round/sound", iff they repeat faster than a specific frequency. That frequency is probably that of our brain waves: being awake and in a non-meditating state, that is faster than like 18 or more Hz; neither can you shiver faster nor hear lower frequencies than your brain waves. Audio waves have a length of $$\mathrm{lcm}\{m,n\}·2\pi$$ if they are of the shape $$a_1·\sin(m·2\pi·t+s_1)+a_2·\sin(n·2\pi·t+s_2)$$. 
The notes double their frequency each octave; therefore they have a logarithmic scale and not a linear. Good violin and harp players can play all fitting ("sound" sounding) combination of frequencies, but instruments with keys lack the variety. (Qiaochu Yuan did answer faster than me, while I was on the phone. Seems to be more complete than I could have answered. I have nothing to add.) comonadcomonad Other responses do a good job of explaining the 12-note chromatic scale. From those 12 tones, if one starts to build a series of tones starting on a single note and going up the circle of fifths, there are two natural stopping points where you have a complete-sounding scale that spans the octaves and has relatively equal spacing between the notes with no gaps: five notes, which gives whole-step and minor-third intervals; and seven notes, which gives whole-step and half-step intervals. These two scales (pentatonic and heptatonic) correspond to the spacing of the black keys and white keys on the keyboard. They are mirror images of each other around the circle of fifths. So the two colors of notes are not "different," but rather a natural division into two symmetrical sclaes built from going opposite directions around the circle of fifths. In the standard tuning system C is "privileged" because it is (essentially) the note where we start building the circle of fifths to create these two scales. manlonmanlon $\begingroup$ For the final sentence: no. As the answer by Tom Harada shows, you get the white keys (before the black ones) when you start building fifths on F, not on C. If there is any reason that C is privileged (given the separation into white an black keys) is that the Ionian mode (Major scale) has turned out to be predominant in Western music (see answer by 75th trombone). The fact that the naming of notes by letters (in English) starts from A, not C, suggests that maybe Aeolian mode (Minor scale) was predominant at some earlier point in time. $\endgroup$ – Marc van Leeuwen Dec 20 '12 at 8:00 The "circle of fifths" is a by product of the preference for diatonic scales. If you layout the chromatic (12-tone) scale without the white and raised black arrangement you'd use the same logic to describe a "circle of sevenths" (counting upo semitones from C to G). So, the arrangement makes solid sense when applied to the human hand. We need to be able to span interesting distances with the "octave" interval be very useful for most pianists as a basic required for anyone older that 10-12. Some pianists can span 10ths with relative ease but they are in the minority. The piano music of Rachmaninoff is riddled with these massive but musically sonorus intervals. The are the major third expanded to the pure natural interval (10 keys apart) of the "bugle" overtone series. I can reach the 10th's on the white to white instances but the black to white (Bb to D for example) are beyond me. And doing them quickly and accurately is the mark of true mastery of the instrument... it's like being able to dunk: genetics help and no amount of effort can help a small handed pianist. $\begingroup$ The thing that helps a small-handed pianist is a smaller piano. There is no sense in making the instrument a standard size, and it is actually ridiculous that the keys are as big as they are in the modern age, where we can make the same percussion action with small keys, allowing for more precision. 
$\endgroup$ – Ron Maimon Sep 2 '12 at 3:16 $\begingroup$ Given the conventional weird way of naming intervals (an interval of a single semitone or tone is called a "second", not a "first"), measuring by semitones would cause the C-G interval to be called an "eighth" rather than a "seventh". Unless of course one would take the occasion of changing terminology to do away with that weird convention once and for all. $\endgroup$ – Marc van Leeuwen Dec 20 '12 at 8:08 If you only had a repetitive series of keys on your piano it would be a bit difficult to visually get some reference points. I think this is the main reason wh BenoitBenoit Check out this paper which is about the regular 12-gon and music theory. It will help you answer this question, as well as many others that are similar to it. Matt CalhounMatt Calhoun I think in a tiny tiny nutshell... the reason for the 12-division is because a very practical solution for Western music, and the layout of black/white "evolved" into this form because it lacked re-engineering. There is no particularly "mathemagical" thing about it. In other words... square two: it's an arbitrary choice. If you're looking for something that uses 12-divided octave as a practical solution and is engineered for facility, check out the layout of the Russian Bayan (accordion). It's pretty awesome. As for something that is engineered for facility but does not divide the octave into 12 parts, your common fretless string instruments are good examples. Again, all I've said has been mentioned above. Just beware of the overtly "mathemagical" ones, they don't say much about the music but rather put it in a fancy straightjacket. WKZ If you really want to know all about it then you should read 'On the sensation of tone' by Helmholtz. blunders $\begingroup$ Here it is in Google Books. $\endgroup$ – J. M. is a poor mathematician Nov 26 '10 at 3:18 $\begingroup$ Please, Rudi, it should be Helmhol t z. :) $\endgroup$ – Robert Filter Jan 19 '11 at 8:37 A somewhat grapical representation of what Carlos Ribeiro was talking about. "If you take any note in the C major scale, you can treat that note as the start of another scale. Take for instance the fifth of C (it's G), and build a new major scale, starting from G instead of C. You'll get another seven notes. Some of them are also on the scale of C; others are very close, but not exactly equal; and some fall in the middle of the notes in the scale of C. " Note the semi-tone interval EF and BC on the C scale. When trying to reproduce the same scale starting at D, we run into a problem. Alphabetically, the third note should be an F, but F is a semitone too low for that spot. In order to maintain the same sounding scale, we need to introduce a NEW note, called F#. C - D - E F - G - A - B C (C scale) D - E - F#G - A - B - C#D (D scale) E - F#-G#A - B - C#- D#E (E scale) F - G - AA#- C - D - E F (F scale) Note that in actual writing of music the A–A# would be written as A–Bb so that the 'A' line of the staff wouldn't be ambiguous. RamenChef I agree with the theory that the distinction between the notes is used for visual aid and reference points. In addition to that, it was meant to be treated as a vertically rising instrument as if you were to go up a ladder of sorts and those accidentals ( in the case of C, the black notes) are the grips to reach to the next level. As we would refer to them as leading notes. There is also another reason why there are that many notes. Almost all scales are a variation of the major scale or aeolian mode. 
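The scale-building shown above (re-starting the C major pattern on D, E and F) can be reproduced programmatically. This is a minimal sketch, assuming twelve equally spaced chromatic notes and the usual whole/half-step pattern of the major scale; notes are spelled with sharps only, so F major prints A# where a score would write Bb, which is exactly the spelling point made above.

```python
# Build major scales by walking the whole/half-step pattern (W W H W W W H)
# over the 12-note chromatic set, and list which non-natural notes appear.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]   # semitones between successive degrees
NATURALS = set("CDEFGAB")

def major_scale(root):
    i = CHROMATIC.index(root)
    notes = [root]
    for step in MAJOR_STEPS:
        i = (i + step) % 12
        notes.append(CHROMATIC[i])
    return notes   # eight notes, ending back on the octave

for root in ["C", "D", "E", "F", "G"]:
    scale = major_scale(root)
    accidentals = [n for n in scale if n not in NATURALS]
    print(root, "major:", " ".join(scale), "| accidentals:", accidentals or "none")
```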
This scale is designed to have a certain number of tones and semitones to give it the feel of a major scale. If there were too many tones or too many semitones it would not be the same because it will produce too much dissonance or invariably consonance. That is why there is a standard tuning for pianos i.e. A440. If the interval in vibration were to be changed it would not be the same because if the vibrations aren't in sync, the resonation will be totally off. That is why there can be only soo many notes on a piano and make sense to the human ear. Other tunings are possible but the same effect is made that the intervals are kept in a strict way to keep harmony. So, getting back to your question Mathematically, yes there is a reason for that specific order of white keys and black keys. Most of its relation deals with the mode theory of 12 notes and the circle of fifths where if you were to expand the notes on the piano it will form a perfect circle in diminished chords of C as the cardinal points. If you were to go in fifths in a clockwise direction the circle will be C g d A e b F#/Gb d#/db Ab Eb Bb F where when compressed into one octave it turns out with 7 natural notes and 5 accidentals Fendrix One day I shall do a serious study of this! there is truth in all these answers, the white notes give us our do-re-mi (major scale) starting on C, this scale has a mixture of tones and semitones, and dictate where the black notes should go and how many we need. The re-tuning to equal temperament is a fudge, and if you were to analyse a tuned keyboard, not all semitones are equally spaced. Other intervals are also compromised, so a major third in one key may have notes further apart than one in a different key. Composers have long been aware of this, and aware that the key they select for a composition can make a significant difference to the "mood" (that is after you have selected major or minor). Classical Indian Music uses a system of scales (ragas). There are several hundred of these, and they will be fitted to specific moods, times of day, types of occasion etc. These are not random variations from any Western scale, and have nothing to do with the keyboards we commonly use. Our keyboard system is just for keyboards - a string instrument may not play exactly the same pitch as a piano for a given note (unless it is an open string), because they will tend to use something closer to the original pythagorean scale. PS I am a working musician with a bachelors degree in maths! operanut Many, many answers to this one already, but, in the framework of Pythagorean tuning, there actually is a clear mathematical distinction between black keys and white keys that has not yet, I think, been explicitly stated. Apparently, adjacent notes in a piano (including white or black) are always separated by a semitone. Why the distinction, then? In equal temperament, the ratio of the frequencies of two pitches separated by one semitone is $\sqrt[12]{2}$, no matter what the pitches are. But in other tunings, the ratio cannot be kept equal. In Pythagorean tuning, which tries to make fifths perfect as far as possible, there are two different types of semitone, a wider semitone when the higher pitch is a black key, and a narrower semitone when the higher pitch is a white key. Hence, in Pythagorean tuning at least, there is a clear mathematical distinction between white keys and black keys. Of course, which notes are white keys and which are black keys depends on which note is used to start building the scale. 
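Before the detailed construction that follows, the claim that Pythagorean tuning yields exactly two semitone sizes can be checked numerically. The sketch below is not from the answer itself: it stacks perfect 3:2 fifths upwards from F, folds every pitch back into a single octave, and prints the adjacent gaps in cents, recovering two distinct values and their difference, the Pythagorean comma.

```python
from fractions import Fraction
from math import log2

def cents(ratio):
    return 1200 * log2(ratio)

# Stack 11 perfect fifths (ratio 3/2) upwards from F, dividing by 2 as needed
# to keep every pitch within one octave of the starting note.
names = ["F", "C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#"]
pitch = {}
r = Fraction(1)
for name in names:
    pitch[name] = r
    r *= Fraction(3, 2)
    while r >= 2:
        r /= 2

# Sort the 12 pitches within the octave and measure the adjacent gaps.
ordered = sorted(pitch.items(), key=lambda kv: kv[1])
for (n1, p1), (n2, p2) in zip(ordered, ordered[1:] + [("F'", Fraction(2))]):
    print(f"{n1:>3} -> {n2:>3}: {cents(p2 / p1):6.1f} cents")

# The two gaps differ by the Pythagorean comma, (3/2)^12 / 2^7.
print(f"Pythagorean comma = {cents(Fraction(3, 2) ** 12 / 2 ** 7):.1f} cents")
```

The printed gaps alternate between roughly 90.2 and 113.7 cents, matching the diatonic and chromatic semitones tabulated in the construction below.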
Starting from $F$ produces the traditional names for the keys. To see how this works, start from $F$ and generate ascending fifths, $$ F,\ C,\ G,\ D,\ A,\ E,\ B,\ F\sharp,\ C\sharp,\ G\sharp,\ D\sharp,\ A\sharp, $$ with frequencies in exact $\frac{3}{2}$ ratios (dividing by $2$ as needed to keep all pitches within an octave of the starting $F$). You find that you cannot add the $13^\text{th}$ note, $E\sharp$, without coming awfully close to the base note, $F$. The separation between $F$ and $E\sharp$ is called the Pythagorean comma, and is roughly a quarter of a semitone. So if you stop with $A\sharp$, you have divided the octave into $12$ semitones, which you discover are not all the same. Five of the $12$ semitones are slightly wider than the other seven. These two distinct semitones are called the Pythagorean diatonic semitone, with a frequency ratio of $\frac{256}{243}$ or about $90.2$ cents, and the Pythagorean chromatic semitone, with a frequency ratio of $\frac{2187}{2048}$ or about $113.7$ cents. (In equal temperament, a semitone is exactly $100$ cents. The number of cents separating $f_1$ and $f_2$ is defined to be $1200\log_2f_2/f_1$.) The Pythagorean diatonic semitone and the Pythagorean chromatic semitone differ from each other by a Pythagorean comma (about $23.5$ cents). You find that the semitone ending at $F$, that is, the interval between $E$ and $F$, is a diatonic semitone, whereas the semitone ending at $F\sharp$, that is, the semitone between $F$ and $F\sharp$, is a chromatic semitone. The other diatonic semitones end at $G$, $A$, $B$, $C$, $D$, and $E$, while the other chromatic semitones end at $G\sharp$, $A\sharp$, $C\sharp$, and $D\sharp$. Some things to note: If you start with a note other than $F$, the diatonic and chromatic semitones will be situated differently, but you will always end up with seven diatonic ones and five chromatic semitones, with the chromatic semitones appearing in a group of three and a group of two as in the traditional keyboard layout. A great many tuning systems have been devised, which play with the definitions of the semitones or introduce new ones. It is only in equal temperament that the distinction between the two semitones is completely erased. Some additional detail: starting from the octave, one can progressively subdivide larger intervals into smaller ones by adding notes from the progression of fifths. At the initial stage you have the octave. $$ \begin{array}{c|c|c|c} \text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ \hline F & 1 & \\ F & 2 & 2 & 1200 \end{array} $$ Interpolating a note a fifth higher than $F$ divides the octave into two unequal intervals, a fifth and a fourth. (Added notes will be shown in red.) $$ \begin{array}{c|c|c|c} \text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ \hline F & 1 & \\ \color{red}{C} & \color{red}{\frac{3}{2}} & \color{red}{\frac{3}{2}} & \color{red}{702.0}\\ F & 2 & \frac{4}{3} & 498.0 \end{array} $$ Adding a third note, the note a fifth above $C$, splits the fifth into a whole tone (ratio $\frac{9}{8})$ and a fourth. $$ \begin{array}{c|c|c|c} \text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ \hline F & 1 & \\ \color{red}{G} & \color{red}{\frac{9}{8}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ C & \frac{3}{2} & \frac{4}{3} & 498.0\\ F & 2 & \frac{4}{3} & 498.0 \end{array} $$ Two more additions split the fourths and produce the pentatonic scale, which is built of whole tones and minor thirds. 
$$ \begin{array}{c|c|c|c} \text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ \hline F & 1 & \\ G & \frac{9}{8} & \frac{9}{8} & 203.9\\ \color{red}{A} & \color{red}{\frac{81}{64}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ C & \frac{3}{2} & \frac{32}{27} & 294.1\\ \color{red}{D} & \color{red}{\frac{27}{16}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ F & 2 & \frac{32}{27} & 294.1 \end{array} $$ We may split each of the minor thirds into a whole tone and a (diatonic) semitone, which produces the diatonic scale. $$ \begin{array}{c|c|c|c} \text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ \hline F & 1 & \\ G & \frac{9}{8} & \frac{9}{8} & 203.9\\ A & \frac{81}{64} & \frac{9}{8} & 203.9\\ \color{red}{B} & \color{red}{\frac{729}{512}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ C & \frac{3}{2} & \frac{256}{243} & 90.2\\ D & \frac{27}{16} & \frac{9}{8} & 203.9\\ \color{red}{E} & \color{red}{\frac{243}{128}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ F & 2 & \frac{256}{243} & 90.2 \end{array} $$ Adding five more fifths splits each of the five whole tones into a chromatic semitone and a diatonic semitone to produce the chromatic scale. $$ \begin{array}{c|c|c|c} \text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ \hline F & 1 & \\ \color{red}{F\sharp} & \color{red}{\frac{2187}{2048}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ G & \frac{9}{8} & \frac{256}{243} & 90.2\\ \color{red}{G\sharp} & \color{red}{\frac{19683}{16384}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ A & \frac{81}{64} & \frac{256}{243} & 90.2\\ \color{red}{A\sharp} & \color{red}{\frac{177147}{131072}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ B & \frac{729}{512} & \frac{256}{243} & 90.2\\ C & \frac{3}{2} & \frac{256}{243} & 90.2\\ \color{red}{C\sharp} & \color{red}{\frac{6561}{4096}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ D & \frac{27}{16} & \frac{256}{243} & 90.2\\ \color{red}{D\sharp} & \color{red}{\frac{59049}{32768}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ E & \frac{243}{128} & \frac{256}{243} & 90.2\\ F & 2 & \frac{256}{243} & 90.2 \end{array} $$ There is no fundamental reason to stop here. Adding five more fifths creates a $17$-note scale by dividing each of the wider chromatic semitones into a new small interval, the Pythagorean comma (frequency ratio $531441/524288=3^{12}/2^{19}$ or about $23.5$ cents), and a diatonic semitone. We call the new notes $E\sharp$, $B\sharp$, $F\sharp\sharp$, $C\sharp\sharp$, $G\sharp\sharp$. Note that $E\sharp$ is a Pythagorean comma higher than its enharmonic equivalent $F$, $B\sharp$ is a Pythagorean comma higher than its enharmonic equivalent $C$, $F\sharp\sharp$ is a Pythagorean comma higher than its enharmonic equivalent $G$, and so on. 
$$ \begin{array}{c|c|c|c} \text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ \hline F & 1 & & \\ \color{red}{E\sharp} & \color{red}{\frac{531441}{524288}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ F\sharp & \frac{2187}{2048} & \frac{256}{243} & 90.2\\ G & \frac{9}{8} & \frac{256}{243} & 90.2\\ \color{red}{F\sharp\sharp} & \color{red}{\frac{4782969}{4194304}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ G\sharp & \frac{19683}{16384} & \frac{256}{243} & 90.2\\ A & \frac{81}{64} & \frac{256}{243} & 90.2\\ \color{red}{G\sharp\sharp} & \color{red}{\frac{43046721}{33554432}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ A\sharp & \frac{177147}{131072} & \frac{256}{243} & 90.2\\ B & \frac{729}{512} & \frac{256}{243} & 90.2\\ C & \frac{3}{2} & \frac{256}{243} & 90.2\\ \color{red}{B\sharp} & \color{red}{\frac{1594323}{1048576}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ C\sharp & \frac{6561}{4096} & \frac{256}{243} & 90.2\\ D & \frac{27}{16} & \frac{256}{243} & 90.2\\ \color{red}{C\sharp\sharp} & \color{red}{\frac{14348907}{8388608}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ D\sharp & \frac{59049}{32768} & \frac{256}{243} & 90.2\\ E & \frac{243}{128} & \frac{256}{243} & 90.2\\ F & 2 & \frac{256}{243} & 90.2 \end{array} $$ In the next few iterations, $12$ fifths are added, shaving a Pythagorean comma off of each diatonic semitone, thereby producing a $29$-note scale with $17$ Pythagorean commas ($23.5$ cents) and $12$ intervals of $66.8$ cents; $12$ more fifths are added, shaving a Pythagorean comma off of each $66.8$ cent interval, thereby producing a $41$-note scale with $29$ Pythagorean commas ($23.5$ cents) and $12$ intervals of $43.3$ cents; $12$ further fifths are added, shaving a Pythagorean comma off of each $43.3$ cent interval, thereby producing a $53$-note scale with $41$ Pythagorean commas ($23.5$ cents) and $12$ intervals of $19.8$ cents. Note that at some steps in this process the two intervals obtained are more nearly equal than at others, and that those scales whose intervals are nearly equal are very well approximated by an equal-tempered scale. The lengths of the scales where this happens coincide with denominators of convergents of the continued fraction expansion of $\log_2 3$, that is, at $2$, $5$, $12$, $41$, $53$, $306$, $665$, etc. A spectacular improvement is seen in the $665$-note scale, where the two intervals are $1.85$ cents and $1.77$ cents. In contrast, the intervals in the $306$-note scale are relatively far apart: $5.38$ cents and $3.62$ cents. From this perspective, the $12$-note scale is remarkably good. I should emphasize that this is only the barest beginning of a discussion of tuning systems. It is desirable to accommodate small whole number ratios other than $\frac{3}{2}$ such as $\frac{5}{4}$ (the major third) and $\frac{6}{5}$ (the minor third), which necessitates various adjustments. It is also desirable to be able to play music in different keys, which forces other compromises. Many of these issues are discussed in the other answers. Will OrrickWill Orrick protected by J. M. is a poor mathematician Sep 26 '11 at 1:32 Not the answer you're looking for? Browse other questions tagged music-theory or ask your own question. Mathematical explanation behind a picture posted (lifted from facebook) "Casual" mathematical facts with practical consequences What is Octave Equivalence? 
Works by W. J. Mitchell ( view other items matching `W. J. Mitchell`, view all matches ) Disambiguations Disambiguations: W. J. T. Mitchell [63] W. J. Mitchell [8] Animal Rites: American Culture, the Discourse of Species, and Posthumanist Theory.Cary Wolfe & W. J. T. Mitchell - 2003 - University of Chicago Press.details In Animal Rites, Cary Wolfe examines contemporary notions of humanism and ethics by reconstructing a little known but crucial underground tradition of theorizing the animal from Wittgenstein, Cavell, and Lyotard to Lévinas, Derrida, ... Animal Rights in Applied Ethics $30.99 new Amazon page Iconology: Image, Text, Ideology.W. J. T. Mitchell - 1987 - University of Chicago Press.details "[Mitchell] undertakes to explore the nature of images by comparing them with words, or, more precisely, by looking at them from the viewpoint of verbal language.... The most lucid exposition of the subject I have ever read."—Rudolf Arnheim, _Times Literary Supplement_. Gilles Deleuze in Continental Philosophy $3.08 used $20.00 new $28.00 from Amazon Amazon page Iconology: Image, Text, Ideology.W. J. T. Mitchell - 1986 - Journal of Aesthetics and Art Criticism 45 (2):211-214.details What Do Pictures Want?: The Lives and Loves of Images.W. J. T. Mitchell - 2006 - Journal of Aesthetics and Art Criticism 64 (2):291-293.details Depiction in Aesthetics The Covering Lemma Up to a Woodin Cardinal.W. J. Mitchell, E. Schimmerling & J. R. Steel - 1997 - Annals of Pure and Applied Logic 84 (2):219-255.details Axioms of Set Theory in Philosophy of Mathematics Spatial Form in Literature: Toward a General Theory.W. J. T. Mitchell - 1980 - Critical Inquiry 6 (3):539-567.details Although the notion of spatiality has always lurked in the background of discussions of literary form, the self-conscious use of the term as a critical concept is generally traced to Joseph Frank's seminal essay of 1945, "Spatial Form in Modern Literature."1 Frank's basic argument is that modernist literary works are "spatial" insofar as they replace history and narrative sequence with a sense of mythic simultaneity and disrupt the normal continuities of English prose with disjunctive syntactic arrangements. This argument has been (...) attacked on several fronts. An almost universal objection is that spatial form is a "mere metaphor" which has been given misplaced concreteness and that it denies the essentially temporal nature of literature. Some critics will concede that the metaphor contains a half-truth, but one which is likely to distract attention from more important features of the reading experience. The most polemical attacks have come from those who regard spatial form as an actual, but highly regrettable, characteristic of modern literature and who have linked it with antihistorical and even fascist ideologies.2 Advocates of Frank's position, on the other hand, have generally been content to extrapolate his premises rather than criticize them, and have compiled an ever-mounting list of modernist texts which can be seen, in some sense, as "antitemporal." The whole debate can best be advanced, in my view, not by some patchwork compromise among the conflicting claims but by a radical, even outrageous statement of the basic hypothesis in its most general form. 
I propose, therefore, that far from being a unique phenomenon of some modern literature, and far from being restricted to the features which Frank identifies in those works , spatial form is a crucial aspect of the experience and interpretation of literature in all ages and cultures. The burden of proof, in other words, is not on Frank to show that some works have spatial form but on his critics to provide an example of any work that does not. · 1. Frank's essay first appeared in Sewanee Review 53 and was revised in his The Widening Gyre . Frank's basic argument has not changed essentially even in his most avante-garde statements; he still regards spatial form "as a particular phenomenon of modern avante-garde writing." See "Spatial Form: An Answer to Critics," Critical Inquiry 4 : 231-52. A useful bibliography, "Space and Spatial Form in Narrative," is being complied by Jeffrey Smitten .· 2. This charge generally links the notion of spatial form with Wyndham Lewis and Ezra Pound, the imagist movement, the "irrationality" and pessimistic antihistoricism of modernism, and the conservative Romantic tradition. Frank discusses the complex motives behind these associations in the work of Robert Weimann and Frank Kermode in his "Answer to Critics," pp. 238-48. W. J. T. Mitchell, editor of Critical Inquiry, is the author of Blake's Composite Art, and The Last Dinosaur Book: The Life and Times of a Cultural Icon. The present essay is part of Iconology: The Image in Literature and the Visual Arts. "Diagrammatology" appeared in the Spring 1981 issue of Critical Inquiry. Leon Surette responds to the current essay in "'Rational Form in Literature'". (shrink) Husserl: Philosophy of Mind in Continental Philosophy On Narrative.W. J. T. Mitchell - 1981 - Journal of Aesthetics and Art Criticism 41 (4):456-461.details Narrative in Aesthetics Image, Space, Revolution: The Arts of Occupation.W. J. T. Mitchell - 2012 - Critical Inquiry 39 (1):8-32.details Is there a dominant global image—call it a world picture—that links the Occupy movement to the Arab Spring? Or is there any single image that captures and perhaps even motivated the widely noticed synergy and infectious mimicry between Tahrir Square and Zuccotti Park? The Late Derrida.W. J. T. Mitchell & Arnold I. Davidson (eds.) - 2007 - University of Chicago Press.details The rubric "The Late Derrida," with all puns and ambiguities cheerfully intended, points to the late work of Jacques Derrida, the vast outpouring of new writing by and about him in the period roughly from 1994 to 2004. In this period Derrida published more than he had produced during his entire career up to that point. At the same time, this volume deconstructs the whole question of lateness and the usefulness of periodization. It calls into question the "fact" of his (...) turn to politics, law, and ethics and highlights continuities throughout his oeuvre. The scholars included here write of their understandings of Derrida's newest work and how it impacts their earlier understandings of such classic texts as Glas and Of Grammatology . Some have been closely associated with Derrida since the beginning—both in France and in the United States—but none are Derrideans. That is, this volume is a work of critique and a deep and continued engagement with the thought of one of the most significant philosophers of our time. It represents a recognition that Derrida's work has yet to be addressed—and perhaps can never be addressed—in its totality. 
(shrink) Donald Davidson in 20th Century Philosophy Jacques Derrida in Continental Philosophy $15.30 used $19.04 new Amazon page Picturing Terror : Derrida's Autoimmunity.W. J. T. Mitchell - 2007 - In W. J. T. Mitchell & Arnold I. Davidson (eds.), The Late Derrida. University of Chicago Press. pp. 277-290.details Derrida: Value Theory in Continental Philosophy Romanticism and the Life of Things: Fossils, Totems, and Images.W. J. T. Mitchell - 2001 - Critical Inquiry 28 (1):167-184.details Art and the Public Sphere.Maryann de Julio & W. J. T. Mitchell - 1994 - Substance 23 (2):130.details Present Tense 2020: An Iconology of the Epoch.W. J. T. Mitchell - 2021 - Critical Inquiry 47 (2):370-406.details 10. Books of Critical Interest Books of Critical Interest (Pp. 622-631).Nancy Fraser, Peter Schwenger, Robert Morris, Bruce Holsinger, Garrett Stewart, Kate McLoughlin, Fredric Jameson, Ian Hunter & W. J. T. Mitchell - 2008 - Critical Inquiry 34 (3):543-562.details Art, Fate, and the Disciplines: Some Indicators.W. J. T. Mitchell - 2009 - Critical Inquiry 35 (4):1022.details Diagrammatology.W. J. T. Mitchell - 1981 - Critical Inquiry 7 (3):622-633.details Partitions of Large Rado Graphs.M. Džamonja, J. A. Larson & W. J. Mitchell - 2009 - Archive for Mathematical Logic 48 (6):579-606.details Let κ be a cardinal which is measurable after generically adding ${\beth_{\kappa+\omega}}$ many Cohen subsets to κ and let ${\mathcal G= ( \kappa,E )}$ be the κ-Rado graph. We prove, for 2 ≤ m < ω, that there is a finite value ${r_m^+}$ such that the set [κ] m can be partitioned into classes ${\langle{C_i:i (...) G}$ in ${\mathcal G}$ such that ${[\mathcal{G} ^\ast ] ^m\cap C_i}$ is monochromatic. It follows that ${\mathcal{G}\rightarrow (\mathcal{G} ) ^m_{<\kappa/r_m^+}}$ , that is, for any coloring of ${[ \mathcal {G} ] ^m}$ with fewer than κ colors there is a copy ${\mathcal{G} ^{\prime}}$ of ${\mathcal{G}}$ such that ${[\mathcal{G} ^{\prime} ] ^{m}}$ has at most ${r_m^+}$ colors. On the other hand, we show that there are colorings of ${\mathcal{G}}$ such that if ${\mathcal{G} ^{\prime}}$ is any copy of ${\mathcal{G}}$ then ${C_i\cap [\mathcal{G} ^{\prime} ] ^m\not=\emptyset}$ for all ${i 2 we have ${r_m^+ > r_m}$ where r m is the corresponding number of types for the countable Rado graph. (shrink) Areas of Mathematics in Philosophy of Mathematics Addressing Media.W. J. T. Mitchell - 2008 - Mediatropes 1 (1):1-18.details Media Ethics in Applied Ethics Preface to "Occupy: Three Inquiries in Disobedience".W. J. T. Mitchell - 2012 - Critical Inquiry 39 (1):1-7.details If journalism is the first draft of history, these three essays might be described as a stab at a second draft. It is an attempt by three scholars from different disciplines, with sharply contrasting methodologies, to provide an account of the protest movements of 2011, from the Arab Spring to Occupy Wall Street. We deploy the perspectives of ethnography, political thought, and iconology in an effort to produce a multidimensional picture of this momentous year of revolutions, uprisings, mass demonstrations, and—most (...) centrally—the occupations of public space by protest movements. (shrink) Groundhog Day and the Epoché.W. J. T. Mitchell - 2021 - Critical Inquiry 47 (S2):S95-S99.details Jónsson Cardinals, Erdös Cardinals, and the Core Model.W. J. 
Mitchell - 1999 - Journal of Symbolic Logic 64 (3):1065-1086.details We show that if there is no inner model with a Woodin cardinal and the Steel core model K exists, then every Jónsson cardinal is Ramsey in K, and every δ-Jónsson cardinal is δ-Erdös in K. In the absence of the Steel core model K we prove the same conclusion for any model L[E] such that either V = L[E] is the minimal model for a Woodin cardinal, or there is no inner model with a Woodin cardinal and V is (...) a generic extension of L[E]. The proof includes one lemma of independent interest: If V = L[A], where A $\subset$ κ and κ is regular, then L κ [A] is a Jónsson algebra. The proof of this result, Lemma 2.5, is very short and entirely elementary. (shrink) Cardinals and Ordinals in Philosophy of Mathematics Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic Direct download (10 more) 10. Said, Palestine, and the Humanism of Liberation Said, Palestine, and the Humanism of Liberation (Pp. 443-461).Saree Makdisi, W. J. T. Mitchell, Aamir R. Mufti, Roger Owen, Gyan Prakash, Dan Rabinowitz, Jacqueline Rose, Gayatri Spivak & Daniel Barenboim - 2005 - Critical Inquiry 31 (2):526-529.details The Languages of LandscapeLandscape and PowerToil and Plenty: Images of the Agricultural Landscape in England 1780-1890The Idea of the English Landscape Painter: Genius as Alibi in the Early Nineteenth CenturyArt and Science in German Landscape Painting 1770-1840The Spectacle of Nature: Landscape and Bourgeois Culture in Nineteenth-Century France. [REVIEW]Stephanie Ross, Mark Roskill, W. J. T. Mitchell, Christiana Payne, Kay Dian Kriz, Timothy F. Mitchell & Nicholas Green - 2000 - Journal of Aesthetics and Art Criticism 58 (4):407.details Topics in Aesthetics in Aesthetics Cloning Terror: The War of Images 2001–2004.W. J. T. Mitchell - 2008 - In Diarmuid Costello & Dominic Willsdon (eds.), The Life and Death of Images: Ethics and Aesthetics. Cornell University Press. pp. 179--207.details Cloning in Applied Ethics Editor's Note: On Narrative.W. J. T. Mitchell - 1980 - Critical Inquiry 7 (1):1-4.details The essays included in this special issue of Critical Inquiry are a product of the symposium on "Narrative: The Illusion of Sequence" held at the University of Chicago on 26-28 October 1979. The rather special character of this symposium was not fragmented into concurrent or competing sessions, and all the speakers remained throughout the entire weekend to discuss the papers of their fellow participants. Several distinguished participants, in fact, did not read papers but confined their contributions to the conversations which (...) developed over the several sessions of the three-day program. The impact of these sustained discussions is reflected in the revisions which the authors made in preparing their papers for this special issue, and thus this collection is a "product" of the symposium in a fairly precise sense. (shrink) Poststructuralism in Continental Philosophy Medium Theory: Preface to the 2003 "Critical Inquiry" Symposium.W. J. T. Mitchell - 2004 - Critical Inquiry 30 (2):324.details An Interview with Barbara Kruger.W. J. T. Mitchell & Barbara Kruger - 1991 - Critical Inquiry 17 (2):434-448.details Mitchell: Could we begin by discussing the problem of public art? 
When we spoke a few weeks ago, you expressed some uneasiness with the notion of public art, and I wonder if you could expand on that a bit.Kruger: Well, you yourself lodged it as the "problem" of public art and I don't really find it problematic inasmuch as I really don't give it very much thought. I think on a broader level I could say that my "problem" is with (...) categorization and naming: how does one constitute art and how does one constitute a public? Sometimes I think that if architecture is a slab of meat, then so-called public art is a piece of garnish laying next to it. It has a kind of decorative function. Now I'm not saying that it always has to be that way—at all—and I think perhaps that many of my colleagues are working to change that now. But all too often, it seems the case.Mitchell: Do you think of your own art, insofar as it's engaged with the commercial public sphere—that is, with advertising, publicity, mass media, and other technologies for influencing a consumer public—that it is automatically a form of public art? Or does it stand in opposition to public art?Kruger: I have a question for you: what is a public sphere which is an uncommercial public sphere? Barbara Kruger is an artist who works with words and pictures. W. J. T. Mitchell, editor of Critical Inquiry, is Gaylord Donnelly Distinguished Professor of English and art at the University of Chicago. (shrink) Jürgen Habermas in Continental Philosophy Fellows, MR, See Cesati, M.M. Gitik, W. J. Mitchell, T. Glafi, T. Strahm, M. Grohe, G. Hjorth, A. S. Kechris, S. Shelah & X. Yi - 1996 - Annals of Pure and Applied Logic 82:343.details Floating AuthorshipAgainst Theory: Literary Studies and the New Pragmatism.Peggy Kamuf & W. J. T. Mitchell - 1986 - Diacritics 16 (4):2.details Introduction: Pluralism and Its Discontents.W. J. T. Mitchell - 1986 - Critical Inquiry 12 (3):467-467.details Wayne Booth, 1921–2005.W. J. T. Mitchell - 2006 - Critical Inquiry 32 (2):375.details Dead Again.W. J. T. Mitchell - 2007 - In W. J. T. Mitchell & Arnold I. Davidson (eds.), The Late Derrida. University of Chicago Press. pp. 219-228.details Public Conversation: What the %$#! Happened to Comics?W. J. T. Mitchell & Art Spiegelman - 2014 - Critical Inquiry 40 (3):20-35.details The Violence of Public Art: "Do the Right Thing".W. J. T. Mitchell - 1990 - Critical Inquiry 16 (4):880-899.details The question naturally arises: Is public art inherently violent, or is it a provocation to violence? Is violence built into the monument in its very conception? Or is violence simply an accident that befalls some monuments, a matter of the fortunes of history? The historical record suggests that if violence is simply an accident that happens to public art, it is one that is always waiting to happen. The principal media and materials of public art are stone and metal sculpture (...) not so much by choice as by necessity. "A public sculpture," says Lawrence Alloway, "should be invulnerable or inaccessible. It should have the material strength to resist attack or be easily cleanable, but it also needs a formal structure that is not wrecked by alterations."12 The violence that surrounds public art is more, however, than simply the ever-present possibility of an accident—the natural disaster or random act of vandalism. Much of the world's public art—memorials, monuments, triumphal arches, obelisks, columns, and statues—has a rather direct reference to violence in the form of war or conquest. 
From Ozymandias to Caesar to Napoleon to Hitler, public art has served as a kind of monumentalizing of violence, and never more powerfully than when it presents the conqueror as a man of peace, imposing a Napoleonic code or a pax Romana on the world. Public sculpture that is too frank or explicit about this monumentalizing of violence, whether the Assyrian palace reliefs of the ninth century b.c., or Morris's bomb sculpture proposal of 1981, is likely to offend the sensibilities of a public committed to the repression of its own complicity in violence.13 The very notion of public art as we receive it is inseparable from what Jürgen Habermas has called "the liberal model of the public sphere," a dimension distinct from the economic, the private, and the political. This ideal realm provides the space in which disinterested citizens may contemplate a transparent emblem of their own inclusiveness and solidarity, and deliberate on the general good, free of coercion, violence, or private interests.14 12. Lawrence Alloway, "The Public Sculpture Problem," Studio International 184 : 124.13. See Leo Bersani and Ulysse Dutoit, "The Forms of Violence," October, no. 8 : 17-29, for an important critique of the "narrativization" of violence in Western art and an examination of the alternative suggested by the Assyrian palace reliefs.14. Habermas first introduced this concept in The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society, trans. Thomas Burger and Frederick Lawrence . First published in 1962, it has since become the focus of an extensive literature. See also Habermas's short encyclopedia article, "The Public Sphere," trans. Sara Lennox and Frank Lennox, New German Critique 1 : 49-55, and the introduction to it by Peter Hohendahl in the same issue, pp. 45-48. I owe much to the guidance of Miriam Hansen and Lauren Berlant on this complex and crucial topic. W. J. T. Mitchell, editor of Critical Inquiry, is Gaylord Donnelly Distinguished Service Professor of English and art at the University of Chicago. His recent book is Iconology: Image, Text, Ideology. (shrink) Realism, Irrealism, and Ideology: A Critique of Nelson Goodman.W. J. T. Mitchell - 1991 - Journal of Aesthetic Education 25 (1):23.details "Critical Inquiry" and the Ideology of Pluralism.W. J. T. Mitchell - 1982 - Critical Inquiry 8 (4):609-618.details The criterion of "arguability" has tended to steer Critical Inquiry away from the kind of pluralism which defines itself as neutral, tolerant eclecticism toward a position which I would call "dialectical pluralism." This sort of pluralism is not content with mere diversity but insists on pushing divergent theories and practices toward confrontation and dialogue. Its aim is not the mere preservation or proliferation of variety but the weeding out of error, the elimination of trivial or marginal contentions, and the clarification (...) of fundamental and irreducible differences. The goal of dialectical pluralism is not liberal toleration of opposing views from a neutral ground but transformation, conversion, or, at least, the kind of communication which clarifies exactly what is at stake in any critical conflict. A good dramatization of Critical Inquiry's editorial ideal would be the dialogue of the devil and angel in Blake's Marriage of Heaven and Hell, an exchange in which each contestant enters into and criticizes the metaphysics of his contrary and which ends happily with the angel transformed into a devil. 
(shrink) Michel Foucault in Continental Philosophy Toleration in Normative Theories in Social and Political Philosophy Pluralism as Dogmatism.W. J. T. Mitchell - 1986 - Critical Inquiry 12 (3):494-502.details It may seem a bit perverse to argue that pluralism is a kind of dogmatism, since pluralists invariably define themselves as antidogmatists. Indeed, the world would seem to be so well supplied with overt dogmatists—religious fanatics, militant revolutionaries, political and domestic tyrants—that it will probably seem unfair to suggest that the proponents of liberal, tolerant, civilized open-mindedness are guilty of a covert dogmatism. My only excuse for engaging in this exercise is that it may help to shake up some rather (...) firmly fixed ideas about dogmatism held by those who advocate some version of pluralism. Dogmatism, I want to argue, has had a very had press, some of it deserved, some of it based in misunderstandings and ignorance. Much of that bad press stems, I will suggest, from the dominance of pluralism as an intellectual ideology since the Enlightenment. If "dogmatism" is a synonym for irrationality, infelixibility, and authoritarianism, the fault lies as much with pluralism as it does with any actual dogmatism. I'd like to begin, therefore, with a definition of dogmatism that comes, not from its pluralist foes, but from a historian of religion who treats it as a fairly neutral term, describing a complex and ancient feature of social institutions. This definition comes from E. Royston Pike's Encyclopedia of Religion and Religions:DOGMA . A religious doctrine that is to be received on authority—whether of a Divine revelation, a Church Council, Holy Scripture, or a great and honoured religious teacher—and not, at least in the first instance, because it may be proved true in the light of reason. Almost always there is associated with dogma the element of Faith. The term comes from the Greek word for "to seem," and it meant originally that which seems true to anyone, i.e. has been approved or decided beyond cavil. In the New Testament it is applied to decisions of the Christian church in Jerusalem, enactments of the Jewish law, and imperial decrees, all of which were things to be accepted without argument. A little later it had come to mean simple statements of Christian belief and practice; and it was not until the 4th century, when the heretics were showing how far from simple the basic Christian beliefs really were, that it acquired the meaning of a theological interpretation of a religious fact. Then came the division of the Church into a Western and an Eastern branch, and never again was it possible to frame a dogma that might be universally held. The 39 Articles of the Church of England, the principles deduced from Calvin's "Institutes" and John Wesley's "Sermons," and the items that compose the Mormon creed may all be classed as dogmas.1 W. J. T. Mitchell, editor of Critical Inquiry, is professor of English and a member of the Committee on Art and Design at the University of Chicago. His most recent book is Iconology: Image, Text, Ideology. (shrink) The Ends of Theory: The Beijing Symposium on Critical Inquiry.W. J. T. Mitchell & Wang Ning - 2005 - Critical Inquiry 31 (2):265.details Comics as Media: Afterword.W. J. T. Mitchell - 2014 - Critical Inquiry 40 (3):255-265.details Poetic Justice: 9-11 to Now.W. J. T. 
Mitchell - 2012 - Critical Inquiry 38 (2):241-249.details The author, Editor of Critical Inquiry, discusses our new website and the changing face of criticism in the age of terror. Martin Heidegger in Continental Philosophy "Ut Pictura Theoria": Abstract Painting and the Repression of Language.W. J. T. Mitchell - 1989 - Critical Inquiry 15 (2):348-371.details This may be an especially favorable moment in intellectual history to come to some understanding of notions like "abstraction" and "the abstract," if only because these terms seem so clearly obsolete, even antiquated, at the present time. The obsolescence of abstraction is exemplified most vividly by its centrality in a period of cultural history that is widely perceived as being just behind us, the period of modernism, ranging roughly from the beginning of the twentieth century to the aftermath of the (...) Second World War.1art is now a familiar feature of our cultural landscape; it has become a monument to an era that is passing from living memory into history. The experiments of cubism and abstract expressionism are no longer "experimental" or shocking: abstraction has not been associated with the artistic avant-garde for at least a quarter of a century, and its central masterpieces are now firmly entrenched in the tradition of Western painting and safely canonized in our greatest museums. That does not mean that there will be no more abstract paintings, or that the tradition is dead; on the contrary, the obsolescence we are contemplating is in a very precise sense the precondition for abstraction's survival as a tradition that resists any possible assault from an avant-garde. Indeed, the abstract probably has more institutional and cultural power as a rearguard tradition than it ever did as an avant-garde overturning of tradition. For that very reason its self-representations need to be questioned more closely than ever, especially its account of its own nature and history. This seems important, not just to set the record straight about what abstract art was, but to enable critical and artistic experimentation in the present, and a more nuanced account of both pre-and postmodern at, both of which are in danger of being swallowed up by the formulas of abstract formalism. If art and criticism are to continue to play an oppositional and interventionist role in our time, passive acceptance and reproduction of a powerful cultural tradition like abstract art will simply not do. 1. I define modernism and "the age of abstraction" here in familiar art historical terms, as a period extending from Kandinsky and Malevich to Jasper Johns and Morris Louis. There are other views of this matter which would trace modernism back to the emergence of an avant-garde in the 1840s , or to romanticism , or to the eighteenth century . My claim would be that "the abstract" as such only becomes a definitive slogan for modernism with the emergence of abstract painting around 1900. W. J. T. Mitchell, editor of Critical Inquiry, is professor of English and a member of the Committee on Art and Design at the University of Chicago. His most recent book is Iconology: Image, Text, Ideology. (shrink) Editor's Note: The Language of Images.W. J. T. Mitchell - 1980 - Critical Inquiry 6 (3):359-362.details Holy Landscape: Israel, Palestine, and the American Wilderness.W. J. T. Mitchell - 2000 - Critical Inquiry 26 (2):193-223.details Havana Diary: Cuba's Blue Period.W. J. T. 
Mitchell - 2008 - Critical Inquiry 34 (3):601-611.details Existentialism in Continental Philosophy Report From Morocco.W. J. T. Mitchell - 2012 - Critical Inquiry 38 (4):892-901.details Every once in awhile an academic drudge gets to visit a place that dreams are made of. We all know the little game in which American scholars compete to mention the exotic locations they have been to: Paris, London, Beijing, Mumbai. But I have never aroused such open jealousy in my colleagues until I uttered the word "Casablanca."For knowledgeable tourists, this is something of a puzzle. Casablanca is routinely disrespected by the guidebooks for its lack of an authentically ancient medina (...) or a labyrinthine souk, and its paucity of museums leaves the tourist with relatively few obvious destinations. One suspects that much of the aura surrounding the city's name comes from the wholly fictional movie and the associated mystique of Humphrey Bogart and Ingrid Bergman. Moroccans are notably marginal in the film, which, in a kind of doubling of colonial occupation, treats Casablanca as an outpost of the Vichy French regime under the thumb of the Nazis. Rick's Café Américain never existed until quite recently, when a retired American diplomat decided to capitalize on the legendary bistro with a simulacrum. The real city is quite modern, with the relics of 1920s colonial art-deco-French architecture serving as a main attraction, along with the thoroughly contemporary mosque of Hassan II, designed by a French architect and finished only in the 1990s. There is also the Corniche, with its surfing beaches and exclusive cafés, clubs, and hotels. (shrink) Gitik Moti. The Negation of the Singular Cardinal Hypothesis From O = K ++. Annals of Pure and Applied Logic, Vol. 43 , Pp. 209–234. [REVIEW]W. J. Mitchell - 1991 - Journal of Symbolic Logic 56 (1):344-344.details Bashir Makhoul and Gordon Hon. The Origins of Palestinian Art. Liverpool: Liverpool University Press, 2013. 269 Pp. [REVIEW]W. J. T. Mitchell - 2016 - Critical Inquiry 42 (3):720-721.details Seeing "Do the Right Thing".W. J. T. Mitchell - 1991 - Critical Inquiry 17 (3):596-608.details I might as well say at the outset that, although I can return Christensen's compliment, and call his response "thoughtful," I am most interested in those places where the fullness of his thought, and particularly of his own language, has paralyzed his thought in compulsively repetitious patterns, and led him into interpretive maneuvers that he would surely be skeptical about in the reading of a literary text. Even more interesting is the way Christensen's antipathy to the film, and the violence (...) of the language in which eh expresses the antipathy, has prevented him from registering the plainest sensory and perceptual elements of the film text. In a rather straightforward and literal sense, Christensen has neither seen nor heard Do the Right Thing, but has screened a fantasy film of his own projection. To say Christensen has projected a fantasy, however, is not to say that his response is eccentric or merely private. On the contrary, it is a shared and shareable response, a reflex in the public imaginary of American culture at the present time. As such, it deserves patient and detailed examination. W. J. T. Mithcell, editor of Critical Inquiry, is Gaylord Donnelly Distinguished Service Professor of English and art at the University of Chicago. His most recent book is Iconology: Image, Text, Ideology. (shrink) Editorial Note.W. J. T. 
Mitchell - 2020 - Critical Inquiry 46 (4):944-945.details Edward Said: Continuing the Conversation.W. J. T. Mitchell - 2005 - Critical Inquiry 31 (2):365.details
Atomic-scale interactions between quorum sensing autoinducer molecules and the mucoid P. aeruginosa exopolysaccharide matrix

Oliver J. Hills1, Chin W. Yong2,3, Andrew J. Scott4, Deirdre A. Devine5, James Smith1 & Helen F. Chappell1

Subjects: Biofilms; Biopolymers in vivo; Computational biophysics; Density functional theory; Structure prediction

Mucoid Pseudomonas aeruginosa is a prevalent cystic fibrosis (CF) lung coloniser whose chronicity is associated with the formation of cation cross-linked exopolysaccharide (EPS) matrices, which form a biofilm that acts as a diffusion barrier, sequestering cationic and neutral antimicrobials, and making it extremely resistant to pharmacological challenge. Biofilm chronicity and virulence of the colony are regulated by quorum sensing autoinducers (QSAIs), small signalling metabolites that pass between bacteria, through the biofilm matrix, regulating genetic responses on a population-wide scale. The nature of how these molecules interact with the EPS is poorly understood, despite the fact that they must pass through the EPS matrix to reach neighbouring bacteria. Interactions at the atomic scale between two QSAI molecules, C4-HSL and PQS—both utilised by mucoid P. aeruginosa in the CF lung—and the EPS have been studied for the first time using a combined molecular dynamics (MD) and density functional theory (DFT) approach. A large-scale, calcium cross-linked, multi-chain EPS molecular model was developed and MD used to sample modes of interaction between QSAI molecules and the EPS that occur at physiological equilibrium. The thermodynamic stability of the QSAI-EPS adducts was calculated using DFT. These simulations provide a thermodynamic rationale for the apparent free movement of C4-HSL, highlight key molecular functionality responsible for EPS binding and, based on its significantly reduced mobility, suggest PQS as a viable target for quorum quenching.

Pseudomonas aeruginosa is a Gram-negative bacterium capable of colonising a wide variety of different environments and habitats. The cystic fibrosis (CF) lung is one such environment and colonisation by P.
aeruginosa is highly prevalent, leading to increased mortality in CF patients1. P. aeruginosa lung infections currently account for the majority of the morbidity and mortality seen in CF patients2 and the chronicity of P. aeruginosa infections is associated with the bacterium's ability to form a biofilm3. Bacterial biofilms are comprised of colonies of bacterial cells enveloped within an extracellular matrix (ECM)4. The ECM itself encompasses polysaccharide, protein, lipid and nucleic acid constituents, referred to collectively as the matrixome5. Explanted lungs of deceased CF patients reveal that it is the mucoid P. aeruginosa phenotype that is primarily responsible for destruction of the CF lung6. The mucoid phenotype is characterised as exopolysaccharide (EPS) alginate overproducing, where its matrixome is primarily composed of linear, acetylated, anionic alginate7,8,9,10, cross-linked by calcium (Ca2+), an ion significantly elevated in the CF lung11. Recently, we have used quantum chemical Density-Functional Theory (DFT) to construct molecular models, structurally representative of the mucoid P. aeruginosa EPS, that prove Ca2+ can induce highly stable EPS aggregation relative to other biological ions elevated in CF sputum12. The formation of stable P. aeruginosa cation cross-linked biofilm matrices relies on the quorum sensing signalling pathway. Quorum sensing (QS) is a mechanism where individual bacterial cells perceive the cell density in their local environment and, in turn, coordinate gene expression on a population-wide scale13. It is a mechanism reliant upon the production of cell-to-cell signals, called quorum sensing autoinducer (QSAI) molecules, leading to biofilm matrix proliferation14. When a single bacterium releases a QSAI molecule, it is at too small a concentration to be detected by neighbouring bacterial cells. However, if QSAI molecules are collectively released by enough bacteria, the concentration of these molecules increases past a threshold level, allowing the bacteria to recognise a critical cell mass and activate specific genes15. Specifically, for P. aeruginosa aggregates, QSAI release from approximately 2000 cells is required to initiate QS16. QS, effectively, is the mechanism underlying bacterial cell-to-cell communication and, importantly, is a mechanism active in CF lungs infected with P. aeruginosa17. Confocal laser scanning microscopy has identified the existence of three distinct ECM layers surrounding naturally occurring bacterial sub-populations embedded within biofilm matrices18. The first is a layer surrounding individual bacterial cells, the second a layer separating individual cells (intercellular ECM) and the third a layer separating different sub-populations18. Considered along-side observations that QSAI molecules can partition into the biofilm matrix19, the implication is that the QSAI molecules must pass through ECM material to reach neighbouring bacteria. The type and variety of molecular interactions that facilitate the movement of QSAI's through the ECM is not understood and these molecules are simply assumed to be freely diffusible20,21. The vast majority of Gram-negative bacteria utilise acetylated homoserine lactones (HSL) as their primary QSAI molecules15, which are typically specific to the LasR and/or RhlR transcriptional activators22. Specifically, C4-HSL and 3-oxo-C12-HSL are the two primary HSL based QSAI molecules utilised by P. aeruginosa in the CF lung17,23. 
The transcriptional regulators RhlR and LasR are bound by C4-HSL and 3-oxo-C12-HSL respectively in order to regulate the expression of several genes24,25, and transcriptome analysis has determined that between 6 and 10% of the P. aeruginosa genome is regulated by these systems26. C4-HSL is a QSAI molecule that, most notably, plays a significant role in biofilm (EPS) proliferation, maturation and virulence factor expression in P. aeruginosa biofilms24,27. There also exists a third key QSAI molecule, the Pseudomonas quinolone signal (PQS)28, which is also present in the lungs of infected CF patients29. This signal is found in higher concentrations when bacterial cultures reach the late stationary phase of growth30 and therefore plays a major role in biofilm maintenance. For example, PQS production is able to regulate the autolysis of cells31, assist in iron (Fe3+) sequestration32,33,34 and induce rhamnolipid production35. 3-oxo-C12-HSL is a QSAI molecule most implicated in the initial differentiation to the biofilm mode of life upon deposition within the CF lung, with this molecule's role becoming negligible once bacteria have established a firm attachment to the substratum14,24,27. Conversely, C4-HSL and PQS (Fig. 1) are two QSAI molecules important for establishing mature biofilms and therefore contributing to biofilm chronicity. They are molecules that will move throughout the EPS and are ideal candidates to study possible modes of interaction with the EPS matrix. The intent of this work is to rationalise which molecular interactions dictate the motion of these molecules through the EPS, beyond the explanation of simple diffusion. The use of in silico theoretical modelling, specifically molecular dynamics and molecular docking, has been used primarily to study the origin of interactions between QS inhibitor molecules and the QS regulatory proteins LasR and RhlR36,37,38. However, there has not been application of any theoretical techniques to study the nature and origin of the interactions, at the atomic-scale, between QSAI molecules and EPS, despite its obvious connection to pharmaceutical design and EPS penetration. Therefore, the purpose of this investigation is to identify the key molecular functionality of QSAI molecules, secreted by mucoid P. aeruginosa in the CF lung, which mediate interactions with the EPS matrix. To this end, finite temperature explicit solvent molecular dynamics (MD) has been employed to identify QSAI-EPS adducts which are representative of those that occur at physiological equilibrium. Combined with DFT, to ensure accurate energy calculations of QSAI-EPS systems, post-simulation thermodynamic stabilities and binding energies of such adducts have been accurately evaluated and molecular functionality pertinent for binding elucidated at the atomic-scale. Molecular structures of C4-HSL and PQS. Exothermic association of four EPS chains about Ca2+ ions As detailed in the "Materials and methods" section, a DFT procedure was employed to develop an exothermic calcium cross-linked 4-chain EPS network suitable for studying EPS-QSAI interactions. This 4-chain EPS model (either 4-PolyMG or 4-PolyM) was constructed through the fibrillar stacking of smaller Ca2+ cross-linked 2-chain EPS units (either two 2-PolyMG or two 2-PolyM systems). The formation energies (evaluated according to Eq. (1)) for each stacking arrangement in the 4-PolyMG and 4-PolyM systems are given in Tables S1 and S2 respectively. 
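To make the quantity reported in Tables S1 and S2 concrete, the sketch below shows how a stacking formation energy of this kind is typically assembled from separate total-energy calculations. Eq. (1) itself is not reproduced in this excerpt, so the conventional definition is assumed here (energy of the stacked 4-chain complex minus the energies of the two isolated 2-chain units), and every numerical value is an illustrative placeholder rather than a result from this study.

```python
# Minimal sketch of a stacking formation-energy evaluation, assuming the usual
# convention E_form = E(4-chain stack) - E(2-chain A) - E(2-chain B).
# All energies below are hypothetical placeholders in eV, NOT values from the paper.

EV_TO_KJ_PER_MOL = 96.485

def formation_energy(e_stack, e_unit_a, e_unit_b):
    """Formation energy of a stacked complex relative to its two isolated sub-units."""
    return e_stack - e_unit_a - e_unit_b

# Hypothetical total energies from separate DFT single-point calculations.
stackings = {
    "both acetyls facing away":   {"e_stack": -1502.84, "e_a": -750.91, "e_b": -750.87},
    "one acetyl facing inward":   {"e_stack": -1502.31, "e_a": -750.91, "e_b": -750.87},
    "both acetyls facing inward": {"e_stack": -1501.92, "e_a": -750.91, "e_b": -750.87},
}

for label, e in stackings.items():
    ef = formation_energy(e["e_stack"], e["e_a"], e["e_b"])
    stable = "exothermic" if ef < 0 else "endothermic"
    print(f"{label:28s}: E_form = {ef:+.2f} eV "
          f"({ef * EV_TO_KJ_PER_MOL:+.1f} kJ/mol, {stable})")
```

With real DFT total energies substituted in, a more negative value simply indicates a more strongly exothermic stacking, which is how the trends in Tables S1 and S2 are read in the discussion that follows.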
In both 4-chain systems, all stacking arrangements are thermodynamically stable (all possess negative formation energies), which shows that, independent of the orientation in which the 2-chain systems aggregate, having two 2-chain systems in the vicinity of one another, and/or combining them to form a larger 4-chain system, is thermodynamically stable. The number of ionic bonds established between the adjoining stacks (Tables S1, S2) correlates well with overall thermodynamic stability, with the 4-chain systems that establish the largest and smallest number of ionic bonds corresponding to the most and least thermodynamically stable systems respectively. Furthermore, stacking arrangements that establish an equal number of ionic bonds between adjoining stacks are comparably stable, independent of the number of hydrogen bonds also established. This, in turn, highlights that the formation of exothermic 4-chain complexes is driven more by ionic interactions than by hydrogen bonding interactions. Previous DFT modelling has highlighted that exothermic association of multiple algal alginate disaccharides about divalent cations is also driven by ionic interactions39, and this work extends this observation to bacterial alginates (the mucoid P. aeruginosa EPS).

The thermodynamic stability of the 4-PolyMG system increases as the number of acetyl groups facing the adjoining stack decreases and the number of ionic bonds between adjoining stacks subsequently increases. Having both acetyl groups facing away from the adjoining stack minimises steric repulsion and facilitates the close association of four chains about Ca2+ ions. Interestingly, it is thermodynamically stable to have two 2-PolyMG chains in the same vicinity without interacting, for example, when both acetyl groups face towards the adjoining stack. Similar observations have been made in MD simulated annealing studies investigating Ca2+ induced association of three copolymeric \(\beta\)-d-mannuronate-\(\alpha\)-l-guluronate decamers, which also identify multi-chain aggregations whereby the third chain lies in the vicinity of a Ca2+ cross-linked 2-chain system without directly binding40. In our models, this is a feature, driven by acetyl steric repulsion, that creates void spaces within the EPS. Structurally, P. aeruginosa biofilms can be described as open systems encompassing cells, extracellular matrix material and void spaces41, where the latter act as channels allowing the flow of water throughout the biofilm42. All stacking arrangements lead to ionic association of four chains in the 4-PolyM system; therefore, the presence of acetylated mannuronate–guluronate blocks in the EPS provides a structural origin for void spaces and water channels in mucoid P. aeruginosa biofilms.

Following subsequent DFT simulations of the 4-PolyM and 4-PolyMG systems, it is clear that the 4-PolyMG* system (Fig. 2) is slightly more thermodynamically stable relative to the 4-PolyM* system (Fig. S4). The stability difference is minor because these 4-chain complexes share highly similar geometrical features. Specifically, the average O-Ca2+, COO-Ca2+, OH-Ca2+, Glycosidic O-Ca2+ and Ring O-Ca2+ bond lengths in the 4-PolyMG* system are highly similar to those observed in the 4-PolyM* system (Table S3). The only ionic contact present in the 4-PolyM* system that is not present in the 4-PolyMG* system is the acetyl O-Ca2+ contact.
Ionic contacts to acetyl groups, therefore, are a geometric feature that distinguishes multi-chain, exothermic, ionic association of acetylated mannuronate EPS fractions from acetylated mannuronate–guluronate EPS fractions. Acetyl groups are involved in ionic association in the former, but are implicated in increased steric repulsion and prevention of ionic association in the latter. The slight difference in stability between these two systems is rationalised when considering the additional ionic contact established between adjoining stacks, the number of calcium ions involved in the formation of ionic bonds between adjoining stacks and their coordination numbers (CN). Higher maximum coordination numbers are observed in the 4-PolyMG* system (CN = 7), which also encompasses an additional ionic bond between the adjoining stacks, compared to the 4-PolyM* system (CN = 6). Furthermore, five Ca2+ ions are involved in establishing ionic bonds between the adjoining stacks in the 4-PolyMG* system, whereas only four are involved in the 4-PolyM* system. Overall, these three factors outline a larger 4-chain intracomplex space in the 4-PolyMG* system, compared to the 4-PolyM* system, making it more stable.

Figure 2: The 4-PolyMG* structure, corresponding to the most thermodynamically stable arrangement of four EPS chains complexed about Ca2+ ions, viewed down the x, y and z axes. Carbon atoms are shown in grey, oxygen in red, calcium in blue and hydrogen in pink. Calcium–oxygen ionic bonds are shown with dashed blue lines and hydrogen bonds are shown with dashed green lines.

Within the 4-PolyMG* system, the COO-Ca2+ contacts are the ionic bonds that are most stable (shortest of all O-Ca2+ bonds with the largest bond populations; Table S3) and occur most frequently (Fig. 2). Therefore, the COO group is the functional group most implicated in the formation of an exothermic 4-chain complex, correlating well with previous findings in MD simulations of Ca2+ induced algal alginate gelation43,44,45. Ca2+ ions with CN = 7 are coordinated to four different uronate residues, whereas when the CN = 5 or 6, the Ca2+ ions are only bound to three uronate residues. Coordination to four uronate residues per Ca2+ ion is closer to the classical egg-box description of calcium chelation by alginates46. Previous DFT modelling in the group has highlighted that calcium chelation by two bacterial alginate chains deviates from the egg-box model12, so it is interesting to observe egg-box characteristics regained as the number of coordinating bacterial alginate chains increases. Even though coordination to four uronate residues is possible, no Ca2+ ion is bound to more than three EPS chains. The implication of this is that 3-chain complexation is possible about a single Ca2+ ion, but 4-chain complexation is not; \(\ge\) 2 Ca2+ ions are needed to establish 4-chain aggregation. Finally, returning to the discussion on void spaces, fibrillar stacking of two 4-PolyMG* systems, to form an 8-chain system, would recover a stacking arrangement where acetyl groups face towards the adjoining stack, creating a void space. Therefore, this further highlights that, in acetylated mannuronate–guluronate EPS fractions, the formation of void spaces is inevitable.

Physiological EPS structure

The 4-PolyMGMD structure, which corresponds to the structure of the hydrated mucoid P. aeruginosa EPS at physiological temperature, is shown in Fig. 3.

Figure 3: The 4-PolyMGMD structure, corresponding to the structure of the hydrated mucoid P. aeruginosa EPS at physiological temperature.
Carbon atoms are shown in grey, oxygen in red, calcium in blue and hydrogen in pink. Calcium ions are labelled and calcium–oxygen ionic bonds are shown with dashed blue lines. Explicit water molecules are not shown.

Immediately, it is clear that there is a significant deviation away from the neatly stacked EPS chains obtained from the DFT modelling and the adoption of an entangled V-shaped motif. This motif is not unexpected, as it has also been observed in MD simulations of calcium algal alginates40. The creation of a V-shaped cleft introduces a structural discontinuity into the EPS, which assists in rationalising the complex, discontinuous, EPS structures observed in transmission electron microscopy (TEM) measurements on P. aeruginosa biofilms47. Interestingly, the V-shaped cleft, perhaps, also provides an origin for the large-scale branched/dendritic organisation of P. aeruginosa EPS scaffolds48. The increased conformational flexibility allows the system to undergo torsional change which maximises the establishment of COO-Ca2+ interactions. The frequency of COO-Ca2+ interactions (25 contacts) far exceeds the frequency of any other O-Ca2+ contact, with the only other Ca2+ coordinating oxygen functional group being the hydroxyl group (OH-Ca2+; 2 contacts). The OH-Ca2+, glycosidic O-Ca2+ and ring O-Ca2+ interactions, the latter two absent from the 4-PolyMGMD structure, are comparably stable in the 4-PolyMG* structure (Table S3) and are eliminated from the 4-PolyMG* structure upon thermal equilibration at 310 K as a result of the EPS chains favouring the establishment of COO-Ca2+ above all other oxygen-Ca2+ interactions. Carboxylate groups being the dominant contributor to the Ca2+ chelation geometry has been observed extensively in finite temperature MD simulations of solvated calcium alginate networks40,43,44,45,49. Combined with the 4-PolyMGMD structure obtained in this work, it is clear how the presence of temperature and solvent reduces the diversity in the geometry of the chelation site within Ca2+ chelate pockets, and highlights that the preferred mode of aggregation under physiological conditions is through the carboxylate groups.

The only structural feature retained from the 4-PolyMG* system is the acetyl groups facing away from the neighbouring stack. Therefore, there has not been a complete (180°) inversion about the chain axes in any of the EPS chains. The acetyl groups prefer to face the solvent environment rather than the neighbouring stack which, in turn, reinforces the tendency of the acetyl groups to orient further away from the main EPS chains. Only Ca 2 and Ca 6 (Fig. 3) are bound to three EPS chains; all other calcium ions are involved in coordination to two EPS chains. More calcium ions were bound to three EPS chains in the exothermic 4-PolyMG* structure and, therefore, it is clear that in the 4-PolyMGMD structure the intracomplex space is heavily reduced upon thermal equilibration at physiological temperature. Effectively, the exothermic 4-chain system (4-PolyMG*) has partitioned into two sets of 2-chain systems (2 \(\times\) 2-PolyMG systems) with only Ca 2 and Ca 6 keeping the bacterial alginate network together. The reduction in the size of the intracomplex space is the origin of the V-shaped cleft. The structural morphology of the 4-PolyMGMD system correlates well with morphologies of calcium cross-linked mannuronate–guluronate heteropolymers observed in recent large-scale multi-chain implicit solvent MD simulations50.
In these simulations, heteropolymers (copolymeric mannuronate–guluronate) have increased chain flexibility relative to homopolymers (polymannuronate), which leads to entanglement upon calcium cross-linking and gives discontinuous, V-shaped, morphologies possessing open clefts after aggregation50.

QSAI simulations

EPS-C4-HSL and EPS-PQS structures and their formation energies are shown in Figs. 4 and 5 respectively. The PQS molecule binds early and remains bound to the EPS throughout the full time-scale of the trajectory and, therefore, EPS-PQS adducts are displayed at 2 ns intervals for brevity (Fig. 5).

Figure 4: EPS-C4-HSL structures and their formation energies (eV). Carbon atoms are shown in grey, oxygen in red, calcium in blue and hydrogen in pink. Calcium–oxygen ionic bonds are shown with dashed blue lines. Formation energies were not calculated if the molecule was > 6 Å away from the EPS.

Figure 5: EPS-PQS structures and their formation energies (eV) displayed at 2 ns intervals. Carbon atoms are shown in grey, oxygen in red, calcium in blue and hydrogen in pink. Calcium–oxygen ionic bonds are shown with dashed blue lines and hydrogen bonds are shown with dashed green lines. Ionic bonds between the PQS and 4-PolyMGMD system are shown with bold pink lines.

The C4-HSL molecule fails to show any molecular interactions with the EPS. The C4-HSL molecule vacates the cleft and migrates over the EPS interface, but after 6 ns (Fig. 4) becomes separated from the EPS and does not return. In contrast, the PQS molecule forms both an ionic interaction between its ketone oxygen and a single Ca2+ ion and a hydrogen bonding interaction between its hydroxyl group and a single EPS carboxylate group (Fig. 5). Consequently, the PQS molecule forms a thermodynamically stable ionic complex with the EPS. In fact, these interactions are sufficient to keep the PQS molecule tethered in the cleft of the 4-PolyMGMD structure, overcoming hydrophobic repulsion between the hydrocarbon tail and the EPS chains, for the full time-scale of the trajectory. After 8 ns, the ionic ketone O-Ca2+ interaction is lost and is compensated for by a rearrangement of the single hydrogen bonding interaction, which keeps the PQS molecule bound within the cleft. The EPS-PQS adduct at 6 ns, where the hydroxyl hydrogen bond is absent and only the ketone O-Ca2+ contact remains, is the least thermodynamically stable configuration and, interestingly, is less stable than the EPS-PQS adduct at 10 ns where only the hydroxyl hydrogen bond exists. Fluorescence resonance energy transfer (FRET) investigations of PQS incorporation into lipopolysaccharide (LPS) scaffolds have shown the hydroxyl group to be critically important for PQS–LPS interactions51, and these simulations further call attention to the role of the PQS hydroxyl group in molecular binding.

The propensity of the PQS molecule to form ionic and hydrogen bonding interactions with the EPS significantly reduces its mobility and, conversely, the C4-HSL molecule, which forms no such interactions, is notably more mobile. Despite the C4-HSL molecule sharing similar functional groups with PQS, namely two ketone functional groups, one located on its "head" group and the other as part of an amide linkage attaching the hydrocarbon tail, neither of these groups forms any ionic interactions with the EPS Ca2+ ions. The relative hydrophobicity of C4-HSL is smaller than that of PQS, the latter of which possesses conjugated aromatic rings and a larger hydrocarbon tail.
It is, therefore, no surprise that the C4-HSL molecule partitions more readily into bulk solvent. Biofilms are heavily hydrated systems, with the major matrix component being water52. The C4-HSL molecule has the disposition to readily move into the solvent phase, and consequently into void spaces filled with water, in preference to remaining in the EPS vicinity. In this circumstance, its transport is assisted by convection53 and its diffusivity increases as a result of decreased tortuosity54. Along with the absence of any molecular interactions with the EPS, this offers non-local behaviour and gives the molecule the ability to travel greater distances throughout the biofilm. By contrast, the PQS molecule shows a preference for remaining in the EPS vicinity for the full time-scale of its trajectory. The PQS ketone and hydroxyl groups are heavily implicated in molecular interactions with the EPS, and our simulations would suggest that PQS is, potentially, a short-range QSAI signal. In fact, this complements recent secondary ion mass spectroscopy and confocal Raman microscopy studies into spatial distributions of alkyl quinolones in P. aeruginosa biofilms, which also identify PQS as a local QSAI signal55,56. AQNO, an alkyl quinolone structural analogue of PQS in which the ketone and hydroxyl groups are absent, had the larger spatial distribution56, thus offering a link, reinforced by our simulations, between the PQS ketone and hydroxyl groups and reduced movement throughout the EPS. The localisation of PQS at the boundary between infected and non-infected sub-populations, within bacteriophage-infected P. aeruginosa biofilms, behaves as a warning signal allowing non-infected bacteria in the immediate neighbourhood to avoid danger57. The ability of the PQS molecule to localise and tether to the EPS, when the C4-HSL molecule does not, suggests that this molecule, perhaps, could be selectively retained. Local PQS retention and local signal accumulation by the EPS have survival advantages and, as these simulations outline, the two oxygen-bearing functional groups are of paramount importance in facilitating this.

The EPS-C4-HSL and EPS-PQS configurations which correspond to the minima in the configurational and electrostatic energies are shown, for reference, in Fig. S5. The C4-HSL molecule is well separated from the EPS in this configuration, with the distance from its hydrocarbon tail end to the nearest EPS acetyl group and EPS-bound Ca2+ ion being ~ 16 and ~ 19 Å respectively. However, this configuration, although occupying a minimum in the configurational and electrostatic energies and being well separated from the EPS, is not the most thermodynamically stable EPS-C4-HSL structure isolated from the simulations. Whilst the C4-HSL molecule can readily partition into the solvent phase, it is more thermodynamically stable for the molecule to remain within the vicinity of the EPS, albeit not directly interacting with the EPS through ionic or hydrogen bonds. During the C4-HSL molecule's migration over the EPS, it does not interfere significantly with the EPS ionic scaffold, with the average Ca2+ coordination numbers, only accounting for EPS oxygen donors, held in the range of 4 to 4.5 throughout the full time-scale of the trajectory. Hence, binding of this molecule is not influenced by the cationic charge distribution in the EPS.
Conceivably, therefore, the C4-HSL molecule exploits solely Van der Waals (VdW) interactions with the EPS, which are not able to render the molecule immobile at physiological temperature. This, in turn, offers a thermodynamic rationale for the apparent free movement of this molecule. The EPS-PQS adduct corresponding to the minimum in the electrostatic and configurational energies (Fig. S5a) encompasses the same ionic and hydrogen bonding molecular binding modes to the EPS as is observed throughout. Unlike the analogous C4-HSL structure, this adduct also corresponds to the most thermodynamically stable EPS-PQS system, further accentuating this molecule's significantly larger propensity to bind to the EPS. It is important to note also, in this configuration, that the PQS \(\pi\) system does not align in a fashion that positions the Ca2+ above its respective plane. As such, the inference is that a quadrupole \(\pi\)-cation interaction is not a molecular interaction contributing significantly to the overall stability of the EPS-PQS complex. Indeed, previous DFT estimations of \(\pi\)-cation interactions, across an array of different aromatic systems, have shown that N-heterocyclic aromatics and the presence of electron withdrawing groups, both of which are present in PQS, lower \(\pi\)-cation binding affinities58.

Quorum quenching (QQ) has emerged in recent years as a strategy to limit and/or prevent biofilm proliferation by reducing the concentration of QSAI molecules in the biofilm, through mechanisms such as enzymatic degradation59. QQ fails to have any impact on QSAI molecules encompassed within an aqueous phase, where the QSAI molecule's motion is driven by convection53. Given the immobility of PQS relative to C4-HSL, which can partition into the solvent medium/void spaces, this work proposes PQS as a viable QQ target. In addition, the observed inability of PQS to propagate throughout the exopolysaccharide matrix underscores the requirement for this molecule to be packaged in outer-membrane vesicles (OMVs) if it is to maximise its effectiveness as a (long-range) cell-to-cell signal. Not all PQS molecules will be encompassed within OMVs, however, as PQS must mediate its own OMV packaging51,60 and, understandably, must exist as the free molecule to execute its virulence functions (the inclusion of PQS within OMVs is not associated with biological activity60). Finally, the EPS-PQS adducts are generally, considering the full time-scale of the QSAI trajectories, more thermodynamically stable than the EPS-C4-HSL systems; PQS has a higher EPS affinity. In their most stable configurations, however, the thermodynamic stabilities are comparable. Intriguingly, in their most stable configurations, it is as stable for the C4-HSL molecule not to interact with the EPS as it is for the PQS molecule to interact with the EPS.

This work probed the origin of molecular interactions between quorum sensing autoinducer (QSAI) molecules and the mucoid P. aeruginosa exopolysaccharide (EPS) matrix, with the aim of rationalising which molecular interactions govern molecular motion throughout the EPS matrix. To achieve this, a combined molecular dynamics (MD) and Density-Functional Theory (DFT) approach has been employed to identify, and calculate the thermodynamic stability of, EPS-QSAI binding configurations that occur at physiological temperature in the presence of water. Initially, DFT modelling assisted in the development of a large 4-chain EPS molecular model.
These calculations identified that at least two Ca2+ ions are needed for the aggregation of four EPS chains and that acetylated copolymeric β-d-mannuronate-α-l-guluronate 4-chain structures are able to facilitate tight complexation about Ca2+ ions when the acetyl groups are oppositely displaced. Furthermore, this structure can be distinguished from the less stable acetylated poly-β-d-mannuronate 4-chain analogue through the absence of acetyl contributions to the Ca2+ ion chelation geometry. Stable Ca2+ cross-linked 4-chain EPS systems show regained egg-box characteristics which were lost at the 2-chain level. The DFT molecular model was transformed, using finite temperature explicit solvent MD, into a molecular model more representative of that which is observed at physiological temperature and in the presence of water. This physiological structure possesses a discontinuous, V-shaped, dendritic morphology arising from a severely reduced intracomplex space. This, in turn, is due to a reduction in the diversity of the Ca2+ chelation geometry as a result of carboxylate groups dominating the Ca2+ coordination environment over all other oxygen functionality.

A physiologically representative molecular model, combined with the MD-DFT theoretical approach, has for the first time provided atomic-scale chemical insight into the functional groups required for EPS adsorption. The C4-HSL molecule interacts with the EPS solely through VdW interactions, can partition readily into bulk solvent and void spaces, and is unaffected by the cationic charge distribution. It is most thermodynamically stable for the C4-HSL molecule to exist within the vicinity of the EPS and not directly interact, which, in turn, offers a thermodynamic rationale for the apparent unperturbed motion of this molecule throughout the biofilm matrix. In contrast, the PQS molecule has the ability to form thermodynamically stable ionic complexes with EPS-bound Ca2+ as well as establishing a hydrogen bond directly to a single EPS chain. The PQS hydroxyl group is focal for mediating binding to the EPS, and the PQS molecule is rendered immobile through EPS binding. As such, these simulations support the observation that outer-membrane vesicles are required to maximise the effectiveness of PQS as a (long-range) cell-to-cell signal. Indeed, OMVs have been implicated in the transportation of PQS, but not the transportation of C4-HSL60. The MD simulations in this work answer the question, with regard to intermolecular interactions at the atomic scale, as to why this is the case. With significantly reduced EPS mobility, the PQS molecule is identified as a potential target for quorum quenching (QQ). In fact, enzymatic PQS deactivation, for example through exogenous supplementation of a 2,4-dioxygenase, has proved to be a viable strategy for eliminating PQS and PQS-related virulence from P. aeruginosa biofilms61,62. Finally, the model created and deployed in this work represents the major mucoid P. aeruginosa CF lung biofilm matrix component with the correct ionic composition. As such, the molecular interactions between the QSAIs and the EPS, which occur at physiological equilibrium, captured in these simulations, would occur at greater length scales also. Therefore, these models and simulations provide critical molecular insight into QSAI motion that is equally applicable when considering QSAI distribution in large complex living biofilms.
Density functional theory (DFT)

Density functional theory (DFT) is a quantum mechanical (computational) approach to accurately predict the energy of a molecular system. Specifically, DFT is grounded on the following principle: that the total energy of a molecular system is a unique functional of the electron density63. The true density functional, however, is unknown and various approximations to the true functional exist, each one being parameterised and made suitable for a particular chemical system. As such, it is important to ensure that the functional of choice has a proven track record in accurately predicting the energy for the system of interest. The procedure for a DFT calculation begins with the choice of an appropriate density functional and definition of the electron density. The electron density can be written in terms of the wavefunctions which, in turn, are expanded in a basis of plane waves, defined by use of periodic boundary conditions and Bloch's Theorem64. Pseudopotentials are employed to replace the highly oscillatory, and strongly localised, core electron wavefunctions with an electron–ion potential, permitting the use of smaller, more computationally tractable, plane wave basis expansions when describing the electron density64,65. It is important to note that DFT calculations employing semi-local or conventional hybrid density functionals, for example PBE and B3LYP (the popular choices for biomolecular systems), fail to model dispersion interactions. Dispersion interactions, more appropriately, can be defined as the attractive part of the Van der Waals (VdW) interaction potential between atoms and molecules that are not directly bonded. Specifically, these density functionals cannot provide the desired dependence of the dispersion interaction energy on the interatomic distance66 and, consequently, energetic predictions on large molecular systems, made using these functionals, are less accurate. To rectify this issue, an empirical potential of the form \({C}_{6}{R}^{-6}\) is added to the DFT energy, with R being the interatomic distances and \({C}_{6}\) being the dispersion coefficients67.

Computational details

All Density Functional Theory (DFT) calculations were performed using the plane-wave DFT code CASTEP64. A convergence-tested cut-off energy of 900 eV was employed, as well as a Monkhorst–Pack k-point grid of 1 × 1 × 1 to sample the Brillouin zone68. On-the-fly ultrasoft pseudopotentials were used69 alongside the PBE exchange–correlation functional70. Intra- and intermolecular dispersive forces were accounted for by applying the semi-empirical dispersion correction of Tkatchenko and Scheffler71. All molecular dynamics trajectories were computed using DL_POLY_472. The conversion of all molecular models into DL_POLY input files was performed using DL_FIELD73. Trajectories were computed with the OPLS2005 forcefield74,75 in the canonical (NVT) ensemble, where the RATTLE algorithm76 was used to constrain covalent bonds to hydrogen, meaning the integration time-step could be increased to 2 fs. The temperature was held at 310 K (body temperature) using Langevin thermostatting77. Electrostatics were treated using the Smooth-Particle-Mesh-Ewald method78 and the distance cut-offs for electrostatic and Lennard–Jones interactions were set to 1.2 nm.
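To make the pairwise dispersion correction mentioned above concrete, the sketch below sums damped −C6/R^6 contributions over a set of atomic positions. It is a schematic illustration only: in the Tkatchenko–Scheffler correction actually used in this work, the C6 coefficients and damping radii are derived from the ground-state electron density and free-atom reference data, whereas in this sketch they are simply supplied as placeholder inputs.

```python
import numpy as np

def dispersion_correction(coords, c6, r0, s6=1.0, d=20.0):
    """Schematic damped pairwise dispersion correction added to a DFT energy:
        E_disp = -s6 * sum_{i<j} f(R_ij) * C6_ij / R_ij**6,
    with a Fermi-type damping function f(R) = 1 / (1 + exp(-d * (R / R0_ij - 1))).
    coords : (N, 3) array of atomic positions in Angstrom.
    c6     : (N, N) array of pairwise C6 coefficients (supplied here as inputs;
             the Tkatchenko-Scheffler scheme derives them from the electron density).
    r0     : (N, N) array of scaled van der Waals radius sums in Angstrom.
    """
    e_disp = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            damping = 1.0 / (1.0 + np.exp(-d * (r / r0[i, j] - 1.0)))
            e_disp -= s6 * damping * c6[i, j] / r**6
    return e_disp

# Two-atom toy example with made-up parameters (not real TS values):
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.5]])
c6 = np.full((2, 2), 60.0)   # eV * Angstrom^6, placeholder
r0 = np.full((2, 2), 3.2)    # Angstrom, placeholder
print(dispersion_correction(coords, c6, r0))
```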
Defining an initial starting structure

An upregulation of the alginate biosynthetic gene cluster (algD) upon colonisation of the CF lung79 offers a mucoid P. aeruginosa CF lung biofilm matrix that is predominantly composed of bacterial alginate: an anionic, calcium cross-linked, acetylated polymer of mannuronate (M) and guluronate (G) structures possessing no contiguous G residues7,8,9,10. Molecular models of thermodynamically stable Ca2+ cross-linked acetylated copolymeric \(\beta\)-d-mannuronate-\(\alpha\)-l-guluronate (2-PolyMG) and Ca2+ cross-linked acetylated poly-\(\beta\)-d-mannuronate (2-PolyM) 2-chain complexes have previously been developed in the group and validated as being structurally representative of the mucoid P. aeruginosa EPS observed in vivo12. These models can be seen in Fig. S1 and were the starting point for the development of a larger-scale 4-chain system.

Two 2-PolyMG and two 2-PolyM systems were stacked on top of one another to create two 4-chain systems, a 4-PolyMG and a 4-PolyM system. Stacking the 2-chain complexes on top of one another ensured that a fibrillar morphology was maintained, as is observed in X-ray diffraction and SAXS measurements of calcium-alginate gels80,81. For each 4-chain system, the 2-chain complexes were stacked either parallel or antiparallel along the chain axis with the acetyl groups oriented either parallel or antiparallel. For the 4-PolyM system, which has alternating acetyl orientations, this gave four possible stacking arrangements. In the 4-PolyMG system, which has all of its acetyl groups facing the same orientation, two additional stacking arrangements arose, corresponding to the antiparallel acetyl groups both facing towards or away from the neighbouring stack. The stacking arrangements for the 4-PolyMG and 4-PolyM systems are shown in Figs. S2 and S3 respectively. Each stacking arrangement was subject to a geometry optimisation in a simulation cell measuring 42 Å \(\times\) 29 Å \(\times\) 45 Å. Due to the large number of different stacking arrangements, an initial screen was performed to identify the most stable stacking arrangement for the 4-PolyMG and 4-PolyM systems. During these screening optimisations, the SCF tolerance was set to \(1\times {10}^{-5}\) eV Atom−1 and the energy, force and displacement tolerances for the geometry optimisations were set to \(5 \times {10}^{-5}\) eV Atom−1, 0.1 eV Å−1 and \(5 \times {10}^{-3}\) Å respectively. The thermodynamic stability of each 4-chain stacking arrangement was measured by evaluating a formation energy (Eq. (1)).

$${E}_{f}={E}_{\left\{4-chain\right\}}-{2E}_{\left\{2-chain\right\}}.$$

\({E}_{\left\{4-chain\right\}}\) is the energy of the optimised 4-PolyMG or 4-PolyM stacking arrangement and \({E}_{\left\{2-chain\right\}}\) is the energy of the initial 2-PolyMG or 2-PolyM system. The formation energies for all stacking arrangements in the 4-PolyMG and 4-PolyM systems are shown in Tables S1 and S2 respectively. While all stacking arrangements were thermodynamically favourable (Eq. (1) returning a negative formation energy), the initial screen highlighted that the most stable arrangement within the 4-PolyMG system involved stacking the chains parallel with the two acetyl groups oriented antiparallel and facing away from the adjoining stack. In the 4-PolyM system, the most stable arrangement involved stacking the chains antiparallel with the acetyl groups oriented parallel. The most stable 4-PolyMG and 4-PolyM stacking arrangements, from here on labelled 4-PolyMG* and 4-PolyM* respectively, were further optimised at finer tolerances to obtain better converged ground-state geometries.
For these optimisations the SCF tolerance was set to \(2 \times {10}^{-6}\) eV Atom−1 and the energy, force and displacement tolerances for the optimisation were set to \(2 \times {10}^{-5}\) eV Atom−1, 0.05 eV Å−1 and \(2 \times {10}^{-3}\) Å respectively. The re-evaluated formation energies (according to Eq. (1)) are − 6.90 eV for the 4-PolyMG* structure and − 6.38 eV for the 4-PolyM* structure, meaning the association of two 2-PolyMG systems is slightly more exothermic than the association of two 2-PolyM systems. The 4-PolyMG* structure, therefore, corresponds to the most thermodynamically stable arrangement of four mucoid P. aeruginosa EPS chains complexed about Ca2+ ions. This structure is shown in Fig. 2 (and for reference the 4-PolyM* structure is shown in Fig. S4).

From a DFT structure to a physiological structure

The above DFT optimisations were performed at 0 K in vacuo. Therefore, although the above optimisations allowed for the quantification and identification of the most thermodynamically stable arrangement of four mucoid P. aeruginosa EPS chains complexed about Ca2+ ions, they fail to reflect possible conformational changes that could occur at physiological temperature and in the presence of water. The 4-PolyMG* structure was, therefore, used as an initial (starting) structure for a subsequent MD simulation to obtain a structure more representative of that observed at physiological temperature. This MD simulation was performed over 2 ns under periodic boundary conditions, based on a simulation cell measuring 60 Å \(\times\) 60 Å \(\times\) 60 Å encompassing the 4-PolyMG* structure and SPC (Simple Point Charge) water. The thermally equilibrated 4-PolyMG* structure is, from here on, labelled as 4-PolyMGMD and can be seen in Fig. 3. This structure corresponds to a complete molecular model of a hydrated mucoid P. aeruginosa exopolysaccharide (EPS) matrix at physiological temperature.

EPS-QSAI simulations

MD simulations were performed to sample modes of interaction between the 4-PolyMGMD structure and two QSAI molecules, C4-HSL and PQS (Fig. 1). To obtain structures of these molecules that are representative of the conformations observed at physiological temperature, the two molecules were equilibrated over 1 ns under periodic boundary conditions in a simulation cell measuring 40 Å \(\times\) 40 Å \(\times\) 40 Å, solvated with SPC water. After obtaining thermally equilibrated conformations for the two QSAI molecules, they were individually combined with the 4-PolyMGMD structure, positioned (docked) 6 Å away from the base of a V-shaped cleft that opened up during its thermal equilibration at 310 K (see Fig. 3). MD trajectories were computed over 10 ns under periodic boundary conditions in a simulation cell measuring 60 Å \(\times\) 60 Å \(\times\) 60 Å solvated with SPC water. This trajectory length was sufficient to allow each molecule to sample preferential interaction modes/binding sites. Structures (EPS-molecule adducts) were isolated from the trajectories every 1 ns, as well as at the time-step corresponding to a minimum in the configurational and electrostatic energies, for subsequent DFT thermodynamic stability calculations.

Thermodynamic stability of the EPS-QSAI adducts

If the isolated EPS-QSAI structures were separated by \(\le\) 6 Å, their thermodynamic stabilities were evaluated by means of a formation energy (Eq. (2)).
$${E}_{f}={E}_{\left\{matrix-molecule\, adduct\right\}}-\left({E}_{\left\{4-PolyM{G}_{\left\{MD\right\}}\right\}}+{E}_{\left\{molecule\right\}}\right).$$ \({E}_{\left\{matrix-molecule \,adduct\right\}}\) is the energy of an isolated EPS-molecule system, \({E}_{\left\{4-PolyM{G}_{\left\{MD\right\}}\right\}}\) is the energy of the thermally equilibrated 4-PolyMGMD system and \({E}_{\left\{molecule\right\}}\) is the energy of a thermally equilibrated molecule, either C4-HSL or PQS. The energy of each system (each term in Eq. (2)) was evaluated using 0 K in vacuo DFT single-point energy calculations where the SCF tolerance was set to \(2 \times {10}^{-6}\) eV Atom−1. Finally, Mulliken bond populations82 were calculated to classify the nature of bonding between the EPS and the molecule in each of the final EPS-molecule adducts. All data generated or analysed during this study are included in this published article (and its Supplementary Information files). Parkins, M. D., Somayaji, R. & Waters, V. J. Epidemiology, biology, and impact of clonal Pseudomonas aeruginosa infections in cystic fibrosis. Clin. Microbiol. Rev. 31, 2–38 (2018). Lyczak, J. B., Cannon, C. L. & Pier, G. B. Establishment of Pseudomonas aeruginosa infection: Lessons from a versatile opportunist. Microbes. Infect. 2, 1051–1060 (2000). Hall-Stoodley, L. & Stoodley, P. Evolving concepts in biofilm infections. Cell. Microbiol. 11, 1034–1043 (2009). Costerton, J. W., Lewandowski, Z., Caldwell, D. E., Korber, D. R. & Lappin-Scott, H. M. Microbial biofilms. Annu. Rev. Microbiol. 49, 711–745 (1995). Karygianni, L., Ren, Z., Koo, H. & Thurnheer, T. Biofilm matrixome: Extracellular components in structured microbial communities. Trends Microbiol. 28, 668–681 (2020). Bjarnsholt, T. et al. Pseudomonas aeruginosa biofilms in the respiratory tract of cystic fibrosis patients. Pediatr. Pulmonol. 44, 547–558 (2009). SkjÅk-Bræk, G., Paoletti, S. & Gianferrara, T. Selective acetylation of mannuronic acid residues in calcium alginate gels. Carbohydr. Res. 185, 119–129 (1989). SkjÅk-Bræk, G., Grasdalen, H. & Larsen, B. Monomer sequence and acetylation pattern in some bacterial alginates. Carbohydr. Res. 154, 239–250 (1986). Linker, A. & Jones, R. S. A new polysaccharide resembling alginic acid isolated from Pseudomonads. J. Biol. Chem. 241, 3845–3851 (1966). Evans, L. R. & Linker, A. Production and characterization of the slime polysaccharide of Pseudomonas aeruginosa. J. Bacteriol. 116, 915–924 (1973). Smith, D. J., Anderson, G. J., Bell, S. C. & Reid, D. W. Elevated metal concentrations in the CF airway correlate with cellular injury and disease severity. J. Cyst. Fibros 13, 289–295 (2014). Hills, O. J., Smith, J., Scott, A. J., Devine, D. A. & Chappell, H. F. Cation complexation by mucoid Pseudomonas aeruginosa extracellular polysaccharide. PLoS ONE 16, e0257026 (2021). Smith, R. S. & Iglewski, B. H. P. aeruginosa quorum-sensing systems and virulence. Curr. Opin. Microbiol. 6, 56–60 (2003). Davies, D. G. et al. The involvement of cell-to-cell signals in the development of a bacterial biofilm. Science 280, 295–298 (1998). De Kievit, T. R. & Iglewski, B. H. Bacterial quorum sensing in pathogenic relationships. Infect. Immun. 68, 4839–4849 (2000). Darch, S. E. et al. Spatial determinants of quorum signaling in a Pseudomonas aeruginosa infection model. PNAS 115, 4779–4784 (2018). Singh, P. K. et al. Quorum-sensing signals indicate that cystic fibrosis lungs are infected with bacterial biofilms. Nature 407, 762–764 (2000). Lawrence, J. 
R., Swerhone, G. D. W., Kuhlicke, U. & Neu, T. R. In situ evidence for microdomains in the polymer matrix of bacterial microcolonies. Can. J. Microbiol. 53, 450–458 (2007). Tan, C. H. et al. The role of quorum sensing signalling in EPS production and the assembly of a sludge community into aerobic granules. ISME J. 8, 1186–1197 (2014). da Silva, D. P., Schofield, M. C., Parsek, M. R. & Tseng, B. S. An update on the sociomicrobiology of quorum sensing in gram-negative biofilm development. Pathogens 6, 51 (2017). Parsek, M. R. & Greenberg, E. P. Sociomicrobiology: The connections between quorum sensing and biofilms. Trends Microbiol. 13, 27–33 (2005). Whiteley, M. & Greenberg, E. P. Promoter specificity elements in Pseudomonas aeruginosa quorum-sensing-controlled genes. J. Bacteriol. 183, 5529–5534 (2001). Erickson, D. L. et al. Pseudomonas aeruginosa quorum-sensing systems may control virulence factor expression in the lungs of patients with cystic fibrosis. Infect. Immun. 70, 1783–1790 (2002). Favre-Bonté, S., Köhler, T. & Van Delden, C. Biofilm formation by Pseudomonas aeruginosa: Role of the C4-HSL cell-to-cell signal and inhibition by azithromycin. J. Antimicrob. Chemother. 52, 598–604 (2003). Seed, P. C., Passador, L. & Iglewski, B. H. Activation of the Pseudomonas aeruginosa lasI gene by LasR and the Pseudomonas autoinducer PAI: An autoinduction regulatory hierarchy. J. Bacteriol. 177, 654–659 (1995). Schuster, M., Lostroh, C. P., Ogi, T. & Greenberg, E. P. Identification, timing, and signal specificity of Pseudomonas aeruginosa quorum-controlled genes: A transcriptome analysis. J. Bacteriol. 185, 2066–2079 (2003). Alayande, A. B., Aung, M. M. & Kim, I. S. Correlation between quorum sensing signal molecules and Pseudomonas aeruginosa's biofilm development and virulency. Curr. Microbiol. 75, 787–793 (2018). Pesci, E. C. et al. Quinolone signaling in the cell-to-cell communication system of Pseudomonas aeruginosa. Proc. Natl. Acad. Sci. U.S.A. 96, 11229–11234 (1999). Collier, D. N. et al. A bacterial cell to cell signal in the lungs of cystic fibrosis patients. FFEMS Microbiol. Lett. 215, 41–46 (2002). McKnight, S. L., Iglewski, B. H. & Pesci, E. C. The Pseudomonas quinolone signal regulates rhl quorum sensing in Pseudomonas aeruginosa. J. Bacteriol. 182, 2702 (2000). D'Argenio, D. A., Calfee, M. W., Rainey, P. B. & Pesci, E. C. Autolysis and autoaggregation in Pseudomonas aeruginosa colony morphology mutants. J. Bacteriol. 184, 6481–6489 (2002). Popat, R. et al. Environmental modification via a quorum sensing molecule influences the social landscape of siderophore production. Proc. R. Soc. B 284, 20170200 (2017). Bredenbruch, F., Geffers, R., Nimtz, M., Buer, J. & Häussler, S. The Pseudomonas aeruginosa quinolone signal (PQS) has an iron-chelating activity. Environ. Microbiol. 8, 1318–1329 (2006). Diggle, S. P. et al. The Pseudomonas aeruginosa 4-quinolone signal molecules HHQ and PQS play multifunctional roles in quorum sensing and iron entrapment. Chem. Biol. 14, 87–96 (2007). Davey, M. E., Caiazza, N. C. & O'Toole, G. A. Rhamnolipid surfactant production affects biofilm architecture in Pseudomonas aeruginosa PAO1. J. Bacteriol. 185, 1027–1036 (2003). Kim, H.-S., Lee, S.-H., Byun, Y. & Park, H.-D. 6-Gingerol reduces Pseudomonas aeruginosa biofilm formation and virulence via quorum sensing inhibition. Sci. Rep. 5, 8656 (2015). Nain, Z., Sayed, S. B., Karim, M. M., Islam, M. A. & Adhikari, U. K. 
Energy-optimized pharmacophore coupled virtual screening in the discovery of quorum sensing inhibitors of LasR protein of Pseudomonas aeruginosa. J. Biomol. Struct. Dyn. 38, 5374–5388 (2019). Hnamte, S. et al. Mosloflavone attenuates the quorum sensing controlled virulence phenotypes and biofilm formation in Pseudomonas aeruginosa PAO1: In vitro, in vivo and in silico approach. Microb. Pathog. 131, 128–134 (2019). Menakbi, C., Quignard, F. & Mineva, T. Complexation of trivalent metal cations to mannuronate type alginate models from a density functional study. J. Phys. Chem. B 120, 3615–3623 (2016). Stewart, M. B., Gray, S. R., Vasiljevic, T. & Orbell, J. D. The role of poly-M and poly-GM sequences in the metal-mediated assembly of alginate gels. Carbohydr. Polym. 112, 486–493 (2014). Lawrence, J. R., Korber, D. R., Hoyle, B. D., Costerton, J. W. & Caldwell, D. E. Optical sectioning of microbial biofilms. J. Bacteriol. 173, 6558–6567 (1991). Vogt, M., Flemming, H. & Veeman, W. S. Diffusion in Pseudomonas aeruginosa biofilms: A pulsed field gradient NMR study. J. Biotechnol. 77, 137–146 (2000). Xiang, Y., Liu, Y., Mi, B. & Leng, Y. Molecular dynamics simulations of polyamide membrane, calcium alginate gel, and their interactions in aqueous solution. Langmuir 30, 9098–9106 (2014). Plazinski, W. Molecular basis of calcium binding by polyguluronate chains. Revising the egg-box model. J. Comput. Chem. 32, 2988–2995 (2011). Plazinski, W. & Drach, M. The dynamics of the calcium-induced chain–chain association in the polyuronate systems. J. Comput. Chem. 33, 1709–1715 (2012). Grant, G. T., Morris, E. R., Rees, D. A., Smith, P. J. C. & Thom, D. Biological interactions between polysaccharides and divalent cations: The egg-box model. FEBS. Lett. 32, 195–198 (1973). Hunter, R. C. & Beveridge, T. J. High-resolution visualization of Pseudomonas aeruginosa PAO1 biofilms by freeze-substitution transmission electron microscopy. J. Bacteriol. 187, 7619 (2005). Ritenberg, M. et al. Imaging Pseudomonas aeruginosa biofilm extracellular polymer scaffolds with amphiphilic carbon dots. ACS Chem. Biol. 11, 1265–1270 (2016). Stewart, M. B., Gray, S. R., Vasiljevic, T. & Orbell, J. D. Exploring the molecular basis for the metal-mediated assembly of alginate gels. Carbohydr. Polym. 102, 246–253 (2014). Hecht, H. & Srebnik, S. Structural characterization of sodium alginate and calcium alginate. Biomacromol 17, 2160–2167 (2016). Mashburn-Warren, L. et al. Interaction of quorum signals with outer membrane lipids: Insights into prokaryotic membrane vesicle formation. Mol. Microbiol. 69, 491–502 (2008). Flemming, H.-C. et al. Biofilms: An emergent form of bacterial life. Nat. Rev. Microbiol. 14, 563–575 (2016). Tan, C. H. et al. Convection and the extracellular matrix dictate inter- and intra-biofilm quorum sensing communication in environmental systems. Environ. Sci. Technol. 54, 6730–6740 (2020). Sankaran, J. et al. Single microcolony diffusion analysis in Pseudomonas aeruginosa biofilms. NPJ Biofilms Microbiomes 5, 1–10 (2019). Baig, F. N. et al. Multimodal chemical imaging of molecular messengers in emerging Pseudomonas aeruginosa bacterial communities. Analyst 140, 6544–6552 (2015). Morales-Soto, N. et al. Spatially dependent alkyl quinolone signaling responses to antibiotics in Pseudomonas aeruginosa swarms. J. Biol. Chem. 293, 9544–9552 (2018). Bru, J. L. et al. PQS produced by the Pseudomonas aeruginosa stress response repels swarms away from bacteriophage and antibiotics. J. Bacteriol. 
https://doi.org/10.1128/JB.00383-19 (2019). Mecozzi, S., West, A. & Dougherty, D. Cation-π interactions in aromatics of biological and medicinal interest: Electrostatic potential surfaces as a useful qualitative guide. Proc. Natl. Acad. Sci. U.S.A. 93, 10566–10571 (1996). Paluch, E., Rewak-Soroczyńska, J., Jędrusik, I., Mazurkiewicz, E. & Jermakow, K. Prevention of biofilm formation by quorum quenching. Appl. Microbiol. Biotechnol. 104, 1871–1881 (2020). Mashburn, L. M. & Whiteley, M. Membrane vesicles traffic signals and facilitate group activities in a prokaryote. Nature 437, 422–425 (2005). Pustelny, C. et al. Dioxygenase-mediated quenching of quinolone-dependent quorum sensing in Pseudomonas aeruginosa. Chem. Biol. 16, 1259–1267 (2009). Arranz San Martín, A., Vogel, J., Wullich, S. C., Quax, W. J. & Fetzner, S. Enzyme-mediated quenching of the pseudomonas quinolone signal (PQS): A comparison between naturally occurring and engineered PQS-cleaving dioxygenases. Biomolecules 12, 864 (2022). Hohenberg, P. & Kohn, W. Inhomogeneous electron gas. Phys. Rev. 136, B864 (1964). Clark, S. J. et al. First principles methods using CASTEP. Z. Krist. 220, 567–570 (2005). Segall, M. D. et al. First-principles simulation: Ideas, illustrations and the CASTEP code. J. Phys. Condens. Matter. 14, 2717–2744 (2002). Grimme, S. Density functional theory with London dispersion corrections. Wiley Interdiscip. Rev. Comput. Mol. Sci. 1, 211–228 (2011). Grimme, S. Accurate description of van der Waals complexes by density functional theory including empirical corrections. J. Comput. Chem. 25, 1463–1473 (2004). Monkhorst, H. J. & Pack, J. D. Special points for Brillouin-zone integrations. Phys. Rev. B 13, 5188 (1976). Vanderbilt, D. Soft self-consistent pseudopotentials in a generalized eigenvalue formalism. Phys. Rev. B 41, 7892–7895 (1990). Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865 (1996). Tkatchenko, A. & Scheffler, M. Accurate molecular van der Waals interactions from ground-state electron density and free-atom reference data. Phys. Rev. Lett. 102, 073005 (2009). Todorov, I. T., Smith, W., Trachenko, K. & Dove, M. T. DL_POLY_3: New dimensions in molecular dynamics simulations via massive parallelism. J. Mater. Chem. 16, 1911–1918 (2006). Yong, C. W. Descriptions and implementations of DL_F notation: A natural chemical expression system of atom types for molecular simulations. J. Chem. Inf. Model. 56, 1405–1409 (2016). Jorgensen, W. L., Maxwell, D. S. & Tirado-Rives, J. Development and testing of the OPLS all-atom force field on conformational energetics and properties of organic liquids. J. Am. Chem. Soc. 118, 11225–11236 (1996). Banks, J. L. et al. Integrated modeling program, applied chemical theory (IMPACT). J. Comput. Chem. 26, 1752–1780 (2005). Andersen, H. C. Rattle: A "velocity" version of the shake algorithm for molecular dynamics calculations. J. Comput. Phys. 52, 24–34 (1983). Adelman, S. A. & Doll, J. D. Generalized Langevin equation approach for atom/solid-surface scattering: General formulation for classical scattering off harmonic solids. J. Chem. Phys. 64, 2375 (1976). Essmann, U. et al. A smooth particle mesh Ewald method. J. Chem. Phys. 103, 8577 (1998). Chitnis, C. E. & Ohman, D. E.
Genetic analysis of the alginate biosynthetic gene cluster of Pseudomonas aeruginosa shows evidence of an operonic structure. Mol. Microbiol. 8, 583–590 (1993). Agulhon, P., Robitzer, M., David, L. & Quignard, F. Structural regime identification in ionotropic alginate gels: Influence of the cation nature and alginate structure. Biomacromol 13, 215–220 (2012). Sikorski, P., Mo, F., Skjåk-Bræk, G. & Stokke, B. T. Evidence for egg-box-compatible interactions in calcium-alginate gels from fiber X-ray diffraction. Biomacromol 8, 2098–2103 (2007). Mulliken, R. S. Electronic population analysis on LCAO–MO molecular wave functions. I. J. Chem. Phys. 23, 1833–1840 (1955). This work was undertaken on ARC4, part of the High-Performance Computing facilities at the University of Leeds, UK. This work made use of computational support by CoSeC, the Computational Science Centre for Research Communities, which was made available through the Material Chemistry Consortium. School of Food Science & Nutrition, University of Leeds, Woodhouse Lane, Leeds, LS2 9JT, UK Oliver J. Hills, James Smith & Helen F. Chappell Daresbury Laboratory, Scientific Computing Department, Science and Technology Facilities Council, Keckwick Lane, Daresbury, Warrington, WA4 4AD, UK Chin W. Yong Division of Pharmacy and Optometry, School of Health Sciences, University of Manchester, Oxford Road, Manchester, M13 9PL, UK School of Chemical & Process Engineering, University of Leeds, Woodhouse Lane, Leeds, LS2 9JT, UK Andrew J. Scott School of Dentistry, University of Leeds, Clarendon Way, Leeds, LS2 9LU, UK Deirdre A. Devine Oliver J. Hills Helen F. Chappell Conceptualization: O.J.H., H.F.C.; Methodology: O.J.H., C.Y.; Investigation: O.J.H.; Visualization: O.J.H.; Supervision: J.S., A.J.S., D.A.D., H.F.C.; Writing—original draft: O.J.H., H.F.C.; Writing—review & editing: O.J.H., C.Y., J.S., A.J.S., D.A.D., H.F.C. Correspondence to Oliver J. Hills or Helen F. Chappell. Supplementary Information. Hills, O.J., Yong, C.W., Scott, A.J. et al. Atomic-scale interactions between quorum sensing autoinducer molecules and the mucoid P. aeruginosa exopolysaccharide matrix. Sci Rep 12, 7724 (2022). https://doi.org/10.1038/s41598-022-11499-9
CAMISIM: simulating metagenomes and microbial communities

Adrian Fritz, Peter Hofmann, Stephan Majda, Eik Dahms, Johannes Dröge, Jessika Fiedler, Till R. Lesker, Peter Belmann, Matthew Z. DeMaere, Aaron E. Darling, Alexander Sczyrba, Andreas Bremges & Alice C. McHardy (ORCID: orcid.org/0000-0003-2370-3430)

Microbiome volume 7, Article number: 17 (2019)

Shotgun metagenome data sets of microbial communities are highly diverse, not only due to the natural variation of the underlying biological systems, but also due to differences in laboratory protocols, replicate numbers, and sequencing technologies. Accordingly, to effectively assess the performance of metagenomic analysis software, a wide range of benchmark data sets are required. We describe the CAMISIM microbial community and metagenome simulator. The software can model different microbial abundance profiles, multi-sample time series, and differential abundance studies, includes real and simulated strain-level diversity, and generates second- and third-generation sequencing data from taxonomic profiles or de novo. Gold standards are created for sequence assembly, genome binning, taxonomic binning, and taxonomic profiling. CAMISIM generated the benchmark data sets of the first CAMI challenge. For two simulated multi-sample data sets of the human and mouse gut microbiomes, we observed high functional congruence to the real data. As further applications, we investigated the effect of varying evolutionary genome divergence, sequencing depth, and read error profiles on two popular metagenome assemblers, MEGAHIT and metaSPAdes, using several thousand small data sets generated with CAMISIM. CAMISIM can simulate a wide variety of microbial communities and metagenome data sets together with standards of truth for method evaluation. All data sets and the software are freely available at https://github.com/CAMI-challenge/CAMISIM

Extensive 16S rRNA gene amplicon and shotgun metagenome sequencing efforts have been and are being undertaken to catalogue the human microbiome in health and disease [1, 2] and to study microbial communities of medical, pharmaceutical, or biotechnological relevance [3–8]. We have since learned that naturally occurring microbial communities cover a wide range of organismal complexities (with populations ranging from half a dozen to likely tens of thousands of members), can include substantial strain-level diversity, and vary widely in represented taxa [9–12]. Analyzing these diverse communities is challenging. The problem is exacerbated by the use of a wide range of experimental setups in data generation and the rapid evolution of short- and long-read sequencing technologies [13, 14]. Owing to the large diversity of generated data, the ability to generate realistic benchmark data sets for particular experimental setups is essential for assessing computational metagenomics software. CAMI, the initiative for the Critical Assessment of Metagenome Interpretation, is a community effort aiming to generate extensive, objective performance overviews of computational metagenomics software [15]. CAMI organizes benchmarking challenges and encourages the development of standards and reproducibility in all aspects, such as data generation, software application, and result interpretation [16]. We here describe CAMISIM, which was originally written to generate the simulated metagenome data sets used in the first CAMI challenge.
It has since been extended into a versatile and highly modular metagenome simulator. We demonstrate the usability and utility of CAMISIM with several applications. We generated complex, multi-replicate benchmark data sets from taxonomic profiles of human and mouse gut microbiomes [1, 17]. We also simulated thousands of small "minimally challenging metagenomes" to characterize the effect of varying sequencing coverage, evolutionary divergence of genomes, and sequencing error profiles on the popular MEGAHIT [18] and metaSPAdes [19] assemblers.

The CAMISIM software

CAMISIM allows customization of many properties of the generated communities and data sets, such as the overall number of genomes (community complexity), strain diversity, the community genome abundance distributions, sample sizes, the number of replicates, and the sequencing technology used. For setting these options, a configuration file is needed, which is described in Additional file 1. Simulation with CAMISIM has three stages (Fig. 1):
1. Design of the community, which includes selection of the community members and their genomes, and assigning them relative abundances;
2. Metagenome sequencing data simulation; and
3. Postprocessing, where the binning and assembly gold standards are produced.

Figure 1: UML diagram of the CAMISIM workflow. CAMISIM starts with the "community design" step, which can either be de novo, requiring a taxon mapping file and reference genomes, or based on a taxonomic profile. This step produces a community genome and taxon profile which is used for the metagenome simulation using one of currently four read simulators (ART, wgsim, PBsim, NanoSim). The resulting reads and bam files mapping the reads to the original genomes are used to create the gold standards before all the files can be anonymized and shuffled in the post-processing step.

Community design

In this step, the community genome abundance profiles, called Pout, are created. These also represent the gold standard for taxonomic profiling and, from the strain to the superkingdom rank, specify the relative abundances of individual strains (genomes) or their parental taxa in percent. In addition, a genome sequence collection for the strains in Pout is generated. Both Pout and the genome sequence collection are needed for the metagenome simulation in step 2. The taxonomic composition of the simulated microbial community is either determined by user-specified taxonomic profiles or generated de novo by sampling from available genome sequences.

Profile-based design

Taxonomic profiles can be provided in BIOM (Biological Observation Matrix) format [20]. With input profiles, the NCBI complete genomes [21] are used as the sequence collection for creating metagenome data sets. Optionally, the user can choose to also include genomes marked as "scaffold" or "contig" by the NCBI. Input genomes are split at positions with multiple occurrences of ambiguous bases, such that no reads spanning contig borders within larger scaffolds are simulated. Profiles can include bacterial, archaeal, and eukaryotic taxa, as well as viruses. The taxonomic identifiers of the BIOM format are interpreted as free-text scientific names and are mapped to NCBI taxon IDs (algorithm in Additional file 1). The input profile Pin generated in this way specifies pairs (t,abt) of taxon IDs t and taxon abundances \(ab_{t} \in \mathbb {R}_{\geq 0}\). The profile taxa are usually defined at higher ranks than strain and thus have to be mapped approximately to the genome sequence collection for creating Pout.
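To make the input data structures concrete, the following sketch builds a toy Pin-style profile from (scientific name, abundance) pairs using a hard-coded name-to-NCBI-taxon-ID lookup. It is a simplified, assumption-laden illustration; CAMISIM itself resolves names against the full NCBI taxonomy as described in Additional file 1.

```python
# Toy illustration of an input profile P_in: pairs (NCBI taxon ID, abundance).
# The lookup table below is a stand-in for the real name-to-taxon-ID mapping.
name_to_taxid = {
    "Escherichia coli": 562,               # species-level entry (example values)
    "Bacteroides": 816,                    # genus-level entry
    "Faecalibacterium prausnitzii": 853,
}

raw_profile = [
    ("Escherichia coli", 12.0),
    ("Bacteroides", 55.0),
    ("Faecalibacterium prausnitzii", 33.0),
    ("Unknown placeholder taxon", 5.0),    # no mapping -> dropped with a warning
]

p_in = []
for name, abundance in raw_profile:
    taxid = name_to_taxid.get(name)
    if taxid is None:
        print(f"Warning: no NCBI taxon ID found for '{name}', skipping")
        continue
    p_in.append((taxid, abundance))

# Relative abundances in percent, as used when building P_out later on.
total = sum(ab for _, ab in p_in)
p_in = [(taxid, 100.0 * ab / total) for taxid, ab in p_in]
print(p_in)
```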
Given an ordered list of ranks R = (species, genus, family, order, class, phylum, superkingdom), CAMISIM requires as an additional parameter a highest rank rmax∈R. We define the binary operator ≺ based on the ordering of the ranks in R. Given two ranks, ri,rj∈R, we write ri≺rj if ri appears before rj in R, and we say ri is below rj. Related complete genomes are searched for all ranks below rmax. By default, this is the family rank. Another parameter is the maximum number of strains m that are included for an input taxon in a simulated sample. To create Pout from Pin, the following steps are performed: let Gin be the set of taxon IDs of the genome collection at the lowest annotated taxonomic rank, usually species or strain. For all t∈Gin, the reference taxonomy specifies a taxonomic lineage of taxon IDs (or undefined values) across the considered ranks in R. We use these to identify a collection of sets F={Gt | t=lineage taxon represented by≥1 complete genome}, which specifies for each lineage taxon the taxon IDs of available genomes from the genome collection. F is used as input for Algorithm 1. The algorithm retrieves for each t from the tuples (t,abt)∈Pin the lineage path taxt across the ranks of R (lines 2–3). Moving from the species to the highest considered rank, rmax, the algorithm determines whether for a lineage taxon tr at the considered rank r a complete genome exists, that is, whether Gt≠∅ for t=tr (lines 4–5). If this is the case, the search ends and tr is considered further (line 6). If no complete genome is found for a particular lineage, the lineage is not included in the simulated community, and a warning is issued (line 20). Next, the number of genomes X with their taxonomic IDs tr to be added to Pout is drawn from a truncated geometric distribution (Eq. 1, line 8) with a mean of \(\mu = \frac {m}{2}\) and the parameter k restricted to be less than m. $$ P(X = k) = \left(1 - \frac{1}{\mu} \right)^{k} \cdot \frac{1}{\mu} $$ If \(|G_{t_{r}}|\) is less than X, \(G_{t_{r}}\) is used entirely as Gselected, the genomes of tr that are to be included in the community. Otherwise, X genomes are drawn randomly from \(G_{t_{r}}\) to generate Gselected (lines 9–12). Genomes can optionally be used multiple times; by default, the selected genomes g∈Gselected are removed from F, such that no genome is selected twice (line 17). Based on the taxon abundances abt from Pin, the abundances abi of the selected taxa i∈Gselected for t are then inferred. First, random variables Yi are drawn from a configurable lognormal distribution, with default mean μ=1 and standard deviation σ=2 of the underlying normal distribution (Eq. 2), and then the abi are set (Eq. 3; lines 13–15). Finally, the created pairs (i,abi) are added to Pout (line 16) and Pout is returned (line 21). $$ \begin{aligned} Y_{i} &\sim \text{Lognormal}(\mu,\sigma) \\ \Leftrightarrow \frac{d}{dx}P(Y_{i}\leq x) &= \frac{1}{x\sigma\sqrt{2\pi}} \, e^{-\frac{ \left(\ln x - \mu \right)^{2}} {2\sigma^{2}}} \end{aligned} $$ $$ ab_{i} = \frac{Y_{i}}{\sum_{j\in G_{\text{selected}}}Y_{j}} \cdot ab_{t} $$ De novo design A genome sequence collection to sample and a mapping file have to be specified. The mapping file defines for each genome a taxonomic ID (per default from the NCBI taxonomy), a novelty category and an operational taxonomic unit (OTU) ID. Grouping genomes into OTUs is required for sampling related genomes, to increase strain-level diversity in the simulated microbial communities.
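Before continuing with the de novo design, the profile-based sampling logic of Eqs. 1–3 can be made concrete with a minimal sketch. This is not the CAMISIM code; the rejection-sampling loop, the guarantee of at least one genome per input taxon, and all names are assumptions of this illustration.

```python
import random

def sample_strain_count(m):
    # Truncated geometric draw of Eq. 1: P(X = k) = (1 - 1/mu)^k * 1/mu
    # with mu = m/2; values of k >= m are rejected. Guaranteeing at least
    # one genome per input taxon is an assumption of this sketch.
    mu = m / 2.0
    while True:
        k, p = 0, 1.0 / mu
        while random.random() > p:   # count failures before the first success
            k += 1
        if k < m:
            return max(k, 1)

def split_abundance(ab_t, selected_genomes, mu=1.0, sigma=2.0):
    # Eqs. 2 and 3: lognormal weights Y_i, normalised and scaled by ab_t.
    weights = {g: random.lognormvariate(mu, sigma) for g in selected_genomes}
    total = sum(weights.values())
    return {g: ab_t * w / total for g, w in weights.items()}

if __name__ == "__main__":
    random.seed(42)
    n_strains = sample_strain_count(m=5)
    genomes = [f"genome_{i}" for i in range(1, n_strains + 1)]
    print(split_abundance(ab_t=0.12, selected_genomes=genomes))
```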
The novelty category reflects how closely a query genome is related to draft or complete genomes in a genome sequence reference collection. This is used to maximize the spread of selected genomes across the range of taxonomic distances to the genome reference collection, such that genomes of "novel" strains, species, or genera are included. This distinction is relevant for evaluating reference-based taxonomic binners and profilers, which may perform differently across these different categories. The user can manually generate the mapping file as described in Additional file 1 or in [15]. If controlled sampling of strains is not required, every genome can be assigned to a different OTU ID. If no reference-based taxonomic binners or profilers are to be evaluated, or the provided genome sequence collection does not vary much in terms of taxonomic distance to publicly available genomes used as references for these programs, all genomes can be assigned the same novelty category. In addition, the number of genomes greal to be drawn from the input genome selection and the total number of genomes gtot for the community genome abundance profile Pout have to be specified. The greal real genomes are drawn from the provided genome sampling collection. An equal number of genomes is drawn for every novelty category. If the number of genomes for a category is insufficient, proportionately more are drawn from the others. In addition, CAMISIM simulates a total of gsim=gtot−greal genomes of closely related strains from the chosen real genomes. These genomes are created with an enhanced version of sgEvolver [22] (Additional file 1: Methods) from a subset of randomly selected real genomes. Given m, the maximum number of strains per OTU, up to m−1 simulated strain genomes are added per genome. The exact number of genomes X to be simulated for a selected OTU is drawn from a geometric distribution with mean \(\mu = 0.3^{-1}\) (Eq. 1). This procedure is repeated until gsim related genomes have been added to the community genome collection, which then comprises gtot=greal+gsim genomes [15]. Next, community genomes are assigned abundances. The relevant user-defined parameters for this step are the sample type and the number of samples n. In addition to single samples, multi-sample data sets (with differential abundances, replicates or time series) have become widely used in real sequencing studies [23–26], also due to their utility for genome recovery using covariance-based genome binners such as CONCOCT [27] or MetaBAT [28]. Several options for creating multi-sample metagenome data sets with these setups are provided: If simulating a single sample data set, the relative abundances are drawn from a lognormal distribution, which is commonly used to model microbial communities [29–32]. The two parameters of the lognormal distribution can be changed. By default, the mean is set to 1 and the standard deviation to 2 (Eq. 2). Setting the standard deviation σ to 0 results in a uniform distribution. The differential abundance mode models a community sampled multiple times after the environmental conditions or the DNA extraction protocols (and accordingly the community abundance profile) have been altered. This mode creates n different lognormally (Eq. 2) distributed genome abundance profiles. Metagenome data sets with multiple samples with very similar genome abundance distributions can be created using the replicates mode.
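The single-sample and differential abundance modes just described, as well as the per-OTU strain counts of the de novo design, can be sketched as follows (the replicates and time-series modes are treated next). Function names and the handling of edge cases are assumptions of this illustration, not CAMISIM's implementation.

```python
import random

def single_sample_profile(genomes, mu=1.0, sigma=2.0):
    # Lognormal community abundance profile (Eq. 2); sigma = 0 makes all
    # weights equal, i.e. a uniform profile.
    weights = {g: random.lognormvariate(mu, sigma) if sigma > 0 else 1.0
               for g in genomes}
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

def differential_profiles(genomes, n_samples, mu=1.0, sigma=2.0):
    # Differential abundance mode: n independent lognormal profiles.
    return [single_sample_profile(genomes, mu, sigma) for _ in range(n_samples)]

def simulated_strains_per_otu(m, p=0.3):
    # De novo design: number of additional strain genomes per selected OTU,
    # geometric draw with success probability p = 0.3 (mu = 0.3^-1 in Eq. 1),
    # capped at m - 1 simulated strains.
    k = 0
    while random.random() > p:
        k += 1
    return min(k, m - 1)

if __name__ == "__main__":
    random.seed(7)
    genomes = ["g1", "g2", "g3", "g4"]
    print(single_sample_profile(genomes))
    print(differential_profiles(genomes, n_samples=2))
    print(simulated_strains_per_otu(m=5))
```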
Having multiple replicates of the same metagenome has been reported to improve the results of some metagenome analysis software, such as genome binners [23, 27, 33, 34]. Based on an initial log-normal distribution D0, n samples are created by adding Gaussian noise to this initial distribution (Eq. 4). The Gaussian term accounts for all kinds of effects on the genome abundances of the metagenomic replicates including, but not limited to, different experimenters, different places of extraction, or other batch effects. $$ \begin{aligned} D_{i} &= D_{0} + \varepsilon \quad \text{with} \quad \varepsilon \sim N(0,1) \\ \Leftrightarrow \frac{d}{dx}P(\varepsilon \leq x) &= \frac{1}{\sqrt{2\pi}} \cdot e^{-\frac{1}{2}x^{2}} \end{aligned} $$ Time series metagenome data sets with multiple related samples can be created. For these, a Markov model-like simulation is performed, with the distribution of each of the n samples (Eq. 5) depending on the distribution of the previous sample plus an additional lognormal (Eq. 2) or Gaussian (Eq. 4) term. This emulates the natural process of fluctuating abundances over time and ensures that the abundance changes relative to the previously sampled metagenome do not grow very large. $$ \begin{aligned} D_{i} &= D_{i-1} + \varepsilon & \text{with} \\ D_{0} &\sim \text{Lognormal}(\mu,\sigma) & \text{and} \\ \varepsilon &\sim N(0,1) & \text{or} \\ D_{i} &= \frac{D_{i-1} + \varepsilon}{2} & \text{with} \\ \varepsilon &\sim \text{Lognormal}(\mu,\sigma) & \end{aligned} $$ Metagenome simulation Metagenome data sets are generated from the genome abundance profiles of the community design step. For each genome-specific taxon t and its abundance (t,abt)∈Pout, its genome size st, together with the total number of reads n in the sample, determines the number of generated reads nt (Eq. 6). The total number of reads n is the overall sequence sample size divided by the mean read length of the utilized sequencing technology. $$ n_{t} = n \cdot \frac{ab_{t} \cdot s_{t}}{\sum_{i \in P_{\text{out}}}ab_{i} \cdot s_{i}} $$ By default, ART [35] is used to create Illumina 2 × 150 bp paired-end reads with a HiSeq 2500 error profile. The profile has been trained on MBARC-26 [36], a defined mock community that has already been used to benchmark bioinformatics software and a full-length 16S rRNA gene amplicon sequencing protocol [37, 38], and is distributed with CAMISIM. Other ART profiles, such as the one used for the first CAMI challenge, can also be used. Further available read simulators are wgsim (https://github.com/lh3/wgsim, originally part of SAMtools [39]) for simulating error-free short reads, pbsim [40] for simulating Pacific Biosciences data and nanosim [41] for simulating Oxford Nanopore Technologies reads. The read lengths and insert sizes can be varied for some simulators. For every sample of a data set, CAMISIM generates FASTQ files and a BAM file [39]. The BAM file specifies the alignment of the simulated reads to the reference genomes. Gold standard creation and postprocessing From the simulated metagenome data sets—the FASTQ and BAM files—CAMISIM creates the assembly and binning gold standards. The software extracts the perfect assembly for each individual sample, and a perfect co-assembly of all samples together, by identifying all genomic regions with a coverage of at least one using SAMtools' mpileup and extracting these as error-free contigs.
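A compact sketch of the remaining abundance modes (Eqs. 4 and 5) and of the read-count allocation (Eq. 6) is given below. It uses numpy for brevity; clipping negative noisy abundances to zero before renormalising, and rounding read counts, are assumptions of this sketch rather than documented CAMISIM behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)

def replicate_profiles(n_genomes, n_samples, mu=1.0, sigma=2.0):
    # Replicates mode (Eq. 4): one lognormal base distribution D0 plus
    # per-sample standard Gaussian noise.
    d0 = rng.lognormal(mu, sigma, n_genomes)
    samples = [np.clip(d0 + rng.normal(0.0, 1.0, n_genomes), 0.0, None)
               for _ in range(n_samples)]
    return [s / s.sum() for s in samples]

def time_series_profiles(n_genomes, n_samples, mu=1.0, sigma=2.0):
    # Time-series mode (Eq. 5, Gaussian variant): each sample perturbs the
    # previous one, so abundances fluctuate but stay close to their history.
    d = rng.lognormal(mu, sigma, n_genomes)
    profiles = []
    for _ in range(n_samples):
        d = np.clip(d + rng.normal(0.0, 1.0, n_genomes), 0.0, None)
        profiles.append(d / d.sum())
    return profiles

def reads_per_genome(abundances, genome_sizes, total_bases, mean_read_len):
    # Eq. 6: the read budget n is split in proportion to abundance * genome size.
    n = total_bases / mean_read_len
    weights = abundances * genome_sizes
    return np.rint(n * weights / weights.sum()).astype(int)

if __name__ == "__main__":
    ab = replicate_profiles(4, 1)[0]
    sizes = np.array([5e6, 3e6, 2e6, 8e6])
    print(reads_per_genome(ab, sizes, total_bases=5e9, mean_read_len=150))
```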
The assembly gold standard does not include all genome sequences available for the simulation, but the best possible assembly of their sampled reads. CAMISIM generates the genome and taxon binning gold standards for the reads and assembled contigs, respectively. These specify the genome and taxonomic lineage that the individual sequences belong to. All sequences can be anonymized and shuffled (but tracked throughout the process), to enable their use in benchmarking challenges. Lastly, files are compressed with gzip and written to the specified output location. Comparison to the state-of-the-art We tested seven simulators and compared them to CAMISIM (Table 1). All generate Illumina data and some—NeSSM [42], BEAR [43], FASTQSim [44], and Grinder [45]—also use a taxonomic profile. Novel and unique to CAMISIM is the ability to simulate long-read data from Oxford Nanopore, hybrid data sets with multiple sequencing technologies, and multi-sample data sets, such as those with replicates, time series, or differential abundances. Grinder [45] can also create multiple samples, but only with differential abundances. In addition, CAMISIM creates gold standards for assembly (single sample assemblies and multi-sample co-assemblies), for taxonomic and genome binning of reads or contigs, and for taxonomic profiling. Finally, CAMISIM can evolve multiple strains for selected input genomes and allows specification of the degree of real and simulated intra-species heterogeneity within a data set. Table 1 Properties of popular metagenome sequence simulators Effect of data properties on assemblies We created several thousand "minimally challenging" metagenome samples by varying one data property relevant for assembly, while keeping all others the same. Using these, we studied the effect of evolutionary divergence between genomes, different error profiles, and coverage on the popular assemblers metaSPAdes [19] (version 3.12.0) and MEGAHIT [18] (versions 1.1.2 and 1.0.3), to systematically investigate reported performance declines for assemblers in the presence of strain-level diversity, uneven coverage distributions, and abnormal error profiles [15, 46, 47]. Both MEGAHIT and metaSPAdes work on de Bruijn graphs, which are created by splitting the input reads into smaller parts, the k-mers, and connecting two k-mers if they overlap by exactly k-1 letters. For every sequencing error, k erroneous k-mers are introduced into the de Bruijn graph, which might substantially impact assembly (Fig. 2). Fig. 2 Assembly graphs become more complex as coverage increases. MEGAHIT assembly graphs (k = 41) of an E. coli K12 genome for 2 ×, 32 ×, and 512 × per-base coverage, respectively, visualized with Bandage [60]. For 2 × coverage, the graph is disconnected and thus the assembly fragmented. With increasing coverage more and more unitigs can be joined, first resulting in a decent assembly for 32 × coverage, but—due to sequencing errors adding erroneous edges to the graph—a fragmented assembly again for 512 × coverage. Varying genome coverage and sequencing error rates We initially simulated samples from Escherichia coli K12 MG1655 with varying coverage and different error rates. Reads were generated at 512 × genome coverage and subsampled stepwise by 50% until 2 × coverage was reached, resulting in a sample series with 512, 256, 128, 64, 32, 16, 8, 4, and 2-fold coverage. Subsampling was employed to control variation in the sampling of different genomic regions.
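The claim above, that a single sequencing error introduces up to k erroneous k-mers into the de Bruijn graph, can be checked with a toy example. The reads, the k value, and the graph representation below are illustrative assumptions and unrelated to the assemblers' actual data structures.

```python
def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def de_bruijn_edges(reads, k):
    # Edges connect consecutive k-mers that overlap by exactly k - 1 letters.
    edges = set()
    for read in reads:
        for i in range(len(read) - k):
            edges.add((read[i:i + k], read[i + 1:i + k + 1]))
    return edges

if __name__ == "__main__":
    k = 5
    true_read = "ACGTACGTTGCA"
    erroneous = "ACGTACCTTGCA"   # a single substitution (G -> C) mid-read
    extra = kmers(erroneous, k) - kmers(true_read, k)
    print(f"{len(extra)} erroneous {k}-mers introduced by one error:", sorted(extra))
    print(f"{len(de_bruijn_edges([true_read, erroneous], k))} edges in the combined graph")
```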
To assess the effect of sequencing errors, four read data sets were simulated: three using wgsim with uniform error rates of 0%, 2%, and 5%, and one using ART with the CAMI challenge error profile (ART CAMI). Both assemblers were run on these data sets with default options, except for the phred-offset parameter for metaSPAdes, which was set to 33. Both assemblers performed similarly across all error rates and coverages, with assembly quality varying substantially with coverage (Fig. 3). Performance on the data generated with the 5% error profile was worst throughout. This is an unrealistically high error rate for Illumina data [47], which software need not necessarily be adapted to handle well. Fig. 3 Coverage dependent assembly performance for MEGAHIT and metaSPAdes. Shown are the metrics, from top to bottom: genome fraction in %, number of contigs, and NGA50 (as reported by QUAST [61]), for 0%, 2%, and 5% uniform error rate, and with the ART CAMI error profile compared to the best possible metrics (gold standard) on the ART CAMI profile (dashed black). If coverage was low, assembly failed, generating a large number of small (low NGA50) contigs covering only a small genome portion (genome fraction) across all data sets, because of uncovered regions in the genomes. In this regime, sequencing errors (denoted ε) do not play a major role (Fig. 2): the expected number of errors per genome position, Ep=cov·ε (disregarding the biased errors of short-read sequencing technologies), is far below 1 (Ep≪1). With increasing coverage, assembly improved consistently across the 0%, 2%, and ART CAMI error profile data sets and both assemblers for all metrics (Fig. 3), reaching an early plateau by 8–16 × coverage. Notably, the performance of an earlier version of MEGAHIT (1.0.3) decreased substantially (declining genome fraction and NGA50) for more than 128 × coverage, except for error-free reads. For instance, at 5% error rate, MEGAHIT version 1.0.3 generated an exponentially increasing number of contigs at high coverages, which kept the genome fraction artificially high. For these high coverages and error rates, we expect multiple errors at every position of the genome (Ep≫10). This creates de Bruijn graphs with many junctions and bubbles (Fig. 2) which cannot easily be resolved and may lead to breaking the assembly apart and covering the same part of the genome with multiple, short, and erroneous contigs (Fig. 3).
Between 95% and 99% ANI, the genome fraction and assembly size dropped substantially and contig numbers increased. This was the case both when contigs were required to map uniquely to one reference genome and when they were allowed to map to both in case of multiple optimal mappings. For more closely related genomes, the number of contigs increased drastically and the assembly size continued to drop. The genome fraction remained high when considering non-unique mappings, but decreased for unique mappings; the explanation for this observed behavior is that for an ANI of more than 99%, assemblers produced consensus contigs of the two strains that mostly aligned similarly well to both reference genomes. This was the case for all 152 genomes and their evolved counterparts. Fig. 4 Genome fraction calculated using unique or multiple best mappings in case of ties to the community genome collection. Left: genome fraction for the E. coli assembly created by MEGAHIT from error-free reads (top) and with ART CAMI error profile (bottom). Right: average genome fraction and standard deviation for all original 152 iTol genomes created by MEGAHIT from error-free reads (top) and with ART CAMI error profile (bottom). Error bars denote 1 × standard deviation. Simulating environment-specific data sets To test the ability to create metagenome data of the human microbiome, we simulated metagenomes from taxonomic profiles of the Human Microbiome Project [9] for different body sites with CAMISIM. We selected 49 samples from the airways, gastrointestinal tract, oral cavity, skin and urogenital tract, with whole genome shotgun (WGS) and 16S rRNA gene amplicon sequence data available. We used the published QIIME OTU table (https://hmpdacc.org/hmp/HMQCP/) to generate 5 Gb of simulated reads per sample with CAMISIM, resulting in 245 Gb each of Illumina and of PacBio data. Only genomes tagged as "complete genomes" in the NCBI were considered in the data set generation. To decrease the chance of OTUs not being represented by a genome, the option of allowing multiple OTUs to be represented by a single reference genome was turned on. This can be relevant, for instance, when individual community genomes are represented by multiple OTUs due to sequencing errors in the 16S rRNA data. For a functional comparison of the simulated data with the original metagenome shotgun data, we inferred KEGG Ortholog family abundance profiles from the raw read data sets [49]. To this end, all reads were searched with Diamond v0.9.10 using its blastx command with default options [50] against the KEGG GENES database (release 77, species_prokaryotes, best-hit approach) and linked to KEGG Orthology (KO) via the KEGG mapping files. KO profile similarity between the simulated and original metagenome samples was calculated with Pearson's correlation coefficient (PCC) and Spearman rank correlation (SRC), and visualized with non-metric multidimensional scaling (NMDS) [51]. For comparison, we also created functional profiles with PICRUSt [52], using a prediction model generated from 3772 KEGG genomes and corresponding 16S rRNA gene sequences according to the PICRUSt "Genome Prediction Tutorial" (Additional file 1). The PCC of the CAMISIM and original samples approached a striking 0.97 for body sites with high bacterial abundances and many sequenced genomes available, such as the GI tract and oral cavity, and still ranged from 0.72 to 0.91 for airways, skin and urogenital tract samples (Fig. 5b).
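The KO-profile comparison underlying these correlation values can be illustrated with a toy sketch using scipy; the KO identifiers and counts below are made up, and the real analysis operated on full Diamond-based KO abundance profiles rather than small dictionaries.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def ko_correlations(profile_a, profile_b):
    # Pearson and Spearman correlation between two KO abundance profiles,
    # given as {KO id: read count} dictionaries; missing KOs count as zero.
    kos = sorted(set(profile_a) | set(profile_b))
    a = np.array([profile_a.get(ko, 0) for ko in kos], dtype=float)
    b = np.array([profile_b.get(ko, 0) for ko in kos], dtype=float)
    return pearsonr(a, b)[0], spearmanr(a, b)[0]

if __name__ == "__main__":
    original = {"K00001": 120, "K00002": 15, "K00927": 310, "K01810": 45}
    simulated = {"K00001": 110, "K00002": 22, "K00927": 290, "K03841": 5}
    pcc, src = ko_correlations(original, simulated)
    print(f"PCC = {pcc:.2f}, SRC = {src:.2f}")
```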
All PCCs were 7–30% higher than the PCC of PICRUSt with the original metagenome samples. Thus, CAMISIM created metagenome samples functionally even closer to the original metagenome samples than the functional profiles created by PICRUSt. The higher PCC may also partly be due to the fact that the original and CAMISIM data were annotated by "blasting" reads against KEGG, while the PICRUSt profiles were directly generated from KEGG genome annotations. The Spearman correlation of the simulated CAMISIM samples to the original metagenome samples was slightly lower than the PCC across all body sites, and very similar for CAMISIM and PICRUSt (0–6% improvement of CAMISIM over PICRUSt). These results demonstrate the quality of the CAMISIM samples. Fig. 5 Comparison of CAMISIM and PICRUSt functional profiles for different body sites. a NMDS ordination of the functional predictions of individual samples by the different methods. The different body sites are color-coded and labeled with their sample number. The original WGS is denoted by squares, the CAMISIM result as circles and the PICRUSt result as triangles. b Mean and standard deviation of Pearson and Spearman correlation to original WGS samples per body site. C, CAMISIM; P, PICRUSt. The NMDS plot (Fig. 5a) showed that the CAMISIM and original WGS samples clustered very distinctly by body site, and more closely with each other than the original samples did with the PICRUSt profiles. Even though the urogenital tract samples did not cluster perfectly, the CAMISIM samples still formed a very distinct cluster close to the original one. Even outliers in the original samples were, at least partly, detected and correctly simulated (both the original and the simulated sample 26 of the urogenital tract clustered most closely with the gastrointestinal tract microbiomes). We also provide a multi-sample mouse gut data set for software developers to benchmark against. For 64 16S rRNA samples from the mouse gut [17], we simulated 5 Gb of Illumina and PacBio reads each. The mice were obtained from 12 different vendors and the samples characterized by 16S V4 amplicon sequencing (OTU mapping file in Additional file 1). Since only a few complete reference genomes were available for the mouse gut, genomes of "scaffold" quality were also included when downloading genomes. Discussion and conclusions CAMISIM is a flexible program for simulating a large variety of microbial communities and metagenome samples. To our knowledge, it possesses the most complete feature set for simulating realistic microbial communities and metagenome data sets. This feature set includes simulation from taxonomic profiles as templates, inclusion of both natural and simulated strain-level diversity, and modelling multi-sample data sets with different underlying community abundance distributions. Read simulators are included for short-read (Illumina) and long-read (PacBio, ONT) sequencing technologies, allowing the generation of hybrid data sets. This turns CAMISIM into a versatile metagenome simulation pipeline, as modules for new (or updated) sequencing technologies and emerging experimental setups can easily be incorporated. We systematically explored the effect of specific data properties on assembler performances on several thousand minimally challenging metagenomes. While low coverage reduced assembly quality for both assemblers, metaSPAdes and MEGAHIT performed generally well for medium to high coverages and different error profiles. Notably, MEGAHIT is computationally very efficient and overall performed well.
As noted before [15, 53], assemblers had problems with resolving closely related genomes in our experiments. For an in-depth investigation, we systematically analyzed the effect of related strains on MEGAHIT's performance across a wide range of taxa and evolutionary divergences. The average nucleotide identity (ANI) between two genomes is a robust measure of genome relatedness; an ANI value of 95% roughly corresponds to a 70% DNA-DNA reassociation value—a historical definition of bacterial species [54, 55]. For a pairwise ANI below 95%, the mixture of strains could be separated quite well and assembled into different contigs. For an ANI of more than 99%, consensus contigs of strains were produced that mostly aligned similarly well to either reference genome. In the "twilight zone" of 95–99% nucleotide identity, assembly performance dropped substantially and MEGAHIT's inability to reliably phase strain variation resulted in many small (and often redundant) contigs. For IDBA-UD [56], another de Bruijn graph-based metagenome assembler, a similar pattern has been observed [57], indicating that such behavior is common to many assemblers. Resolving strains from metagenome shotgun data is an open research question, though promising computational approaches have recently been proposed [11, 58]. The hybrid long- and short-read simulated data sets we are providing for mouse gut and human body sites could enable the development of new approaches for this task. CAMISIM will facilitate the generation of further realistic benchmarking data sets to assess their performances. With the advent of long-read metagenomics, metagenomics software needs to coevolve, e.g., metagenome assemblers should support long-read and hybrid assemblies in the future (metaSPAdes [19] is a pioneer in this regard). In fact, hybrid data sets will be key to the second CAMI challenge [59]. CAMISIM can also be used to study the effect of experimental design (e.g., number of replicates, sequencing depth, insert sizes) or intrinsic community properties, such as taxonomic composition, community abundance distributions, and organismal complexities, on program performance. Due to the enormous diversity of naturally occurring microbial communities and of the experimental and sequencing technology setups used in the field, such explorations are required to determine the most effective combinations for specific research questions. While we tried to mimic naturally occurring data sets as closely as possible, CAMISIM, especially in the de novo mode and when artificially simulating new strains, requires the user to make choices about the underlying evolutionary and ecological parameters. This includes, but is not necessarily limited to, the organismal abundance distribution of microbial communities and its parameters, as discussed in [29, 30, 32], and the parameters driving strain evolution. When developing metagenome analysis tools, these should not be optimized solely to work on individual data sets produced by CAMISIM, but should also be tested with additional, ideally real-world, data. Availability and requirements Project name: CAMISIM Project home page: https://github.com/CAMI-challenge/CAMISIM Operating system(s): UNIX Programming language: Python 2.7 Other requirements: https://github.com/CAMI-challenge/CAMISIM/wiki License: Apache 2.0 Any restrictions to use by non-academics: None. Turnbaugh PJ, Ley RE, Hamady M, Fraser-Liggett C, Knight R, Gordon JI. The human microbiome project: exploring the microbial part of ourselves in a changing world. Nature.
2007; 449(7164):804–10. https://doi.org/10.1038/nature06244. Proctor LM, Sechi S, DiGiacomo ND, Fettweis JM, Jefferson KK, et al. The integrative human microbiome project: dynamic analysis of microbiome-host omics profiles during periods of human health and disease. Cell Host Microbe. 2014; 16(3):276–89. https://doi.org/10.1016/j.chom.2014.08.014. Warnecke F, Luginbühl P, Ivanova N, Ghassemian M, Richardson TH, et al. Metagenomic and functional analysis of hindgut microbiota of a wood-feeding higher termite. Nature. 2007; 450(7169):560–5. https://doi.org/10.1038/nature06269. Hess M, Sczyrba A, Egan R, Kim TW, Chokhawala H, et al. Metagenomic discovery of biomass-degrading genes and genomes from cow rumen. Science. 2011; 331(6016):463–7. https://doi.org/10.1126/science.1200387. Bremges A, Maus I, Belmann P, Eikmeyer F, Winkler A, et al. Deeply sequenced metagenome and metatranscriptome of a biogas-producing microbial community from an agricultural production-scale biogas plant. GigaScience. 2015; 4:33. https://doi.org/10.1186/s13742-015-0073-6. Sunagawa S, Coelho LP, Chaffron S, Kultima JR, Labadie K, et al. Ocean plankton. Structure and function of the global ocean microbiome. Science. 2015; 348(6237):1261359. https://doi.org/10.1126/science.1261359. Xiao L, Feng Q, Liang S, Sonne SB, Xia Z, et al. A catalog of the mouse gut metagenome. Nat Biotechnol. 2015; 33(10):1103–8. https://doi.org/10.1038/nbt.3353. Kunath BJ, Bremges A, Weimann A, McHardy AC, Pope PB. Metagenomics and CAZyme Discovery. Methods Mol Biol. 2017; 1588:255–77. https://doi.org/10.1007/978-1-4939-6899-2_20. Huttenhower C, Gevers D, Knight R, Abubucker S, Badger JH, et al. Structure, function and diversity of the healthy human microbiome. Nature. 2012; 486(7402):207–14. https://doi.org/10.1038/nature11234. Scholz M, Ward DV, Pasolli E, Tolio T, Zolfo M, et al. Strain-level microbial epidemiology and population genomics from shotgun metagenomics. Nat Methods. 2016; 13(5):435–8. https://doi.org/10.1038/nmeth.3802. Quince C, Delmont TO, Raguideau S, Alneberg J, Darling AE, et al. DESMAN: a new tool for de novo extraction of strains from metagenomes. Genome Biol. 2017; 18(1):181. https://doi.org/10.1186/s13059-017-1309-9. Thompson LR, Sanders JG, McDonald D, Amir A, Ladau J, et al. A communal catalogue reveals earth's multiscale microbial diversity. Nature. 2017. https://doi.org/10.1038/nature24621. Quince C, Walker AW, Simpson JT, Loman NJ, Segata N. Shotgun metagenomics, from sampling to analysis. Nat Biotechnol. 2017; 35(9):833–44. https://doi.org/10.1038/nbt.3935. Goodwin S, McPherson JD, McCombie WR. Coming of age: ten years of next-generation sequencing technologies. Nat Rev Genet. 2016; 17(6):333–51. https://doi.org/10.1038/nrg.2016.49. Sczyrba A, Hofmann P, Belmann P, Koslicki D, Janssen S, et al. Critical assessment of metagenome interpretation-a benchmark of metagenomics software. Nat Methods. 2017; 14(11):1063–71. https://doi.org/10.1038/nmeth.4458. Belmann P, Dröge J, Bremges A, McHardy AC, Sczyrba A, Barton MD. Bioboxes: standardised containers for interchangeable bioinformatics software. GigaScience. 2015; 4:47. https://doi.org/10.1186/s13742-015-0087-0. Roy U, Galvez EJC, Iljazovic A, Lesker TR, Blazejewski AJ, et al. Distinct microbial communities trigger colitis development upon intestinal barrier damage via innate or adaptive immune cells. Cell Rep. 2017; 21(4):994–1008. https://doi.org/10.1016/j.celrep.2017.09.097. Li D, Liu CM, Luo R, Sadakane K, Lam TW.
MEGAHIT: an ultra-fast single-node solution for large and complex metagenomics assembly via succinct de Bruijn graph. Bioinformatics. 2015; 31(10):1674–6. https://doi.org/10.1093/bioinformatics/btv033. Nurk S, Meleshko D, Korobeynikov A, Pevzner PA. metaSPAdes: a new versatile metagenomic assembler. Genome Res. 2017;:213959–116. https://doi.org/10.1101/gr.213959.116. McDonald D, Clemente JC, Kuczynski J, Rideout JR, Stombaugh J, et al. The Biological Observation Matrix (BIOM) format or: how I learned to stop worrying and love the ome-ome. GigaScience. 2012; 1:7. https://doi.org/10.1186/2047-217X-1-7. Pruitt KD, Tatusova T, Maglott DR. NCBI reference sequences (RefSeq): a curated non-redundant sequence database of genomes, transcripts and proteins. Nucleic Acids Res. 2007; 35(Database issue):61–5. https://doi.org/10.1093/nar/gkl842. Darling ACE, Mau B, Blattner FR, Perna NT. Mauve: multiple alignment of conserved genomic sequence with rearrangements. Genome Res. 2004; 14(7):1394–403. https://doi.org/10.1101/gr.2289704. Albertsen M, Hugenholtz P, Skarshewski A, Nielsen KL, Tyson GW, Nielsen PH. Genome sequences of rare, uncultured bacteria obtained by differential coverage binning of multiple metagenomes. Nat Biotechnol. 2013; 31(6):533–8. https://doi.org/10.1038/nbt.2579. Bendall ML, Stevens SL, Chan LK, Malfatti S, Schwientek P, et al. Genome-wide selective sweeps and gene-specific sweeps in natural bacterial populations. The ISME J. 2016; 10(7):1589–601. https://doi.org/10.1038/ismej.2015.241. Stolze Y, Bremges A, Rumming M, Henke C, Maus I, et al. Identification and genome reconstruction of abundant distinct taxa in microbiomes from one thermophilic and three mesophilic production-scale biogas plants. Biotechnol Biofuels. 2016; 9:156. https://doi.org/10.1186/s13068-016-0565-3. Roux S, Chan LK, Egan R, Malmstrom RR, McMahon KD, Sullivan MB. Ecogenomics of virophages and their giant virus hosts assessed through time series metagenomics. Nat Commun. 2017;8(1). https://doi.org/10.1038/s41467-017-01086-2. Alneberg J, Bjarnason BS, de Bruijn I, Schirmer M, Quick J, et al. Binning metagenomic contigs by coverage and composition. Nat Methods. 2014; 11(11):1144–6. https://doi.org/10.1038/nmeth.3103. Kang DD, Froula J, Egan R, Wang Z. MetaBAT, an efficient tool for accurately reconstructing single genomes from complex microbial communities. PeerJ. 2015; 3:1165. https://doi.org/10.7717/peerj.1165. Curtis TP, Sloan WT, Scannell JW. Estimating prokaryotic diversity and its limits. Proc Natl Acad Sci. 2002; 99(16):10494–9. https://doi.org/10.1073/pnas.142680199. Ofiţeru ID, Lunn M, Curtis TP, Wells GF, Criddle CS, et al. Combined niche and neutral effects in a microbial wastewater treatment community. Proc Natl Acad Sci. 2010; 107(35):15345–50. https://doi.org/10.1073/pnas.1000604107. Ulrich W, Ollik M, Ugland KI. A meta-analysis of species–abundance distributions. Oikos. 2010; 119(7):1149–55. https://doi.org/10.1111/j.1600-0706.2009.18236.x. Unterseher M, Jumpponen A, Opik M, Tedersoo L, Moora M, et al. Species abundance distributions and richness estimations in fungal metagenomics–lessons learned from community ecology. Mol Ecol. 2011; 20(2):275–85. https://doi.org/10.1111/j.1365-294X.2010.04948.x. Nielsen HB, Almeida M, Juncker AS, Rasmussen S, Li J, et al. Identification and assembly of genomes and genetic elements in complex metagenomic samples without using reference genomes. Nat Biotechnol. 2014; 32(8):822–8. https://doi.org/10.1038/nbt.2939.
Imelfort M, Parks D, Woodcroft BJ, Dennis P, Hugenholtz P, Tyson GW. GroopM: an automated tool for the recovery of population genomes from related metagenomes. PeerJ. 2014; 2:603. https://doi.org/10.7717/peerj.603. Huang W, Li L, Myers JR, Marth GT. ART: a next-generation sequencing read simulator. Bioinformatics. 2012; 28(4):593–4. https://doi.org/10.1093/bioinformatics/btr708. Singer E, Andreopoulos B, Bowers RM, Lee J, Deshpande S, et al. Next generation sequencing data of a defined microbial mock community. Sci Data. 2016; 3:160081. https://doi.org/10.1038/sdata.2016.81. Bremges A, Singer E, Woyke T, Sczyrba A. MeCorS: Metagenome-enabled error correction of single cell sequencing reads. Bioinformatics. 2016; 32(14):2199–201. https://doi.org/10.1093/bioinformatics/btw144. Singer E, Bushnell B, Coleman-Derr D, Bowman B, Bowers RM, et al. High-resolution phylogenetic microbial community profiling. ISME J. 2016; 10(8):2020–032. https://doi.org/10.1038/ismej.2015.249. Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, et al. The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16):2078–9. https://doi.org/10.1093/bioinformatics/btp352. Ono Y, Asai K, Hamada M. PBSIM: PacBio reads simulator–toward accurate genome assembly. Bioinformatics. 2013; 29(1):119–21. https://doi.org/10.1093/bioinformatics/bts649. Yang C, Chu J, Warren RL, Birol I. NanoSim: nanopore sequence read simulator based on statistical characterization. GigaScience. 2017. https://doi.org/10.1093/gigascience/gix010. Jia B, Xuan L, Cai K, Hu Z, Ma L, Wei C. NeSSM: a next-generation sequencing simulator for metagenomics. PLoS ONE. 2013; 8(10):75448. https://doi.org/10.1371/journal.pone.0075448. Johnson S, Trost B, Long JR, Pittet V, Kusalik A. A better sequence-read simulator program for metagenomics. BMC Bioinformatics. 2014; 15(Suppl 9):14. https://doi.org/10.1186/1471-2105-15-s9-s14. Shcherbina A. FASTQSim: platform-independent data characterization and in silico read generation for NGS datasets. BMC Res Notes. 2014; 7(1):533. https://doi.org/10.1186/1756-0500-7-533. Angly FE, Willner D, Rohwer F, Hugenholtz P, Tyson GW. Grinder: a versatile amplicon and shotgun sequence simulator. Nucleic Acids Res. 2012; 40(12):94–4. https://doi.org/10.1093/nar/gks251. Rinke C, Schwientek P, Sczyrba A, Ivanova NN, Anderson IJ, et al. Insights into the phylogeny and coding potential of microbial dark matter. Nature. 2013; 499(7459):431–7. https://doi.org/10.1038/nature12352. Laehnemann D, Borkhardt A, McHardy AC. Denoising DNA deep sequencing data-high-throughput sequencing errors and their correction. Brief Bioinformatics. 2016; 17(1):154–79. https://doi.org/10.1093/bib/bbv029. Letunic I, Bork P. Interactive Tree Of Life (iTOL): an online tool for phylogenetic tree display and annotation. Bioinformatics. 2007; 23(1):127–8. https://doi.org/10.1093/bioinformatics/btl529. Kanehisa M, Sato Y, Kawashima M, Furumichi M, Tanabe M. KEGG as a reference resource for gene and protein annotation. Nucleic Acids Res. 2015; 44(D1):457–62. https://doi.org/10.1093/nar/gkv1070. Buchfink B, Xie C, Huson DH. Fast and sensitive protein alignment using DIAMOND. Nat Methods. 2014; 12(1):59–60. https://doi.org/10.1038/nmeth.3176. Kruskal JB. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika. 1964; 29(1):1–27. https://doi.org/10.1007/bf02289565. Langille MGI, Zaneveld J, Caporaso JG, McDonald D, Knights D, et al. Predictive functional profiling of microbial communities using 16S rRNA marker gene sequences.
Nat Biotech. 2013; 31(9):814–21. https://doi.org/10.1038/nbt.2676. Awad S, Irber L, Brown CT. Evaluating metagenome assembly on a simple defined community with many strain variants. bioRxiv. 2017. https://doi.org/10.1101/155358. Konstantinidis KT, Tiedje JM. Genomic insights that advance the species definition for prokaryotes. Proc Natl Acad Sci USA. 2005. https://doi.org/10.1073/pnas.0409727102. Varghese NJ, Mukherjee S, Ivanova N, Konstantinidis KT, Mavrommatis K, et al. Microbial species delineation using whole genome sequences. Nucleic Acids Res. 2015. https://doi.org/10.1093/nar/gkv657. Peng Y, Leung HCM, Yiu SM, Chin FYL. IDBA-UD: a de novo assembler for single-cell and metagenomic sequencing data with highly uneven depth. Bioinformatics. 2012; 28(11):1420–8. https://doi.org/10.1093/bioinformatics/bts174. DeMaere MZ, Darling AE. Deconvoluting simulated metagenomes: the performance of hard- and soft-clustering algorithms applied to metagenomic chromosome conformation capture (3c). PeerJ. 2016; 4:2676. https://doi.org/10.7717/peerj.2676. Cleary B, Brito IL, Huang K, Gevers D, Shea T, et al. Detection of low-abundance bacterial strains in metagenomic datasets by eigengenome partitioning. Nat Biotechnol. 2015; 33(10):1053–60. https://doi.org/10.1038/nbt.3329. Bremges A, McHardy AC. Critical Assessment of Metagenome Interpretation Enters the Second Round. mSystems. 2018;3(4). https://doi.org/10.1128/mSystems.00103-18. Wick RR, Schultz MB, Zobel J, Holt KE. Bandage: interactive visualization of de novo genome assemblies. Bioinformatics. 2015; 31(20):3350–2. https://doi.org/10.1093/bioinformatics/btv383. Gurevich A, Saveliev V, Vyahhi N, Tesler G. QUAST: quality assessment tool for genome assemblies. Bioinformatics. 2013; 29(8):1072–5. https://doi.org/10.1093/bioinformatics/btt086. Richter DC, Ott F, Auch AF, Schmid R, Huson DH. MetaSim—a sequencing simulator for genomics and metagenomics. PLoS ONE. 2008; 3(10):3373. https://doi.org/10.1371/journal.pone.0003373. Mende DR, Waller AS, Sunagawa S, Järvelin AI, Chan MM, et al. Assessment of metagenomic assembly using simulated next generation sequencing data. PLoS ONE. 2012; 7(2):31386. https://doi.org/10.1371/journal.pone.0031386. Bushnell B. BBMap: A fast, accurate, splice-aware aligner; 2014. https://sourceforge.net/projects/bbmap. Accessed 30 Jan 2019. The authors thank the Isaac Newton Institute for Mathematical Sciences for its hospitality during the programme MTG, which was supported by EPSRC Grant Number EP/K032208/1, and Victoria Sack for generating Fig. 1. This research project has been supported by the President's Initiative and Networking Funds of the Helmholtz Association of German Research Centres (HGF) under contract number VH-GS-202. Config files, input genomes, and metadata for the data sets generated and/or analyzed are available on GitHub: https://github.com/CAMI-challenge/CAMISIM and https://github.com/CAMI-challenge/CAMISIM-DATA. The large human and mouse gut microbiome data sets (alongside the BIOM and config files from which they were created) are available at: https://data.cami-challenge.org/participate. Adrian Fritz and Peter Hofmann contributed equally to this work. Computational Biology of Infection Research, Helmholtz Centre for Infection Research, Braunschweig, 38124, Germany Adrian Fritz, Peter Hofmann, Stephan Majda, Eik Dahms, Johannes Dröge, Jessika Fiedler, Till R. Lesker, Peter Belmann, Andreas Bremges & Alice C.
McHardy Formerly Department of Algorithmic Bioinformatics, Heinrich-Heine University Düsseldorf, Düsseldorf, 40225, Germany Peter Hofmann, Stephan Majda, Eik Dahms, Johannes Dröge, Jessika Fiedler & Alice C. McHardy German Center for Infection Research (DZIF), partner site Hannover-Braunschweig, Braunschweig, 38124, Germany Till R. Lesker & Andreas Bremges Center for Biotechnology and Faculty of Technology, Bielefeld University, Bielefeld, 33615, Germany Peter Belmann & Alexander Sczyrba The ithree institute, University of Technology Sydney, Sydney NSW, 2007, Australia Matthew Z. DeMaere & Aaron E. Darling AF, PH, SM, ED, JD, JF, MZD, AED, and AB implemented CAMISIM. AF and PB tested the software. AF, TRL, and AB performed the experiments. AF, TRL, AS, AB, and ACM interpreted the results. AF, PH, TRL, PB, AB, and ACM wrote the manuscript. AB and ACM conceived the experiments. ACM conceived and supervised the project. All authors read and approved the final manuscript. Correspondence to Alice C. McHardy. Additional file 1 Supplementary information. Taxonomic profile based community design: BIOM format details, Reference database. De novo community design: Creation of the mapping file. Genome assembly metrics. Methods: iTol, Parameters, PICRUST, Configfile. OTU mapping file. (PDF 146 kb) Fritz, A., Hofmann, P., Majda, S. et al. CAMISIM: simulating metagenomes and microbial communities. Microbiome 7, 17 (2019). https://doi.org/10.1186/s40168-019-0633-6 Metagenomics software Microbial community Metagenome assembly Genome binning Taxonomic binning Taxonomic profiling
CLMPST2019: 16TH INTERNATIONAL CONGRESS OF LOGIC, METHODOLOGY AND PHILOSOPHY OF SCIENCE AND TECHNOLOGY PROGRAM FOR WEDNESDAY, AUGUST 7TH 09:00-10:00 Session 11A: C7 SYMP Symposium on the philosophy of the historical sciences 1 (PHS-1) Organizer: Aviezer Tucker Philosophers have attempted to distinguish the Historical Sciences at least since the Neo-Kantians. The Historical Sciences attempt to infer rigorously descriptions of past events, processes, and their relations from their information preserving effects. Historical sciences infer token common causes or origins: phylogeny and evolutionary biology infer the origins of species from information preserving similarities between species, DNAs and fossils; comparative historical linguistics infers the origins of languages from information preserving aspects of existing languages and theories about the mutation and preservation of languages in time; archaeology infers the common causes of present material remains; Critical Historiography infers the human past from testimonies from the past and material remains, and Cosmology infers the origins of the universe. By contrast, the Theoretical Sciences are not interested in any particular token event, but in types of events: Physics is interested in the atom, not in this or that atom at a particular space and time; Biology is interested in the cell, or in types of cells, not in this or that token cell; Economics is interested in modeling recessions, not in this recession; and Generative Linguistics studies "Language" not any particular language that existed in a particular time and was spoken by a particular group of people. The distinctions between realms of nature and academic disciplines may be epistemically and methodologically arbitrary. If, from an epistemic and methodological perspective, historiography, the study of the human past, has more in common with Geology than with the Social Sciences that have more in common with Agronomy than with historiography, we need to redraw the boundaries of philosophies of the special disciplines. This is of course highly controversial and runs counter to attempts to distinguish the historical sciences by the use of narrative explanations, reenactment or empathic understanding. The Historical Sciences may be distinguished from the Theoretical Sciences according to their objects of study, tokens vs. types; from Experimental Sciences according to their methodologies, inference from evidence vs. experimenting with it; and from natural sciences according to the realm of nature they occupy. The criteria philosophers proposed for these distinctions were related to larger issues in epistemology: Do the Historical and Theoretical Sciences offer different kinds of knowledge? Do the Historical and Theoretical sciences support each other's claims for knowledge, and if so, how?; metaphysics and ontology: Do the types of objects the Historical and Theoretical Sciences attempt to study, represent, describe, or explain differ, and if so, how does it affect their methodologies?; and the philosophy of science: What is science and how do the Historical and Theoretical Sciences relate to this ideal? Giuseppina D'Oro (Keele University, UK) Stephen Boulter (Oxford Brookes University, UK) On the possibility and meaning of truth in the historical sciences ABSTRACT.
The familiar challenges to historiographical knowledge turn on epistemological concerns having to do with the unobservability of historical events, or with the problem of establishing a sufficiently strong inferential connection between evidence and the historiographical claim one wishes to convert from a true belief into knowledge. This paper argues that these challenges miss a deeper problem, viz., the lack of obvious truth-makers for historiographical claims. The metaphysical challenge to historiography is that reality does not appear to co-operate in our cognitive endeavours by providing truth-makers for claims about historical entities and events. Setting out this less familiar, but more fundamental, challenge to the very possibility of historiography is the first aim of this paper. The various ways in which this challenge might be met are then set out, including ontologically inflationary appeals to abstract objects of various kinds, or to "block" theories of time. The paper closes with the articulation of an ontologically parsimonious solution to the metaphysical challenge to historiography. The cost of this approach is a revision to standard theories of truth. The central claim here is that the standard theories of truth have mistaken distinct causes of truth for truth itself. This mistake leads to distorted expectations regarding truth-makers for historiographical claims. The truth-makers of historiographical claims are not so much the historical events themselves (for they do not exist) but atemporal modal facts about the order of things of which those events were a part. Aviezer Tucker (Harvard University, United States) ABSTRACT. The inference of origins distinguishes the historical sciences from the theoretical sciences. Scientific inferences of origins are distinct in inferring reliably probable pasts. They base their inferences of origins on information transmitted from origins, past events and processes, to present receivers, evidence. They include most obviously the origins of species in Evolutionary Biology, origins of languages in Comparative Historical Linguistics, origins of rock formations and the shapes of continents in Geology, the origins of the universe in Cosmology, the origins of texts in Textual Criticism, original historical events in scientific Historiography, and the origins of forms of art and craft like pottery in Archaeology. This paper analyses the concept of origin, its metaphysics and epistemology as distinct from those of causes. I argue that origins are tokens of types of information sources. Origins are past events that transmitted information that reached the present. Entities in the present that receive that information are receivers. Information preserved in receivers may be used to infer properties of their origins. Origin is a relational concept. As much as a cause can only be identified in relation to its effects and there are no causes without effects, origin can only be identified in relation to receivers and there are no origins without receivers. Origins transmit encoded information signals to receivers. There are many different types of information signals, transmission channels, and types of encoding: Background radiation travelled from the origin of the universe to scientific instruments today. Species transmit information about their properties and ancestry via DNA through reproduction to descendant species. During transmission, information passes through a period of latency when it is not expressed.
Latency can vary in length from the age of the universe in the case of background radiation to the brief moment between sending and receiving an email. Information signals are mixed with varying levels of noise and have different levels of equivocation, that is, loss of signal. Types and tokens of processes of encoding and decoding have varying levels of reliability (fidelity): the preservation, at the end of the process, of the information transmitted from the origins at the beginning. Reliability reflects the ratio of preservation of information in receivers to the transmitted information. Some information is lost during transmission (equivocation) and noise that does not carry information is mixed with the signal. For example, we are all the descendants of "prehistoric" peoples. But the information they transmitted about themselves orally through traditions to contemporary societies was lost in a few generations due to equivocation. What we can know about them is through information preserved in material and artistic objects and our DNA. Societies cannot transmit information reliably over centuries without a written form of language that can preserve information reliably. I clarify what origins are and how we come to know them by analyzing the conceptual and epistemic distinctions between origins and causes. This analysis justifies the introduction of origins as a new concept to epistemology and philosophy of science to supplement and partly replace philosophical discussions of causation. 09:00-10:30 Session 11B: A2 SYMP Logic, agency, and rationality 1 (LoARa-1) Organizers: Valentin Goranko and Frederik Van De Putte The concept of rational agency is broadly interdisciplinary, bringing together philosophy, social psychology, sociology, decision and game theory. The scope and impact of the area of rational agency have been steadily expanding in the past decades, also involving technical disciplines such as computer science and AI, where multi-agent systems of different kinds (e.g. robotic teams, computer and social networks, institutions, etc.) have become a focal point for modelling and analysis. Rational agency relates to a range of key concepts: knowledge, beliefs, knowledge and communication, norms, action and interaction, strategic ability, cooperation and competition, social choice etc. The use of formal models and logic-based methods for analysing these and other aspects of rational agency has become an increasingly popular and successful approach to dealing with their complex diversity and interaction. This symposium will bring together different perspectives and approaches to the study of rational agency and rational interaction in the context of philosophical logic. The symposium talks are divided into three thematic clusters, each representing a session and consisting of 4-5 presentations, as follows. I. Logic, Rationality, and Game-theoretic Semantics. Applying logic-based methods and formal logical systems to reasoning in decision and game theory is a major and increasingly popular approach to agency and rationality. Formal logical languages allow us to specify principles of strategic behaviour and interaction between agents, and essential game-theoretic notions, including solution concepts and rationality principles. Formal logical systems provide precise and unambiguous semantics and enable correct and reliable reasoning about these, while involving the concepts of knowledge, beliefs, intentions, ability, etc. II. Deontic Logic, Agency, and Action.
Logics of agency and interaction such as STIT and deontic logics have been very influential and generally appreciated approaches to normative reasoning and theory of actions. Active directions of research in this area include the normative status of actions vs. propositions, causality and responsibility, collective and group oughts and permissions, and further refinements of the STIT framework stemming from the works of Belnap, Horty and others. III. Logic, Social Epistemology, and Collective Decision-making. Rational agency and interaction also presuppose an epistemological dimension, while intentional group agency is inextricably linked to social choice theory. In this thematic cluster, various logical and formal models are discussed that allow shedding light on these factors and processes. Valentin Goranko (Stockholm University, Sweden) Location: Room 152+153 Karl Nygren (Stockholm University, Sweden) Varieties of permission for complex actions ABSTRACT. The main problem in deontic logic based on propositional dynamic logic is how to define the normative status of complex actions based on the normative status of atomic actions, transitions and states. There are two main approaches to this problem in the literature: the first defines the normative status of an action in terms of the normative status of the possible outcome states of the action (Broersen, 2004; Meyer, 1988), while the second defines the normative status of an action in terms of the normative status of the transitions occurring in the possible executions of the action (van der Meyden, 1996). In this work, I focus on interpretations of permission concepts. In particular, I address what I take to be two shortcomings in the two main approaches to permission in dynamic logic. First, when assessing an agent's behavior from a normative viewpoint, one must often take into account both the results brought about by the agent, and the means by which those results were brought about. Consequently, when deciding whether a complex action is to be permitted or not, one must, in many cases, take into account both the normative status of the possible outcome states of the action, and the normative status of the atomic actions that occur in the complex action: choosing one of the two is not enough. Second, most existing accounts, with the exception of the work of Kulicki and Trypuz (2015), consider the permissibility of actions only relative to their complete executions, i.e. the possible executions where each step in the complex action is carried out. However, in the presence of non-determinism it may happen that some initial part of a complex sequential action leads to a state where the remaining part of the action cannot be executed. This possibility can lead to counterintuitive consequences when one considers strong forms of permission in combination with non-deterministic choice. Such cases show that also partial executions of complex actions are important from a normative viewpoint. Taking both permitted states and permitted atomic actions as primitive allows for a wide variety of permission concepts for complex actions to be defined. Moreover, the distinction between complete and partial executions of complex actions offers further options for defining permission concepts. Based on these points, I define a variety of permission concepts and investigate their formal properties. Broersen, J. (2004). Action negation and alternative reductions for dynamic deontic logic. Journal of Applied Logic 2, 153-168. Kulicki, P., and Trypuz, R. 
(2015). Completely and partially executable sequences of actions in deontic context. Synthese 192, 1117-1138. Meyer, J.-J. Ch. (1988). A different approach to deontic logic: Deontic logic viewed as a variant of dynamic logic. Notre Dame Journal of Formal Logic 29(1), 109-136. van der Meyden, R. (1996). The dynamic logic of permission. Journal of Logic and Computation 6(3), 465-479. Alessandra Marra (Bayreuth University, Germany) Dominik Klein (University of Bamberg, Bayreuth University, Germany) From Oughts to Goals PRESENTER: Alessandra Marra ABSTRACT. Suppose I believe sincerely and with conviction that today I ought to repay my friend Ann the 10 euro that she lent me. But I do not make any plan for repaying my debt: Instead, I arrange to spend my entire day at the local spa enjoying aromatherapy treatments. This seems wrong. Enkrasia is the principle of rationality that rules out the above situation. More specifically, by (an interpretation of) the Enkratic principle, rationality requires that if an agent sincerely and with conviction believes she ought to X, then X-ing is a goal in her plan. This principle plays a central role within the domain of practical rationality, and has recently been receiving considerable attention in practical philosophy (see Broome 2013, Horty 2015). This presentation pursues two aims. Firstly, we want to analyze the logical structure of Enkrasia in light of the interpretation just described. This is, to the best of our knowledge, a largely novel project within the literature. Much existing work in modal logic deals with various aspects of practical rationality starting from Cohen and Levesque's seminal 1990 paper. The framework presented here aims to complement this literature by explicitly addressing Enkrasia. The principle, in fact, bears some non-trivial conceptual and formal implications. This leads to the second aim of the talk. We want to address the repercussions that Enkrasia has for deontic logic. To this end, we elaborate on the distinction between so-called "basic oughts" and "derived oughts", and show how this distinction is especially meaningful in the context of Enkrasia. Moreover, we address issues related to the filtering of inconsistent oughts, the restricted validity of deontic closure, and the stability of oughts and goals under dynamics. In pursuit of these two aims, we provide a multi-modal neighborhood logic with three characteristic operators: A non-normal operator for basic oughts, a non-normal operator for goals in plans, and a normal operator for derived oughts. Based on these operators we build two modal logical languages with different expressive powers. Both languages are evaluated on tree-like models of future courses of events, enriched with additional structure representing basic oughts, goals and derived oughts. We show that the two modal languages are sound and weakly (resp. strongly) complete with respect to the class of models defined. Moreover, we provide a dynamic extension of the logic by means of product updates. Thijs De Coninck (Ghent University, Belgium) Frederik Van De Putte (Ghent University, Belgium) Reciprocal Group Oughts PRESENTER: Thijs De Coninck ABSTRACT. In [2], Horty shows that the framework of STIT logic can be used to reason about what agents and groups ought to do in a multi-agent setting. To decide what groups ought to do he relies on a utility function that assigns a unique value to each possible outcome of their group actions. 
He then makes use of a dominance relation to define the optimal choices of a group. When generalizing the utilitarian models of Horty to cases where each agent has his own utility function, Horty's approach requires each group to have a utility function as well. There are several ways to do this. In [4], each group is assigned an independent utility function. This has the disadvantage that there is no connection between the preferences of a group and its members. Another option is to define the utility of a given outcome for a group of agents as the mean of the utilities of that outcome for the group's members, as is done in [3]. However, this requires that utilities of individual agents be comparable. A third option is pursued in [5], where Turrini proposes to generalize Horty's notion of dominance such that an action of a group X dominates another action X' just in case, relative to the utility function of each group member, X dominates X'. The optimal actions of a group can then be defined using this modified dominance relation. This approach, however, leads to undesirable outcomes in certain types of strategic interaction (e.g. a prisoner's dilemma). Here, we present a new approach towards evaluating group actions in STIT logic by taking considerations of reciprocity into account. By reciprocity we mean that agents can help each other reach their desired outcomes through choosing actions that are in each other's interest. We draw upon the work of Grossi and Turrini [1] to identify certain group actions as having different types of reciprocal properties. For example, a group action can be such that, for each agent a in the group, there is some other agent a' such that the action of a' is optimal given the utility function of a. We compare these properties and show that by first selecting a certain type of reciprocal action and only then applying dominance reasoning we are left with group actions that have a number of desirable properties. Next, we show in which types of situations agents can expect to benefit by doing their part in these reciprocal group actions. We then define what groups ought to do in terms of the optimal reciprocal group actions. We call the resulting deontic claims reciprocal oughts, in contradistinction to the utilitarian oughts of [2] and strategic oughts of [3]. We end by comparing each of these deontic operators using some examples of strategic interaction. [1] Davide Grossi and Paolo Turrini. Dependence in games and dependence games. Autonomous Agents and Multi-Agent Systems, 25(2):284–312, 2012. [2] John F. Horty. Agency and deontic logic. Oxford University Press, 2001. [3]Barteld Kooi and Allard Tamminga. Moral conflicts between groups of agents. Journal of Philosophical Logic, 37(1):1–21, 2008. [4] Allard Tamminga. Deontic logic for strategic games. Erkenntnis, 78(1):183– 200, 2013. [5] Paolo Turrini. Agreements as norms. In International Conference on Deontic Logic in Computer Science, pages 31–45. Springer, 2012. 09:00-10:30 Session 11C: B6 SYMP Karl Popper: His science and his philosophy 1 (KRP-1) Organizer: Zuzana Parusniková Of all philosophers of the 20th century, few built more bridges between academic disciplines than did Karl Popper. For most of his life, Karl Popper made contributions to a wide variety of fields in addition to the epistemology and the theory of scientific method for which he is best known. 
Problems in quantum mechanics, and in the theory of probability, dominate the second half of Popper's Logik der Forschung (1934), and several of the earliest items recorded in §2 ('Speeches and Articles') of Volume 1 of The Writings of Karl Popper, such as item 2-5 on the quantum-mechanical uncertainty relations, item 2-14 on nebular red-shifts, and item 2-43 (and other articles) on the arrow of time, show his enthusiasm for substantive problems in modern physics and cosmology. Interspersed with these were a number of articles in the 1940s on mathematical logic, and in the 1950s on the axiomatization of the theory of probability (and on other technical problems in this area). Later he made significant contributions to discussions in evolutionary biology and on the problem of consciousness. All these interests (except perhaps his interest in formal logic) continued unabated throughout his life. The aim of this symposium is to illustrate, and to evaluate, some of the interventions, both substantive and methodological, that Karl Popper made in the natural and mathematical sciences. An attempt will be made to pinpoint the connections between these contributions and his more centrally philosophical concerns, especially his scepticism, his realism, his opposition to subjectivism, and his indeterminism. The fields that have been chosen for the symposium are quantum mechanics, evolutionary biology, cosmology, mathematical logic, statistics, and the brain-mind liaison. Zuzana Parusniková (Czech Academy of Sciences, Czechia) Olival Freire Junior (Universidade Federal da Bahia, Brazil) Popper and the Quantum Controversy ABSTRACT. It is almost a truism to say that the philosophy of science systematized by Karl Popper was heavily influenced by the intellectual landscape of physics. Indeed, the most conspicuous working example of his methodology of falsifiability as a criterion to discern science from other forms of knowledge was Einstein's predictions drawn from his general relativity. While familiar with the great physical theories elaborated up to the beginning of the 20th century, Popper kept a lasting fascination with quantum theory. In addition, there was among physicists a controversy over the interpretation and foundations of this scientific theory, which further attracted Popper. However, its very technical aspects kept Popper at a distance from this controversy, as some of his early incursions into the subject were the target of criticisms. It was only from the 1960s on, with the blossoming of interest in this scientific controversy and the appearance of a younger generation of physicists interested in the subject, that Popper could fulfill his early desire to take part in this controversy. Most of his ideas on the subject are gathered in the volume Quantum Theory and the Schism in Physics. Popper's ideas may be encapsulated in the statement that he fully accepted the probabilistic descriptions and suggested his propensity interpretation to deal with them, thus without attachment to determinism, while he criticized the introduction of subjectivist approaches in this scientific domain, thus aligning himself with the realist position in the quantum controversy. Less known is that Popper went further in his engagement with the debates over the meaning of the quanta. He did this through collaboration with physicists such as Jean-Pierre Vigier and Franco Selleri, who were hard critics of the standard interpretation of quantum physics.
From this collaboration emerged a proposal for an experiment to test the validity of some presumptions of quantum theory. Initially conceived as an idealized experiment but eventually led to the lab benches by Yanhua Shih, it spurred a debate which survived Popper himself. We present an overview of Popper's concerns about quantum mechanics as well as an analysis of the debates about the experiment he had suggested. Freire Junior, O. – The Quantum Dissidents – Rebuilding the Foundations of Quantum Mechanics 1950-1990, Springer, Berlin (2015). Popper, K.R., Bartley, W.W.: Quantum Theory and the Schism in Physics. Rowman and Littlefield, Totowa, NJ (1982). Del Santo, F. Genesis of Karl Popper's EPR-like experiment and its resonance amongst the physics community in the 1980s, Studies in History and Philosophy of Modern Physics, 62, 56-70 (2018). Flavio Del Santo (University of Vienna, Austria) Comment on "Popper and the Quantum Controversy" ABSTRACT. In this comment, I will discuss in detail the genesis of Popper's EPR-like experiment, which is at the centre of Prof. Freire's paper. I will show that Popper devised his experiment already in 1980, namely two years before its publication in his renowned "Quantum Theory and the Schism in Physics" (1982). Moreover, I will focus on the early resonance that such a Gedankenexperiment had in the revived debate on quantum foundations. At the same time, I will describe how Popper's role in the community of physicists concerned with foundations of quantum physics evolved over time. In fact, when he came back to problems of quantum mechanics in the 1950s, Popper strengthened his acquaintances with some illustrious physicists with philosophical interests (the likes of D. Bohm, H. Bondi, W. Yourgrau), but was not engaged in the quantum debate within the community of physicists (he did not publish in physics journals or participate in specialised physics conferences). From the mid-1960s, however, with the publication of the quite influential essay "Quantum Mechanics without the Observer" (1967), Popper's ideas on quantum physics garnered interest and recognition among physicists. At that time, Popper systematised his critique of the Copenhagen interpretation of quantum mechanics, proposing an alternative interpretation based on the concept of ontologically real probabilities (propensities) that met with the favour of several distinguished physicists (among them, D. Bohm, B. van der Waerden and L. de Broglie). This endeavour led Popper to enter a long-lasting debate within the community of physicists. Peter Århem (Karolinska Institutet, Sweden) Popper on the mind-brain problem ABSTRACT. Popper's influence on science can be traced within many branches. It ranges from direct contributions, such as suggestions of experiments in quantum mechanics (e.g. the so-called Popper experiment, testing the Copenhagen interpretation), to mere inspiration, waking up scientists from their dogmatic slumber. Especially his criticism of instrumentalism and his advocacy of realism have been an eye-opener for many. As an illustration, a case from the field of neuroscience is discussed in the paper. It relates to the development of theories about the mechanisms underlying the nerve impulse. The central question after the pioneering studies by Hodgkin and Huxley was how the critical ions permeate the nerve cell membrane (Hille, 2001). Some experimentalists adopted a realistic view and tried to understand the process by constructing mechanistic models, almost in a Cartesian fashion.
Others adopted an instrumentalistic, positivistic and allegedly more scientific view and settled for a mathematical black box description. When it finally became possible to experimentally determine the molecular details, they were found to fit the realistic, mechanistic attempts well, while the instrumentalistic attempts had not led far, thus supporting the Popperian view. An important part of Popper's philosophy concerns the mind-brain problem. The present paper discusses two aspects of his philosophy of mind. One aspect relates to the ontology of mind and the theory of biological evolution. Over the years Popper's interest in evolution steadily grew, from an almost negative, patronizing view to giving it a central role in many of his later studies. In the theory of evolution Popper found support for his interactionistic view on the mind-brain problem. This, as will be discussed, is a view that for many philosophers is difficult to accept. Another aspect discussed is Popper's observation that mind has similarities with forces and fields of forces. As Popper points out, the introduction of forces as such (in the dynamism of Newton) could have been used by Descartes' adherents to avoid inconsistencies of the Cartesian dualism. But Popper has developed this idea further, comparing properties of mind with those of forces and fields of forces (Popper, Lindahl and Århem, 1993). His view has renewed the interest in force fields as a model for consciousness, and the present paper discusses some recent hypotheses that claim to solve problems attached to the dominant present-day mind-brain theories. Several authors even identify consciousness with an electromagnetic field (Jones, 2013). In contrast, Popper proposes that consciousness works via electromagnetic forces (Lindahl and Århem, 1994). This has been criticized as violating thermodynamic conservation laws. The present paper discusses Popper's defence against this argument. The paper also discusses a related hypothesis that consciousness acts on fields of probability amplitudes rather than on electromagnetic fields. Such an idea has been proposed by Friedrich Beck in response to Popper's hypothesis (Beck, 1996). The present paper argues that such models, based on quantum mechanical ideas, are often in conflict with Popper's propensity interpretation of quantum mechanics (Popper, 1982). Hille, B. (2001). Ion channels of excitable membranes (3rd ed.). Sunderland, MA: Sinauer Associates. Jones, M. W. (2013). Electromagnetic–field theories of mind. Journal of Consciousness Studies, 20(11-12), 124-149. Lindahl, B. I. B., & Århem, P. (1994). Mind as a force field: Comments on a new interactionistic hypothesis. Journal of Theoretical Biology, 171, 111-122. Popper, K. R. (1982). Quantum Theory and the Schism in Physics. London: Hutchinson (from 1992 published by Routledge). Popper, K. R., Lindahl, B. I. B., & Århem, P. (1993). A discussion of the mind–brain problem. Theoretical Medicine, 14, 167-180. Beck, F. (1996). Mind-brain interaction: comments on an article by B.I.B. Lindahl & P. Arhem. Journal of Theoretical Biology 180, 87-89. 09:00-10:30 Session 11D: B7 New perspectives on education: Writing, literature, and out-of-school contexts Vera Matarese (Institute of Philosophy, Czech Academy of Sciences, Czechia) Kateřina Trlifajová (Czech Technical University, Czechia) Caroline E. Murr (Universidade Federal de Santa Catarina - UFSC, Brazil) Defamiliarization in science fiction: new perspectives on scientific concepts ABSTRACT.
The notion of defamiliarization, developed by Victor Shklovsky in his book "Theory of Prose" (1917), refers to a technique, used in literature, that has the effect of estranging things that are so familiar that we don't even notice them anymore. They become automatized, due to familiarity and saturated contact. For Shklovsky, however, literature and art in general are able to disturb our common world views. John Dewey, in Art as Experience (1934), doesn't mention the Russian author, but presents a similar standpoint. Dewey expounds the idea of "Aesthetic Experience", which appears to have many similarities to Shklovsky's approach, asserting that art awakens us from the familiar and allows more meaningful and complete experiences. This paper aims to analyze the use of scientific conceptions in science fiction, leading to a new way to look at them. This new glance modifies the trivial connections of current paradigms in science and also in everyday life. The shift to science fiction's context would contribute to their defamiliarization, giving way to new possibilities of understanding. According to the examined authors, defamiliarization and aesthetic experience are responsible for bringing to consciousness things that were automatized, casting a new light on them. That appears also to be the case in science fiction, in which the break of expectations may have consequences not only for the paradigms of the sciences, but also for reflection on the role of science in ordinary life. In many cases, scientific notions are already unconsciously accepted in quotidian life, just like everyday assumptions. Besides, science fiction, exaggerating or pushing scientific theories as far as can be imagined, brings about important and profound considerations regarding philosophical questions as well. H. G. Wells' works from around the turn of the 20th century seem to adequately illustrate this process. For instance, in H. G. Wells' novel The Invisible Man (1897), scientific objects such as matter and light, as well as the prevalent scientific rules they obey, are displaced to another context, breaking usual expectations about them. In fiction, it is possible for matter and light to behave in a way different from what the established paradigm in physics at the end of the 19th century permitted. It is also interesting to note that the book was published in a time of crisis in physics, and it seems that Wells absorbed ideas that would change the paradigm in a few years. To claim that science fiction influenced some scientists to initiate scientific revolutions is maybe too large a step to take in this paper. Nevertheless, it is possible to say that the process of defamiliarization in the reading of science fiction can lead to a new understanding of scientific concepts, inducing reflections that would not be made if the regular context and laws of science were maintained. Judith Puncochar (Northern Michigan University, United States) Reducing Vagueness in Linguistic Expression ABSTRACT. Students in professional programs or graduate research programs tend to use an excess of vague pronouns in their writing. Reducing vagueness in writing could improve written communication skills, which is a goal of many professional programs. Moreover, instructor effectiveness in reducing vagueness in students' writing could improve teaching for learning. Bertrand Russell (1923) argued that all knowledge is vague. This research provides evidence that vagueness in writing is mitigated with instructor feedback.
An empirical study tested the hypothesis that providing feedback on vague pronouns would increase clarity in students' writing over an academic semester. Vague terms such as "this", "it", "there", "those", "what", and "these" followed by a verb were highlighted, and a written comment drew students' attention to vague terms: "Rewrite all sentences with highlighted vague terms throughout your paper for greater clarity in professional writing." Writing with "what", "it", and other vague pronouns allows students to apply course concepts or describe contexts without understanding either. A collaboration between instructor and student could improve clarity of information communicated by helping students explain their understanding of ideas or principles taught (Faust & Puncochar, 2016). Eighty-six pre-service teachers and 36 education master's candidates participated in this research. All participants wrote at a proficient level, as determined by passing scores on a Professional Readiness Examination Test in writing. The instructor and a trained assistant highlighted and counted vague pronouns over six drafts of each participant's document. Inter-rater reliability using Cohen's kappa was 0.923 (p < .001). Frequency of vague pronouns decreased noticeably with each successive draft. A repeated measures ANOVA on use of vague pronouns in a final free-write essay compared to use of vague pronouns in an initial free-write essay yielded F(1,40) = 3.963 (p = 0.055). Ninety percent of participants identified an increased awareness of the importance of eliminating vague pronouns to improve writing clarity on end-of-semester self-evaluations. As an example, "While I write now, I find myself using a vague term, but I stop and ask myself, 'How can I eliminate this vague term to make my paper sound better?' This type of self-reflection I have never done before and I see a big improvement in the tone of my writing." This research provided information on effects of instructor feedback to reduce vague pronouns and thereby improve clarity of students' writing. As Russell (1923) said, "I shall be as little vague as I know how to be ..." (p. 84). Faust, D., & Puncochar, J. (2016, March). How does "collaboration" occur at all? Remarks on epistemological issues related to understanding / working with 'the other'. Dialogue and Universalism: Journal of the International Society for Universal Dialogue, 26, 137-144. http://dialogueanduniversalism.eu/index.php/12016-human-identity/ Russell, B. (1923). Vagueness. Australasian Journal of Psychology and Philosophy, 1(2), 84-92. https://doi.org/10.1080/00048402308540623 Zhengshan Jiao (Institute for the History of Natural Science, Chinese Academy of Sciences, China) The History of Science-related Museums: A Comparative and Cultural Study ABSTRACT. Science-related museums are special kinds of museums that are concerned with science, technology, the natural world, and other related issues. Today, there are many science-related museums worldwide operating in different styles and playing different social roles such as collecting, conserving and exhibiting objects, researching relevant issues and educating the public. Through the different development processes of science-related museums in the Western world and in China, we can see that science-related museums are outcomes of the influence of social and cultural conditions such as economy, local culture, policy, humans' views on science, and so on.
The Western world is considered to be the birthplace of science-related museums, where museums went through different stages of development, including the natural history museum, the traditional science and technology museum, and the science centre. Museums, however, were imported goods for China: foreigners and Western culture shaped the emergence of museums there, and they are developing rapidly today. 09:00-10:30 Session 11E: B4 Reduction and emergence Jacqueline Sullivan (University of Western Ontario, Canada) Veli-Pekka Parkkinen (University of Bergen, Norway) Robustness in configurational causal modelling ABSTRACT. We describe a notion of robustness for configurational causal models (CCMs, e.g. Baumgartner & Thiem (2015)), present simulation results to validate this notion, and compare it to notions of robustness in regression analytic methods (RAMs). Where RAMs relate variables to each other and quantify net effects across varying background conditions, CCMs search for dependencies between values of variables, and return models that satisfy the conditions of an INUS-theory of causation. As such, CCMs are tools for case-study research: a unit of observation is a single case that exhibits some configuration of values of measured variables. CCMs automate the process of recovering causally interpretable dependencies from data via cross-case comparisons. The basic idea is that causes make a difference to their effects, and causal structures can be uncovered by comparing otherwise homogeneous cases where some putative cause- and effect-factors vary suitably. CCMs impose strong demands on the analysed data that are often not met in real-life data. The most important of these is causal homogeneity – unlike RAMs, CCMs require the causal background of the observed cases to be homogeneous, as a sufficient condition for the validity of the results. This assumption is often violated. In addition, data may include random noise, and lack sufficient variation in measured variables. These deficiencies may prevent CCMs from finding any models at all. Thus, CCM methodologists have developed model-fit parameters that measure how well a model accounts for the observed data, which can be adjusted to find models that explain the data less than perfectly. Lowering model-fit requirements increases underdetermination of models by data, making model choice harder. We performed simulations to investigate the effects that lowering model-fit requirements has on the reliability of the results. These reveal that given noisy data, the models with the best fit frequently include irrelevant components – a type of overfitting. In RAMs, overfitting is remedied by robustness testing: roughly, a robust model is insensitive to the influence of particular observations. This idea cannot be transported to the CCM context, which assumes a case-study setting: one's conclusions ought to be sensitive to cross-case variation. But this also makes CCMs sensitive to noise. However, a notion of robustness as the concordance of results derived from different models (e.g. Wimsatt (2007)) can be implemented in CCMs. We implement the notion of a robust model as one which agrees with many other models of the same data, and does not disagree with many other models, in the causal ascriptions it makes. Simulation results demonstrate that this notion can be used as a reliable criterion of model choice given massive underdetermination of models by data.
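To make the scoring idea concrete, here is a small toy sketch (my own illustration, not the authors' implementation) of robustness as concordance: each candidate model is represented as a set of causal ascriptions, and a model is rewarded for ascriptions shared with the other models built from the same data and penalized for ascriptions the other models do not make. The model names, the ascription strings and the exact scoring rule are assumptions made for illustration only.

```python
# Toy sketch of robustness as concordance of causal ascriptions across
# candidate models of the same data (hypothetical models and scoring rule).

candidate_models = {
    "m1": {"A=1 -> E=1", "B=0 -> E=1"},
    "m2": {"A=1 -> E=1", "B=0 -> E=1", "C=1 -> E=1"},  # extra, possibly irrelevant component
    "m3": {"A=1 -> E=1"},
}

def robustness(name, models):
    """Reward ascriptions shared with other models; penalize ascriptions
    that the other models do not endorse."""
    own = models[name]
    score = 0
    for other_name, other in models.items():
        if other_name == name:
            continue
        score += len(own & other)  # agreements with the other model
        score -= len(own - other)  # idiosyncratic ascriptions
    return score

scores = {name: robustness(name, candidate_models) for name in candidate_models}
print(scores)  # m2, which carries the extra component, scores lowest here
```

On this toy data the overfitted model m2 is ranked below the more concordant models, which is the kind of behaviour a concordance-based robustness criterion is meant to capture.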
Lastly, we summarize the results with respect to what they reveal about the differences between CCMs and RAMs, and how they help to improve the reliability of CCMs. Baumgartner, M. & Thiem, A. 2015. Identifying complex causal dependencies in configurational data with coincidence analysis. R Journal, 7, 1. Wimsatt, W. 2007. Re-engineering philosophy for limited beings. Cambridge, MA: Harvard University Press. Erica Onnis (University of Turin, Italy) Discontinuity and Robustness as Hallmarks of Emergence ABSTRACT. In the last decades, the interest in the notion of emergence has steadily grown in philosophy and science, but no uncontroversial definitions have yet been articulated. Classical formulations generally focus on two features: irreducibility, and novelty. In the first case, an entity is emergent from another one if the properties of the former cannot be reduced to the properties of the latter. In the second case, a phenomenon is emergent if it exhibits novel properties not had by its component parts. Despite describing significant aspects of emergent processes, both these definitions raise several problems. On the one hand, the widespread habit of identifying emergent entities with entities that resist reduction is nothing more than explaining an ambiguous concept through an equally puzzling notion. Just like emergence, in fact, reduction is not at all a clear, uncontroversial technical term. On the other hand, a feature such as qualitative novelty can easily appear to be an observer-relative property, rather than an indicator of the ontological structure of the world. In view of the above, to provide a good model of emergence other features should be taken into consideration too, and the ones which I will focus on are discontinuity and robustness. The declared incompatibility between emergence and reduction reflects the difference between the models of reality underlying them. While reductionism assigns to the structure of reality a mereological and nomological continuity, emergentism leaves room for discontinuity instead. The reductionist universe is composed of a small number of fundamental (micro)physical entities, and of a huge quantity of combinations of them. In this universe, the nature of the macroscopic entities depends upon that of the microscopic ones, and no physically independent property is admitted. Accepting the existence of genuine emergence, conversely, implies the claim that the structure of the world is discontinuous both metaphysically and nomologically. Matter is organized in different ways at different scales, and there are phenomena which are consequently scale-relative and have to be studied by different disciplines. In this framework, emergence represents the specific trait had by macroscopic entities showing scale-relative properties which depend upon the organizational constraints of their components' relationships rather than upon their individual properties. While the laws of physics are still true and valid across many scales, other laws and regularities emerge with the development of new organizational structures whose behavior is often insensitive to microscopic constraints. And that is where the notion of robustness comes into the picture. By robustness is meant the ability of a system to preserve its features despite fluctuations and perturbations in its microscopic components and environmental conditions. Emergent phenomena, therefore, rather than novel, are robust in their insensitivity to the lower level from which they emerge.
Emergence, therefore, does not describe atypical processes in nature, nor the way in which we (cannot) explain reality. It suggests, by contrast, that the structure of the world is intrinsically differentiated, and at each scale and organizational layer correspond peculiar emergent and robust phenomena exhibiting features absent at lower or higher scales. REFERENCES Batterman, R. W. (2001). The devil in the details: Asymptotic reasoning in explanation, reduction, and emergence. Oxford University Press. Bedau, M. A. (1997). Weak Emergence. Philosophical Perspectives, 11, 375–399. Cartwright, N. (1994). Fundamentalism vs. the Patchwork of Laws. In Proceedings of the Aristotelian Society (Vol. 94, pp. 279-292). Aristotelian Society, Wiley. Crowther, K. (2013). Emergent spacetime according to effective field theory: From top-down and bottom-up. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 44(3), 321–328. Dennett, D. C. (1991). Real patterns. The Journal of Philosophy, 88(1), 27-51. Humphreys, P. (2016). Emergence. A Philosophical Account. NY: Oxford University Press. Kitano, H. (2004). Biological robustness. Nature Reviews Genetics, 5(11), 826. Laughlin, R. B. (2008). A different universe: Reinventing physics from the bottom down. NY: Basic books. Oppenheim, P. & Putnam, H. (1958). Unity of science as a working hypothesis. In H. Feigh, M. Scriven, and G. Maxwell (Eds.), Concepts, Theories, and the mind-body Problem, Minnesota Studies in the Philosophy of science, Minneapolis: University of Minnesota, 3–36. Pettit, P. (1993). A definition of physicalism. Analysis, 53(4), 213-223. Pines, D. (2000). Quantum protectorates in the cuprate superconductors. Physica C: Superconductivity (341–348), 59–62. Silberstein, M. (2012). Emergence and reduction in context: Philosophy of science and/or analytic metaphysics. Metascience 21(3): 627–641. Wimsatt, W. (1997). Aggregativity: Reductive heuristics for finding emergence. Philosophy of Science 64 (4): 372–84. Yates, D. (2016). Demystifying Emergence. Ergo, 3 (31), 809–844. Gualtiero Piccinini (University of Missouri - St. Louis, United States) Levels of Being: An Egalitarian Ontology ABSTRACT. Levels of Being: An Egalitarian Ontology This paper articulates and defends an egalitarian ontology of levels of being that solves a number of philosophical puzzles and suits the needs of the philosophy of science. I argue that neither wholes nor their parts are ontologically prior to one another. Neither higher-level properties nor lower-level properties are prior to one another. Neither is more fundamental; neither grounds the other. Instead, whole objects are portions of reality considered in one of two ways. If they are considered with all of their structure at a given time, they are identical to their parts, and their higher-level properties are identical to their lower-level properties. For most purposes, we consider wholes in abstraction from most of their parts and most of their parts' properties. When we do this, whole objects are subtractions of being from their parts—they are invariants under some part addition and subtraction. The limits to what lower level changes are acceptable are established by the preservation of properties that individuate a given whole. When a change in parts preserves the properties that individuate a whole, the whole survives; when individuative properties are lost by a change in parts, the whole is destroyed. 
By the same token, higher-level properties are subtractions of being from lower-level properties—they are part of their realizers and are also invariant under some changes in their lower level realizers. This account solves the puzzle of causal exclusion without making any property redundant. Higher-level properties produce effects, though not as many as their realizers. Lower-level properties also produce effects, though more than the properties they realize. For higher-level properties are parts of their realizers. There is no conflict and no redundancy between them causing the same effect. As long as we focus on the right sorts of effects—effects for which higher-level properties are sufficient causes—to explain effects in terms of higher-level causes is more informative than in terms of lower level ones. For adding the lower-level details adds nonexplanatory information. In addition, tracking myriad lower level parts and their properties is often practically unfeasible. In many cases, we may not even know what the relevant parts are. That's why special sciences are both necessary and useful: to find the sorts of abstractions that provide the best explanation of higher-level phenomena, whereas tracking the lower level details may be unfeasible, less informative, or both. Given this egalitarian ontology, traditional reductionism fails because, for most scientific and everyday purposes, there is no identity between higher levels and lower levels. Traditional antireductionism also fails because higher levels are not wholly distinct from lower levels. Ontological hierarchy is rejected wholesale. Yet each scientific discipline and subdiscipline has a job to do—finding the explanations of phenomena at any given level—and no explanatory job is more important than any other because they are all getting at some objective aspect of reality. 09:00-10:30 Session 11F: C6 Societal, ethical and epistemological issues of AI 1 Ave Mets (University of Tartu, Estonia) Location: Room Janák Jean-Michel Kantor (Université Paris Diderot, France) Machine learning: a new technoscience ABSTRACT. There has been a tremendous development of computerized systems for artificial intelligence in the last thirty years. Now in some domains the machines get better results than humans: playing chess or even Go, winning over the best champions; medical diagnosis (for example in cancerology); automatic translation; vision, recognizing faces in one second from millions of photos. The successes rely on: progress in hardware technology, in computational speed and in the capacity of Big Data; new ideas in the structure of the computers, with neural networks originally inspired by the structure of visual processing in the brain; and progress in mathematical algorithms for exploiting statistical data, extensions of Markovian methods. These developments have led the main actors to talk about a new science, or rather a new techno-science: machine learning, defined by the fact that it is able to improve its own capacities by itself (see [L]).
Some opponents give various reasons for their scepticism: some follow an old tradition of identifying "Numbers" with modern industrial civilization [W]; some offer theoretical arguments coming from the foundations of information and complexity theory [Ma]; some doubt the Bayesian inferential approach to science, refusing prediction without understanding [Th], which might lead to a radical attack on classical science [H]. In particular, the technique of neural networks has created a new type of knowledge with the very particular mystery of the "black box". We will describe the new kind of "truth without verifiability" that issues from this practice. We will discuss these various topics carefully, in particular: Is it a new science or a new techno-science? Where is the limit between the science of machine learning and the various conjectural visions leading to the science-fiction ideas of transhumanism? What are the possible consequences of the recent successes of AI for our approach to language and to the intelligence of man's cognitive functioning in general? And finally, what are the limits of this numerical invasion of the world? [H] Horgan, J. The End of Science. Broadway Books. [L] LeCun, Y. Le Deep Learning: une révolution en intelligence artificielle. Collège de France, February 2016. [Ma] Manin, Y. Kolmogorov complexity as a hidden factor of scientific discourse: from Newton's law to data mining. [Th] Thom, R. Prédire n'est pas expliquer. Eschel, 1991. [W] Weil, S. Cahiers. Maël Pégny (IHPST, France) Mohamed Issam Ibnouhsein (Quantmetry, France) Can machine learning extend bureaucratic decisions? PRESENTER: Maël Pégny ABSTRACT. In the recent literature, there has been much discussion about the explainability of ML algorithms. This property of explainability, or lack thereof, is critical not only for scientific contexts, but for the potential use of those algorithms in public affairs. In this presentation, we focus on the explainability of bureaucratic procedures to the general public. The use of unexplainable black boxes in administrative decisions would raise fundamental legal and political issues, as the public needs to understand bureaucratic decisions to adapt to them, and possibly to exert its right to contest them. In order to better understand the impact of ML algorithms on this question, we need a finer diagnosis of the problem, and we need to understand what should make them particularly hard to explain. In order to tackle this issue, we turn the tables around and ask: what makes ordinary bureaucratic procedures explainable? A major part of such procedures consists of decision trees or scoring systems. We make the conjecture, which we test on several case studies, that those procedures typically enjoy two remarkable properties. The first is compositionality: the decision is made of a composition of subdecisions. The second is elementarity: the analysis of the decision ends in easily understandable elementary decisions. The combination of those properties has a key consequence for explainability, which we call explainability by extracts: it becomes possible to explain the output of a given procedure, through a contextual selection of subdecisions, without the need to explain the entire procedure. This allows bureaucratic procedures to grow in size without compromising their explainability to the general public.
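As a minimal illustration of explainability by extracts (a hypothetical eligibility rule invented here, not one of the paper's case studies), the sketch below shows a procedure built from elementary subdecisions; explaining a particular output only requires reporting the subdecisions that actually fired, not the whole rule set. The field names and thresholds are assumptions for illustration only.

```python
# Hypothetical benefit-eligibility procedure illustrating "explainability by
# extracts": the decision composes elementary subdecisions, and the explanation
# of one output is the contextual selection of subdecisions actually used.

def eligibility_decision(applicant):
    extract = []  # subdecisions fired along the way; this is the "extract"

    if applicant["income"] > 30000:
        extract.append("income above the 30000 threshold -> refused")
        return "refused", extract
    extract.append("income within the threshold")

    if applicant["dependents"] >= 2:
        extract.append("two or more dependents -> higher allowance band")
        band = "higher"
    else:
        extract.append("fewer than two dependents -> standard allowance band")
        band = "standard"

    return f"granted ({band} band)", extract

decision, reasons = eligibility_decision({"income": 22000, "dependents": 3})
print(decision)
for step in reasons:
    print("-", step)
```

An opaque ML model offers no such path of explicit elementary subdecisions, which is the contrast the abstract goes on to draw.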
In the case of ML procedures, we show that the properties of compositionality and elementarity correspond to properties of the segmentation of the data space by the execution of the algorithm. Compositionality corresponds to the existence of well-defined segmentations, and elementarity corresponds to the definition of those segmentations by explicit, simple variables. But ML algorithms can lose either of those properties. Such is the case of opaque ML, as illustrated by deep learning neural networks, where both properties are actually lost. This entails an enhanced dependence of a given decision on the procedure as a whole, compromising explainability by extracts. If ML algorithms are to be used in bureaucratic decisions, it becomes necessary to find out whether the properties of compositionality and elementarity can be recovered, or whether the current opacity of some ML procedures is due to a fundamental scientific limitation. Ken Archer (Survata, United States) The Historical Basis for Algorithmic Transparency as Central to AI Ethics ABSTRACT. This paper embeds the concern for algorithmic transparency in artificial intelligence within the history of technology and ethics. The value of transparency in AI, according to this history, is not unique to AI. Rather, black box AI is just the latest development in the 200-year history of industrial and post-industrial technology that narrows the scope of practical reason. Studying these historical precedents provides guidance as to the possible directions of AI technology, towards either the narrowing or the expansion of practical reason, and the social consequences to be expected from each. The paper first establishes the connection between technology and practical reason, and the concern among philosophers of ethics and politics about the impact of technology in the ethical and political realms. The first generation of such philosophers, influenced by Weber and Heidegger, traced the connection between changes in means of production and the use of practical reason for ethical and political reasoning, and advocated in turn a protection of practical reasoning – of phronesis – from the instrumental and technical rationality valued most by modern production. More recently, philosophers within the postphenomenological tradition have identified techne within phronesis as its initial step of formation, and thus call for a more empirical investigation of particular technologies and their enablement or hindering of phronetic reasoning. This sets the stage for a subsequent empirical look at the history of industrial technology from the perspective of technology as an enabler or hindrance to the use of practical reasoning and judgment. This critical approach to the history of technology reveals numerous precedents of significant relevance to AI that, from a conventional approach to the history of technology focusing on technical description, appear to be very different from AI – such as the division of labor, assembly lines, power machine tools and computer-aided machinery. What is revealed is the often quite intentional effect of most industrial technology of deskilling workers by narrowing the scope of their judgment, whereas other innovations have the potential to expand the scope of workers' judgment. In particular, this section looks at the use of statistics in industrial production, as it is the site of a nearly century-long tension between approaches explicitly designed to narrow or expand the judgment of workers.
Finally, the paper extends this history to contemporary AI – where statistics is the product, rather than a control on the means of production – and presents the debate on explainable AI as an extension of this history. This final section explores the arguments for and against transparency in AI. Equipped with the guidance of 200 years of precedents, the possible paths forward for AI are much clearer, as are the effects of each path for ethics and political reasoning more broadly. 09:00-10:00 Session 11G: B4 Metaphysical aspects: Laws Location: Room Krejcar Alfonso García Lapeña (UB University of Barcelona, LOGOS Research Group in Analytic Philosophy, Spain) Scientific Laws and Closeness to the Truth ABSTRACT. Truthlikeness is a property of a theory or a proposition that represents its closeness to the truth of some matter. In the similarity approach, roughly, the truthlikeness of a theory or a proposition is defined according to its distance from the truth measured by an appropriate similarity metric. In science, quantitative deterministic laws typically have a real function representation F(x_1,…,x_n) in an n-dimensional mathematical space (sometimes called the state-space). Suppose law A is represented by F_A(x) and the truth in question (the true connexion between the magnitudes) is represented by F_T(x). Then, according to Niiniluoto (1982, 1985, 1987, 1998), Kieseppä (1996, 1996) and Kuipers (2000), among others, we can define the degree of truthlikeness of a law A with the Minkowski metric for functions: Tr(A)=d(A,T)=(∫|F_A(x)-F_T(x)|^k)^(1/k). We will expose a counterexample to this definition presented by Thom (1975), Weston (1992) and Liu (1999) and a modification of it that we think is much clearer and more intuitive. We will argue then that the problem lies in the fact that the proposal takes Tr to be just a function of accuracy, but an accurate law can be completely wrong about the actual "causal structure" of the world. For example, if the truth is y=x, then the law y=x+sin(x) is highly accurate for many purposes, but totally wrong about the true relation between x and y. We will present a modification of d into a new metric d_an that defines the truthlikeness for quantitative deterministic laws according to two parameters: accuracy and nomicity. The first parameter is correctly measured by the Minkowski metric. The second parameter can be measured by the difference of the derivatives. Therefore (for some interval [n, m]): d_an(A,T)=(∫|F_A(x)-F_T(x)|^2)^(1/2)+(m-n)(∫|F_A'(x)-F_T'(x)|^2)^(1/2), where Tr(A)=d_an(A,T) and Tr(A)>Tr(B) if and only if d_an(A,T)<d_an(B,T). Once defined in this way, we can represent all possible laws regarding some phenomenon in a two dimensional space and extract some interesting insights. The point (0, 0) will correspond to the truth in question and each point will correspond to a possible law with a different degree of accuracy and nomicity. We can define level lines (sets of theories equally truthlike) and represent scientific progress as the move from a determinate level line to another closer to (0, 0), where scientific progress may be performed by a gain of accuracy and nomicity but in different degrees. We can define some values "a" of accuracy and "n" of nomicity under which we can consider laws to be truthlike in an absolute sense. We will see how we can rationally estimate these values according to scientific practice. Finally, we will apply our proposal d_an to a real case study.
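As a rough numerical illustration of the proposed metric (my own sketch, not taken from the abstract: the interval [n, m] = [0, 2π], the step size and the comparison law are assumptions), the accuracy and nomicity terms of d_an can be approximated with Riemann sums for the example above, comparing the accurate but nomically wrong law y = x + sin(x) with a slightly inaccurate law of the right form, y = 1.05x, relative to the truth y = x.

```python
# Riemann-sum approximation of d_an(A,T) = accuracy term + (m - n) * nomicity term
# on an assumed interval [n, m] = [0, 2*pi].

import math

def d_an(f, df, t, dt, n=0.0, m=2 * math.pi, steps=10_000):
    h = (m - n) / steps
    acc = sum(abs(f(n + i * h) - t(n + i * h)) ** 2 for i in range(steps)) * h
    nom = sum(abs(df(n + i * h) - dt(n + i * h)) ** 2 for i in range(steps)) * h
    return math.sqrt(acc) + (m - n) * math.sqrt(nom)

truth, d_truth = lambda x: x, lambda x: 1.0                              # T: y = x
law_a, d_law_a = lambda x: x + math.sin(x), lambda x: 1.0 + math.cos(x)  # accurate, wrong form
law_b, d_law_b = lambda x: 1.05 * x, lambda x: 1.05                      # slightly off, right form

print(d_an(law_a, d_law_a, truth, d_truth))  # roughly 12.9: the nomicity term dominates
print(d_an(law_b, d_law_b, truth, d_truth))  # roughly 1.2: close in accuracy and nomicity
```

On this toy interval the law with the wrong functional form is pushed far from the truth by the nomicity term even though it is accurate, which is the intuition behind adding the derivative-based component.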
We will estimate the degrees of truthlikeness of four laws (the ideal gas model, the Van der Waals model, the Beattie–Bridgeman model and the Benedict–Webb–Rubin model) regarding nitrogen in its gas state. We will argue that … Kieseppä, I. A. (1996). Truthlikeness for Multidimensional, Quantitative Cognitive Problems. Kieseppä, I. (1996). Truthlikeness for Hypotheses Expressed in Terms of n Quantitative Variables. Journal of Philosophical Logic, 25(2). Kuipers, T. A. F. (2000). From Instrumentalism to Constructive Realism: On Some Relations Between Confirmation, Empirical Progress, and Truth Approximation. Springer. Liu, C. (1999). "Approximation, idealization, and laws of nature". Synthese 118. Niiniluoto, I. (1982). "Truthlikeness for Quantitative Statements", PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association. Niiniluoto, I. (1985). "Theories, Approximations, and Idealizations", Logic, Methodology, and Philosophy of Science VII, North-Holland, Amsterdam. Niiniluoto, I. (1987). Truthlikeness, Dordrecht: Reidel. Niiniluoto, I. (1998). "Verisimilitude: The Third Period", The British Journal for the Philosophy of Science. Thom, R. (1975). "Structural Stability and Morphogenesis: An Outline of a General Theory". Reading, MA: Addison-Wesley. Weston, T. (1992). "Approximate Truth and Scientific Realism", Philosophy of Science, 59(1): 53–74. Eduardo Castro (University of Beira Interior, Portugal) Laws of Nature and Explanatory Circularity ABSTRACT. Some recent literature (Hicks and Elswyk 2015; Bhogal 2017) has argued that the non-Humean conceptions of laws of nature have the same weakness as the Humean conceptions of laws of nature. Precisely, both conceptions face a problem of explanatory circularity: Humean and non-Humean conceptions of laws of nature agree that the law statements are universal generalisations; thus, both conceptions are vulnerable to an explanatory circularity problem between the laws of nature and their instances. In the literature, the terminology "explanatory circularity problem" has been used to designate two slightly different circularities. A first circularity is a full explanatory circularity, hereafter the problem of circularity C. In short, a law of nature is inferred from an observed phenomenon and, thereafter, it is used to explain that same observed phenomenon. Thus, an observed phenomenon explains itself. The other circularity is a problem of self-explanation, hereafter the problem of circularity SE. The problem of circularity SE is a sub-problem of the problem of circularity C. A law of nature explains an observed phenomenon, but the law includes that same phenomenon in its content. Hicks and Elswyk (2015) propose the following argument for the problem of circularity C: (P1) The natural laws are generalizations. (HUMEANISM) (P2) The truth of generalizations is (partially) explained by their positive instances. (GENERALIZATION) (P3) The natural laws explain their instances. (LAWS) (P4) If A (partially) explains B and B (partially) explains C, then A (partially) explains C. (TRANSITIVITY) (C1) The natural laws are (partially) explained by their positive instances. (P1 & P2) (C2) The instances of laws explain themselves. (P3, P4, & C1) (Hicks and Elswyk 2015, 435) They claim that this argument also applies to the non-Humean conceptions of laws of nature: "Humeans and anti-Humeans should agree that law statements are universal generalizations (…) If we're right about this much, anti-Humeans are vulnerable to a tu quoque."
(Hicks and Elswyk 2015, 435) The argument above can be reframed to underpin the problem of circularity SE. (P1) The natural laws are generalizations (HUMEANISM) (P2)* If the natural laws are generalizations, then the natural laws are (partially) constituted by their instances. (P3) The natural laws explain their instances. (LAWS) (C2) The instances of the law statements explain themselves. In this presentation, I will discuss the premises of the above arguments. I will try to show that Armstrong's necessitarian view of laws of nature (Armstrong 1983) – a non-Humean conception – is invulnerable to these explanatory circularity problems. At the end, I will analyse a semantic circular condition for unsuccessful explanations, recently proposed by Shumener (2018), regarding this discussion. Armstrong, David. 1983. What Is a Law of Nature? Cambridge: Cambridge University Press. Bhogal, Harjit. 2017. 'Minimal Anti-Humeanism'. Australasian Journal of Philosophy 95 (3): 447–460. Hicks, Michael Townsen, and Peter van Elswyk. 2015. 'Humean Laws and Circular Explanation'. Philosophical Studies 172 (2): 433–443. Shumener, Erica. 2018. 'Laws of Nature, Explanation, and Semantic Circularity'. The British Journal for the Philosophy of Science. doi:10.1093/bjps/axx020. 09:00-10:30 Session 11H: B1 Historical and social epistemology Vít Gvoždiak (Czech Academy of Sciences, Czechia) Natalia Kozlova (MOSCOW STATE PEDAGOGICAL UNIVERSITY, Russia) The problem of figurativeness in science: From communication to the articulation of scientific knowledge ABSTRACT. In understanding the essentially rhetorical character of science, special attention should be paid to the place of figurativeness in research discourse. The central question of my presentation is whether it is possible in the absence of figurativeness to produce radically different meanings that will transform the conceptual space of science. In most cases, the role of figurativeness is reduced to the optimisation of knowledge transmission. The expressive power of language is meant to make the transmitted knowledge more accessible for another and, in the best case, to help to 'transplant it [knowledge] into another, as it grew in [one's] own mind' (Bacon). One of the rhetoric elements most widely used in research discourse, the metaphor often becomes an irreplaceable vehicle, for it makes it possible to create an idea of the object, i.e. to generate a certain way to think about it. The use of figurative elements in scientific language translates both in communicative optimisation and, owing to the uniqueness of the interpretation process, in the discovery of new ways to understand the object. However, the role of figurativeness in research discourse is not limited to knowledge transmission. Despite the significance of communication (i.e. either explicit or implicit inclusion of another into the creative process of the self) for the development of a scientific ontology, the major source of intention is the cognising subject. Thus, in considering the role of figurativeness in scientific discourse, the focus should be shifted from the concept of communication to that of articulation, in other words, from another to the self. The function of figurativeness as a tool for the 'articulation' of knowledge is determined by the features of the scientific 'articulation' per se. 
The central question of the presentation can be supplemented with that whether it is possible to 'capture', to register the meaning-assigning synthesis of the elusive 'actualities of consciousness' (Husserl) beyond figurativeness. This concerns the 'act of thought' that unlocks the boundaries of meaning conventions and thus transforms scientific ontology. One can assume that the answer to this question is 'no'. In this presentation, I will put forward arguments in support of this answer. To build and develop a theoretical model, the mere abstraction of the object is not sufficient. There is a need for a constant 'live' interest in the object, i.e. the persistent imparting of significance to it, accompanied by the segregation of the object from the existing ontology. Although not realized by the author, the mechanisms of figurativeness may assume this function and make it possible to isolate and 'alienate' the object, thus granting it the 'intellectualised' status and making it 'inconvenient'. Always beyond the realm of convenience, figurativeness by default transcends the existing conceptual terrain. Sometimes, it refutes any objective, rationalised convenience. It even seems to be aimed against conveniences. It means an upsurge in subjectivity, which, in the best case, destroys the common sense of things that is embedded in the forms of communicative rationality. From this point of view, figurativeness is an essential feature of the 'articulation' of scientific knowledge. This study is supported by the Russian Foundation for Basic Research within the project 'The Rhetoric of Science: Modern Approaches' No. 17-33-00066 Bacon, F. The Advancement of Learning. Clarendon Press, 1873 2. Husserl, E. Cartesian Meditations: An Introduction to Phenomenology. Translated by Dorion Cairns, Springer Science+Business Media, 1960. Denis Artamonov (Saratov State University, Russia) Media memory as the object of historical epistemology ABSTRACT. Introduction. Under modern conditions the influence of electronic media on social construction of historical memory is huge. Historical information is transferred to a digital format, not only archives and libraries accumulate the knowledge of the Past, but also electronic storages of databases. Written memory gives way to electronic memory, and development of Internet technologies provides access of a massive number of users to it. Today the ideaof the Past is formed not only by the efforts of professional historians, but also Internet users. The set of individual images of history creates collective memory. Modern society is going through the memory boom which is connected with the ability of users to make knowledge of the Past and to transmit it through new media. Thus, the memory from personal and cultural space moves to the sphere of public media. This process is about the emergence of media memory. Methods. The research of influence of media on individual and collective memory is based on M. McLuhan's works. Studying of social memory is carried out within M. Halbwachs's theory about «a social framework of memory», the theory of cultural memory of J. Assmann and the theory of «places of memory» P. Nora. The analysis of ideas of the Past is based on the methods of historical epistemology presented in H. White and A. Megill's works. Discussion. A small number of studies is devoted to the influence of media on social memory. 
One such work is the collective monograph "Silence, Screen, and Spectacle: Rethinking Social Memory in the Age of Information and New Media", edited by L. Freeman, B. Nienass and R. Daniell (2014). The authors note that new social media change the nature of perception of the Present and Past, revealing the Past through the metaphors of «silence», «screen», and «performance». The mediatization of society has produced a special mechanism of storage, conversion and transmission of information which has changed the nature of the production of historical knowledge and the practice of oblivion. Also, the periods of storage of social information have changed. In line with the above, the author defines media memory as the digital system of storage, transformation, production and dissemination of information about the Past. Historical memory of individuals and communities is formed on the basis of media memory. Media memory can be considered as a virtual social mechanism of remembering and oblivion; it can provide various forms of representation of history in the space of everyday life, expand the practices of representing and commemorating the Past, and also increase the number of those creating and consuming memorial content. From the standpoint of historical epistemology we can observe the emergence of new ways of cognition of the Past. Media memory selects historical knowledge, including relevant information about the Past in the agenda and consigning to oblivion the Past for which there is no social need. Also, there is segmentation of historical knowledge between various elements of the media sphere. It is embodied in a variety of historical Internet resources available to users belonging to different target audiences. Media memory is democratic. It is created on the basis of free expression of thoughts and feelings by available language means. Photos and documentary evidence play equally important roles in the formation of ideas of the Past alongside subjective perception of reality and evaluative statements. Attempts to hide any historical information or withdraw it from public access lead to its greater distribution. Conclusion. Media memory as a form of collective memory is set within the concept of the post-truth, in which personal history and personal experience of reality replace objective data for a particular person. The knowledge of history gains new meanings, methods and forms, and this in its turn makes researchers look for new approaches within historical epistemology. Sophia Tikhonova (Saratov State University N G Chernyshevsky, Russia) Knowledge production in social networks as the problem of communicative epistemology ABSTRACT. Introduction. The communicative dimension of epistemological discourse is connected with the research of how communication forms influence the production of knowledge. The modern communication revolution is determined by a new social role of Internet technologies, which mediate social communication at different levels and open mass access to all kinds of communication. The development of social networking Internet services gives users ever more refined instruments of communication management. These tools give individuals the possibility to develop their own networks of any configuration despite minimal information about partners and to distribute knowledge outside the traditional institutional schemes of the Modern. The spread of social networks has a cognitive effect because it ensures the inclusion of mass users in the production of informal knowledge.
The author believes that Internet content is a specific form of ordinary knowledge, including special discursive rules for the production of knowledge, as well as a system of its verification and legitimation. Methods. The research into media influence on the cognitive structures of communication is based on M. McLuhan's ideas; the analysis of network modes of production of knowledge is based on M. Granovetter and M. Castells's network approach; the cognitive status of Internet content is established by means of the concept of ordinary knowledge of M.L. Bianca and P. Piccari. The author's arguments are based on the communication approach which brings closer the categories of social action, the communicative act and the act of cognition. Discussion. Ordinary knowledge in epistemology is quite a marginal problem. A rather small amount of research is devoted to its development. One of the key works in this sphere is the collective monograph "Epistemology of Ordinary Knowledge", edited by M.L. Bianca and P. Piccari (2015). In this work M.L. Bianca defends the concept according to which ordinary knowledge is a form of knowledge which not only allows one to get epistemic access to the world, but also includes the development of models of the world which possess different degrees of reliability. The feature of this form is that ordinary knowledge can be reliable and relevant though it lacks the reliability of scientific knowledge. The question of how the media sphere changes the formation of ordinary knowledge remains poorly studied. To begin with, the technical principles of operating with content determine the epistemic processes connected with the complication of the structure of the message. The environment of ordinary knowledge formation is thinking and oral speech. The use of text splits the initial syncretism of ordinary knowledge, increases the degree of its reflexivity and subordinates it to genre norms (literary, documentary, journalistic), i.e. brings an initial formalization. The use of the basic elements of a media text (graphic, audio and visual inserts) strengthens genre eclecticism and expands the possibilities of user self-expression, the subject-centricity and the subjectivity of the message. The dominance of subjective elements in the advancement of media content is fixed by the neologism "post-truth". The author defines post-truth as an independent concept of media discourse possessing negative connotations and emphasizing the influence of interpretations in comparison to factography. The communicative essence of post-truth comes down to the effect of belief as the personal and emotional relation to the subject of the message. The post-truth combines the global with the private, personalizes macro-events and facilitates the formation of their assessment for the recipient. The post-truth, as a transmission of subjectivity, is based on the representation of personal subjective experience of world cognition, i.e. its core is ordinary knowledge, on the platform of which personal history, personal experience and personal truth are formed, replacing objective data. The post-truth does not mean direct oblivion and depreciation of the truth. The emotionally charged attitude acts as a filter for the streams of diverse content in conditions of information overload. Through the post-truth people also cognize and, at the same time, express themselves, create identities and enter collective actions. Conclusion.
Communicative epistemology as a methodological project offers new prospects for the research of knowledge production in social networks. According to the author, social networks, as a special channel, transform ordinary knowledge into informal knowledge. 09:00-10:00 Session 11I: C3 Epistemology and reasoning in biomedical practice 1 Juraj Hvorecky (Institute of Philosophy, Czech Academy of Sciences, Czechia) Luciana Garbayo (University of Central Florida, United States) Wlodek Zadrozny (UNC Charlotte, United States) Hossein Hematialam (UNC Charlotte, United States) Measurable Epistemological Computational Distances in Medical Guidelines Peer Disagreement PRESENTER: Luciana Garbayo ABSTRACT. The study of medical guidelines disagreement in the context of the epistemology of disagreement (Goldman, 2011, Christensen & Lackey, 2013) may strongly contribute to the clarification of epistemic peer disagreement problems encoded in scientific (medical) guidelines. Nevertheless, the clarification of peer disagreement under multiple guidelines may require further methodological development to improve cognitive grasp, given the great magnitude of data and information in them, as in the case of multi-expert decision-making (Garbayo, 2014, Garbayo et al., 2018). In order to fill this methodological gap, we propose an innovative computational epistemology of disagreement platform for the study of epistemic peer evaluations of medical guidelines. The main epistemic goal of this platform is to analyze and refine models of epistemic peer disagreement with the computational power of natural language processing to improve modeling and understanding of peer disagreement under encoded guidelines, regarding causal propositions and action commands (Hematialam & Zadrozny, 2016). To that effect, we suggest measuring the conceptual distances between guideline terms in their scientific domains with natural language processing tools and topological analysis to add modeling precision to the characterization of epistemic peer disagreement in its specificity, while contrasting multiple guidelines simultaneously. To develop said platform, we study the breast cancer screening medical guidelines disagreement (CDC) as a test case. We provide a model-theoretic treatment of the propositions of conflicting breast cancer guidelines, map terms/predicates in reference to the medical domains in breast cancer screening and investigate the conceptual distances between them. The main epistemic hypothesis in this study is that medical guidelines disagreement on breast cancer screening, when translated into conflicting epistemic peers' positions, may represent a Galilean idealization type of model of disagreement that discounts relevant peer characterization aspects thereof, which a semantic treatment of contradictions and disagreement may further help to clarify (Zadrozny, Hematialam, Garbayo, 2017). A new near-peer epistemic agency classification in reference to the medical sub-areas involved may be required as a result, to better explain some disagreements in different fields such as oncology, gynecology, mastology, and family medicine. We also generate a topological analysis of contradictions and disagreement of breast cancer screening guidelines with sheaves, while taking into consideration conceptual distance measures, to further explore, in geometrical representation, continuities and discontinuities in such disagreements and contradictions (Zadrozny & Garbayo, 2018).
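The abstract does not spell out how the conceptual distances between guideline terms would be computed, so the following is only an illustrative sketch and not the authors' method: it assumes that guideline terms have already been mapped to numerical vectors (for instance by some embedding of the medical vocabulary) and takes their cosine distance as a rough proxy for conceptual distance. All names and numbers below are hypothetical placeholders.

from math import sqrt

def cosine_distance(u, v):
    # 1 minus the cosine similarity of two equal-length term vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Hypothetical 4-dimensional vectors standing in for the representations of the
# "same" screening term as it occurs in two conflicting guidelines.
term_in_guideline_1 = [0.9, 0.1, 0.3, 0.0]
term_in_guideline_2 = [0.7, 0.2, 0.4, 0.1]

print(round(cosine_distance(term_in_guideline_1, term_in_guideline_2), 3))

On such an operationalisation, larger distances between the terms two guidelines use for the same clinical question would be one crude indicator of how far apart the near-peer positions encoded in those guidelines lie.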
Bibliography: CDC, "Breast Cancer Screening Guidelines for Women", accessed 2017 at http://www.cdc.gov/cancer/breast/pdf/BreastCancerScreeningGuidelines.pdf Christensen, D., Lackey, J. (eds.) The Epistemology of Disagreement: New Essays. Oxford University Press, 2013. Garbayo, L. "Epistemic considerations on expert disagreement, normative justification, and inconsistency regarding multi-criteria decision making." In Ceberio, M. & Kreinovich, W. (eds.) Constraint Programming and Decision Making, 35-45, Springer, 2014. Garbayo, L., Ceberio, M., Bistarelli, S., Henderson, J. "On Modeling Multi-Experts Multi-Criteria Decision-Making Argumentation and Disagreement: Philosophical and Computational Approaches Reconsidered." In Ceberio, M. & Kreinovich, W. (eds.) Constraint Programming and Decision-Making: Theory and Applications, Springer, 2018. Goldman, A. & Blanchard, T. "Social Epistemology." In Oxford Bibliographies Online, OUP, 2011. Hematialam, H., Zadrozny, W. "Text Mining of Medical Guidelines." In Proc. of the Twenty-Ninth Intern. Florida Artificial Intelligence Res. Soc. Conf.: FLAIRS-29. Poster Abstracts. AAAI, 2016. Zadrozny, W.; Garbayo, L. "A Sheaf Model of Contradictions and Disagreements. Preliminary Report and Discussion." arXiv:1801.09036. ISAIM 2018, International Symposium on Artificial Intelligence and Mathematics, 2018. Zadrozny, W.; Hematialam, H.; Garbayo, L. "Towards Semantic Modeling of Contradictions and Disagreements: A Case Study of Medical Guidelines." ACL Anthology, A Digital Archive of Research Papers in Computational Linguistics, 2017. 09:00-10:00 Session 11J: C2 Epistemology, philosophy of physics and chemistry 1 Lukáš Bielik (Comenius University in Bratislava, Slovakia) Amaia Corral-Villate (University of the Basque Country, Spain) On the Infinite Gods paradox via representation in Classical Mechanics ABSTRACT. The Infinite Gods paradox is introduced by Benardete (1964) in the context of his metaphysical problems of the infinite. Priest (1999) starts the discussion with the publication of a logical analysis, followed by the argument of Yablo (2000), in which he defends the claim that the paradox contains a logical impossibility. This last conclusion achieves broad acceptance in the scientific community, but reasonings introduced by Hawthorne (2000), Uzquiano (2012) and Pérez Laraudogoitia (2016) call that idea into question. Contextualised in this discussion, my communication is based on the introduction of a proposal for a representation of the Infinite Gods paradox in the strict context of Classical Mechanics. The objective of following such a methodology consists in deepening the understanding of the paradox and clarifying the type of problem that underlies it using the analytical power of Classical Mechanics. The methodology consisting in analysing a metaphysical paradox in the context of a specific theory is in line with what Grünbaum (1967) defended concerning the analysis of supertasks and has later been followed by other philosophers of science who introduce proposals of representation for different paradoxes of the infinite. Nevertheless, no strictly mechanical representation of the Infinite Gods paradox has been published yet. The results of my mechanical analysis are in agreement with the violation of the "Change Principle" introduced by Hawthorne (2000). But in clear contrast to his contention, this is not a big metaphysical surprise but a simple and direct consequence of causal postulates implicit in Classical Mechanics.
Furthermore, the analysis via my mechanical representation shows in a very simple way that the necessary condition that Uzquiano (2012) proposes for the existence of a "before-effect" is refutable. Finally, it also leads to the conclusion that the problem that underlies the paradox is not logical but causal, and is thus in clear opposition to the reasoning defended by Yablo (2000). Consequently, the next objective consists in explaining the diagnosis of what I consider erroneous in this last argument. In addition to the achievement of the main objective of deepening the understanding of the paradox and clarifying the type of problem that underlies it, the analysis of the problem of evolution via my mechanical representation makes possible a clarification of the type of interaction in it. This in itself is a conceptually interesting result in the theoretical context of Classical Mechanics. 1. Benardete, J. (1964). Infinity: An essay in metaphysics. Oxford: Clarendon Press. 2. Grünbaum, A. (1967). Modern science and Zeno's paradoxes. Middletown: Wesleyan University Press. 3. Hawthorne, J. (2000). Before-effect and Zeno causality. Noûs, 34 (4), 622-633. 4. Pérez Laraudogoitia, J. (2016). Tasks, subtasks and the modern Eleatics. In F. Pataut (ed.), Truth, objects, infinity. Cham, Switzerland: Springer. 5. Priest, G. (1999). On a version of one of Zeno's paradoxes. Analysis, 59 (1), 1-2. 6. Uzquiano, G. (2012). Before-effect without Zeno causality. Noûs, 46 (2), 259-264. 7. Yablo, S. (2000). A reply to new Zeno. Analysis, 60 (2), 148-151. Dimitra Kountaki (University of Crete, Greece) CANCELLED: Anthropocentrism in Science ABSTRACT. According to the Encyclopedia on the Rights and Welfare of Animals, Anthropocentrism relates to any idea which suggests the central importance, superiority and supremacy of man in relation to the rest of the world. Anthropocentrism denotes also that the purpose of Nature is to serve human needs and desires, based on the idea that man has the highest value in the world (Fox, 2010). Even if anthropocentrism can be seen as a concept fitting mainly in the field of Environmental Ethics, we could say that it can be considered as a concept connected also with Science, as being a part of the scientific outlook on the world. Even if we claim that the scientific outlook is objective and not subjective, provided that this parameter is controllable, are we at the same time in a position to assert that our view of the world is free of anthropocentrism? The branches of science which are more vulnerable to such a viewpoint, as their name may indicate, are the Humanities, as they focus on man and the achievements of human culture. Such an approach is not expected of the so-called positive sciences. Nevertheless, the anthropocentric outlook is not avoided entirely. An example of this in Cosmology is the noted Anthropic Principle. The main idea of the Anthropic Principle, as we know it, is that the Universe seems to be "fine-tuned" in such a way as to allow the existence of intelligent life that can observe it. How can a philosophical idea of "old cutting" such as the "intelligent design of the Universe" intrude into the modern scientific outlook, and why is it so resilient? In my presentation, I will attempt to present briefly the anthropic principle and to answer the questions mentioned above. In addition, I will try to show how anthropocentrism conflicts with the human effort to discover the world.
Also I will refer to the consequences of Anthropocentrism for Ethics and Science itself. Indicative Bibliography Bostrom, N. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York: Routledge. Carter, B., & McCrea, W. H. (1983). The Anthropic Principle and its Implications for Biological Evolution. Philosophical Transactions of the Royal Society of London, 310 (1512), pp. 347-363. Fox, M. A. (2010). Anthropocentrism. In M. Bekoff, Encyclopedia of Animal Rights and Animal Welfare (pp. 66-68). Santa Barbara, California: Greenwood Press. 09:00-10:30 Session 11K: B1 SYMP Factivity of understanding: Moving beyond the current debates 1. Symposium of the EENPS (EENPS-1) Organizer: Lilia Gurova There are several camps in the recent debates on the nature of scientific understanding. There are factivists and quasi-factivists who argue that scientific representations provide understanding insofar as they capture some important aspects of the objects they represent. Representations, the (quasi-)factivists say, yield understanding only if they are at least partially or approximately true. The factivist position has been opposed by the non-factivists who insist that greatly inaccurate representations can provide understanding given that these representations are effective or exemplify the features of interest. Both camps face some serious challenges. The factivists need to say more about how exactly partially or approximately true representations, as well as nonpropositional representations, provide understanding. The non-factivists are expected to put more effort into the demonstration of the alleged independence of effectiveness and exemplification from the factivity condition. The aim of the proposed symposium is to discuss in detail some of these challenges and to ultimately defend the factivist camp. One of the biggest challenges to factivism, the existence of non-explanatory representations which do not possess propositional content but nevertheless provide understanding, is addressed in 'Considering the Factivity of Non-explanatory Understanding'. This paper argues against the opposition between effectiveness and veridicality. Building on some cases of non-explanatory understanding, the author shows that effectiveness and veridicality are compatible and that we need both. A different argument for the factivity of scientific understanding provided by models containing idealizations is presented in 'Understanding Metabolic Regulation: A Case for the Factivists'. The central claim of this paper is that such models bring understanding if they correctly capture the causal relationships between the entities which these models represent. 'Effectiveness, Exemplification, and Factivity' further explores the relation between the factivity condition and its suggested alternatives – effectiveness and exemplification. The author's main claim is that the latter are not alternatives to factivity, strictly speaking, insofar as they could not be construed without any reference to truth conditions. 'Scientific Explanation and Partial Understanding' focuses on cases where the explanations consist of propositions which are only partially true (in the sense of da Costa's notion of partial truth). The author argues that such explanations bring partial understanding insofar as they allow for an inferential transfer of information from the explanans to the explanandum. What happens, however, when understanding is provided by explanations which do not refer to any causal facts?
This question is addressed in 'Factivity of Understanding in Non-causal Explanations'. The author argues that the factivity of understanding could be analyzed and evaluated by using some modal concepts that capture "vertical" and "horizontal" counterfactual dependency relations which the explanation describes. Lilia Gurova (New Bulgarian University, Bulgaria) Richard David-Rus (Institute of Anthropology, Romanian Academy, Romania) Considering the factivity of non-explanatory understanding ABSTRACT. One of the characteristics of the debate around the factivity of understanding is its focus on the explanatory sort of understanding. The non-explanatory kind has barely been considered. The proposed contribution tries to take some steps in this direction and in this way to suggest some possible points of investigation. The inquiry will look at the routes of realization of factivity in situations that have been marked in the literature as instantiating non-explanatory understanding. Without holding on to a specific account, the investigation will take as reference suggestions offered by authors such as Lipton, Gijsbers or Kvanvig, though Lipton's view involving explanatory benefits as the bearers of understanding will take centre stage. The main quest will look at the differences between the issues raised by factivity in explanatory cases and in non-explanatory ones. I will look at the variation and specificity of these routes in the different ways of instantiating this sort of understanding. One focus will be on the way the historical arguments and the ones from idealizations, raised in support of the non-factivity claim, get contextualized in the non-explanatory cases of understanding. As some of the non-explanatory means do not involve propositional content, the factivity issue has to be reassessed. I will therefore reject the pure reductivist view that non-explanatory forms are just preliminary incomplete forms of explanatory understanding, i.e. proto-understanding (Khalifa 2017), and so are to be considered only under the received view on factivity. In the last part I will turn to a second point by reference to the previous discussion. The effectiveness condition was advanced by de Regt as an alternative to the veridicality condition. I will support a mixed view which states the need to include reference to both conditions. The cases of non-explanatory understanding might better illuminate the way the two components are needed in combination. Moreover, in some non-explanatory cases one of the above conditions might take precedence over the other, as for example along the separation between the ones with propositional content (possible explanations, thought experiments) and the ones of a non-propositional nature (e.g. manipulations, visualizations). Martin Zach (Charles University, Czechia) Understanding metabolic regulation: A case for the factivists ABSTRACT. Factive scientific understanding is the thesis that scientific theories and models provide understanding insofar as they are based on facts. Because science heavily relies on various simplifications, it has been argued that the facticity condition is too strong and should be abandoned (Elgin 2007, Potochnik 2015). In this paper I present a general model of metabolic pathway regulation by feedback inhibition to argue that even highly simplified models that contain various distortions can provide factive understanding. However, there are a number of issues that need to be addressed first.
For instance, the core of the disagreement over the facticity condition for understanding revolves around the notion of idealization. Here, I show that the widely used distinction between idealizations and abstractions faces difficulties when applied to the model of metabolic pathway regulation. Some of the key assumptions involved in the model concern the type of inhibition and the role of concentrations. Contra Love and Nathan (2015) I suggest viewing these assumptions as a special sort of abstraction, as vertical abstraction (see also Mäki 1992). Usually, it is the idealizations that are considered problematic for the factivist position because idealizations are thought to introduce distortions into the model, something abstractions do not do. However, I show that here abstractions distort key difference-makers (i.e. the type of inhibition and the role of concentration), much like idealizations do elsewhere. This seemingly further supports the nonfactivist view, since if abstractions may involve distortions, then not only idealized models but abstract models as well cannot provide factive understanding. I argue that this is not the case here. The diagrammatic model of metabolic pathway regulation does provide factive understanding insofar as it captures the causal organization of an actual pathway, notwithstanding the distortions. I further motivate my view by drawing an analogy with the way in which Bokulich (2014) presents an alternative view of the notions of how-possibly and how-actually models. The conclusion is that, at least in some instances, highly simplified models which contain key distortions can nevertheless provide factive understanding, provided we correctly specify the locus of truth. Bokulich, A. [2014]: 'How the Tiger Bush Got Its Stripes: "How Possibly" vs. "How Actually" Model Explanations', Monist, 97, pp. 321–38. Elgin, C. [2007]: 'Understanding and the Facts', Philosophical Studies, 132, pp. 33–42. Love, A. C. and Nathan, M. J. [2015]: 'The Idealization of Causation in Mechanistic Explanation', Philosophy of Science, 82, pp. 761–74. Mäki, U. [1992]: 'On the Method of Isolation in Economics', in C. Dilworth (ed.), Idealization IV: Intelligibility in Science, Amsterdam: Rodopi, pp. 319–54. Potochnik, A. [2015]: 'The Diverse Aims of Science', Studies in History and Philosophy of Science Part A, 53, pp. 71–80. Effectiveness, Exemplification, and Factivity ABSTRACT. The view that scientific representations bear understanding insofar as they capture certain aspects of the objects being represented has been recently attacked by authors claiming that factivity (veridicality) is neither necessary nor sufficient for understanding. Instead of being true, partially true, or true enough, these authors say, the representations that provide understanding should be effective, i.e. they should lead to "useful scientific outcomes of certain kind" (de Regt & Gijsbers, 2017) or should "exemplify features they share with the facts" (Elgin, 2009). In this paper I'll try to show that effectiveness and exemplification are neither alternatives nor independent complements to factivity insofar as an important aspect of these conditions cannot be construed without referring to a certain kind of truthfulness. Although Elgin's and de Regt and Gijsbers' non-factive accounts of understanding differ in the details, they share an important common feature: they both stress the link between understanding and inference.
Thus, according to De Regt and Gijsbers, the understanding-providing representations allow the understander to draw "correct predictions", and according to Elgin, such representations enable "non-trivial inference" which is "responsive to evidence". If we take this inferential aspect of understanding seriously, we should be ready to address the question of what makes the conclusions of the alleged inferences correct. It seems as if there is no alternative to the view that any kind of inference could reliably lead to correct, i.e. true (or true enough) conclusions only if it is based on true (or true enough) premises. Indeed, it can be shown that the examples which the critics of the factivity of understanding have chosen as demonstrations of non-factive understanding can be successfully analyzed in terms of true enough premises endorsing correct conclusions. Thus the ideal gas model, although based on a fiction (ideal gases do not exist), "exemplifies features that exist", as Elgin herself has noticed. Similarly, the fluid model of electricity, discussed by de Regt and Gijsbers, gets right the directed motion of the electrical current, which is essential for the derivation of Ohm's law and for its practical applications. To sum up, the non-factivists have done a good job by stressing the inferential aspects of understanding. However, it should be recognized that there is no way to make reliably correct predictions and non-trivial inferences if the latter are not based on true, partially true, or true enough premises. The understanding-providing scientific representations either contain such premises or serve as "inference tickets" bridging certain true or true enough premises to true or true enough conclusions. References De Regt, H. W., Gijsbers, V. (2017). How false theories can yield genuine understanding. In: Grimm, S. R., Baumberger, C., Ammon, S. (Eds.) Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science. New York: Routledge, 50–75. Elgin, C. Z. (2009). Is understanding factive? In: Pritchard, D., Miller, A., Haddock, A. (Eds.) Epistemic Value. Oxford: Oxford University Press, 322–30. 09:00-10:00 Session 11L: B2/C1 Probability Dan Gabriel Simbotin ("Gheorghe Zane" Institute of Economic and Social Research, Romanian Academy, Iasi Branch, Romania) Jakob Koscholke (University of Hamburg, Germany) Siebel's argument against Fitelson's measure of coherence reconsidered ABSTRACT. This talk aims at showing that Mark Siebel's (2004) counterexample to Branden Fitelson's (2003) probabilistic measure of coherence can be strengthened and thereby extended to an argument against a large number of other proposals including the measures by Shogenji (1999), Douven and Meijs (2007), Schupbach (2011), Schippers (2014), Koscholke (2016) and also William Roche's (2013) average mutual firmness account, which has not been challenged up to now. The example runs as follows: There are 10 equally likely suspects for a murder and the murderer is certainly among them. 6 have committed a robbery and a pickpocketing, 2 have committed a robbery but no pickpocketing and 2 have committed no robbery but a pickpocketing. Intuitively speaking, the proposition that the murderer is a robber and the proposition that the murderer is a pickpocket are quite coherent in this example. After all, there is a large overlap of pickpocketing robbers. However, as Siebel has pointed out, Fitelson's measure indicates that they are not. Siebel's example is compelling. But it shows us much more.
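As a worked illustration of the numbers behind Siebel's example (added here for the reader's convenience; it uses Shogenji's ratio measure, the simplest of the measures listed above, rather than Fitelson's own measure), let R be "the murderer is a robber" and Q be "the murderer is a pickpocket". With all ten suspects equally likely:

\[ P(R) = P(Q) = \tfrac{8}{10}, \qquad P(R \wedge Q) = \tfrac{6}{10}, \qquad P(\neg R \wedge \neg Q) = 0, \]
\[ C_{\mathrm{Shogenji}}(R,Q) = \frac{P(R \wedge Q)}{P(R)\,P(Q)} = \frac{0.6}{0.8 \times 0.8} \approx 0.94 < 1, \]

so the ratio measure, whose neutral value is 1, classes the pair as (mildly) incoherent despite the large overlap of pickpocketing robbers. Indeed, whenever \( P(\neg R \wedge \neg Q) = 0 \) we have \( P(R \wedge Q) = P(R) + P(Q) - 1 \), and

\[ P(R)\,P(Q) - \bigl(P(R) + P(Q) - 1\bigr) = \bigl(1 - P(R)\bigr)\bigl(1 - P(Q)\bigr) \ge 0, \]

so positive probabilistic relevance between the two propositions is ruled out from the start.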
First, for any two propositions φ and ψ under a probability function P such that P(¬φ∧¬ψ) = 0, which is the case in Siebel's example, any measure satisfying Fitelson's (2003) dependency desiderata is unable to judge the set {φ,ψ} coherent, even in cases where it should. As already mentioned, this includes the measures proposed by Fitelson, Shogenji, Douven and Meijs, Schupbach, Schippers, Koscholke and many potential ones. Second, it can be shown that under a slightly stronger constraint, i.e. for any two propositions φ and ψ under a probability function P such that P(¬φ∧¬ψ) = P(φ∧¬ψ) = 0, Roche's average mutual firmness account is unable to judge the set {φ,ψ} incoherent, even in cases where it should—this can be motivated by slightly modifying Siebel's example. These two results suggest that the aforementioned proposals do not generally capture coherence adequately. Douven, I. and Meijs, W. (2007). Measuring coherence. Synthese, 156:405–425. Fitelson, B. (2003). A probabilistic theory of coherence. Analysis, 63:194–199. Koscholke, J. (2016). Carnap's relevance measure as a probabilistic measure of coherence. Erkenntnis, 82(2):339–350. Roche, W. (2013). Coherence and probability: a probabilistic account of coherence. In Araszkiewicz, M. and Savelka, J., editors, Coherence: Insights from Philosophy, Jurisprudence and Artificial Intelligence, pages 59–91. Springer, Dordrecht. Schippers, M. (2014). Probabilistic measures of coherence: from adequacy constraints towards pluralism. Synthese, 191(16):3821–3845. Schupbach, J. N. (2011). New hope for Shogenji's coherence measure. British Journal for the Philosophy of Science, 62(1):125–142. Shogenji, T. (1999). Is coherence truth conducive? Analysis, 59:338–345. Siebel, M. (2004). On Fitelson's measure of coherence. Analysis, 64:189–190. Vladimir Reznikov (Institute of Philosophy and Law of SB RAS, Russia) Frequency interpretation of conditions for the application of probability theory according to Kolmogorov ABSTRACT. In the well-known book by Kolmogorov, devoted to the axiomatic theory of probability, the requirements for probabilities were formulated in the context of their applications [1]. Why did A.N. Kolmogorov turn, in that publication, to the problem of applying mathematics? The answer was given in a later work by Kolmogorov; he noted that successes in substantiating mathematics overshadowed an independent problem: "Why is mathematics applicable for describing reality?" [2]. Kolmogorov formulated the following requirements for probabilities in the context of applications: «A. One can practically be sure that if the set of conditions S is repeated a large number of times n and if m denotes the number of cases in which the event A occurred, then the ratio m/n will differ little from P(A). B. If P(A) is very small, then one can practically be sure that, under a single realization of conditions S, event A will not take place» [1, P. 4]. The first requirement is an informal version of von Mises' asymptotic definition of probability. The second condition describes Cournot's principle in a strong form. These requirements are bridges that connect probability theory and mathematical statistics with reality. However, in the known contemporary literature, the question of the compatibility of Kolmogorov's requirements had not been studied until the works of Shafer and Vovk [3]. As Shafer and Vovk noted, Borel, Lévy and Fréchet criticized the redundancy of condition A, since they believed that its formal description is the conclusion of Bernoulli's theorem.
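For reference, the conclusion of Bernoulli's theorem that these critics had in mind can be written, in the notation of condition A, as follows (a standard textbook formulation, added here for readability):

\[ \lim_{n \to \infty} P\!\left( \left| \frac{m}{n} - P(A) \right| < \varepsilon \right) = 1 \qquad \text{for every } \varepsilon > 0, \]

that is, the closeness of the frequency m/n to the constant P(A) is asserted only "in probability".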
The report considers the frequency interpretation of condition A, since Kolmogorov noted that in the context of applications he follows Mises, the founder of the frequency interpretation. The main thesis of the report is that condition A in the frequency interpretation is not the conclusion of Bernoulli's theorem. As is known, the conclusion of the theorem asserts that the frequency of a certain event A and the probability of event A, the latter being a constant, are close in probability. In the report I prove that, in the frequency interpretation, condition A is interpreted geometrically. I present the following arguments in defense of the thesis. First, in the frequency interpretation, the probabilities of events do not exist a priori; they are representatives of the frequencies of these events. It is natural to consider that a constant probability of an event exists if the frequency characteristics of this event turn out to be stable, for example, if they occupy a small interval. Secondly, the geometrical explication of condition A is quite consistent with the definition of probability in von Mises' frequency interpretation, since Mises defines probability on the basis of the convergence of frequencies as defined in mathematical analysis. Thirdly, our thesis gets support from the principle of measure concentration proposed by V. Milman. According to this principle, functions of very many variables, for example on a multidimensional sphere and other objects, turn out to be almost constant. In accordance with this principle, functions of a large number of observations that calculate frequencies turn out to be almost constant. Thus, condition A does not depend on Bernoulli's theorem, but, on the contrary, turns out to be a precondition for the application of this theorem. 09:00-10:00 Session 11M: C7 History and philosophy of the humanities 1 Olha Simoroz (Taras Shevchenko National University of Kyiv, Ukraine) Hugo Tannous Jorge (Birkbeck, University of London (United Kingdom) and Federal University of Juiz de Fora (Brazil), UK) The problem of causal inference in clinical psychoanalysis: a response to the charges of Adolf Grünbaum based on the inductive principles of the historical sciences ABSTRACT. Aside from being a therapy, clinical psychoanalysis can also be a method to produce knowledge on prominent human dimensions, such as mental suffering, sociability and sexuality. The basic premise that justifies clinical psychoanalysis as a research device is that the psychoanalyst's neutral and abstinent questioning would promote the patient's report of uncontaminated mental phenomena, that is, mental phenomena unregulated by immediate social demands. The method should draw out evidence for the inference of particular causal relations between the patient's mental representations, mainly memories and fantasies, and between these and the patient's actions and emotions. In his epistemological critique of psychoanalysis, Adolf Grünbaum claims that the formalization of this method by Sigmund Freud does not present the conditions to cogently test causal hypotheses. Two of Grünbaum's arguments specifically against the logic of causal inference in clinical psychoanalysis are analysed: the argument around inference based on thematic kinship and the one around the post hoc ergo propter hoc fallacy. It is defended that both arguments are valid, but also that their premises are artificial.
These premises are confined to the Freudian text and disregard the potential that the Freudian method has of becoming cogent without losing its basic features. Departing from these arguments, this work discusses the epistemological potential of the method of causal inference in clinical psychoanalysis by describing some of its inductive principles and by exploring the justification of these principles. This work reaches the conclusion that the inductive principles of clinical psychoanalysis and the ones of the historical sciences are similar in the sense that they all infer retrospectively to the best explanation with the support of "bootstrapping" auxiliary hypotheses and they all make general inferences through meta-analysis of case reports. In the end, this work discusses some responses to the justificatory burden of these inductive principles in the context of clinical psychoanalysis. Cleland, C. E. (2011). Prediction and Explanation in Historical Natural Science. British Journal for the Philosophy of Science, 62, 551–582. Dalbiez, R. (1941). Psychoanalytical method and the doctrine of Freud: Volume 2: Discussion. London: Longmans Green. Glymour, C. (1980). Theory and Evidence. Princeton, N.J.: Princeton University Press. Glymour, C. (1982). Freud, Kepler, and the clinical evidence. In R. Wollheim & J. Hopkins (Eds.), Philosophical Essays on Freud (pp. 12-31). Cambridge: Cambridge University Press. Grünbaum, A. (1984). The Foundations of Psychoanalysis: A Philosophical Critique. Berkeley, CA: University of California Press. Grünbaum, A. (1989). Why Thematic Kinships Between Events Do Not Attest Their Causal Linkage. In R. S. Cohen (Ed.), An Intimate Relation: Studies in the History and Philosophy of Science (pp. 477–494). Dordrecht, The Netherlands: Kluwer Academic Publishers. Hopkins, J. (1996). Psychoanalytic and scientific reasoning. British Journal of Psychotherapy, 13: 86–105. Lipton, P. (2004). Inference to the Best Explanation. London and New York: Routledge. Lynch, K. (2014). The Vagaries of Psychoanalytic Interpretation: An Investigation into the Causes of the Consensus Problem in Psychoanalysis. Philosophia (United States), 42(3), 779–799. Wallace IV, E. R. (1985). Historiography and Causation in Psychoanalysis. Hillsdale, N.J.: Analytic Press. Vladimir Medvedev (St. Petersburg State Maritime Technical University, Russia) Explanation in Humanities ABSTRACT. There are two approaches to the treatment of the humanities. The naturalistic one denies any substantial differences between the human and natural sciences. This line was realized most consistently in positivist philosophy, where the classic scheme of scientific explanation (the Popper–Hempel scheme) was formulated. According to it, the explanation of every event may be deductively inferred from two classes of statements: universal laws and sentences fixing the initial conditions of an event. Anti-naturalistic thinkers argued that understanding is a specific means of cognition in the humanities. Dilthey treated it in a radically irrationalistic mode – as empathy. Such understanding was regarded by naturalists not as a genuine method, but as a heuristic prelude to explanation. Discussions on the interrelations of understanding and explanation and on the applicability of the classical scheme of explanation in the humanities are still going on. The usual arguments against naturalism are the following. What is explained in the humanities is not an outward object, external to us, like that of the natural sciences.
What is studied here (society, cultural tradition) is a part of ourselves, something that has formed us as subjects of knowledge. Social reality cannot become an ordinary object of knowledge because we belong to it. History and social life are not a performance for a subject who does not take part in it. Knowledge about people and society has a transcendental character in Kant's sense: it refers to general conditions of our experience. The specific nature of subject-object relations in the human and social sciences manifests itself also in the fact that our knowledge and conception of social reality is an important part of this reality. The universal scheme of scientific explanation is connected to the technological model of knowledge: the goal of explanation is the practical use of phenomena, manipulation. To realize this model in the human sciences we would have to divide society into subjects and objects of knowledge and manipulation, and the latter would have to be deprived of access to knowledge about themselves. After all, in contrast to other objects of knowledge, people are able to assimilate knowledge about themselves and to change their behavior. Explanation should be used in the human and social sciences. But manipulation cannot be its purpose. The model of critical social sciences of Apel and Habermas presumes the use of explanatory methods in a hermeneutic context. The goal here is not to explain others, but to help us to understand ourselves better. For example, Marx's and Mannheim's critique of ideology gives a causal explanation of the formation of ideological illusions. This explanation has the same character as in the natural sciences. But the main principle of the sociology of knowledge denies the possibility of objective and neutral social knowledge. A subject of such knowledge cannot occupy an ideologically undetermined position in order to expose others' ideological illusions. Such a subject should be attentive to the possible social determination of his own ideas, to their possible ideological nature. Such is the goal of the human and social sciences. Explanatory methods there serve a general hermeneutic task – their function is to deepen human self-understanding. 10:30-11:00 Coffee Break David Černín (University of Ostrava, Czechia) Experiments in History and Archaeology: Building a Bridge to the Natural Sciences? ABSTRACT. The epistemic challenges to the historical sciences include the direct inaccessibility of their subject matters and limited empirical data whose scope and variety cannot be easily augmented. The output of historiographic research is rarely in the form of a universal or general theory. Nonetheless, these properties do not distinguish the historical sciences from other disciplines. The historical sciences have been successful in generating knowledge of the past. One of the methods common to the natural sciences that historians and archaeologists pursue in order to bridge different academic cultures is the experimental method, most clearly manifest in experimental archaeology. This paper examines the use of experiments in historical and archaeological research and situates them in relation to contemporary philosophies of historiography. Experiments in historiography can take many forms – they can be designed based on textual, pictorial, or other non-textual evidence including fragments of artefacts; they can take place inside laboratories or in the field. Designers of experiments can aim to describe an exact occurrence in the past (e.g.
a specific event) or a type of production technique, to interpret technical texts, or to inquire into the daily life of our ancestors. However, can the results of such experiments cohere with other scientific historical methods? Can experiments in archaeology truly verify or falsify historiographic hypotheses? Is the experimental method suitable for historical research and to what extent? How do we represent the results of experimental archaeology? These questions, accompanied by individual examples of experimental archaeology, are discussed in relation to the constructivist approach to historiography and in relation to historical anti-realism. It is argued that despite the fruitfulness of some experiments, their results generally suffer from the same underdetermination as other historiographic methods and theories. Jonas Ahlskog (Åbo Akademi University, Finland) Collingwood, the narrative turn, and the cookie cutter conception of historical knowledge PRESENTER: Giuseppina D'Oro ABSTRACT. The narrative turn in the philosophy of historiography relies on a constructivist epistemology motivated by the rejection of the view that there is any such thing as immediate knowledge of the past. As there is no such thing as knowledge of things as they are in themselves generally speaking, so there is no knowledge of the past in itself. Some narrativists characterise the temporal distance between the agents and the historian in positive terms and present it as an enabling condition of historical knowledge, because, so they argue, the significance of an historical event is better grasped retrospectively, in the light of the chain of events it set in motion. Others, on the other hand, see the retrospective nature of historical knowing as a sort of distorting mirror which reflects the historian's own zeitgeist. Historical knowledge, so the argument goes, requires conceptual mediation, but since the mediating concepts are those of the historian, each generation of historians necessarily re-writes the past from their own perspective, and there can never be any such thing as "the past as it always was" (Dray). To use a rather old analogy, one might say that as the form of the cookie cutter changes, so does the shape of the cookie cut out of the dough. This paper argues that there is a better way of preserving the central narrativist claim that the past cannot be known in-itself, one which does not require biting the bullet that the past needs to be continuously re-written from the standpoint of the present. To do so one needs to rethink the notion of mediacy in historical knowledge. We present this alternative conception of mediacy through an explication and reconstruction of Collingwood's philosophy of history. According to Collingwood the past is known historically when it is known through the eyes of historical agents, as mediated by their own zeitgeist. The past is therefore not an ever-changing projection from different future "nows". While human self-understanding changes over time (the norms which govern how a medieval serf should relate to his lord are not the same as those which govern the relation between landlord and tenant in contemporary London), the norms which governed the Greek, Roman, Egyptian or Mesopotamian civilizations remain what they always were. It is the task of the historian to understand events as they would have been perceived by the historical agents, not in the light of legal, epistemic or moral norms that are alien to them.
For example, understanding Caesar's crossing of the Rubicon as challenging the authority of the senate (rather than, say, simply taking a walk with horses and men) involves understanding the Roman legal system and what Republican law entailed. This is a kind of conceptual knowledge that is not altered either by the future course of events or by changes in human self-understanding. Although Collingwood's account of the nature of mediacy in historical knowledge would disallow that later historians should/could retrospectively change the self-understanding of the Romans (or the Egyptians, or the Greeks), the claim that historians can know the past as the Egyptians, the Romans or the Mesopotamians did is not tantamount to claiming that the past can be known in itself. It is rather the assertion that the past is known historically when it is known through the eyes of the historical agent, not those of the historian. This conception of mediacy takes the past to be always-already mediated (by the conceptual framework of the agent) and, unlike the cookie-cutter conception of knowledge, does not lead to the sceptical implications which go hand in hand with the narrativist conception of mediacy. Grigory Olkhovikov (Ruhr-Universitaet Bochum, Germany) Stit heuristics and the construction of justification stit logic ABSTRACT. From its early days, stit logic was built around a set of heuristic principles that were typically phrased as recommendations to formalize certain ideas in a certain fashion. We have in mind the set of 6 stit theses advanced in [1, Ch. 1]. These theses mainly sought to guide the formalization of agentive sentences. However, it is often the case that one is interested in extending stit logic with new notions which are not necessarily confined to agentive phenomena; even in such cases one has to place the new notions in some relation to the existing stit conceptual machinery, which often involves non-trivial formalization decisions that are completely outside the scope of the Belnapian stit theses. The other issue is that the preferred stit operator of [1] is the achievement stit, whereas in the more recent literature the focus is on different variants of either the Chellas stit or the deliberative stit operator. In our talk we try to close these two gaps by (1) reformulating some of the Belnapian theses for the Chellas/deliberative stit operator, (2) developing heuristics for representing non-agentive sentences in stit logic, and (3) compensating for the absence of the achievement stit operator by introducing the so-called 'fulfillment perspective' on modalities in stit logic. In doing so, we introduce a new set of heuristics, which, we argue, is still in harmony with the philosophy expressed in [1]. We then apply the new heuristic principles to analyze the ideas behind the family of justification stit logics recently introduced in [2] and [3]. References [1] N. Belnap, M. Perloff, and M. Xu. Facing the Future: Agents and Choices in Our Indeterminist World. Oxford University Press, 2001. [2] G. Olkhovikov and H. Wansing. Inference as doxastic agency. Part I: The basics of justification stit logic. Studia Logica. Online first: January 27, 2018, https://doi.org/10.1007/s11225-017-9779z. [3] G. Olkhovikov and H. Wansing. Inference as doxastic agency. Part II: Ramifications and refinements. Australasian Journal of Logic, 14:408-438, 2017. Alexandra Kuncová (Utrecht University, Netherlands) CANCELLED: Ability and Knowledge ABSTRACT.
Imagine that I place all the cards from a deck face down on a table and ask you to turn over the Queen of Hearts. Are you able to do that? In a certain sense, yes – this is referred to as causal ability. Since you are able to pick any of the face-down cards, there are 52 actions available to you, and one of these guarantees that you turn over the Queen of Hearts. However, you do not know which of those 52 actions actually guarantees the result. Therefore, you are not able to turn over the Queen of Hearts in the epistemic sense. I explore this epistemic qualification of ability and three ways of modelling it. I show that both the analyses of knowing how in epistemic transition systems (Naumov and Tao, 2018) and of epistemic ability in labelled STIT models (Horty and Pacuit, 2017) can be simulated using a combination of impersonal possibility, knowledge and agency in standard epistemic STIT models. Moreover, the standard analysis of the epistemic qualification of ability relies on action types – as opposed to action tokens – and states that an agent has the epistemic ability to do something if and only if there is an action type available to her that she knows guarantees it. I argue, however, that these action types are dispensable. This is supported by the fact that both epistemic transition systems and labelled STIT models rely on action types, yet their associated standard epistemic STIT models do not. Thus, no action types, no labels, and no new modalities are needed. Epistemic transition systems as well as labelled STIT models have been noticeably influenced by the semantics of ATL. In line with the ATL tradition, they model imperfect information using an epistemic indistinguishability relation on static states or moments, respectively. In the STIT framework this implies that agents cannot know more about the current moment/history pair than what is historically settled. In particular, they cannot know anything about the action they perform at that moment/history pair. This is at odds with the standard epistemic extension of STIT theory which models epistemic indistinguishability on moment/history pairs instead. The main benefit of using the standard epistemic STIT models instead of epistemic transition systems or labelled STIT models is that they are more general and therefore provide a more general analysis of knowing how and of epistemic ability in terms of the notion of knowingly doing. References Horty, J. F. and E. Pacuit (2017). Action types in stit semantics. The Review of Symbolic Logic 10(4), 617–637. Naumov, P. and J. Tao (2018). Together we know how to achieve: An epistemic logic of know-how. Artificial Intelligence 262(September), 279–300. Ilaria Canavotto (University of Amsterdam, Netherlands) Alexandru Baltag (University of Amsterdam, Netherlands) Sonja Smets (University of Amsterdam, Netherlands) Introducing Causality in Stit Logic PRESENTER: Ilaria Canavotto ABSTRACT. In stit logic, every agent is endowed at every moment with a set of available choices. Agency is then characterized by two fundamental features. That is, (i) independence of agency: agents can select any available choice and something will happen, no matter what the other agents choose; (ii) dependence of outcomes: the outcomes of agents' choices depend on the choices of the other agents. In this framework, an agent sees to it that F when her choice ensures that F, no matter what the other agents do. This characterization (or variants thereof) is taken to capture the fact that an agent brings about F. 
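For readers less familiar with the notation, a standard truth condition for the Chellas stit operator (one common way of making the idea just described precise; the exact clauses in the papers discussed here may differ in detail) is:

\[ \mathcal{M}, m/h \models [i\ \mathrm{cstit}]\,F \iff \mathcal{M}, m/h' \models F \ \text{ for all } h' \in \mathrm{Choice}^{m}_{i}(h), \]

where \(\mathrm{Choice}^{m}_{i}(h)\) is the set of histories through moment m that are not excluded by the choice agent i makes at m along h. Given independence of agency (every combination of the agents' choices at a moment overlaps in at least one history), guaranteeing F throughout one's own choice cell amounts to guaranteeing F no matter which choices the other agents make.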
However, the notion of bringing about thus modelled is too demanding to represent situations in which an individual, interacting with others, brings about a certain fact: actually, in most of the cases in which someone brings something about, what the other agents do matters. In light of this, we aim at refining stit semantic in order to make it suitable to represent the causal connection between actions and their consequences. The key idea is, first, to supplement stit semantics with action types (following Broersen, 2011; Herzig & Lorini, 2010; Horty & Pacuit, 2017; Ming Xu, 2010); then, to introduce a new relation of opposition between action types. We proceed as follows. Step 1. Let (Mom, <) be a tree-like and discrete ordered set of moments and call "transition" any pair (m,m') such that m' is a successor of m. Given a finite set Ag of agents, we have, for each i in Ag, a set A_i of action types available to i and a labelling function Act_i assigning to each transition an action type available to i, so that Act_i((m,m')) is the action that i performs along transition (m,m'). The joint action performed by a group I of agents along (m,m') is then the conjunction of the actions performed by the agents in I along (m,m'). The joint actions performed by Ag are called global actions, or strategy profiles. In this framework, the next-stit operator [i xstit]F can be given a straightforward interpretation. Step 2. Intuitively, an individual or joint action B opposes another individual or joint action when B blocks or hinders it (e.g. my action of running to catch a train is opposed by the crowd's standing in the way). In order to represent this relation, we introduce a function O associating to every action B the set O(B) of actions opposing B. We then say that B is unopposed in a global action G just in case B occurs in G and no action constituting G opposes B. The global actions in which B is unopposed represent counterfactual scenarios allowing us to determine the expected causal consequences of B. Specifically, we can say that F is an expected effect of B only if B leads to an F-state whenever it is done unopposed. Besides presenting an axiomatization of the logic induced by the semantics just sketched, we show that the next-stit operator [i xstit]F is a special case of a novel operator [i pxstit]F, defined in terms of expected effects, and that, by using this operator, we are able to fruitfully analyse interesting case studies. We then assess various refinements of the [i pxstit] operator already available in this basic setting. Finally, we indicate how this setting can be further elaborated by including the goals with which actions are performed. David Miller (The University of Warwick, UK) Comment on "Popper on the Mind-Brain Problem" ABSTRACT. In a wide-ranging interview (Popper, Lindahl, & Århem 1993) published near the end of his life, Popper drew attention to several similarities between the unconscious mind and forces or fields of forces: minds, like forces, have intensity; they are located in space and time; they are unextended in space but extended in time; they are incorporeal but existing only in the presence of bodies; and they are capable of acting on and being acted on by bodies (what he elsewhere called 'kicking and being kicked back', and proposed as a criterion of reality). Granted these similarities, Popper proposed that the unconscious mind should be understood literally as a field of forces. 
A related idea, extending also to the conscious part of the mind, was proposed also by Libet (1996), who elucidated in his (1997) the connections between what he described as 'Popper's valuable hypothesis' and his own. In this comment on the lead paper 'Popper on the Mind-Brain Problem', I hope to explore some similarities between these theories of minds as force fields and the proposal that the propensities that are fundamental to Popper's propensity interpretation of probability should be likened to forces. This latter proposal was made indirectly in one of Popper's earliest publications on the propensity interpretation, but never (as far as I am aware) very decisively pursued. Instead, in A World of Propensities (1990), Popper adopted the idea that propensities (which are measured by probabilities) be likened to partial or indeterministic causes. It will be maintained that this was a wrong turn, and that propensities are better seen as indeterministic forces. There is nothing necessitarian, and there is nothing intrinsically unobservable either, about forces. One of Popper's abiding concerns was the problem of how to account for human creativity, especially intellectual and artistic creativity. The speaker rightly notes the centrality of 'the Popperian thesis that present-day physics is fundamentally incomplete, i.e. the universe is open'. But this is hardly enough. It is not hard to understand how propensities may be extinguished (that is, reduced to zero) with the passage of time, but harder to understand their initiation and generation. It may be that the identification of propensities with forces, which disappear when equilibrium is achieved and are at once revived when equilibrium is upset, may help to shed some light on this problem. Libet, B. (1996). 'Conscious Mind as a Field'. Journal of Theoretical Biology 178, pp.223f. ------------(1997). 'Conscious Mind as a Force Field: A Reply to Lindahl & Århem'. Journal of Theoretical Biology 185, pp.137f. Popper, K.R. (1990). A World of Propensities. Bristol: Thoemmes. Popper, K.R., Lindahl, B.I.B. & Århem, P. (1993). 'A Discussion of the Mind-Brain Problem'. Theoretical Medicine 14, pp.167‒180. Denis Noble (University of Oxford, UK) The rehabilitation of Karl Popper's views of evolutionary biology and the agency of organisms ABSTRACT. In 1986 Karl Popper gave the Medawar Lecture at The Royal Society in London. He deeply shocked his audience, and subsequently entered into prolonged correspondence with Max Perutz over the question whether biology could be reduced to chemistry. Perutz was insistent that it could be, Popper was equally insistent that it could not be. The lecture was never published by The Royal Society but it has now been made public with the publication of Hans-Joachim Niemann's (2014) book. Popper contrasted what he called "passive Darwinism" (essentially the neo-Darwinist Modern Synthesis) with "active Darwinism" (based on the active agency of organisms). This was a classic clash between reductionist views of biology that exclude teleology and intentionality and those that see these features of the behaviour of organisms as central in what Patrick Bateson (2017) calls "the adaptability driver". In the process of investigating how organisms can harness stochasticity in generating functional responses to environmental challenges we developed a theory of choice that reconciles the unpredictability of a free choice with its subsequent rational explanation (Noble and D Noble 2018). 
Popper could not have known the full extent of the way in which organisms harness stochasticity nor how deeply this affects the theory of evolution (Noble & D. Noble, 2017), but in almost all other respects he arrived at essentially the same conclusions. Our paper will call for the rehabilitation of Popper's view of biology. Neo-Darwinists see genetic stochasticity as just the source of variation. We see it as the clay from which the active behaviour of organisms develops, thereby influencing the direction of evolution.

Bateson, Patrick. 2017. Behaviour, Development and Evolution. Open Book Publishers: Cambridge, UK.
Niemann, Hans-Joachim. 2014. Karl Popper and The Two New Secrets of Life. Mohr Siebeck: Tübingen.
Noble, Raymond & Noble, Denis. 2017. Was the watchmaker blind? Or was she one-eyed? Biology 6(4), 47.
Noble, Raymond & Noble, Denis. 2018. Harnessing stochasticity: How do organisms make choices? Chaos, 28, 106309.

Philip Madgwick (Milner Centre for Evolution, University of Bath, UK)
Agency in Evolutionary Biology
ABSTRACT. In response to Karl Popper, Denis Noble, Raymond Noble and others who have criticised evolutionary biology's treatment of the agency of organisms, I analyse and defend what is sometimes called 'Neo-Darwinism' or 'the Modern Synthesis' from my own perspective – as an active researcher of evolutionary theory. Since the Enlightenment, the natural sciences have made progress by removing agency from nature and understanding the world in terms of materialistic chains of cause and effect. With influence from William Paley, this mechanistic way of thinking became the bedrock of Charles Darwin's theory of evolution by natural selection. Evolutionary biology has tended to understand the 'choices' underlying form and behaviour of organisms as deterministic links in the chain between genotypic causes and phenotypic effects, albeit permitting the genotype to exhibit a range of predetermined responses dependent upon the environmental context (as a generalised form of Richard Woltereck's reaction norm). As selection acts on phenotypes, there is little room for concepts like 'free will' or 'meaningful choice' within this form of mechanistic explanation. Instead, agency becomes a useful 'thinking tool' rather than a 'fact of nature' – a metaphor that can be helpfully applied to biological entities beyond organisms, like genes, which can be thought of as 'selfish'. Whilst there are reasonable grounds to find this world-view aesthetically objectionable, critics like Karl Popper have suggested that evolutionary theory has gone further in (unscientifically) denying the existence of what it cannot explain (namely, agency). Here, I evaluate this line of criticism, highlighting four different aspects of arguments against the concept of agency within modern evolutionary theory: i) issues of language that reflect phrasing rather than semantics, ii) misunderstandings of the significance of biological facts, iii) areas of acknowledged conflict between world views, and iv) unresolved criticisms. To the last point, I present a personal response to demonstrate how I use a working concept of agency to guide my own research.
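The reaction-norm picture mentioned in the abstract above can be made concrete with a minimal sketch: for each genotype, the phenotype is a fixed function of the environment, so the organism's "response" is predetermined once genotype and environment are given. The genotype labels, intercepts and slopes below are invented for illustration; this is my gloss, not the author's own example.

```python
# Minimal sketch of linear reaction norms: each genotype maps an environmental
# value to a phenotype by a predetermined rule (illustrative numbers only).
reaction_norms = {
    "genotype_A": lambda env: 2.0 + 0.5 * env,   # weakly plastic response
    "genotype_B": lambda env: 1.0 + 1.5 * env,   # strongly plastic response
}

for genotype, norm in reaction_norms.items():
    phenotypes = [round(norm(env), 2) for env in (0.0, 1.0, 2.0)]
    print(genotype, phenotypes)
# genotype_A [2.0, 2.5, 3.0]
# genotype_B [1.0, 2.5, 4.0]
```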
11:00-12:00 Session 12D: B4 Modalities in science
María Ferreira Ruiz (University of Buenos Aires, Argentina)

Mihai Rusu (University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca and Babeş-Bolyai University, Romania)
Mihaela Mihai (University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca, Romania)
Modal notions and the counterfactual epistemology of modality
PRESENTER: Mihai Rusu
ABSTRACT. The paper discusses a conceptual tension that arises in Williamson's counterfactual epistemology of modality: that between accepting minimal requirements for understanding, on the one hand, and providing a substantial account of modal notions, on the other. While Williamson's theory may have the resources to respond to this criticism, at least prima facie or according to a charitable interpretation, we submit that this difficulty is an instance of a deeper problem that should be addressed by various types of realist theories of metaphysical modality. That is, how much of the content of metaphysical modal notions can be informed through everyday/naturalistic cognitive and linguistic practices? If there is a gap between these practices and the content of our metaphysical modal assertions, as we believe there is, it appears that the (counterfactual) account needs to be supplemented by various principles, rules, tenets, etc. This reflects on the nature and content of philosophical notions, as it seems that one may not be able to endorse an extreme externalist account of philosophical expressions and concepts, of the kind Williamson favours, and at the same time draw out a substantial epistemology of these notions, as a robust interpretation of metaphysical modal truth seems to require.

Ilmari Hirvonen (University of Helsinki, Finland)
Rami Koskinen (University of Helsinki, Finland)
Ilkka Pättiniemi (Independent, Finland)
Epistemology of Modality Without Metaphysics
PRESENTER: Ilmari Hirvonen
ABSTRACT. The epistemological status of modalities is one of the central issues of contemporary philosophy of science: by observing the actual world, how can scientists obtain knowledge about what is possible, necessary, contingent, or impossible? It is often thought that a satisfactory answer to this puzzle requires making non-trivial metaphysical commitments, such as grounding modal knowledge on essences or being committed to forms of modal realism. But this seems to put the cart before the horse, for it assumes that in order to know such ordinary modal facts as "it is possible to break a teacup" or such scientific modal facts as "superluminal signaling is impossible", we should first have a clear metaphysical account of the relevant aspects of the world. It seems clear to us that we do have such everyday and scientific knowledge, but less clear that we have any kind of metaphysical knowledge. So, rather than starting with metaphysical questions, we offer a metaphysically neutral account of how modal knowledge is gained that nevertheless gives a satisfactory description of the way modal beliefs are formulated in science and everyday life. We begin by explicating two metaphysically neutral means for achieving modal knowledge. The first, a priori way is founded on the idea of relative modality. In relative modality, modal claims are defined and evaluated relative to a system. Claims contradicting what is accepted, fixed or implied in a system are impossible within that system. Conversely, claims that can be accepted within the system without contradiction are possible.
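The relative-modality idea just described admits a minimal propositional illustration (a toy sketch of my own, not part of the authors' machinery): treat a system as a set of accepted constraints and test whether a claim, or its negation, can be added without contradiction.

```python
from itertools import product

# A "system" is a set of accepted constraints over propositional atoms,
# encoded as functions from a valuation (dict: atom -> bool) to bool.
atoms = ["p", "q"]
system = [
    lambda v: (not v["p"]) or v["q"],   # accepted: p implies q
    lambda v: v["p"],                   # accepted: p
]

def consistent(constraints):
    """True iff some valuation of the atoms satisfies every constraint."""
    for values in product([True, False], repeat=len(atoms)):
        valuation = dict(zip(atoms, values))
        if all(c(valuation) for c in constraints):
            return True
    return False

claim     = lambda v: v["q"]        # the claim q
neg_claim = lambda v: not v["q"]    # its negation

print(consistent(system + [claim]))      # True:  q is possible relative to the system
print(consistent(system + [neg_claim]))  # False: not-q contradicts the system, so q is necessary
```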
Necessary claims in a system are such that their negation would cause a contradiction, and so on. The second, a posteriori way is based on the virtually universally accepted Actuality-to-Possibility Principle. Here, what is observed to be or not to be the case in actuality or under manipulations gives us modal knowledge. Often this also requires making ampliative inferences. The knowledge thus gained is fallible, but the same holds for practically all empirical knowledge. Based on prevalent scientific practice, we then show that there is an important bridge between these two routes to modal knowledge: Usually, what is kept fixed in a given system, especially in scientific investigation, is informed by what is discovered earlier through manipulations. Embedded in scientific modelling, relative modalities in turn suggest places for future manipulations in the world, leading to an iterative process of modal reasoning and the refinement of modal knowledge. Finally, as a conclusion, we propose that everything there is to know about modalities in science and in everyday life can be accessed through these two ways (or their combination). No additional metaphysical story is needed for the epistemology of modalities – or if such a story is required, then the onus of proof lies on the metaphysician. Ultimately, relative modality can accommodate even metaphysical modal claims. However, they will be seen as claims simply about systems and thus not inevitably about reality. While some metaphysicians might bite the bullet, few have been ready to do so explicitly in the existing literature. 11:00-12:00 Session 12E: IS B7 Matthews Helen Longino (Stanford University, United States) Michael Matthews (The University of New South Wales, Australia) Philosophy in Science Teacher Education ABSTRACT. Philosophical questions arise for all teachers. Some of these arise at an individual teacher/student level (what is and is not appropriate discipline?); some at a classroom level (what should be the aim of maths instruction?); some at a school level (should classes be organised on mixed-ability or graded-ability lines?); and some at a system level (should governments fund private schooling, and if so on what basis?). These philosophical, normative, non-empirical questions impinge equally on all teachers, whether they are teaching mathematics, music, economics, history, literature, theology or anything else in an institutional setting. The foregoing questions and engagements belong to what can be called general philosophy of education; a subject with a long and distinguished past, contributed to by a roll-call of well-known philosophers and educators such as: Plato, Aristotle, Aquinas, Locke, Mill, Whitehead, Russell, Dewey, Peters, Hirst and Scheffler (to name just a Western First XI). But as well as general philosophy of education, there is a need for disciplinary philosophy of education; and for science education such philosophy is dependent upon the history and philosophy of science. Some of the disciplinary questions are internal to teaching the subject, and might be called 'philosophy for science teaching'. This covers the following kinds of questions: Is there a singular scientific method? What is the scope of science? What is a scientific explanation? Can observational statements be separated from theoretical statements? Do experimental results bear inductively, deductively or abductively upon hypotheses being tested? What are legitimate and illegitimate ways to rescue theories from contrary evidence? 
Other disciplinary questions are external to the subject, and might be called 'philosophy of science teaching'. Here questions might be: Can science be justified as a compulsory school subject? What characterises scientific 'habits of mind' or 'scientific temper'? How might competing claims of science and religion be reconciled? Should local or indigenous knowledge be taught in place of orthodox science or alongside it, or not taught at all? Doubtless the same kinds of questions arise for teachers of other subjects – mathematics, economics, music, art, religion. There are many reasons why the study of history and philosophy of science should be part of preservice and in-service science teacher education programs. Increasingly, school science courses address historical, philosophical, ethical and cultural issues occasioned by science. Teachers of such curricula obviously need knowledge of HPS. Without such knowledge they either present truncated and partial versions of the curricula, or they repeat shallow academic hearsay about the topics mentioned. Either way their students are done a disservice. But even where curricula do not include such 'nature of science' sections, HPS can contribute to more interesting and critical teaching of the curricular content. Beyond these 'practical' arguments for HPS in teacher education, there are compelling 'professional' arguments. A teacher ought to know more than just what he or she teaches. As an educator, they need to know something about the body of knowledge they are teaching, something about how this knowledge has come about, how its claims are justified, what its limitations are and, importantly, what the strengths and contributions of science have been to the betterment of human understanding and life. Teachers should have an appreciation of, and value, the tradition of inquiry into which they are initiating students. HPS fosters this.

11:00-12:00 Session 12F: B4 Explanation and understanding 1
Priyedarshi Jetli (University of Mumbai, India)

Fabio Sterpetti (Sapienza University of Rome, Italy)
Non-Causal Explanations of Natural Phenomena and Naturalism
ABSTRACT. The aim of this paper is to assess whether a counterfactual account of mathematical explanations of natural phenomena (MENP) (Baker 2009) is compatible with a naturalist stance. Indeed, nowadays many philosophers claim that non-causal explanations of natural phenomena are ubiquitous in science and try to provide a unified account of both causal and non-causal scientific explanations (Reutlinger, Saatsi 2018). Among the different kinds of non-causal explanations of natural phenomena, MENP are regarded as paradigmatic examples of non-causal scientific explanations (Lange 2013). According to many philosophers, among the unified accounts of scientific explanations that have been proposed so far, the most promising ones are those that try to extend the counterfactual theory of scientific explanations to cover non-causal scientific explanations (Reutlinger 2018). We thus focus on Baron, Colyvan and Ripley (2017) (BCR), since it is one of the most well-developed attempts to provide an account of MENP that is based on a counterfactual theory of scientific explanations. More precisely, we examine the BCR counterfactual account of why the shape of honeycomb cells is hexagonal. Such an account rests on the idea that through a counterfactual about mathematics, one can illuminate the reason why the shape of the cells cannot but meet an optimality requirement.
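For orientation, the optimality requirement at issue can be made concrete with standard geometry (my gloss, not taken from BCR or the abstract): among the regular polygons that tile the plane, the hexagon encloses a fixed area with the least perimeter, and Hales' honeycomb theorem extends the claim to arbitrary partitions of the plane into equal-area cells.

```latex
% Perimeter P(n) of a regular n-gon enclosing unit area; only n = 3, 4, 6
% yield regular tilings of the plane, and the hexagon needs the least wall
% length per unit of enclosed area.
\[
  P(n) = 2\sqrt{\,n \tan\tfrac{\pi}{n}\,}, \qquad
  P(3) \approx 4.559, \quad P(4) = 4, \quad P(6) \approx 3.722 .
\]
```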
We firstly analyse whether the BCR account is an adequate explanation of the cells' shape, and then we assess whether such an account would be acceptable to those who wish to adopt a naturalist stance. To do that, we specify what minimal requirements a stance has to meet in order to be defined as naturalist. We show that the BCR account of the shape of honeycomb cells is unsatisfactory, because it is focused on the bidimensional shape of the cells, while actual cells are tridimensional, and the tridimensional shape of the cells does not meet any optimality requirement (Räz 2013). We also show that it might be in any case very difficult to make the BCR account compatible with a naturalist stance, because of its metaphysical assumptions on how mathematics might constrain the physical domain. We claim that such a kind of "explanations by constraint" (Bertrand 2018; Lange 2013) is incompatible with a naturalist stance, because there is no naturalist account of how such a constraint might obtain.

Baker A. 2009. Mathematical Explanation in Science. British Journal for the Philosophy of Science, 60: 611–633.
Baron S., Colyvan M., Ripley D. 2017. How Mathematics Can Make a Difference. Philosophers' Imprint, 17: 1–19.
Bertrand M. 2018. Metaphysical Explanation by Constraint. Erkenntnis, DOI: 10.1007/s10670-018-0009-5.
Lange M. 2013. What Makes a Scientific Explanation Distinctively Mathematical? British Journal for the Philosophy of Science, 64: 485–511.
Räz T. 2013. On the Application of the Honeycomb Conjecture to the Bee's Honeycomb. Philosophia Mathematica, 21: 351–360.
Reutlinger A. 2018. Extending the Counterfactual Theory of Explanation. In A. Reutlinger, J. Saatsi (eds.), Explanation beyond Causation. Oxford: Oxford University Press: 74–95.
Reutlinger A., Saatsi J. (eds.) 2018. Explanation beyond Causation. Oxford: Oxford University Press.

Andrei Marasoiu (University of Virginia, United States)
The truth in understanding
ABSTRACT. Elgin has argued that scientific understanding is, in general, non-factive because it often partly consists in idealizations ("felicitous falsehoods"). In contrast, Strevens argues that idealizations can be eliminated from models by which we understand phenomena of interest, and hence that understanding is "correct," or quasi-factive. In contrast to both, I argue that the factivity debate cannot be settled, as a matter of principle. The factivity debate concerns whether felicitous falsehoods can ever constitute our understanding. Elgin (2004, pp. 113-114) cites "the laws, models, idealizations, and approximations which are... constitutive of the understanding that science delivers." Yet, as Strevens notes, the evidence Elgin adduces for non-factivity is consistent with idealizations falling short of constituting understanding. In contrast, for Strevens (2013, p. 505), to understand why something is the case is to "grasp a correct explanation" of it. For Strevens, explanation is model-based, hence so is the understanding that explanation provides. The role of idealizations is heuristic: to provide simplified models that preserve factors that causally and counterfactually make a difference to the phenomena theorized. Strevens (2013, p. 512) distinguishes the explanatory and literal contents of idealized models. The literal content of the model includes idealizations and their consequences. We obtain its explanatory content by devising a translation manual that eliminates idealizing assumptions and replaces them by conditional statements that are actually true.
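The literal/explanatory distinction just summarised can be glossed with a textbook-style illustration (my example, drawn neither from the abstract nor from Strevens' own text):

```latex
% Literal content of an idealized gas model (strictly false as stated):
%   the molecules exert no forces on one another, and so
\[
  PV = nRT .
\]
% Explanatory content after the "translation manual" (a hedged conditional
% that is actually true): to the extent that intermolecular forces make no
% difference to the behaviour of the gas,
\[
  PV \approx nRT .
\]
```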
Understanding is correct (quasi-factive) to the extent that the explanatory content of the model by which we understand is accurate. I now move to my own contribution. In appraising the debate about whether understanding is factive, we should differentiate between our conceptions – the stuff of thought – and the cultural artifacts we use as props for thinking: our models and theories. When many alternative models of the same phenomena are available, some models are more "cognitively salient" than others (Ylikoski and Kourikoski 2010). Subjectively, they come to mind more easily; objectively, their easier access is due to their greater explanatory power. With the theory/ mind difference in view, I distinguish two questions: (i) whether idealizations are constitutive to the models scientists use; and (ii) whether idealizations are constitutive of the cognitive representations by which scientists understand. If there's nothing more we can say about the cognitive aspects of understanding, then we lack a procedure for finding which parts of a model are internalized as cognitive representations. This matters for the factivity of understanding: we have no way of telling whether idealizations (be they in-principle eliminable or not) are in fact cognitively represented by scientists conceiving of the phenomena thus idealized. That is, we have no basis to settle the issue of whether understanding is quasi-factive or non-factive. Elgin, C.Z. (2004) True Enough. Philosophical Issues 14, pp. 113-131. Strevens, M. (2013) No understanding without explanation. Studies in History and Philosophy of Science 44, pp. 510-515. Ylikoski, P., & Kourikoski, J. (2010) Dissecting explanatory power. Philosophical Studies 148, pp. 201-219. 11:00-12:30 Session 12G: B1/C3 Understanding models in science Chia-Hua Lin (University of South Carolina, Taiwan) Thomas Durlacher (University of Luxembourg, Luxembourg) Idealizations and the decomposability of models in science ABSTRACT. Idealizations are a central part of many scientific models. Even if a model represents its target system in an accurate way, the model will not replicate the whole target system but will only represent relevant features and ignore features that are not significant in a specific explanatory context. Conversely, in some cases not all features of a model will have a representative function. One common strategy to account for these forms of idealizations is to argue that idealizations can have a positive function if they do not distort the difference-makers of the target system. (Strevens 2009, 2017) This view about the role of idealized models has recently been challenged by Collin Rice. (Rice 2017, 2018) He claims that the strategy to account for idealizations in terms of a division between representative parts of a model and the parts which can be ignored fails for several reasons. According to him idealizations are essential for the mathematical framework in which models can be constructed. This idealized mathematical framework is, in turn, a necessary precondition to create and understand the model and undermines our ability to divide distorted from representative features of a model. His second reason for doubting the adequacy of the strategy to divide between relevant and not relevant model parts is the fact that many models distort difference-making features of the target system. Alternatively, he suggests a position he calls the holistic distortion view of idealized models. 
This position includes the commitment that highly idealized models allow the scientist to discover counterfactual dependencies without representing the entities, processes or difference-makers of their target systems. In my presentation I am going to argue against this position and claim that to explain and to show causal and non-causal counterfactual dependence relations with the help of a model is only possible if the model accurately represents the difference-makers within a target system. I will do this by explicating the notion of a mathematical framework of a model which Rice's argument heavily depends upon and reevaluate some of his examples of idealized models used in scientific practice like the Hardy–Weinberg equilibrium model in biology and the use of the thermodynamic limit in physics. Literature Rice, Collin. "Idealized Models, Holistic Distortions, and Universality." Synthese 195, no. 6 (June 2018): 2795–2819. https://doi.org/10.1007/s11229-017-1357-4. Rice, Collin. "Models Don't Decompose That Way: A Holistic View of Idealized Models." The British Journal for the Philosophy of Science, August 30, 2017, axx045–axx045. https://doi.org/10.1093/bjps/axx045. Strevens, Michael. "How Idealizations Provide Understanding." In Explaining Understanding: New Essays in Epistemology and the Philosophy of Science, edited by Stephen R. Grimm, Christoph Baumberger, and Sabine Ammon, 37–49. New York: Routledge, 2017. Strevens, Michael. Depth: An Account of Scientific Explanation. Cambridge, MA: Harvard University Press, 2008. Walter Veit (University of Bayreuth, Germany) Who is afraid of Model Pluralism? ABSTRACT. Abstract: In this paper, I diagnose that evolutionary game theory models are used in multiple diverse ways and for different purposes, either directly or indirectly contributing toward the generation of acceptable scientific explanations. The philosophical literature on modelling, rather than recognizing this diversity, attempts to fit all of these into a single narrow account of modelling, often only focusing on the analysis of a particular model. Recently, Cailin O'Connor and James Owen Weatherall (2016) argued that a lack of family resemblance between modelling practices makes an understanding of the term 'model' impossible, suggesting that "any successful analysis [of models] must focus on sets of models and modelling practice that hang together in ways relevant for the analysis at hand" (p. 11). Rather than providing an essentialist account of what scientific modelling practice is or should be, covering all the different ways scientists use the word 'model', I settle for something far less ambitious: a philosophical analysis of how models can explain real-world phenomena that is narrow in that it focuses on Evolutionary Game Theory (EGT) and broad in its analysis of the pluralistic ways EGT models can contribute to explanations. Overly ambitious accounts have attempted to provide a philosophical account of scientific modelling that tend to be too narrow in their analysis of singular models or small set of models and too broad in their goal to generalize their conclusions over the whole set of scientific models and modelling practices – a feat that may, in fact, be impossible to achieve and resemble Icarus who flew too close to the sun. Nevertheless, many of the conclusions in my analysis will be extendable to other sets of models, especially in biology and economics, but doubt must be cast that any essence of models can be discovered. References O'Connor, C., and J. O. 
Weatherall. 2016. 'Black Holes, Black-Scholes, and Prairie Voles: An Essay Review of Simulation and Similarity, by Michael Weisberg', Philosophy of Science, 83, pp. 613–26.

11:00-12:30 Session 12H: C1 Mathematical language
Kati Kish Bar-On (Tel Aviv University, Israel)

Luc Pellissier (Irif, Université Paris Diderot, France)
Juan-Luis Gastaldi (SPHERE, CNRS & Université Paris Diderot, France)
Duality and interaction: a common dynamics behind logic and natural language
PRESENTER: Luc Pellissier
ABSTRACT. The fact that some objects interact well together – say, a function with an argument in its domain of definition, whose interaction produces a result – defines a notion of duality that has been central in last-century mathematics. Not only does it provide a general framework for considering at the same time objects of interest and tests (or measures) on them, but it also provides a way to both enrich and restrict the objects considered, by studying a relaxed or strengthened notion of interaction. A reconstruction of logic around the notion of interaction has been underway since the pioneering works of Krivine and Girard, where (para-)proofs are seen as interacting by exchanging logical arguments, the interaction stopping successfully only if one of the two gives up as it recognises that it lacks arguments. All the proofs interacting in a certain way – for instance, interacting correctly with the same proof – can then be seen as embodying a certain formula; and the possible operations on proofs translate into operations on formulæ. In this work, we intend to show that, somewhat surprisingly, the same approach in terms of duality and interaction succeeds in grasping structural aspects of natural language as purely emergent properties. Starting from the unsupervised segmentation of an unannotated linguistic corpus, we observe that co-occurrence of linguistic segments at any level (character, word, phrase) can be considered as a successful interaction, defining a notion of duality between terms. We then proceed to represent those terms by the distribution of their duals within the corpus and define the type of the former through a relation of bi-duality with respect to all the other terms of the corpus. The notion of type can then be refined by considering the interaction of a type with other types, thus creating the starting point of a variant of Lambek calculus. This approach has several precursors, for instance Hjelmslev's glossematic algebra, and more generally, the structuralist theory of natural language (Saussure, Harris). The formal version we propose in this work reveals an original relation between those perspectives and one of the most promising trends in contemporary logic. We also include an implementation of the described algorithm for the analysis of natural language. Accordingly, our approach appears as a way of analyzing many efficient mechanized natural language processing methods. More generally, this approach opens new perspectives to reassess the relation between logic and natural language.
Bibliography.
Gastaldi, Juan-Luis. "Why can computers understand natural language?" Under review in Philosophy and Technology.
Girard, Jean-Yves. "Locus solum: from the rules of logic to the logic of rules". In: Mathematical Structures in Computer Science 11.3 (2001), pp. 301–506.
Krivine, Jean-Louis. "Realizability in classical logic".
In: Panoramas et synthèses 27 (2009), pp. 197–229.
Lambek, Joachim. "The Mathematics of Sentence Structure". In: The American Mathematical Monthly 65.3 (Mar. 1958), pp. 154–170.

Valeria Giardino (Archives Henri-Poincaré - Philosophie et Recherches sur les Sciences et les Technologies, France)
The practice of proving a theorem: from conversations to demonstrations
ABSTRACT. In this talk, I will focus on mathematical proofs "in practice" and introduce as an illustration a proof of the equivalence of two presentations of the Poincaré homology sphere, which is taken from a popular graduate textbook (Rolfsen, 1976) and discussed in De Toffoli and Giardino (2015). This proof is interesting because it is given by showing a sequence of pictures and explaining in the text the actions that ought to be performed on them to move from one picture to the other and reach the conclusion. By relying on this example, I will propose to take into account Stone and Stojnic's (2015) view of demonstrations as practical actions to communicate precise ideas; my objective is to evaluate whether such a suggestion can be of help to define what the mathematical "practice" of giving a proof is. Stone and Stojnic consider as a case study an "origami proof" of the Pythagorean theorem and base their analysis on certain aspects of the philosophy of language of David Lewis. According to Lewis (1979), communication naturally involves coordination; in principle, any action could be a signal of any meaning, as long as the agent and her audience expect the signal to be used that way; a conversation happens only when a coordination problem is solved. Formal reasoning is a particular form of coordination that happens on a conversational scoreboard, that is, an abstract record of the symbolic information that interlocutors need to track in conversation. Stone and Stojnic conclude that the role of practical action in a conversation is explained in terms of coherence relations: meaning depends on a special sort of knowledge—convention—that serves to associate practical actions with precise contributions to conversation; interpretive reasoning requires us to integrate this conventional knowledge—across modalities—to come up with an overarching consistent pattern of contributions to conversation. On this basis, I will discuss the pros of considering proofs as conversations: if this is the case, then non-linguistic representations like diagrams have content and mathematics is a distributed cognitive activity, since transformations in the world can be meaningful. However, some general problems arise in Lewis' framework when applied to mathematical proof: (i) does a convention of truthfulness and trust really exist?; (ii) how can we coordinate and update our conversational scoreboard when we read a written demonstration? The interest of the talk will be to investigate the possibility of a link between the philosophy of mathematical practice and the philosophy of language.

De Toffoli, S. and Giardino, V. (2015). An Inquiry into the Practice of Proving in Low-Dimensional Topology. Boston Studies in the Philosophy and History of Science, 308, 315-336.
Lewis, D. K. (1979). Scorekeeping in a language game. Journal of Philosophical Logic, 8, 339–359.
Rolfsen, D. (1976). Knots and links. Berkeley: Publish or Perish.
Stone, M. and Stojnic, U. (2015). Meaning and demonstration. Review of Philosophy and Psychology (special issue on pictorial and diagrammatic representation), 6(1), 69-97.
Onyu Mikami (Tokyo Metropolitan University, Japan)
An Attempt at Extending the Scope of Meaningfulness in Dummett's Theory of Meaning.
ABSTRACT. Michael Dummett proposed a radically new approach to the problem of how the philosophical foundations of a meaning theory of a natural language are to be established. His central point is threefold. First, a theory of meaning should give an account of the knowledge (i.e., understanding) that competent speakers of the language have of it. Second, this knowledge consists in certain practical abilities. If someone counts as a competent speaker, it is because, by using the language, she/he is able to do anything that all and only those who understand the language can do. Therefore, what a theory of meaning should account for is those practical abilities that a competent speaker has. Then, what do those practical abilities consist in? This question leads us to Dummett's third point. Ordinarily, one is entitled to possess some ability by exhibiting (making manifest the possession of) the ability: i.e., by having done, often doing, or being likely to do something that can be done by virtue of the ability. Admittedly, there is an intricate problem of what one should do to be entitled to possess the ability. Let us set that problem aside. Dummett tackled another (related but more profound) problem: in almost all natural languages and formalized languages, there are various sentences that are, while well-formed and hence associated with certain precise conditions for them to be true, definitely beyond the scope of possible exhibition of those abilities that (if there were any at all) the understanding of the sentences would consist in. He objected to the common opinion that the meaning of a sentence could be equated with its truth-conditions and instead claimed that the meaning should be accounted for as consisting in its (constructive) provability condition; that is, according to Dummett, someone knows the meaning of a sentence just in case he knows what has to be done (what construction has to be realized) to justify the sentence (i.e., to establish constructively that the sentence holds). I basically agree with these lines of Dummett's thought, although I should point out that his view on the scope of meaningfulness (intelligibility) of sentences is too restrictive. Dummett proposes that in giving provability conditions of a sentence we should adopt the intuitionistic meaning conditions of the logical connectives. The reason is that the intuitionistic connectives are conservative with respect to constructivity: if a sentence is derived intuitionistically from some assumptions, then the sentence is constructively justifiable provided those assumptions are. However, I think we can point out that there are some sentences which, while beyond this criterion, can be established by virtue of an agent's behavior that conclusively justifies them. In that case the agent's behavior could be said to make her understanding of the sentence manifest. A typical example of such sentences is, one might say, a certain kind of infinitary disjunction treated prominently by proponents of geometric logic such as S. Vickers. I will investigate the matter more closely in the talk.

11:00-12:00 Session 12I: B1 Dynamics of science
Holger Andreas (The University of British Columbia, Canada)

Hernán Bobadilla (University of Vienna, Austria)
Two types of unrealistic models: programmatic and prospective
ABSTRACT.
The purpose of this paper is to introduce and assess a distinction among unrealistic models based on the kind of idealizations they resort to. On the one hand, programmatic models resort to idealizations that align with the core commitments of a research program. On the other hand, prospective models resort to idealizations that challenge those core commitments. Importantly, unrealistic models are not intrinsically programmatic or prospective. Rather, their status is dependent on an interpretation of the idealizations. Idealizations are features of models that make them different from - typically simpler than - the target phenomena they represent. However, these features become idealizations only after two stages of interpretation performed by the user of the model. First, there is a non-referential interpretation of the model's vehicle. In this stage, the user decides which features instantiated by the vehicle are those that the model is going to exemplify. These features are conceptualised in accordance with the contingent commitments of the user. These features are the bearers of the idealizations-to-be. Second, there is a referential interpretation of the features exemplified by the model. In this stage, the user assigns features exemplified by the model to features of the target phenomenon. In the assignment, exemplified features of the model are evaluated as more or less idealized representations of their denotata. Such evaluation is decided by standards and other epistemic commitments held by the user of the model. Idealizations, as the product of a user's interpretation, can align with or challenge core commitments of research programs in both stages of interpretation. First, non-referential interpretations can conflict with accepted selections of features that a model exemplifies or with the accepted conceptualizations of such features. Particularly salient are explanatory commitments, which can decide which conceptualizations are legitimate within a research program. Second, referential interpretations can conflict with accepted standards for assignment. Explanatory commitments are also relevant in deciding these standards. I go on to argue that programmatic and prospective models typically aim for distinct epistemic achievements. On the one hand, programmatic models aim for how-plausibly explanations, while prospective models aim for how-possibly explanations. However, I contend that how-plausibly and how-possibly explanations should not be regarded as explanations in the traditional sense, but rather as distinct forms of understanding. Thus, programmatic and prospective models share a common, although nuanced, aim: the advancement of understanding. I test this account in a model case study (Olami, Feder, Christensen, 1992). This model is a cellular automaton computer model that simulates aspects of the behaviour of earthquakes. I show how different explanatory commitments, namely mechanistic and mathematical explanatory commitments, align with and challenge core commitments of distinct research programs. I also explore how these commitments lead to distinct understandings of the target phenomenon.
Olami, Z., Feder, H.J.S. & Christensen, K. 1992. Self-Organized Criticality in a Continuous, Nonconservative Cellular Automaton Modeling Earthquakes.

Emma Ruttkamp-Bloem (University of Pretoria, South Africa)
A Dynamic Neo-Realism as an Active Epistemology for Science
ABSTRACT. In this talk I defend a dynamic epistemic neo-realism (DEN).
One important difference between DEN and the traditional no-miracles account of realism is that the determining factors for epistemic commitment to science (i.e. belief in empirical and theoretical knowledge) lie in the active nature of the processes of science as opposed to NMR's focus on the logical properties of science's products. I will focus on two factors of DEN in my discussion. First, I propose an explosion of the dichotomy between realism and anti-realism ruling the current realist debate. (See critiques of this dichotomy in various forms in e.g. McMulllin (1984), Stein (1989), and Kukla (1994).) The explosion I suggest results in a continuum of (neo-) realist stances towards the epistemic content of theories which rests on two motivations: (1) Depicting epistemic commitment to science in terms of a dichotomy between anti-realism and realism is inadequate, as, given the trial-and-error nature of science, most of science happens on a continuum between these stances. (2) Epistemic commitment to science need not (primarily) depend on the (metaphysical) truth of science's ontological claims, but is better determined on pluralist, functional, and pragmatic grounds. This position is not the same as Arthur Fine's (1984) natural ontological attitude. I advocate a continuum of epistemic ('neo-realist') stances as opposed to one 'core' one. Rather than imploding the realist/anti-realist dichotomy, I differentiate and refine it into a continuum of epistemic commitment. Secondly – and the main focus of this talk – I offer a critical reconsideration of the three traditional - metaphysical, semantic and epistemic - tenets of traditional scientific realism from the perspective of DEN. Specifically the traditional versions of the semantic and epistemic tenets have to be re-interpreted in the light of the suggested continuum of neo-realist stances. Reference has to be re-conceptualised as an epistemic tracking device and not only (or at all, perhaps) as an indicator of ontological existence, while the concept of truth has to be 'functionalised' in Peirce's (1955) sense of truth-as-method with some adjustments to a traditional convergent view of science. In conclusion, the account of neo-realism defended here is a fallibilist epistemology that is pragmatist in its deployment of truth and reference and pluralist in its method of evaluation. In its naturalised tracing of science, it explains the progress of science as the result of intensive time-and-context-indexed science-world interaction. Bibliography Fine, Arthur. 1984. The Natural Ontological Attitude. In Scientific Realism, J. Leplin (ed.), 261–277. Berkeley: University of California Press. Kukla, André. 1994. Scientific Realism, Scientific Practice, and the Natural Ontological Attitude. British Journal for the Philosophy of Science 45: 955–975. McMullin, Ernan. 1984. A Case for Scientific Realism. In Scientific Realism, J. Leplin (ed.), 8–40. Berkeley: University of California Press. Peirce, Charles, S. 1955. The Scientific Attitude and Fallibilism. In Philosophical Writings of Peirce, J. Buchler (ed.), 42–59. New York: Dover Publications. Stein, Howard. 1989. Yes, but … - Some Skeptical Remarks on Realism and Anti-Realism. Dialectica 43(1/2): 47–65. 11:00-12:00 Session 12J: C4 Epistemology and reasoning in biomedical practice 2 Brice Bantegnie (Czech Academy of Sciences, Czechia) Adrian Erasmus (University of Cambridge, UK) Expected Utility, Inductive Risk, and the Consequences of P-Hacking ABSTRACT. 
P-hacking is the manipulation of research methods and data to acquire statistically significant results. It includes the direct manipulation of data and/or opportunistic analytic tactics. Direct manipulation involves experimental strategies such as dropping participants whose responses to drugs would weaken associations; redefining trial parameters to strengthen associations; or selectively reporting on experimental results to obtain strong correlations. Opportunistic analytic tactics include performing multiple analyses on a set of data or performing multiple subgroup analyses and combining results until statistical significance is achieved. P-hacking is typically held to be epistemically questionable, and thus practically harmful. This view, which I refer to as the prevalent position, typically stresses that since p-hacking increases false-positive report rates, its regular practice, particularly in psychology and medicine, could lead to policies and recommendations based on false findings. My first goal in this paper is to formulate the prevalent position using expected utility theory. I express a hypothetical case of p-hacking in medical research as a decision problem, and appeal to existing philosophical work on false-positive report rates as well as general intuitions regarding the value of true-positive results versus false-positive ones, to illustrate the precise conditions under which p-hacking is considered practically harmful. In doing so, I show that the prevalent position is plausible if and only if (a) p-hacking increases the chance that an acquired positive result is false and (b) a true-positive result is more practically valuable than a false-positive one. In contrast to the prevalent position, some claim that experimental methods which constitute p-hacking do play a legitimate role in medical research methodology. For example, analytic methods which amount to p-hacking are a staple of exploratory research and have sometimes led to important scientific discoveries in standard hypothesis testing. My second aim is to bring the prevalent position into question. I argue that although it is usually the case that refraining from p-hacking entails more desirable practical consequences, there are conditions under which p-hacking is not as practically perilous as we might think. I use the formal resources from expected utility theory from the first part of the paper, and lessons learned from the arguments surrounding inductive risk to articulate the conditions under which this is the case. More specifically, I argue that there are hypotheses for which p-hacking is not as practically harmful as we might think. Renata Arruda (Universidade Federal de Goiás, Brazil) Multicausality and Manipulation in Medicine ABSTRACT. The objectivity of causality in its observable aspects is generally characterized by the reference to the concrete alteration of the effects due to the alteration in a cause. One of the ways of making a causal relationship takes place is precisely by human intervention in the factor that is considered the cause. This type of deliberate intervention, which an agent can produce with manipulable factors, is absolutely intrinsic to medicine. My interest here is to present how medicine, as a practical science, articulates the multiple factors and phenomena that act on an organism in order to understand cause and effect relationships. 
To that end, I associate J. L. Mackie's and Kenneth Rothman's theories about the necessary and sufficient conditions for the cause-effect relation with the theory of manipulability. This theory, in general, identifies the causal relation as that in which some kind of intervention in the cause gives rise to the effect. Medical science is distinguished exactly by the practices it performs, without which it would lose its own meaning. In this way, medicine is one of the sciences in which the relation between cause and effect can be evaluated objectively. Despite these observable aspects, a problem arises. Faced with the complexity of an organism, where several factors act together to produce an effect, how are we to delimit the cause on which to intervene? The proper functioning of the organism is not based on the functioning of isolated causes. If, on the one hand, the analysis of causality from a singularist perspective in sciences like medicine is impracticable, on the other hand, this analysis becomes more complicated if we add the fact that some physiological mechanisms are absolutely unknown. That is to say, in treating the organism, medicine depends fundamentally on intervention in cause-effect relationships, in a complex system with some mechanisms that are not absolutely clear. Notwithstanding all these difficulties, medicine is recognized for succeeding in the various activities that concern it. In this context, both Mackie's and Rothman's conceptions of the cause-effect relationship help us to understand the role of intervention in medicine and its consequences for the general conception of causality.
References
Cartwright, Nancy. 2007. Hunting Causes and Using Them: Approaches in Philosophy and Economics. Cambridge UP, Cambridge.
Mackie, J. L. 1965. Causes and Conditions. American Philosophical Quarterly 2 (4), pp. 245–264.
Rothman, Kenneth; Greenland, Sander; Lash, Timothy L.; Poole, Charles. 2008. Causation and causal inference. Modern Epidemiology, 3rd edition. Wolters Kluwer Health/Lippincott Williams & Wilkins, Philadelphia.
Woodward, James. 2013. Causation and Manipulability. The Stanford Encyclopedia of Philosophy (Winter Edition), Edward N. Zalta (ed.), URL = .

Daniel Kostic (University Bordeaux Montaigne; Sciences, Philosophie, Humanité (SPH), University of Bordeaux, Bordeaux, France)
Stefan Petkov (Beijing Normal University School of Philosophy, China)
Scientific explanations and partial understanding
ABSTRACT. Notions such as partial or approximate truth are often invoked by proponents of the factual accounts of understanding in order to address the problem of how flawed theories can provide understanding of their factual domain. A common problem of such arguments is that they merely pay lip service to such theories of truth instead of exploring them more fully. This is a perplexing fact, because a central feature of factual accounts is the so-called veridical condition. The veridical condition itself appears as a result of a broadly inferential approach to understanding according to which only factually true claims can figure within an explanans capable of generating understanding of its explanandum. Here I aim at amending this issue by exploring Da Costa's notion of partial truth and linking it with a factual analysis of understanding. As a result of pursuing such an account, several interesting features of explanatory arguments and understanding will emerge. Firstly, partial truth naturally links with the intuition that understanding comes in degrees.
This appears straightforwardly from the fact that an explanation that contains partially true propositions can only provide partial explanatory information for its explanandum. Secondly, the distinction between theoretic and observational terms on which the notion of partial truth relies permits us to be clear on the problem of when an explanation will provide partial understanding. This can be the case only if the explanans has premises which contain theoretic concepts. Only such premises that relate theoretic and observational terms can be taken as partially true (premises that contain descriptive terms only can be simply assessed as true or false). As a result of such partiality, the information transfer from premises to conclusion can be only partially factually accurate, which subsequently leads to partial understanding. The resulting account of understanding then resolves the core problem that modest factual accounts face—namely, that if a partially factual account of understanding is accepted, then this account should also show by what means a partially true proposition figures centrally in an explanatory argument and explain how flawed theories can make a positive difference to understanding. I will further support my case by a critical examination of predator-prey theory and the explanatory inferences it generates for two possible population states – the paradox of enrichment and the paradox of the pesticide. The paradox of the pesticide is the outcome of predator-prey dynamics according to which the introduction of a general pesticide can lead to an increase of the pest species. The paradox of enrichment is an outcome of predator-prey dynamics according to which the increase of resources for the prey species can lead to destabilization. Both of these outcomes depend on an idealized conceptualization of the functional response within predator-prey models. This idealization can be assessed as introducing a theoretic term. The explanatory inferences using such a notion of functional response can then be judged only as approximately sound (paradox of the pesticide) or unsound (paradox of enrichment) and as providing only partial understanding of predator-prey dynamics.

Facticity of understanding in non-causal explanations
ABSTRACT. In the literature on scientific explanation, understanding has been seen either as a kind of knowledge (Strevens 2008, 2013; Khalifa 2017; De Regt 2015) or as a mental state that is epistemically superfluous (Trout 2008). If understanding is a species of knowledge, an important question arises, namely, what makes the knowledge from understanding true? In causal explanations, the facticity of understanding is conceived in terms of knowing the true causal relations (Strevens 2008, 2013). However, the issue about facticity is even more conspicuous in non-causal explanations. How are we to conceive of it in non-causal explanations if they don't appeal to causal, microphysical or, in general, ontic details of the target system? I argue that there are two ways to conceive the facticity of understanding in a particular type of non-causal explanation, i.e. the topological one. It is through understanding the "vertical" and "horizontal" counterfactual dependency relations that these explanations describe. By "vertical", I mean a counterfactual dependency relation which describes dependency between variables at different levels or orders in the mathematical hierarchy. These are explanatory in virtue of constraining a range of variables in a counter-possible sense, i.e.
had the constraining theorem been false, it wouldn't have constrained the range of object-level variables. In this sense, the fact that a meta-variable or a higher-order mathematical property holds entails that a mathematical property P obtains in the same class of variables or operations (Huneman 2017: 24). An example of this approach would be an explanation of the stability of an ecological community. Species and the predation relations between them can be modeled as a graph which can have the global network property of being a "small world". The fact that the small-world property holds for that system constrains various kinds of general properties, e.g. its stability or robustness (Huneman 2017: 29). On the other hand, by "horizontal" I mean the counterfactual dependency relations that are at the same level or order in the mathematical hierarchy. Examples of "horizontal" counterfactual dependency relations are the ones that hold between topological variables, such as a node's weighted degree or the network communicability measure, and the variables that describe the system's dynamics as a state space. Factivity in vertical cases is easy to understand: it's basically a proof or an argument. It can be laxer or stricter. A laxer version is in terms of the soundness and validity of the argument; a stricter sense is in terms of grounding (Poggiolesi 2013). Factivity in horizontal cases is a bit more difficult to pin down. One way would be through a possible-worlds analysis of the counterfactual dependency relations that the explanations describe. In this sense, the facticity has a stronger form which is germane to the notion of necessity. The remaining question is whether this account can be generalized beyond topological explanation. I certainly think that at the very least it should be generalizable to all horizontal varieties of non-causal explanations.

11:00-12:00 Session 12L: C7 History and philosophy of the humanities 2
Pablo Vera Vega (University of La Laguna, Spain)

Evelina Barbashina (Novosibirsk State Medical University, Russia)
Schematism of historical reality
ABSTRACT. The philosophy of history and the methodology of historical knowledge are traditional themes within the framework of continental philosophy. A person, reasoning about history, seeks to clarify his position in history, to define his presence in it. History is not only a reality in which humanity finds itself, understands and interprets itself, but also a professional sphere of acquiring and transmitting knowledge. In the 20th century, a kind of «emancipation» of concrete historical knowledge from the conceptual complexes of the "classical" philosophy of history and from metaphysical grounds took place. In the 20th century there was a rejection of the main ideas of modern philosophy regarding the philosophy of history: the idea of a rational world order, the idea of the progressive development of mankind, the idea of a transcendental power responsible for what is happening in history, etc. Anthropologists, sociologists, historians and ethnographers played an important role in the process of «emancipation» of concrete historical knowledge. However, many questions did not receive any answer: «What is history?», «What is the historical meaning (and is there any at all)?», «What are the problems of interpretation of history and how can they be overcome?», «What are the general and special features of different types of history?».
One of the ways of understanding history today is to coordinate the schematism of historical knowledge with the structure of historical being. According to the type of co-presence described in event communication, three schematic dimensions of historical reality are possible: spatial, situational and temporal. The spatial schematic is presented in M. Foucault's «Words and Things». According to it, the historical is found there, and only there, where the spatial structure and the description of the typical mode of communication of the elements of this structure are deployed. The situational schematic of the historical takes place where a specific (moral, political, legal) nature of the connection between historical events is realized. The most important element of the situational schematic is the generation that has received an education and that has left behind the fruits and results of its labor. What is attractive in history described in this way is the representation of historical reality as a process: historical formations, historical types, historical characters. The temporal schematic of the historical, exemplified by M. Heidegger's phenomenological construction in «Being and Time», is found where the temporal measure of the existence of historical being is explicated, that is, where historicity is understood as the temporality of the existence of the real and as the temporality of historical understanding itself. Konstantin Skripnik (Southern Federal University, Russia) Ekaterina Shashlova (Southern Federal University, Russia) Philosophy (and methodology) of the Humanities: towards constructing a glossary PRESENTER: Konstantin Skripnik ABSTRACT. It is hard to challenge the point of view according to which our century is a century of the Humanities. Indisputable evidence in favor of this point is the list of thematic sections of the 14th, 15th and our 16th Congresses of Logic, Methodology and Philosophy of Science (and Technology). The list of the 14th Congress did not include any section with the term "Humanities" in its title; the programme of the 15th Congress included a section devoted to the philosophy of the Humanities, as does the present Congress, although this section has – if it is possible to say so – a "palliative" title, "Philosophy of the Humanities and the Social Sciences". And among the topic areas of the 16th Congress one can also see "Philosophical Traditions, Miscellaneous". There is now an intricate spectrum of different approaches to the philosophical and methodological problems of the Humanities, each of which is connected with its own "philosophy", ideology and visions. The fact is that the attempt to form a philosophy of the Humanities along the lines of the philosophy of science has definitely – and perhaps irrevocably – failed. It is time to scrutinize this spectrum with the aim of finding a certain sustainable set of terms and notions for creating a basis for a philosophy (and methodology) of the Humanities. We propose not a dictionary, nor an encyclopedia (in Umberto Eco's sense), but rather a glossary, each entry of which will, on the one hand, contain clear, straightforward definitions, practice and examples of use and, on the other hand, remain open – it may be supplemented and extended. The order of entries will not be alphabetical; it will rather be determined by the functional features of the terms and notions, by their relationships to each other. These relations can be historical, methodological, ontological, lexico-terminological, socially oriented, etc.
The terms and notions included in the glossary give us the opportunity to form a certain kind of frame or, better to say, a kind of net for further research. The net (frame) may be expanded by including new notions, terms, phrases and collocations; the frame may be deepened by forming new connections between "old" notions or between "old" and "new" notions and terms. For example, if we include the notion "text" in the glossary, this inclusion forces us to include such notions as "author", "reader", "language", "style", "(outer) world", "history", "value". We suppose that the initial list of basic notions must include the following set: representation, intention, sign and sign system, code, semiosis and retrograde semiosis (as a procedure of sense analysis), sense, meaning, dialogue, translation, text (and notions connected with text), interpretation and understanding. It is easy to see that these basic notions are used in different realms of the Humanities (semiotics, hermeneutics, significs, history of notions and history of ideas, theory of literature, philosophy, logic and linguistics); this fact underscores their basic character. 11:00-12:30 Session 12M: A2 Logical analysis of science and philosophy 1 Yaroslav Shramko (Kryvyi Rih State Pedagogical University, Ukraine) Timm Lampert (Humboldt University Berlin, Germany) Theory of Formalization: The Tractarian View ABSTRACT. Logical formalization is an established practice in philosophy as well as in mathematics. However, the rules of this practice are far from clear. Sainsbury (1991) and Epstein (1994) were among the first to specify criteria of formalization. Since Brun's detailed monograph (Brun 2004), criteria and theories of formalization have been discussed intensively (cf., e.g., most recently Peregrin and Svoboda 2017). No single theory of formalization has emerged from this discussion. Instead, it has become more and more clear that different theories of formalization involve different traditions, background theories, aims, basic conceptions and foundations of logic. Brun (2004) envisages a systematic and, ideally, automated procedure of formalizing ordinary language as the ultimate aim of logical formalization. Peregrin and Svoboda (2017) ground their theory in inferentialism. Like Brun, they try to combine logical expressivism with a modest normative account of logic through their theory of reflective equilibrium. In contrast, Epstein (1994) bases his theory of formalization on semantic and ontological foundations that are rather close to mathematical model theory. Sainsbury (1991) grounds the project of formalization within the philosophical tradition of identifying logical forms in terms of representing truth conditions. He identifies Davidson as the most elaborate advocate of this tradition. Davidson refers to Tarskian semantics and distinguishes logical formalization from semantic analysis. Sainsbury also assigns the Tractarian View of the early Wittgenstein to the project of identifying truth conditions of ordinary propositions by means of logical formalizations. In contrast to Davidson, however, Wittgenstein does not distinguish the project of formalization from a semantic analysis, and he does not rely on Tarskian semantics. Instead, Wittgenstein presumes a semantics according to which instances of first-order formulas represent the existence and non-existence of logically independent facts.
In my talk, I will show that the Tractarian view can be spelled out in terms of a theory of formalization that provides an alternative to Davidson's account of what it means to identify logical forms and truth conditions of ordinary propositions. In particular, I will argue that Wittgenstein, with his early ab-notation, envisaged an account of first-order logic that makes it possible to identify logical forms by ideal symbols that serve as identity criteria for single non-redundant conditions of truth and falsehood of formalized propositions. Instead of enumerating an infinite number of possible infinitely complex models and counter-models in first-order logic, ideal symbols provide a finite description of the structure of possibly infinitely complex conditions of truth and falsehood. I will define logical forms and criteria of adequate formalization within this framework. Furthermore, I will show how to solve (i) termination problems of the application of criteria of adequate formalization, (ii) the trivialization problem of adequate formalization, (iii) the problem of the uniqueness of logical form, and (iv) the problem of a mechanical and comprehensible verbalization of truth conditions. All in all, I will argue that a theory of formalization based on the Tractarian view provides a consistent and ambitious alternative that can be utilized for a systematic and partly algorithmic explanation of conditions of truth and falsehood of ordinary propositions expressible within first-order logic. References Brun, G.: Die richtige Formel. Philosophische Probleme der logischen Formalisierung, Ontos, Frankfurt A.M., 2004. Epstein, R.L.: Predicate Logic. The Semantic Foundations of Logic, Oxford University Press, Oxford, 1994. Peregrin, J. and Svoboda, V.: Reflective Equilibrium and the Principles of Logical Analysis, Routledge, New York, 2017. Sainsbury, M.: Logical Forms, 2nd edition, Blackwell, Oxford, 2001, 1st edition 1991. Samuel Elgin (University of California San Diego, United States) The Semantic Foundations of Philosophical Analysis ABSTRACT. The subject of this paper is a targeted reading of sentences of the form 'To be F is to be G,' which philosophers often use to express analyses, and which have occupied a central role in the discipline since its inception. Examples that naturally lend themselves to this reading include: 1. To be morally right is to maximize utility. 2. To be human is to be a rational animal. 3. To be water is to be the chemical compound H2O. 4. To be even is to be a natural number divisible by two without remainder. 5. To be a béchamel is to be a roux with milk. Sentences of this form have been employed since antiquity (as witnessed by 2). Throughout the ensuing history, proposed instances have been advanced and rejected for multitudinous reasons. On one understanding, this investigation thus has a long and rich history – perhaps as long and rich as any in philosophy. Nevertheless, explicit discussion of these sentences in their full generality is relatively recent. Recent advances in hyperintensional logic provide the necessary resources to analyze these sentences perspicuously – to provide an analysis of analysis. A bit loosely, I claim that these sentences are true just in case that which makes it the case that something is F also makes it the case that it is G, and vice versa. There is a great deal to say about what I mean by 'makes it the case that.' In some ways, this paper can be read as an explication of that phrase.
Rather than understanding it modally (along the lines of: 'To be F is to be G' is true just in case the fact that something is F necessitates that it is G, and vice versa), I employ truth-maker semantics: an approach that identifies the meanings of sentences with the finely-grained states of the world exactly responsible for their truth-values. This paper is structured as follows. I articulate the targeted reading of 'To be F is to be G' I address, before discussing developments in truth-maker semantics. I then provide the details of my account and demonstrate that it has the logical and modal features that it ought to. It is transitive, reflexive and symmetric, and has the resources to distinguish between the meanings of predicates with necessarily identical extensions (sentences of the form 'To be F is to be both F and G or not G' are typically false); further, if a sentence of the form 'To be F is to be G' is true then it is necessarily true, and necessary that all and only Fs are Gs. I integrate this account with the λ-calculus – the predominant method of formalizing logically complex predicates – and argue that analysis is preserved through β-conversion. I provide two methods for expanding this account to address analyses employing proper names, and conclude by defining an irreflexive and asymmetric notion of analysis in terms of the reflexive and symmetric notion. Zuzana Rybaříková (University of Hradec Králové, Czechia) Łukasiewicz's Concept of Anti-Psychologism ABSTRACT. Although Łukasiewicz was the first proponent of Husserl's anti-psychologism in the Lvov-Warsaw School, his later concept of anti-psychologism has some features that are incompatible with Husserl's concept. In his famous book, Husserl presented anti-psychologism as the view that the laws of logic are not laws of psychology and that consequently logic is not a part of psychology. The distinction between logic and psychology is based on the difference between axiomatic and empirical sciences. Logic is an axiomatic science and its laws are settled, whereas psychology is an empirical science and its laws derive from experience. The laws of logic are settled in an ideal world and as such are independent of experience and apodictic. Łukasiewicz supported Husserl's views in a short paper, "Teza Husserla o stosunku logiki do psychologii", which appeared in 1904, and later also in his famous paper "Logic and Psychology", published in 1910. At the same time, however, Łukasiewicz started to question the claim that the laws of logic are settled, which was an essential part of anti-psychologism for Husserl. Łukasiewicz questioned the law of contradiction in his book On Aristotle's Law of Contradiction and completed his denial of the unchangeability of the laws of logic with the introduction of his systems of many-valued logic. His first system, the three-valued logic, is clearly based on the denial of the law of bivalence. In his later works, Łukasiewicz also questioned the distinction between axiomatic and empirical sciences and the truthfulness of apodictic statements, which was another important component of Husserl's anti-psychologistic argumentation. Nonetheless, in his later works he still claimed that psychologism is undesirable in logic. As he did not hold certain features of anti-psychologism that were essential for the Husserlian concept he had adopted at first, it seems that his own concept of anti-psychologism differed. Frege was the most prominent representative of anti-psychologism at that time and was also appreciated by Łukasiewicz.
It seems, however, that Łukasiewicz was also inspired by other logicians from the history of logic, such as Aristotle and certain medieval logicians. The aim of my talk is to provide a definition of Łukasiewicz's anti-psychologism. References: Husserl E (2009) Logická zkoumání I: Prolegomena k čisté logice. Montagová KS, Karfík F (trans.) OIKOYMENH, Prague. Łukasiewicz JL (1904) Teza Husserla o stosunku logiki do psychologii. Przegląd Filozoficzny 7: 476–477. Łukasiewicz JL (1910) Logika a psychologia. Przegląd Filozoficzny 10: 489–491. Łukasiewicz JL (1957) Aristotle's Syllogistic: From the Standpoint of Modern Formal Logic. 2nd edition. Clarendon Press, Oxford. Łukasiewicz JL (1961) O determinizmie. In: Łukasiewicz JL, Z zagadnień logiki i filozofii: Pisma wybrane. Słupecki J (ed.), Państwowe Wydawnictwo Naukowe, Warsaw, 114–126. Łukasiewicz JL (1987) O zasadzie sprzeczności u Arystotelesa: Studium krytyczne. Woleński J (ed.), Państwowe Wydawnictwo Naukowe, Warsaw. Woleński J (1988) Wstęp. In: Łukasiewicz JL, Sylogistyka Arystotelesa z punktu widzenia współczesnej logiki formalnej. Państwowe Wydawnictwo Naukowe, Warsaw, IX–XXIII. Surma P (2012) Poglądy filozoficzne Jana Łukasiewicza a logiki wielowartościowe. Semper, Warsaw. 12:30-14:00 Lunch Break 14:00-15:00 Session 13A: C1 SYMP Text-driven approaches to the philosophy of mathematics 1 (TDPhiMa-1) Organizers: Carolin Antos, Deborah Kant and Deniz Sarikaya Text is a crucial medium for transferring mathematical ideas, agendas and results among the scientific community and in educational contexts. This makes the focus on mathematical texts a natural and important part of the philosophical study of mathematics. Moreover, it opens up the possibility of applying a huge corpus of knowledge available from the study of texts in other disciplines to problems in the philosophy of mathematics. This symposium aims to bring together and build bridges between researchers from different methodological backgrounds to tackle questions concerning the philosophy of mathematics. This includes approaches from philosophical analysis, linguistics (e.g., corpus studies) and literature studies, but also methods from computer science (e.g., big data approaches and natural language processing), artificial intelligence, the cognitive sciences and mathematics education (cf. Fisseni et al. to appear; Giaquinto 2007; Mancosu et al. 2005; Schlimm 2008; Pease et al. 2013). The right understanding of mathematical texts might also become crucial due to the rapid successes in natural language processing on the one side and automated theorem proving on the other. Mathematics, as a technical jargon or as a natural language with a quite rich structure and semantic labeling (via LaTeX), is from the other perspective an important test case for the practical and theoretical study of language. Here we understand text in a broad sense, including informal communication, textbooks and research articles. Carolin Antos (Universität Konstanz, Germany) Marcos Cramer (TU Dresden, Germany) Bernhard Fisseni (Leibniz-Institut für Deutsche Sprache, Universität Duisburg-Essen, Germany) Deniz Sarikaya (University of Hamburg, Germany) Bernhard Schröder (Universität Duisburg-Essen, Germany) Bridging the Gap Between Proof Texts and Formal Proofs Using Frames and PRSs PRESENTER: Marcos Cramer ABSTRACT. We will discuss how different layers of interpretation of a mathematical text are useful at different stages of analysis and in different contexts.
To achieve this goal we will rely on tools from formal linguistics and artificial intelligence which, among other things, allow us to make explicit in the formal representation information that is implicit in the textual form. In this way, we wish to contribute to an understanding of the relationship between the formalist and the textualist position in the investigation of mathematical proofs. Proofs are generally communicated in texts (as strings of symbols) and are modelled logically as deductions, e.g. a sequence of first-order formulas fulfilling specified syntactical rules. We propose to bridge the gap between these two representations by combining two methods: first, Proof Representation Structures (PRSs), which are an extension of Discourse Representation Structures (see Geurts, Beaver, & Maier, 2016); secondly, frames as developed in Artificial Intelligence and linguistics. PRSs (Cramer, 2013) were designed in the Naproche project to formally represent the structure and meaning of mathematical proof texts, capturing typical structural building blocks like definitions, lemmas, theorems and proofs, but also the hierarchical relations between propositions in a proof. PRSs distinguish proof steps, whose logical validity needs to be checked, from sentences with other functions, e.g. definitions, assumptions and notational comments. On the (syntacto-)semantic level, PRSs extend the dynamic quantification of DRSs to more complex symbolic expressions; they also represent how definitions introduce new symbols and expressions. Minsky (1974) introduces frames as a general "data-structure for representing a stereotyped situation". 'Situation' should not be understood too narrowly, as frames can be used to model concepts in the widest sense. The FrameNet project prominently applies frames to represent the semantics of verbs. For example, "John sold his car. The price was € 200." is interpreted as meaning that the second sentence anaphorically refers to the `price` slot of `sell`, which is not explicitly mentioned in the first sentence. In the context of mathematical texts, we use frames to model what is expected of proofs in general and of specific types of proofs. In this talk, we will focus on frames for inductive proofs and their interaction with other frames. An example of the interaction of different proof frames is the dependence of the form of an induction on the underlying inductive type, so that different features of the type (the base element and the recursive construction[s]) constitute natural candidates for the elements of the induction (base case and induction steps). The talk will show how to relate the two levels (PRSs and frames), and will sketch how getting from the text to a fully formal representation (and back) is facilitated by using both levels. Cramer, M. (2013). Proof-checking mathematical texts in controlled natural language (PhD thesis). Rheinische Friedrich-Wilhelms-Universität Bonn. Geurts, B., Beaver, D. I., & Maier, E. (2016). Discourse Representation Theory. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2016). Metaphysics Research Lab, Stanford University. Minsky, M. (1974). A Framework for Representing Knowledge. Cambridge, MA, USA: Massachusetts Institute of Technology. Perspectives on Proofs ABSTRACT. In this talk, we want to illustrate how to apply a general concept of perspectives to mathematical proofs, considering the dichotomy of formal proofs and textual presentation as two perspectives on the same proof.
We take *perspective* to be a very general notion that applies to spatial representation, but also to phenomena in natural language syntax known as *perspectivation* and related to diathesis (grammatical voice) or semantically partially overlapping verbs such as *sell*, *buy*, *trade*; to phenomena in natural language semantics (e.g., prototype effects); and to narrative texts (Schmid, 2010, distinguishes the perspective of characters or narrators along six dimensions, from perception to language). In most applications of the concept of perspective, a central question is how to construct a superordinate 'meta'-perspective that accommodates the given perspectives while maintaining complementary information. Perspectival phenomena intuitively have in common that different perspectives share some information and are partially 'intertranslatable', or can be seen as projections from a more complete and more fine-grained metaperspective to less informative or coarser perspectives. In our approach, modelling is done bottom-up, starting from specific instances. We advocate a formal framework for the representation of perspectives as frames, using feature structures, a data structure well known in linguistics. With feature structures, it becomes easier to model the interaction of frames and to approach compositionality, and the framework connects to formal models of (unification-based) linguistic grammar like Berkeley Construction Grammar (cf., e.g., Boas & Sag, 2012), but also to recent work on frame semantics (see, e.g., Gamerschlag, Gerland, Osswald, & Petersen, 2015). Metaperspectives are constructed using decomposition of features and types into finer structures (see Fisseni, forthcoming), organized in the inheritance hierarchies typical of feature structure models (see, e.g., Carpenter, 1992; Pollard & Sag, 1994). Using this formal model of perspectives, it can be shown that occasionally, e.g. in the case of metaphors, *partial* perspectives are used, i.e. that perspectives contain semantic material that is to be disregarded, for instance by splitting notions of semantic verb classes into different properties like *involving an agent* or *most prominent participant*. Similarly to syntactic perspectivation (active – passive, *buy* – *sell*), where the same event can be conceptualized differently (e.g., as an action or as a process), mathematical texts and formal proofs can be seen as describing 'the same proof' as a process and as a state of affairs, respectively. The talk will show how to elaborate this analogy, and will discuss the construction of a metaperspective, i.e. merging both perspectives in such a way that their common core is distilled. References Boas, H. C., & Sag, I. A. (Eds.). (2012). *Sign-based construction grammar*. Stanford: CSLI. Carpenter, B. (1992). *The logic of typed feature structures*. Cambridge University Press. Fisseni, B. (forthcoming). Zwischen Perspektiven. In *Akten des 52. Linguistischen Kolloquiums, Erlangen*. Gamerschlag, T., Gerland, D., Osswald, R., & Petersen, W. (Eds.). (2015). *Meaning, frames, and conceptual representation. Studies in language and cognition*. Düsseldorf: Düsseldorf University Press. Pollard, C., & Sag, I. (1994). *Head-driven phrase structure grammar*. University of Chicago Press. Schmid, W. (2010). *Narratology. An introduction.* Berlin: de Gruyter. Constructive deliberation: pooling and stretching modalities ABSTRACT.
When a group of agents deliberates about a course of action or decision, each of the individual agents has distinct (soft or hard) constraints on what counts as a feasible alternative, evidence about potential alternatives, and higher-order evidence about the other agents' views and constraints. Such information may be to some extent shared, but it may also be conflicting, either at the level of a single individual or among the agents. In one way or another, sharing and combining this information should allow the group to determine which set of alternatives constitutes the decision problem faced by the collective. We call this process constructive deliberation, and contrast it with the selective deliberation that takes place when a set of alternatives has been fixed and the group is supposed to select one of them by means of some decision method such as voting. Whereas selective deliberation has been investigated at length (in social choice theory and game theory), constructive deliberation has received much less attention, and there is hardly any formal account of it on the market. In the first part of our talk, we will investigate this distinction, and discuss the similarities and differences between both processes as they bear on formal modeling and on considerations of rationality and equality. In the second part, we will focus on the static aspect of constructive deliberation and on the role of constraints. We will hence ask how the output – a set of viable alternatives constituting a collective decision problem – can be obtained from a given input: a tuple of sets of constraints, one for each agent. We model this input in terms of a neighborhood semantics, and show how the output can be obtained by suitable combinations of two types of operations on neighborhoods: pooling (also known as aggregation or pointwise intersection) and stretching (also known as weakening or closure under supersets). We provide a sound and complete logic that can express the result of various such combinations and investigate its expressive power, building on earlier results by Van De Putte and Klein (2018). If time permits, we will also connect this work to the logic of evidence-based belief (van Benthem & Pacuit, 2011; Baltag et al., 2016) and the logic of coalitional ability (Pauly 2002). References: Baltag, A., Bezhanishvili, N., Özgün, A., & Smets, S. J. L. (2016). Justified Belief and the Topology of Evidence. In J. Väänänen, Å. Hirvonen, & R. de Queiroz (Eds.), Logic, Language, Information, and Computation: 23rd International Workshop, WoLLIC 2016: Puebla, Mexico, August 16–19th, 2016: proceedings (pp. 83-103). Pauly, M., A modal logic for coalitional power in games, Journal of Logic and Computation 12 (2002), pp. 149-166. van Benthem, J. and E. Pacuit, Dynamic logics of evidence-based beliefs, Studia Logica 99 (2011), pp. 61-92. Van De Putte, F. and Klein, D. Pointwise intersection in neighbourhood modal logic. In Bezhanishvili, Guram and D'Agostino, Giovanna (eds.), Advances in Modal Logic (AiML 12), College Publications (2018), pp. 591-610. Fengkui Ju (School of Philosophy, Beijing Normal University, China) Coalitional Logic on Non-interfering Actions ABSTRACT. Suppose that there is a group of agents who perform actions, and the world changes as a result. Assume that they change different parts of the world and these parts do not overlap. Under this assumption, their actions do not interfere with each other.
Then the class of possible outcomes of a joint action is the intersection of the classes of possible outcomes of those individual actions in that joint action. This property can be called the intersection property. A special case of the previous assumption is that every agent controls a set of atomic propositions and these sets are disjoint. This is a basic setting in van der Hoek and Wooldridge [3]. In Coalition Logic (CL), proposed by Pauly [2], the class of possible outcomes of an individual action consists of the possible outcomes of the joint actions that are extensions of that individual action, and the possible outcomes of joint actions are arbitrary. As a result, the intersection property is not met. The STIT logic proposed by Horty [1] has the intersection property. However, it requires that the classes of possible outcomes of the individual actions of the same agent are disjoint. This constraint is too strong. This work presents a complete coalitional logic NiCL on non-interfering actions. [1] J. Horty. Agency and Deontic Logic. Oxford University Press, 2001. [2] M. Pauly. A modal logic for coalitional power in games. Journal of Logic and Computation, 12(1):149-166, 2002. [3] W. van der Hoek and M. Wooldridge. On the logic of cooperation and propositional control. Artificial Intelligence, 164(1):81-119, 2005. Helge Kragh (University of Copenhagen, Denmark) Popper and Modern Cosmology: His Views and His Influence ABSTRACT. Karl Popper commented on modern cosmology only on a few occasions and then in general terms. His only paper on the subject dates from 1940. Nonetheless, his philosophy of science played a most important role in the epic cosmological controversy that raged from 1948 to about 1965 and in which the new steady-state theory confronted the evolutionary cosmology based on Einstein's general theory of relativity. The impact of Popper's philosophical views, and of his demarcation criterion in particular, is still highly visible in the current debate concerning the so-called multiverse hypothesis. In astronomy and cosmology, as in the physical sciences generally, Popper's views of science – or what scientists take to be his views – have had much greater impact than the ideas of other philosophers. The paper analyses the interaction between Popper's philosophy of science and developments in physical cosmology in the post-World War II era. There are two separate aspects of the analysis. One is to elucidate how Popper's philosophical ideas have influenced scientists' conceptions of the universe as a whole. The other aspect is to investigate Popper's own views about scientific cosmology, a subject he never dealt with at any length in his publications. These views, as pieced together from published as well as unpublished sources, changed somewhat over time. While he had some sympathy for the now defunct steady-state theory, he never endorsed it in public and there were elements in it which he criticized. He much disliked the big bang theory, which since the mid-1960s has been the generally accepted framework for cosmology. According to Popper, the concept of a big bang as the beginning of the universe did not belong to science proper. Generally he seems to have considered cosmology a somewhat immature science. Anastasiia Lazutkina (Leipzig University, Germany) Comment on "Popper and Modern Cosmology" ABSTRACT.
As Helge Kragh notes, Karl Popper never engaged with scientific cosmology to the degree that he did with the other physical sciences, but his dislike of the big bang theory can be shown to be consistent with his own philosophical views. We are, therefore, in a position to examine contemporary cosmological theories through a Popperian methodology, and a criticism of the current cosmological paradigm can be based on his falsificationist ideas. One of Kragh's insights is that Popper's demarcation criterion has had a lasting impact on the development of scientific cosmology. In this paper I will defend the view that a methodological analysis of contemporary cosmological models that is in line with Popper's demarcation criterion between scientific and non-scientific cosmology can greatly benefit from the use of formal methods. For example, formal methodological criteria can help us answer the question of whether physical cosmology should be considered, as Popper did, to be an immature science. The application of these formal criteria will reveal that there are two contrasting approaches in cosmology, one of which is compatible, and the other incompatible, with Popper's methodological views. In practical terms, the difference between these approaches is that in the former the focus is on studying small-scale phenomena (e.g. galaxies, clusters) and trying to build models that are successful at making novel predictions at these scales. In the latter approach the primary attempt is to form a model of the universe as a whole and then work our way toward smaller scales. Both of these approaches face difficulties with explaining some of the available data, and disagreements between their proponents have led to a surging interest in the foundational methodological questions among cosmologists themselves. 14:00-14:30 Session 13D: B7 Concepts and conceptual change in science education Dragana Bozin (University of Oslo, Norway) CANCELLED: Teaching Conceptual Change: Can Building Models Explain Conceptual Change in Science? ABSTRACT. This paper considers how novel scientific concepts (concepts which undergo a radical conceptual change) relate to their models. I present and discuss two issues raised respectively by Chin and Samarapungavan (2007) and Nersessian (1989) about perceived (and persistent) difficulties in explaining conceptual change to students. In both cases models are either seen as secondary to concepts/conceptual change or seen as inessential for explanation. Next, I provide an example which to some extent counters these views. On the basis of that example I suggest an alternative view of the role of models in conceptual change and show that the latter could have beneficial implications for teaching conceptual change. The example in question is Robert Geroch's modeling of Minkowski spacetime in General Relativity from A to B (1981). It seems reasonable to think that understanding the conceptual transformation from space and time to spacetime first makes it easier to build a model of spacetime. This is the underlying assumption that Chin and Samarapungavan (2007) make. Their objective is to find ways to facilitate conceptual change because they see the lack of understanding of the conceptual change that produced the concept as the main obstacle to students' ability to build a model of it. I argue that this is not necessarily the case: in certain cases (spacetime for example) building the model can facilitate understanding of the conceptual change.
In a similar vein, although understanding how scientific concepts developed can often give clues for how to teach them, I argue that in some cases the historical approach is counterproductive. Nersessian argues that the same kind of reasoning used in scientific discovery could be employed in science education (Nersessian, 1989). I essentially agree with this view, but with a caveat. I argue that in some cases the historical approach might be constraining and, in particular, that the spacetime example shows that ignoring the historical path is in certain cases more successful. Additionally, Geroch's way of modeling spacetime can be of consequence for teaching relativity and quantum mechanics to high school students. Physics is traditionally taught through solving equations and performing experiments, which is ill-suited to relativity and quantum mechanics. Norwegian curriculum requirements include that students be able to give qualitative explanations as well as discuss philosophical and epistemological aspects of physics. According to ReleQuant (the University of Oslo and NTNU project on developing alternative learning resources for teaching relativity and quantum mechanics to high school students), this opens the door to introducing qualitative methods in teaching high school physics. The conclusion that ReleQuant draws from this is that historical approaches may be profitable when teaching quantum physics at the high school level. The historical approach might not always be effective – as it is not in teaching spacetime. Teaching through building a model "from scratch" might work better. Building a model with little or no reference to theory could be viewed as a qualitative method and would essentially be in agreement with the overall ambition of the ReleQuant project. References Bungum, Berit, Ellen K. Henriksen, Carl Angell, Catherine W. Tellefsen, and Maria V. Bøe. 2015. "ReleQuant – Improving teaching and learning in quantum physics through educational design research". Nordina: Nordic Studies in Science Education 11(2): 153-168. Chin, C. and Samarapungavan, A. 2007. "Inquiry: Learning to Use Data, Models and Explanations". In Teaching Scientific Inquiry (eds) Richard Duschl and Richard Grandy, 191-225. Sense Publishers. Geroch, R. 1981. General Relativity from A to B. Chicago: Chicago University Press. Nersessian, N. 1989. "Conceptual change in science and in science education". Synthese 80: 163-183. 14:00-15:00 Session 13E: IS C8 Bursten Julia Bursten (University of Kentucky, United States) Scale Separation, Scale Dependence, and Multiscale Modeling in the Physical Sciences ABSTRACT. In multi-scale modeling of physical systems, dynamical models of higher-scale and lower-scale behavior are developed independently and stitched together with connective or coupling algorithms, sometimes referred to as "handshakes." This can only be accomplished by first separating modeled behaviors into bulk behaviors and surface or interfacial behaviors. This strategy is known as "scale separation," and it requires physical behaviors at multiple length, time, or energy scales to be treated as autonomous from one another. In this talk, I examine what makes this strategy effective – and what happens when it breaks down. The nanoscale poses challenges to scale separation: there, the physics of the bulk occurs at the same length scale as the physics of the surface. Common scale-separation techniques, e.g. modeling surfaces as boundary conditions, fail.
Modeling the scale-dependent physics of nanoscale materials presents a new challenge whose solution requires conceptual engineering and new modeling infrastructure. These considerations suggest a view of physical modeling that is centered not around idealization or representation but around scale. Ravit Dotan (University of California, Berkeley, United States) Machine learning, theory choice, and non-epistemic values ABSTRACT. I argue that non-epistemic values are essential to theory choice, using a theorem from machine learning theory called the No Free Lunch theorem (NFL). Much of the current discussion about the influence of non-epistemic values on empirical reasoning is concerned with illustrating how it happens in practice. Often, the examples used to illustrate the claims are drawn from politically loaded or practical areas of science, such as social science, biology, and environmental studies. This leaves advocates of the claim that non-epistemic values are essential to assessments of hypotheses vulnerable to two objections. First, if non-epistemic factors happen to influence science only in specific cases, perhaps this only shows that scientists are sometimes imperfect; it doesn't seem to show that non-epistemic values are essential to science itself. Second, if the specific cases involve sciences with obvious practical or political implications such as social science or environmental studies, then one might object that non-epistemic values are only significant in practical or politically loaded areas and are irrelevant in more theoretical areas. To the extent that machine learning is an attempt to formalize inductive reasoning, results from machine learning are general. They apply to all areas of science, and, beyond that, to all areas of inductive reasoning. The NFL is an impossibility theorem that applies to all learning algorithms. I argue that it supports the view that all principled ways to conduct theory choice involve non-epistemic values. If my argument holds, then it helps to defend the view that non-epistemic values are essential to inductive reasoning from the objections mentioned in the previous paragraph. That is, my argument is meant to show that the influence of non-epistemic values on assessment of hypotheses is: (a) not (solely) due to psychological inclinations of human reasoners; and (b) not special to practical or politically loaded areas of research, but rather is a general and essential characteristic of all empirical disciplines and all areas of inductive reasoning. In broad strokes, my argument is as follows. I understand epistemic virtues to be theoretical characteristics that are valued because they promote epistemic goals (for this reason, the epistemic virtues are sometimes just called "epistemic values"). For example, if simpler theories are more likely to satisfy our epistemic goals, then simplicity is epistemically valuable and is an epistemic virtue. I focus on one aspect of evaluation of hypotheses – accuracy – and I interpret accuracy as average expected error. I argue that NFL shows that all hypotheses have the same average expected error if we are unwilling to make choices based on non-epistemic values. Therefore, if our epistemic goal is promoting accuracy in this sense, there are no epistemic virtues. Epistemic virtues promote our epistemic goals, but if we are not willing to make non-epistemic choices, all hypotheses are equally accurate.
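As a rough schematic of the no-free-lunch result appealed to here (stated under the standard assumptions of a finite domain, off-training-set error and uniform averaging over target functions; the notation is illustrative and not the author's own), one can write

\[
\sum_{f} \mathbb{E}\big[\,\mathrm{err}_{\mathrm{OTS}} \mid f, d, A_{1}\,\big] \;=\; \sum_{f} \mathbb{E}\big[\,\mathrm{err}_{\mathrm{OTS}} \mid f, d, A_{2}\,\big]
\]

for any two learning algorithms A1 and A2 and any training data d, where the sum runs over all possible target functions f and err_OTS is the error on points outside the training set. Read this way, once accuracy is averaged uniformly over targets, no learning algorithm – and hence no theoretical virtue used to rank hypotheses – yields a lower expected error than any other.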
In other words, no theoretical characteristic is such that hypotheses which have it satisfy our epistemic goal better. Therefore, any ranking of hypotheses will depend on non-epistemic values. Elizabeth Seger (University of Cambridge, UK) Taking a machine at its word: Applying epistemology of testimony to the evaluation of claims by artificial speakers ABSTRACT. Despite the central role technology plays in the production, mediation, and communication of information, formal epistemology regarding the influence of emerging technologies on our acquisition of knowledge and justification of beliefs is sparse (Miller & Record 2013), and with only a couple of exceptions (Humphreys 2009; Tollefsen 2009) there has been almost no attempt to directly apply epistemology of testimony to analyze artifacts-as-speakers (Carter & Nickel 2014). This lacuna needs to be filled. Epistemology of testimony is concerned with identifying the conditions under which a hearer may be justified in trusting and forming beliefs based on a speaker's claims. Similarly, philosophers of technology and computer scientists alike are urgently pushing to ensure that new technologies are sufficiently explainable and intelligible to appropriately ground user understanding and trust (Tomsett et al. 2018; Weller 2017). Given the convergent goals of epistemologists and philosophers of technology, the application of epistemology of testimony to the evaluation of artifact speakers may be incredibly productive. However, we must first determine whether an artifact may legitimately hold the role of 'speaker' in a testimonial relationship. Most epistemologists assume that testimonial speakers are intentional, autonomous agents, and methods for evaluating the testimonial claims of such agents have developed accordingly, making technology difficult to slot into the conversation. In this paper I demonstrate that epistemology of testimony may be applied to analyze the production and transmission of knowledge by artificial sources. Drawing on Gelfert (2014) I first argue, independently of my goal to apply testimony to technology, that our current philosophical conception of testimony is ill-defined. I then differentiate between the theoretical and pragmatic aims of epistemology of testimony and argue that the pragmatic aim of epistemology of testimony is to provide tools for the evaluation of speaker claims. I explicate a more precise 'continuum view' of testimony that serves this pragmatic aim, and conclude by describing how the explicated continuum view may be usefully and appropriately applied to the evaluation of testimony from artificial speakers. Carter, A. J., & Nickel, P. J. (2014). On testimony and transmission. Episteme, 11(2), 145-155. doi:10.1017/epi.2014.4 Gelfert, A. (2014). A Critical Introduction to Testimony. London: Bloomsbury. Humphreys, P. (2009). Network Epistemology. Episteme, 6(2), 221-229. Miller, B., & Record, I. (2013). Justified belief in a digital age: On the epistemic implication of secret internet technologies. Episteme, 10(2), 117-134. doi:10.1017/epi.2013.11 Tollefsen, D. P. (2009). Wikipedia and the Epistemology of Testimony. Episteme, 6(2), 8-24. Tomsett, R., Braines, D., Harborne, D., Preece, A., & Chakraborty, S. (2018). Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. Paper presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018). Weller, A. (2017). Challenges for transparency.
Paper presented at the ICML Workshop on Human Interpretability in Machine Learning, Sydney, NSW, Australia. 14:00-14:30 Session 13G: B4 Explanation and understanding 2 Explanatory Conditionals ABSTRACT. The present paper aims to complement causal model approaches to causal explanation by Woodward (2003), Halpern and Pearl (2005), and Strevens (2004). It does so by building on a conditional analysis of the word 'because' in natural language by Andreas and Günther (2018). This analysis centres on a strengthened Ramsey Test for conditionals: α ≫ γ iff, after suspending judgment about α and γ, an agent can infer γ from the supposition of α (in the context of further beliefs in the background). Using this conditional, we can give a logical analysis of because: Because α,γ (relative to K) iff α ≫ γ ∈ K and α,γ ∈ K, where K designates the belief set of the agent. In what follows, we shall refine this analysis by further conditions so as to yield a fully-fledged analysis of (deterministic) causal explanations. The logical foundations of the belief changes that define the conditional ≫ are explicated using AGM-style belief revision theory. Why do we think that causal model approaches to causal explanation are incomplete? Halpern and Pearl (2005) have devised a precise semantics of causal models that centres on structural equations. Such an equation represents causal dependencies between variables in a causal model. In the corresponding definition of causation, however, there is no explanation of what it is for a variable to causally depend directly on certain other variables. This approach merely defines complex causal relations in terms of elementary causal dependencies, just as truth-conditional semantics defines the semantic values of complex sentences in terms of a truth-value assignment to the atomic formulas. And the corresponding account of causal explanation in Halpern and Pearl (2005) inherits the reliance on elementary causal dependencies (which are assumed to be antecedently given) from the analysis of causation. Woodward (2003) explains the notion of a direct cause in terms of interventions, but the notion of an intervention is always relative to a causal graph, so that some knowledge about elementary causal dependencies must be antecedently given as well. The kairetic account of explanation by Strevens (2004) makes essential use of causal models as well, but works with a more liberal notion of such a model. In his account, a set of propositions entails an explanandum E in a causal model only if this entailment corresponds to a "real causal process by which E is causally produced" (2004, p. 165). But the kairetic account is conceptually incomplete in a manner akin to the approaches by Halpern and Pearl (2005) and Woodward (2003). For it leaves open what the distinctive properties of causal relations of logical entailment are. In what follows, we aim to give a precise characterization of logical entailment with a causal meaning. For this characterization, we define an explanatory conditional ≫, but also impose non-logical conditions on the explanans and the explanandum. Here is a semiformal exposition of our final analysis: Definition 1. Causal Explanation. Let S be an epistemic state that is represented by a prioritised belief base. K(S) is the set of beliefs of S, extended by the strengthened Ramsey Test. The set A of antecedent conditions and the set G of generalisations explain the fact F – for an epistemic state S – iff (E1) For all α ∈ A, all γ ∈ G, and all β ∈ F: α,γ,β ∈ K(S).
(E2) For all non-empty A′ ⊆ A, A′ ≫ F ∈ K(S). (E3) For any α ∈ A and any β ∈ F, (i) the event designated by α temporally precedes the event designated by β, or (ii) the concepts of α are higher up in the hierarchy of theoreticity of S than the concepts of β. (E4) For any γ ∈ G, γ is non-redundant in the set of all generalisations of S. 14:00-15:00 Session 13H: B6 History and philosophy of the life sciences 1 Mustafa Yavuz (Istanbul Medeniyet University, History of Science Department, Turkey) Definition and Faculties of Life in Medieval Islamic Philosophy ABSTRACT. It has always been problematic to give a concrete definition of life and to identify its principle – unlike death – even though life itself is the main difference between biological organisms and lifeless (or inanimate?) things. At first sight, it is visible that between the eighteenth and the twentieth centuries there are forty-three books dedicated to discussions of the origin or definition of life, a quarter of which have been published in this millennium. The increase in these numbers indicates that the debate on solving the puzzle of life has been popular among scientists and philosophers. How was this situation in medieval times? Did philosophers and physicians in the medieval Islamic world compose books which give a definition of life and its faculties? In this study, after giving a few definitions of life (and of death) from the recent scientific literature, I will try to go back to the medieval period in order to investigate how life and its faculties were considered in a fifteenth-century book of kalam. Composed by Sayyid al Sharif al Jurjani (d. 1413), who was appreciated as an authority in the Ottoman Empire, Sharh al Mawaqif was frequently copied, read and commented upon, which shows its popularity among Ottoman philosophers and theologians. This book is an explanation of al-Mawaqif fi Ilm al-Kalam, written by Adud al-Din al-Idji (d. 1355). I will also draw on citations from Ibn Sina (d. 1037) – known as Avicenna – from his famous book al-Qanun fi al-Tibb (Canon of Medicine), where he discussed the vegetal and animal types of life through the performance of certain actions. Taking him as the eminent symbol of the Peripatetic tradition in Islamic philosophy and medicine, I will try to compare the different considerations of life and death in the kalamic and philosophical schools. Starting from biology and shifting towards kalam and philosophy, I will try to show whether or not we can find philosophical instruments which may inspire us to solve the puzzle of life today. References: Al-Dinnawi, M. A, 1999. Ibn Sina al-Qanun fi al-Tibb (Arabic Edition). Dar al-Kotob al-Ilmiyah. Beirut. Bara, I. 2007. What is Life? or α + β + ω = ∞. Oltenia. Studii şi comunicări. Ştiinţele Naturii. Tom XXIII. 233-238. Cürcânî, S. Ş. 2015. Şerhu'l-Mevâkıf (Arabic Text Edited and Turkish Translation by Ömer Türker). İstanbul: Yazma Eserler Kurumu. Dupré, J. 2012. Processes of Life: Essays in the Philosophy of Biology. Oxford: Oxford University Press. Luisi P. L. 2006. The Emergence of Life: From Chemical Origins to Synthetic Biology. Cambridge: Cambridge University Press. McGinnis, J. and Reisman, D. C. 2004. Interpreting Avicenna: Science and Philosophy in Medieval Islam. Brill, Leiden. Nicholson, D. J. and Dupré, J. 2018. Everything Flows: Towards a Processual Philosophy of Biology. Oxford: Oxford University Press. Popa, R. 2004. Between Necessity and Probability: Searching for the Definition and Origin of Life. Berlin: Springer-Verlag. Pross, A.
2012. What is Life? How Chemistry becomes Biology. Oxford: Oxford University Press. Daniel Nicholson (Konrad Lorenz Institute for Evolution and Cognition Research, Austria) CANCELLED: Schrödinger's 'What Is Life?' 75 Years On ABSTRACT. 2019 marks 75 years since Erwin Schrödinger, one of the most celebrated physicists of the twentieth century, turned his attention to biology and published a little book titled 'What Is Life?'. Much has been written on the book's instrumental role in marshalling an entire generation of physicists as well as biologists to enter the new field that came to be known as 'molecular biology'. Indeed, many founding figures of molecular biology have acknowledged their debt to it. Scientifically, the importance of 'What Is Life?' is generally taken to lie in having introduced the idea that the hereditary material (at the time it hadn't yet been conclusively identified as DNA) contains a 'code-script' that specifies the information necessary for the developmental construction of an organism. Although Schrödinger ascribed too much agency to this code-script, as he assumed that it directly determines the organism's phenotype, his insight that the genetic material contains a code that specifies the primary structure of the molecules responsible for most cellular functions has proven to be essentially correct. Similarly, Schrodinger's famous account of how organisms conform to the second law of thermodynamics, by feeding on 'negative entropy' at the expense of increasing the entropy of their surroundings, is also quite correct (even if this idea was already well-known at the time). Consequently, most retrospective evaluations of 'What Is Life?' (including the ones which have just appeared to commemorate its 75th anniversary) converge in praising the book for having exerted a highly positive influence on the development of molecular biology. In this paper I challenge this widely accepted interpretation by carefully dissecting the argument that Schrödinger sets out in 'What Is Life?', which concerns the nature of biological order. Schrödinger clearly demarcates the kind of order found in the physical world, which is based on the statistical averaging of vast numbers of stochastically-acting molecules that collectively display regular, law-like patterns of behaviour, from the kind of order found in the living world, which has its basis in the chemical structure of a single molecule, the self-replicating chromosome, which he conceived as a solid-state 'aperiodic crystal' in order to account for its remarkable stability in the face of stochastic perturbations. Schrödinger referred to the former, physical kind of order as 'order-from-disorder' and the latter, biological kind of order as 'order-from-order'. As I will argue, this demarcation proved disastrous for molecular biology, for it granted molecular biologists the licence for over half a century to legitimately disregard the impact of stochasticity at the molecular scale (despite being inevitable from a physical point of view), encouraging them instead to develop a highly idealized, deterministic view of the molecular mechanisms underlying the cell, which are still today often misleadingly characterized as fixed, solid-state 'circuits'. 
It has taken molecular biologists a disturbingly long time to 'unlearn' Schrödinger's lessons regarding biological order and to start taking seriously the role of self-organization and stochasticity (or 'noise'), and this, I claim, should be considered the real scientific legacy of 'What Is Life?' 75 years on. 14:00-15:00 Session 13I: C2 Epistemology, philosophy of physics and chemistry 2 Samuel Fletcher (University of Minnesota, United States) The Topology of Intertheoretic Reduction ABSTRACT. Nickles (1973) first introduced into the philosophical literature a distinction between two types of intertheoretic reduction. The first, more familiar to philosophers, involves the tools of logic and proof theory: "A reduction is effected when the experimental laws of the secondary science (and if it has an adequate theory, its theory as well) are shown to be the logical consequences of the theoretical assumptions (inclusive of the coordinating definitions) of the primary science" (Nagel 1961, 352). The second, more familiar to physicists, involves the notion of a limit applied to a primary equation (representing a law) or theory. The result is a secondary equation or theory. The use of this notion, and the subsequent distinction between so-called "regular" and "singular" limits, has played a role in understanding the prospects for reductionism, its compatibility (or lack thereof) with emergence, the limits of explanation, and the roles of idealization in physics (Batterman 1995; Butterfield 2011). Despite all this debate, there has surprisingly been no systematic account of what this second, limit-based type of reduction is supposed to be. This paper provides such an account. In particular, I argue for a negative and a positive thesis. The negative thesis is that, contrary to the suggestion by Nickles (1973) and the literature following him, limits are at best misleadingly conceived as syntactic operators applied to equations. Besides not meshing with mathematical practice, the obvious ways to implement such a conception are not invariant under substitution of logical equivalents. The positive thesis is that one can understand limiting-type reductions as *relations* between classes of models endowed with extra, topological (or topologically inspired) structure that encodes formally how those models are relevantly similar to one another. In a word, theory T reduces T' when the models of T' are arbitrarily similar to models of T – they lie in the topological closure of the models of T. Not only does this avoid the problems with a syntactically focused account of limits and clarify the use of limits in the aforementioned debates, it also reveals an unnoticed point of philosophical interest, that the models of a theory themselves do not determine how they are relevantly similar: that must be provided from outside the formal apparatus of the theory, according to the context of investigation. I stress in conclusion that justifying why a notion of similarity is appropriate to a given context is crucial, as it may perform much of the work in demonstrating a particular reduction's success or failure. I illustrate both negative and positive theses with the elementary case of the simple harmonic oscillator, gesturing towards their applicability to more complex theories, such as general relativity and other spacetime theories. Batterman, R. W. (1995). Theories between theories: Asymptotic limiting intertheoretic relations. Synthese 103:171-201. Butterfield, J. (2011).
Less is different: Emergence and reduction reconciled. Foundations of Physics 41(6):1065-1135. Nagel, E. (1961). The Structure of Science: Problems in the Logic of Scientific Explanation. Hackett, Indianapolis. Nickles, T. (1973). Two concepts of intertheoretic reduction. The Journal of Philosophy 70(7):181-201. Chrysovalantis Stergiou (The American College of Greece-Deree, Greece) Empirical Underdetermination for Physical Theories in a C*-Algebraic Setting: Comments on an Argument by Arageorgis ABSTRACT. In this talk I intend to reconstruct an argument of Aristidis Arageorgis(1) against empirical underdetermination of the state of a physical system in a C*-algebraic setting and to explore its soundness. The argument, aiming against algebraic imperialism, the operationalist attitude which characterized the first steps of Algebraic Quantum Field Theory, is based on two topological properties of the state space: being T1 and being first countable in the weak*-topology. The first property is possessed trivially by the state space, while the second is highly non-trivial and can be derived from the assumption of the separability of the algebra of observables. I present some cases of classical and of quantum systems which satisfy the separability condition, and others which do not, and relate these facts to the dimension of the algebra and to whether it is a von Neumann algebra. Namely, I show that while in the case of finite-dimensional algebras of observables the argument is conclusive, in the case of infinite-dimensional von Neumann algebras it is not. In addition, there are cases of infinite-dimensional quasilocal algebras in which the argument is conclusive. Finally, I discuss Martin Porrmann's(2) construction of a net of local separable algebras in Minkowski spacetime which satisfies the basic postulates of Algebraic Quantum Field Theory. (1) Arageorgis, A., (1995). Fields, Particles, and Curvature: Foundations and Philosophical Aspects of Quantum Field Theory in Curved Spacetime. Pittsburgh: University of Pittsburgh (PhD Dissertation) (2) Porrmann, M. (2004). "Particle Weights and their Disintegration II", Communications in Mathematical Physics 248: 305–333 14:00-15:00 Session 13J: C7 Philosophy of the humanities and social sciences Petr Špecián (Charles University, Czechia) Thou Shalt not Nudge: Towards an Anti-Psychological State ABSTRACT. Neoclassical economics defines market failures as an uncompensated impact of one agent's actions on other agents' well-being. The favored solution is the use of economic incentives like taxes and subsidies to correct these situations. Recently, the findings of behavioral economists have provided support for the argument that market failures should also comprise the cases where individuals harm themselves due to systematic mistakes they make (Sunstein 2014; Allcott and Sunstein 2015). Also, the set of regulatory tools should be expanded beyond economic incentives towards the use of subtle manipulation of the choice architecture (Thaler, Sunstein, and Balz 2014). I argue that both of these steps would serve to increase the arbitrary power of the government and the fragility of the liberal democratic institutions.
While it is easy to muster intuitive support for the claim that the exploitation of systematic mistakes in decision-making is an inherent feature of free market exchange (Akerlof and Shiller 2015), no one has yet succeeded in establishing a coherent and practically useful notion of 'true preferences' against which these mistakes could be defined (Sugden 2018). Thus, the concept of market failure due to self-harm is vague. Therefore, government interventions to prevent these failures lack a general theoretical framework and, where applied, proceed on an ad hoc basis. Moreover, insofar as individuals' choices are no longer taken at face value, voters' choices can be contested at least as easily as consumers' choices (Brennan 2016). The use of nudges instead of economic incentives to bring people's choices closer to their nebulous true preferences lowers the transparency of the intervention and increases the temptation to misuse it to strengthen the incumbents' hold on political power (Schubert 2017). I propose to use the government's regulatory power to preempt the most dangerous manipulative techniques rather than to engage the government in them. Such an 'anti-psychological' role has significant advantages. Regulation of the forms of commercial (and political) communication can capitalize on the scientific knowledge of human cognitive limitations, and yet avoids the necessity to establish what the true preferences are. It also takes the form of general rules, which are more transparent than measures that need to target particular situations. References Akerlof, George A., and Robert J. Shiller. 2015. Phishing for Phools: The Economics of Manipulation and Deception. Princeton: Princeton University Press. Allcott, Hunt, and Cass R. Sunstein. 2015. "Regulating Internalities." Journal of Policy Analysis and Management 34(3):698–705. https://doi.org/10.1002/pam.21843. Brennan, Jason. 2016. Against Democracy. Princeton: Princeton University Press. Schubert, Christian. 2017. "Exploring the (Behavioural) Political Economy of Nudging." Journal of Institutional Economics 13(3):499–522. https://doi.org/10.1017/S1744137416000448. Sugden, Robert. 2018. The Community of Advantage: A Behavioural Economist's Defence of the Market. New product edition. New York, NY: Oxford University Press. Sunstein, Cass R. 2014. Why Nudge? The Politics of Libertarian Paternalism. New Haven: Yale University Press. Thaler, Richard H., Cass R. Sunstein, and John P. Balz. 2014. "Choice Architecture." SSRN Scholarly Paper ID 2536504. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2536504. Ivan F. da Cunha (Federal University of Santa Catarina, Brazil) Utopias in the context of social technological inquiry ABSTRACT. This paper elaborates on Otto Neurath's proposal that utopias can be used in social scientific and technological methodology of research. In Foundations of the Social Sciences (1944), Neurath claims that such imaginative works may provide means for scientists to overcome the limitations of existing social arrangements in order to devise alternatives to experienced problematic situations. I compare this point of view with the work scientists do with models and nomological machines in Nancy Cartwright's conception, as presented in The Dappled World (1999). That is, utopias are abstractions that depict the complexity of social arrangements and that provide idealized situations to which our understanding of some aspects of society applies.
As models, they enable scientists to visualize the functioning of social scientific laws and generalizations, as well as new possibilities, since models allow the operation of features and consequences of the abstract arrangements. In this operation scientists acquire knowledge not only of imagined arrangements, but also of concrete social institutions, since models mediate between more abstract and more concrete parts of scientific experience. But how does this mediation take place? That is, why is that knowledge valid? A common answer to this question in the recent controversy on models in philosophy of science assumes some form of (more or less mitigated) scientific realism: that scientific models represent some features of reality. Such an answer can be found in Cartwright's proposals, since she claims that scientific models and nomological machines instantiate real capacities of the modeled systems. This stance seems not to be compatible with an account of the complexity of social situations, which have many concurring causes that are not always describable in mathematical terms. In other words, social arrangements do not present the stability that Cartwright's models and nomological machines seem to require. An approach to utopias as models is meant to bring together scientific and literary social thought. A realist claim, such that science apprehends some aspects of reality while literature does not, offers too sharp a line between these modes of social reasoning. Nevertheless, an appropriate account of social scientific models must offer a way to distinguish between models in scientific investigations and utopias when they are regarded as fictional works. My suggestion is that this problem is properly addressed by considering the pragmatic contexts of inquiry in which utopias as models of social science and technology appear. In this paper I am going to develop this suggestion by drawing inspiration from the works of John Dewey as well as from some recent theories of inquiry. In this perspective, scientific abstract constructions are to be considered as answers to experienced problematic situations and as projected courses of action to deal with social problems. The difference in regard to utopias as works of art is not in the composition of the abstraction, but in the context of inquiry that they elicit. By focusing on the context of inquiry, this approach dismisses the need for realist claims, in the spirit of Neurath's well-known anti-metaphysical stance. 14:00-15:00 Session 13K 14:00-15:00 Session 13L: C1 Pluralism and philosophy of the formal sciences 1 Silvia De Toffoli (Princeton University, United States) Andrea Sereni (School of Advanced Study IUSS Pavia, Italy) Maria Paola Sforza Fogliani (School of Advanced Study IUSS Pavia, Italy) Luca Zanetti (School of Advanced Study IUSS Pavia, Italy) A Roundabout Ticket to Pluralism PRESENTER: Luca Zanetti ABSTRACT. A thriving literature has developed over logical and mathematical pluralism (LP and MP, respectively) – i.e. the views that several rival logical and mathematical theories can be correct. However, these have unfortunately grown separate; we submit that, instead, they can both greatly gain from a closer interaction. To show this, we present some new kinds of MP modeled on parallel ways of substantiating LP, and vice versa. We will use as a reference abstractionism in the philosophy of mathematics (Wright 1983). Abstractionists seek to recover as much mathematics as possible from abstraction principles (APs), viz. quantified biconditionals stating that two items have the same abstract just in case they belong to the same equivalence class; e.g. Hume's Principle (HP), which states that two concepts have the same cardinal number iff they can be put into one-to-one correspondence (Frege 1884, §64).
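For reference, Hume's Principle is standardly written as the second-order biconditional

\[ \forall F\,\forall G\,\big(\#F = \#G \;\leftrightarrow\; F \approx G\big) \]

where \(\#X\) denotes the cardinal number of the concept \(X\) and \(F \approx G\) abbreviates the purely logical (second-order) statement that there is a one-to-one correspondence between the Fs and the Gs.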
The proposed new forms of pluralism we will advance can fruitfully be clustered as follows: 1. CONCEPTUAL PLURALISM – From LP to MP: Just as LPs argue that different relations of logical consequence are equally legitimate by claiming that the notion of validity is underspecified (Beall & Restall 2006) or polysemous (Shapiro 2014), abstractionists might deem more than one version of HP acceptable by stating that the notion of "just as many" – and, consequently, of cardinal number – admits of different precisifications. 2. DOMAIN PLURALISM – From MP to LP: Just as MPs claim that rival mathematical theories can be true of different domains (Balaguer 1998), it could be argued that each version of HP introduces its own domain of cardinal numbers, and that the results these APs yield might differ with respect to some domains, and match with respect to some others (e.g., of finite and infinite cardinals). The proposal, in turn, prompts some reflections on the sense of "rivalry" between the logics accepted by LPs, which often agree on some laws, while diverging on others. Is the weaker logic genuinely disagreeing or just silent on the disputed rule? Do rival logicians employ the same notion of consequence in those rules about which they agree or, given some inferentialist view, always talk past each other? 3. CRITERIA PLURALISM – From LP to MP, and back: Another form of pluralism about abstractions could be based on the fact that more than one AP is acceptable with respect to different criteria (e.g. irenicity, conservativity, simplicity); accordingly, LP has so far been conceived as the claim that more than one logic satisfies a single set of requirements, but a new form of LP could arise from the acceptance of several legitimacy criteria themselves (e.g. compliance with our intuitions on validity, accordance with mathematical practice). These views – besides, we will argue, being in and of themselves attractive – help expand and clarify the spectrum of possibilities available to pluralists in the philosophy of both logic and mathematics; as a bonus, this novel take can be shown to shed light on long-standing issues regarding LP and MP – in particular, respectively, the "collapse problem" (Priest 1999) and the Bad Company Objections (Linnebo 2009). Balaguer, M. (1998). Platonism and Anti-Platonism in the Philosophy of Mathematics. OUP. Beall, JC and Restall, G. (2006). Logical Pluralism. OUP. Frege, G. (1884). The Foundations of Arithmetic, tr. by J. Austin, Northwestern University Press, 1950. Linnebo, Ø. (2009). Introduction to Synthese Special Issue on the Bad Company Problem, 170(3): 321-9. Priest, G. (1999). "Logic: One or Many?", typescript. Shapiro, S. (2014). Varieties of Logic. OUP. Wright, C. (1983). Frege's Conception of Numbers as Objects, Aberdeen UP. Cian Guilfoyle Chartier (University of Amsterdam, Netherlands) A Practice-Oriented Logical Pluralism ABSTRACT. I conceive logic as a formal presentation of a guide to undertaking a rational practice, a guide which itself is constituted by epistemic norms and their consequences.
The norms themselves may be conceived in a non-circular manner with a naturalistic account, and we use Hilary Kornblith's: epistemic norms are "hypothetical imperatives" informed by instrumental desires "in a cognitive system that is effective at getting at the truth" ([1]). What I mean by "formal" is primarily what John MacFarlane refers to in his PhD thesis [2] as the view that logic "is indifferent to the particular identities of objects", taken together with MacFarlane's intrinsic structure principle and my own principle that logic is provided by the norms that constitute a rational practice. The view that logic is provided by constitutive norms for a rational practice helps us respond to a popular objection to logical pluralism, the collapse argument ([3], chapter 12). Logic here has been misconceived as starting with a given situation and then reasoning about it. Instead we start with our best known practice to suit an epistemic goal, and ask how to formalise this practice. This view of logic provides a starting point for an account of the normativity of logic: assuming we ought to follow the guide, we ought to accept the logic's consequences. If we cannot, we must revise either the means of formalisation or some of the epistemic norms that constitute the guide. Revision might be performed either individually or on a social basis, comparable to Novaes' conception in [4]. Mutual understanding of differences emerges from the practice-based principle of interpretive charity: we make the best sense of others when we suppose they are following epistemic norms with maximal epistemic utility with respect to our possible interpretations of what their instrumental desires could be. One might ask what the use is of logic as a formalisation of good practice rather than good practice in itself. Indeed Teresa Kouri Kissel in [5] takes as a motto that "we ought not to legislate to a proper, functioning, science". Contrary to this, my response is that logic provides evidence for or against our conception of good practice, and can thus outrun our own intuitions of what good practice is. Implementations of intuitionistic logic manifested in proof assistants such as Coq have proved themselves capable of outrunning intuitions of good mathematical practice in the cases of particularly long proofs (see for instance [6]). [1] Kornblith, Hilary, "Epistemic Normativity", Synthese, Vol. 94, pp. 357-376, 1993. [2] MacFarlane, John, What Does It Mean That Logic Is Formal, PhD thesis University of Pittsburgh, 2000. [3] Priest, Graham, Doubt Truth to Be a Liar, 2009. [4] Dutilh Novaes, Catarina, "A Dialogical, Multi-Agent Account of the Normativity of Logic", Dialectica, Vol. 69, Issue 4, pp. 587-609, 2015. [5] Kouri Kissel, Teresa, Logical Instrumentalism, PhD thesis Ohio State University, 2016. [6] Gonthier, Georges, "Formal Proof—The Four Color Theorem", Notices of the American Mathematical Society, Vol. 55, No. 11, pp. 1382-1393, 2008. Ansten Klev (Czech Academy of Sciences, Czechia) Martin Tabakov (ISSK-BAS, Bulgaria) Reflections on the term "Philosophical logic" ABSTRACT. I will discuss the questions of what is called, and what should be called, "philosophical logic", and whether such a term is possible and relevant.
After analysing the usage of the term "philosophical logic" and the objections against its usage, I will explain my own conception of "philosophical logic". The main questions are: "Is there a significant field of study for which there is no suitable term?", "Is this field of study properly named 'philosophical logic'?", and "Is 'philosophical logic' a (kind of) logic, or is it philosophy rather than logic?". There are two main kinds of reason for the term "philosophical logic": scientific and theoretical reasons, and social and practical reasons. Theoretical reasons: there are significant problem fields in XX-th century logic. A new philosophical problematic developed because of the paradoxes in set theory and the limitative theorems (Tarski, Gödel). These necessitated the elaboration of new philosophical and conceptual investigations into the methods, nature and subject of mathematics and logic, and into the broad epistemological topics connected with them. But the genuinely new research field in logic was non-classical logic. Non-classical logic, and especially modal logic, became a central topic of logical research in the XX-th century and was by the same token considered highly significant for philosophy. Social and practical reasons: adopting such a term is convenient for the work and careers of two groups of scholars: philosophers with good knowledge of some other fields (ontology, epistemology, philosophy of science) and a traditional training in logic; and scientists with good skills in formal (mathematical) methods, frequently with a firmly mathematical education, working in the field of non-classical logics, who find jobs as logic lecturers in philosophy departments. Non-classical logics are not related to the logic of mathematics; with the exception of intuitionistic logic, they do not serve as the basis of mathematical theories, which is why most mathematicians were for a long time not interested in non-classical logics. In mathematics there is no contextual ambiguity or modality, and such notions are not interesting for mathematicians. This led to the employment of logicians interested in such problems in philosophy faculties. Moreover, the term sounds impressive and hardly provokes objections from deans and foundations. Objections: the main objections against the term "philosophical logic" are: "It is unnecessary: 'logic' is good enough in all cases"; "Which problems belong to logic but not to 'philosophical logic'? Are the papers of Aristotle (Frege, Hilbert) 'philosophical logic'?"; and "If logic is a part of philosophy, why must we restrict logic through the more general concept 'philosophy'?". I see four expressive interpretations of the term "philosophical logic": philosophical logic as (some types of) logic, studying logical systems in connection with philosophy, especially as the logic which investigates non-mathematical reasoning; "philosophical logic" as "the logic in (of) philosophy", which explores the rules of logical inference and the modes of deduction from and in philosophy; "philosophical logic" as "philosophy in logic"; and "philosophical logic" as "philosophy of logic". Olga Karpinskaia (Foundation for Humanities Research and Technologies, Russia) Abstract and concrete concepts: an approach to classification ABSTRACT. 1. Term logic, also called traditional or Aristotelian logic, deals with terms as the main components of propositions. Terms are considered to represent ideas or concepts. Concepts are usually divided into concrete and abstract ones. Concrete concepts (such as Socrates, tree, table, etc.)
are taken to refer to things or objects, and abstract concepts (such as wisdom, truth, etc.) refer to properties of objects. Objects, in contrast to properties, have independent being. V. Bocharov and V. Markin in their textbook «Elements of Logic» (Moscow, 1994) propose a refinement of this classification by distinguishing between various kinds of objects. Namely, they consider individuals, n-tuples of individuals and sets of individuals to be different types of objects, which can have properties and enter into relations with each other. Then a concept is said to be concrete if and only if its extension consists of individuals, n-tuples of individuals or sets of individuals, and it is abstract otherwise. 2. This classification is problematic, since it does not fit the common-sense idea of the abstract/concrete distinction, and moreover, it finds no support in the observations of modern psychology. Specifically, according to Bocharov and Markin's definition, such abstract objects as numbers, geometrical figures, truth values, etc. are considered to be individuals, and thus the concepts about them should be recognized as concrete, side by side with concepts about tables, chairs and trees. Moreover, a concept about a concept will then also be concrete, which is counterintuitive. Besides, one and the same concept can appear both concrete and abstract, depending on different treatments of the corresponding objects. It also seems difficult to differentiate between concepts having different types of abstractness, such as friendship and a symmetrical relation. 3. I propose an approach to concept classification based on a metaphysical division between particulars and universals. Accordingly, concepts can be divided into logically concrete and logically abstract. As usual, particulars can be defined as concrete, spatiotemporal entities accessible to sensory perception, as opposed to abstract entities, such as properties or numbers. Then a concept is logically concrete if and only if its extension consists of particulars, and it is logically abstract otherwise. Thus, logically abstract concepts concern various kinds of universals, such as sets of particulars, properties, relations, abstract objects, their n-tuples, sets, etc. This approach makes it possible to differentiate between levels of abstractness. Thus, concepts about properties, relations and functional characteristics will be less abstract than, for example, concepts about properties of properties, etc. 4. The proposed idea of concept classification based on types of generalized objects opens further opportunities for natural language analysis and for determining the degree (level) of abstractness of given discourses and domains. Juan Luis Gastaldi (SPHERE (CNRS - Université Paris Diderot), France) Luc Pellissier (IRIF (Université Paris Diderot), France) A structuralist framework for the automatic analysis of mathematical texts PRESENTER: Juan Luis Gastaldi ABSTRACT. As a result of the "practical turn" in the philosophy of mathematics, a significant part of the research activity of the field consists in the analysis of all sorts of mathematical corpora. The problem of mathematical textuality (inscriptions, symbols, marks, diagrams, etc.) has thus gained increasing importance as decisive aspects of mathematical knowledge have been shown to be related to regularities and emergent patterns identifiable at the level of mathematical signs in texts.
However, despite the fruitfulness of text-driven approaches in the field, the concrete tools available for the analysis of actual mathematical texts are rather poor and difficult to employ objectively. Moreover, analytical techniques borrowed from other fields, such as computational linguistics, NLP, logic or computer science, often present problems of adaptability and legitimacy. Those difficulties reveal a lack of clear foundations for a theory of textuality that can provide concrete instruments of analysis, general enough to deal with mathematical texts. In this work, we intend to tackle this problem by proposing a novel conceptual and methodological framework for the automatic treatment of texts, based on a computational implementation of an analytical procedure inspired by the classic structuralist theory of signs. Guided by the goal of treating mathematical texts, our approach assumes a series of conditions for the elaboration of the intended analytical model. In particular, the latter should rely on a bottom-up approach; be unsupervised; be able to handle multiple sign regimes (e.g. alphabetical, formulaic, diagrammatical, etc.); be oriented towards the identification of syntactic structures; capture highly stable regularities; and provide an explicit account of those regularities. A major obstacle that the vast majority of existing NLP models face in meeting those requirements resides in the primacy accorded to words as fundamental units of language. The main methodological hypothesis of our perspective is that basic semiological units should not be assumed (e.g. as words in a given dictionary) but discovered as the result of a segmentation procedure. The latter not only makes it possible to capture generic units of different levels (graphical, morphological, lexical, syntactical, etc.) in an unsupervised way, but also provides a more complex semiological context for those units (i.e. units co-occurring with a given unit within a certain neighborhood). The task of finding structural features can thus be envisaged as that of identifying plausible ways of typing those units, based on a duality relation between units and contexts within the segmented corpus. More precisely, two terms are considered of the same type if they are bi-dual with respect to contexts. The types thus defined can then be refined by considering their interaction, providing an emergent complex type structure that can be taken as the abstract grammar of the text under analysis. In addition to providing a conceptual framework and concrete automated tools for textual analysis, our approach puts forward a novel philosophical perspective in which logic appears as a necessary intermediary between textual properties and mathematical contents. Bibliography Juan Luis Gastaldi. Why can computers understand natural language. Philosophy & Technology. Under review. Jean-Yves Girard et al. Proofs and types. Cambridge University Press, New York, 1989. Zellig Harris. Structural linguistics. University of Chicago Press, Chicago, 1960. Louis Hjelmslev. Résumé of a Theory of Language. Number 16 in Travaux du Cercle linguistique de Copenhague. Nordisk Sprog-og Kulturforlag, Copenhagen, 1975. Tomas Mikolov et al. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013. Peter D. Turney et al. From frequency to meaning: Vector space models of semantics. CoRR, abs/1003.1141, 2010.
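As a rough illustration of the unit/context duality just described, the following sketch (with a hypothetical, already-segmented toy corpus and one-neighbour contexts standing in for the richer segmentation procedure envisaged here) groups units into types according to the contexts they share:

from collections import defaultdict

# Hypothetical toy corpus, already segmented into units; the procedure described
# above would discover the segmentation itself rather than assume whitespace tokens.
corpus = ["a + b", "a - b", "c + d", "c - d"]

def context(units, i, radius=1):
    """Return the left/right neighbourhood of the unit at position i."""
    left = tuple(units[max(0, i - radius):i])
    right = tuple(units[i + 1:i + 1 + radius])
    return (left, right)

# Collect, for every unit, the set of contexts in which it occurs.
unit_contexts = defaultdict(set)
for line in corpus:
    units = line.split()
    for i, unit in enumerate(units):
        unit_contexts[unit].add(context(units, i))

# Crude stand-in for bi-duality: units with identical context sets get the same type.
types = defaultdict(list)
for unit, ctxs in unit_contexts.items():
    types[frozenset(ctxs)].append(unit)

for n, members in enumerate(types.values()):
    print(f"type {n}: {sorted(members)}")

On this toy corpus the sketch groups "a" with "c", "b" with "d", and "+" with "-", since each pair occurs in exactly the same set of contexts.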
Mikkel Willum Johansen (University of Copenhagen, Denmark) Entering the valley of formalism: Results from a large-scale quantitative investigation of mathematical publications ABSTRACT. As pointed out by Reuben Hersh (1991) there is a huge difference between the way mathematicians work and the way they present their results. In a previous qualitative study on mathematical practice we confirmed this result by showing that although mathematicians frequently use diagrams and figures in their work process, they tend to downplay these representations in their published manuscripts, in part because they feel subjected to genre norms and values when they prepare their work for publication (AUTHOR and ANONYMIZED 2016; Accepted). This result calls for a better understanding of these genre norms and of the development the norms may undergo over time. From a casual point of view, it may seem that the norms are currently in a process of change. The formalistic claim that figures and diagrams are superfluous has been contested by philosophers of mathematics (e.g. Brown 1999, Giaquinto 2007), and looking at mathematics journals and textbooks, one gets the impression that diagrams and figures are being used more frequently. That, however, is merely an impression, as we do not have solid empirical data tracking the representational style used in mathematics texts. In order to fill this gap ANONYMIZED, ANONYMIZED and AUTHOR developed a classification scheme that makes it possible to distinguish between the different types of diagrams used in mathematics based on the cognitive support they offer (AUTHOR et al. 2018). The classification scheme is designed to facilitate large-scale quantitative investigations of the norms and values expressed in the publication style of mathematics, as well as trends in the kinds of cognitive support used in mathematics. We presented the classification scheme at conferences and workshops during the summer of 2018 to get feedback from other researchers in the field. After minor adjustments we applied the scheme to track the changes in publication style in the period 1885 to 2015 in the three mathematics journals Annals of Mathematics, Acta Mathematica and Bulletin of the AMS. In this talk I will present the main results of our investigation, and I will discuss the advantages and disadvantages of our method as well as the possible philosophical implications of our main results. Literature • Hersh, R. (1991): Mathematics has a front and a back. Synthese 80(2), 127-133. • AUTHOR and ANONYMIZED (2016): [Suppressed for review] • AUTHOR and ANONYMIZED (Accepted): [Suppressed for review] • Brown, J. R. (1999): Philosophy of mathematics, an introduction to a world of proofs and pictures. Philosophical Issues in Science. London: Routledge. • Giaquinto, M. (2007): Visual Thinking in mathematics, an epistemological study. New York: Oxford University Press. • AUTHOR, ANONYMIZED and ANONYMIZED (2018): [Suppressed for review] Soroush Rafiee Rad (Bayreuth University, Germany) Olivier Roy (Bayreuth University, Germany) Deliberation, Single-Peakedness and Voting Cycles PRESENTER: Olivier Roy ABSTRACT. A persistent theme in defense of deliberation as a process of collective decision making is the claim that voting cycles, and more generally Arrowian impossibility results, can be avoided by public deliberation prior to aggregation [2,4]. The argument is based on two observations.
First is the mathematical fact that pairwise majority voting always outputs a Condorcet winner when the input preference profile is single-peaked. With its domain restricted to single-peaked profiles, pairwise majority voting satisfies, alongside the other Arrowian conditions, rationality when the number of voters is odd [1]. In particular, it does not generate voting cycles. Second are the conceptual arguments [4, 2] and the empirical evidence that deliberation fosters the creation of single-peaked preferences [3], which is often explained through the claim that group deliberation helps create meta-agreements [2]. These are agreements regarding the relevant dimensions along which the problem at hand should be conceptualized, as opposed to a full consensus on how to rank the alternatives, i.e. a substantive agreement. However, as List [2] observes, single-peakedness is only a formal structural condition on individual preferences. Although single-peaked preferences do entail the existence of a structuring dimension, this does not mean that the participants explicitly agree on what that dimension is. As such, single-peakedness does not reflect any joint conceptualization, which is necessary for meta-agreement. Achieving meta-agreement usually requires the participants to agree on the relevant normative or evaluative dimension for the problem at hand. This dimension will typically reflect a thick concept intertwining factual with normative and evaluative questions, for instance health, well-being, sustainability, freedom or autonomy, to name a few. It seems rather unlikely that deliberation will lead the participants to agree on the meaning of such contested notions. Of course, deliberative democrats have long observed that public deliberation puts rational pressure on the participants to argue in terms of the common good [4], which might be conducive to agreement on a shared dimension. But when it comes to such thick concepts this agreement might only be a superficial one, involving political catchwords, leaving the participants using their own, possibly mutually incompatible understanding of them [5]. All of this does not exclude the fact that deliberation might make it more likely, in comparison with other democratic procedures, to generate single-peaked preferences from meta-agreements. The point is rather that by starting from the latter one puts the bar very high, especially if there appear to be other ways to reach single-peaked preferences or to avoid cycles altogether. In view of this, two questions arise regarding the claim that deliberation helps avoid cycles: Q1: Can cycles be avoided by pre-voting deliberation in cases where they are comparatively more likely to arise, namely in impartial cultures, i.e. where a voter picked at random is equally likely to have any of the possible strict preference orderings on the alternatives? Q2: If yes, are meta-agreements or the creation of single-peaked preferences necessary or even helpful for that? In this work we investigate these questions more closely. We show that, except in cases where the participants are extremely biased towards their own opinion, deliberation indeed helps to avoid cycles. It does so even in rather unfavourable conditions, i.e. starting from an impartial culture and with participants rather strongly biased towards themselves. Deliberation also creates single-peaked preferences. Interestingly enough, however, this does not appear particularly important for avoiding cycles.
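The single-peakedness fact invoked above can be checked directly on small profiles; the following minimal sketch (with made-up three-voter profiles, not the simulation model used in this work) computes pairwise majorities and looks for a Condorcet winner:

from itertools import combinations

def pairwise_majority(profile, alternatives):
    """wins[x][y] is True iff a strict majority of rankings place x above y."""
    wins = {x: {} for x in alternatives}
    for x, y in combinations(alternatives, 2):
        x_over_y = sum(ranking.index(x) < ranking.index(y) for ranking in profile)
        wins[x][y] = x_over_y > len(profile) / 2
        wins[y][x] = len(profile) - x_over_y > len(profile) / 2
    return wins

def condorcet_winner(profile, alternatives):
    wins = pairwise_majority(profile, alternatives)
    for x in alternatives:
        if all(wins[x][y] for y in alternatives if y != x):
            return x
    return None  # no Condorcet winner, e.g. because of a majority cycle

alternatives = ["a", "b", "c"]

# A Condorcet cycle: a beats b, b beats c, c beats a.
cyclic_profile = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]

# A profile that is single-peaked with respect to the ordering a < b < c.
single_peaked_profile = [["a", "b", "c"], ["b", "a", "c"], ["c", "b", "a"]]

print(condorcet_winner(cyclic_profile, alternatives))         # None
print(condorcet_winner(single_peaked_profile, alternatives))  # b

On the cyclic profile no Condorcet winner exists, while on the single-peaked profile the peak of the median voter wins, as the restricted-domain result leads one to expect.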
Most if not all voting cycles are eliminated, but not by reaching single-peaked preferences. We show this in a minimalistic model of group deliberation in which the participants repeatedly exchange, and rationally update their opinions. Since this model completely abstracts away from the notion of meta agreement, it provides an alternative, less demanding explanation as to how pre-voting deliberation can avoid cyclic social preferences, one that shifts the focus from the creation of single-peaked preferences to rational preference change and openness to change one's mind upon learning the opinion of others. [1] K. J. Arrow. Social Choice and Individual Values. Number 12. Yale University Press, 1963. [2] C. List. Two concepts of agreement. The Good Society, 11(1):72–79, 2002. [3] C. List, R. C. Luskin, J. S. Fishkin, and I. McLean. Deliberation, single-peakedness, and the possibility of meaningful democracy: evidence from deliberative polls. The Journal of Politics, 75(1):80–95, 2012. [4] D. Miller. Deliberative democracy and social choice. Political studies, 40(1 suppl):54–67, 1992. [5] V. Ottonelli and D. Porello. On the elusive notion of meta-agreement. Politics, Philosophy & Economics, 12(1):68–92, 2013. Alexandru Baltag (Institute for Logic, Language and Computation, Netherlands) Sonja Smets (Institute for Logic, Language and Computation, Netherlands) Learning Probabilities: A Logic of Statistical Learning PRESENTER: Soroush Rafiee Rad ABSTRACT. We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of `radical uncertainty' is in terms of imprecise probabilities, i.e. representing the agent's knowledge as a set of probability measures. We add to this model a plausibility map, associating to each measure a plausibility number, as a way to go beyond what is known with certainty and represent the agent's beliefs about probability. There are a number of standard examples: Shannon Entropy, Centre of Mass etc. We then consider learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g. picking marbles from the bag); and (2) learning higher-order information about the distribution (in the shape of linear inequalities, e.g. we are told there are more red marbles than green marbles). The first changes only the plausibility map (via a `plausibilistic' version of Bayes' Rule), but leaves the given set of measures unchanged; the second shrinks the set of measures, without changing their plausibility. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. But our belief change does not comply with standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability). We end by sketching the contours of a dynamic doxastic logic for statistical learning. Stephen Senn (Luxembourg Institute of Health, UK) De Finetti meets Popper or Should Bayesians care about falsificationism? ABSTRACT. Views of the role of hypothesis falsification in statistical testing do not divide as cleanly between frequentist and Bayesian views as is commonly supposed. 
This can be shown by considering the two major variants of the Bayesian approach to statistical inference and the two major variants of the frequentist one. A good case can be made that the Bayesian, De Finetti, just like Popper, was a falsificationist. A thumbnail view of De Finetti's theory of learning, which is not just a caricature, is that your subjective probabilities are modified through experience by noticing which of your predictions are wrong, striking out the sequences that involved them and renormalising. On the other hand, in the formal frequentist Neyman-Pearson approach to hypothesis testing, you can, if you wish, shift the conventional null and alternative hypotheses, making the latter the straw-man and, by 'disproving' it, asserting the former. The frequentist, Fisher, however, at least in his approach to the testing of hypotheses, seems to have taken the strong view that the null hypothesis was quite different from any other and that there was a strong asymmetry in the inferences that followed from the application of significance tests. Finally, to complete a quartet, the Bayesian geophysicist Jeffreys, inspired by Broad, specifically developed his approach to significance testing in order to be able to 'prove' scientific laws. By considering the controversial case of equivalence testing in clinical trials, where the object is to prove that 'treatments' do not differ from each other, I shall show that there are fundamental differences between 'proving' and falsifying a hypothesis and that this distinction does not disappear by adopting a Bayesian philosophy. I conclude that falsificationism is important for Bayesians also, although it is an open question as to whether it is enough for frequentists. Timothy Childers (Institute of Philosophy, Czech Academy of Sciences, Czechia) Comment on "De Finetti meets Popper" ABSTRACT. The claim that subjective Bayesianism is a form of falsificationism in which priors are rejected in light of evidence clashes with a fundamental symmetry of Bayesianism, p(¬A) = 1 – p(A), tying together confirmation and falsification of a hypothesis. Wherever there is disconfirmation, there is falsification. This duality arises from the Boolean structure of a probability space. While Popper holds that there is a fundamental asymmetry between the two, subjective Bayesianism must view confirmation and disconfirmation as two sides of the same coin. Moreover, the standard account is that priors are neither falsified nor verified, but are revised in light of accepted evidence, again, dually. That said, there are forms of Bayesianism that are closer to Popper's methodology. In particular, one form, which we might term London Bayesianism, has more in common with the Popperian approach than is generally recognized. (I choose the name since this is where it originated, especially in the work of Colin Howson and Peter Urbach: I take it as opposed to the Princeton Bayesianism of David Lewis and his students). This form of Bayesianism is motivated by the acceptance of a negative solution to the problem of induction and a deep scepticism towards mechanical approaches to scientific methodology. In particular, Howson's rejection of diachronic conditionalization in favour of synchronic conditionalization shifts Bayesianism toward a generalized account of consistency at a given time, away from a view of Bayes' Theorem as providing the ideal account of learning from experience (whatever that might be). This also leads to the rejection of countable additivity both by Howson and others.
Finally, as the author points out, standard statistical methodology incorporates an adjustable parameter (levels of significance) for which no independent justification is given. Thus it exemplifies an ad hoc solution to scientific discovery, and so cannot be seen as taking falsifications seriously. 15:15-16:15 Session 14D: B7 Science education, pseudo-science and fake news Dennis Apolega (Philippine Normal University, Philippines) CANCELLED: Does Scientific Literacy Require a Theory of Truth? ABSTRACT. From "flat earthers" to "anti-vaxxers", to the hoax cures and diets in social media, the importance of scientific literacy cannot be emphasized enough. On the one hand, this informs one of the challenges of those in science education. Teaching approaches may be changed and deficiencies in the curriculum addressed. On the other hand, this opens the discussion to epistemological questions of truth and knowledge. Easily one can go to "The earth is flat is false", or "Baking soda is not a treatment for cancer" for these kinds of discussions that would involve scientific literacy and epistemology. This paper aims to show that while scientific literacy may benefit from discussions of epistemological issues, it does not require a theory of truth. This appears counterintuitive since there is a view that epistemology needs to account for the success of science in terms of its being truth conducive. This is the view that Elgin (2017) calls veritism. Following Elgin, some of the problems with veritism in science will be discussed in terms of their relevance to scientific literacy. Popularizers of science would also probably object to this position that a theory of truth is not required for scientific literacy, especially since this paper also looks back at Rorty's (1991) views on science to further buttress its position. Indeed, Rorty's views on science may prove more relevant to issues in scientific literacy than to science itself. REFERENCES Elgin, Catherine. 2017. True Enough. MIT Press. Rorty, Richard. 1991. Objectivity, Relativism and Truth. Cambridge University Press. Jan Štěpánek (Masaryk University, Czechia) Tomáš Ondráček (Masaryk University, Czechia) Iva Svačinová (University of Hradec Králové, Czechia) Michal Stránský (Tomáš Baťa University in Zlín, Czechia) Paweł Łupkowski (Adam Mickiewicz University, Poland) Impact of Teaching on Acceptance of Pseudo-Scientific Claims PRESENTER: Jan Štěpánek ABSTRACT. Can teaching have any impact on students' willingness to embrace pseudo-scientific claims? And if so, will this impact be significant? This paper aims to present ongoing research, conducted in two countries and at four universities, which aims to answer these questions. The research is based on previous work by McLaughlin & McGill (2017). They conducted a study among university students which seems to show that teaching critical thinking can have a statistically significant impact on the acceptance of pseudo-scientific claims by students. They compared a group of students that attended a course on critical thinking and pseudo-scientific theories with a control group of students who attended a course on general philosophy of science, using the same questionnaire containing the pseudo-scientific claims. The questionnaire was administered at the onset of the semester (along with a Pew Research Center Science Knowledge Quiz), and then at the end of the semester.
While there was no significant change in the degree of belief in pseudo-scientific claims in the control group, the experimental group showed a statistically significant decrease in belief in pseudo-scientific claims. In the first phase of our research, we conducted a study similar to that of McLaughlin & McGill, though we were not able to replicate their results. There was no significant change in belief in pseudo-scientific claims among the study's participants. This, in our opinion, is due to the imperfections and flaws in both our and McLaughlin & McGill's studies. In this paper, we would like to present our research along with the results obtained during its first phase. We will also discuss the shortcomings and limitations of our research and the research it is based on. Finally, we would like to present and discuss future plans for the next phase of our research into the teaching of critical thinking and its transgression in cases focusing on the humanities and science. McLaughlin, A.C. & McGill, A.E. (2017): Explicitly Teaching Critical Thinking Skills in a History Course. Science & Education 26(1–2), 93–105. Adam, A. & Manson, T. (2014): Using a Pseudoscience Activity to Teach Critical Thinking. Teaching of Psychology 41(2), 130–134. Tobacyk, J. (2004): A revised paranormal belief scale. International Journal of Transpersonal Studies 23, 94–98. 15:15-16:15 Session 14E: IS B1 Aliseda Atocha Aliseda (National, Mexico) A plurality of methods in the philosophy of science: how is that possible? ABSTRACT. In this talk, the place of logical and computational methods in the philosophy of science shall be reviewed in connection with the emergence of the cognitive sciences. While the interaction of several disciplines was a breeding ground for diverse methodologies, the challenge to methodologists of science to provide a general framework still remains. As is well known, the distinction between the context of discovery and the context of justification, which served as a basis for logical positivism, left out of its research agenda (especially from a formal perspective) a very important part of scientific practice, that which includes issues related to the generation of new theories and scientific explanations, concept formation, as well as other aspects of discovery in science. Some time later, in the seventies of the last century, cognitive scientists revived some of the forgotten questions related to discovery in science within research topics such as mental patterns of discovery and via computational models, like those found for hypothesis formation. This helped to bring scientific discovery to the fore as a central problem in the philosophy of science. Further developments in computer science, cognitive science and logic itself provided a new set of tools of a logical and computational nature. The rather limited logic used by the Vienna Circle was now augmented by the discovery of quite new systems of logic, giving rise to a research context in which computer science, logic and philosophy of science interacted, each of them providing its own methodological tools in the service of philosophy of science. A further interaction arose in the eighties of the last century, in this case between logic and history, giving rise to computational philosophy of science, a space in which history and computing met on a par. This created the possibility of a partial synthesis between the logical and the historical approaches with a new computational element added to the mix.
The present setting, in which we have all of the logical, historical and computational approaches to philosophy of science, fosters the view that what we need is a balanced philosophy of science, one in which we take advantage of a variety of methodologies, all together giving a broad view of science. However, it is not at all clear how it is that such a plurality of methods can be successfully integrated. Amita Chatterjee (Jadavpur University, Kolkata, India) Agnė Alijauskaitė (Vilnius University, Lithuania) Liability Without Consciousness? The Case of a Robot ABSTRACT. It is well known that the law punishes those who cause harm to someone else. However, the criteria for punishment become complicated when applied to non-human agents. When talking about non-human agency we primarily have in mind robot agents. Robot agency could be reasonably defended in terms of liability, the mental state of being liable. The roots of the problem lie in defining robots' ability to have mental states, but even when we put this particular problem aside, the question of liability seems to be of crucial value when discussing a harm-causing technology. Since the question of liability requires special attention to the domain of mental states, we argue that it is crucial for the legal domain to define the legal personhood of a robot. We should try to answer the question: what constitutes a legal person in terms of non-human agency? If legal personhood is the ability to have legal rights and obligations, how can we ascribe these human qualities to a non-human agent? Are computing paradigms able to limit robots' ability to cause harm? If so, can legal personhood still be ascribed (bearing in mind that computing could be limiting free will)? These questions are of the highest importance when thinking about whether we should punish a robot, and how such punishment could function given non-human personhood. David Fuenmayor (Freie Universität Berlin, Germany) Christoph Benzmüller (Freie Universität Berlin, Germany) Automated Reasoning with Complex Ethical Theories -- A Case Study Towards Responsible AI PRESENTER: Christoph Benzmüller ABSTRACT. The design of explicit ethical agents [7] is faced with tough philosophical and practical challenges. We address in this work one of the biggest ones: How to explicitly represent ethical knowledge and use it to carry out complex reasoning with incomplete and inconsistent information in a scrutable and auditable fashion, i.e. interpretable for both humans and machines. We present a case study illustrating the utilization of higher-order automated reasoning for the representation and evaluation of a complex ethical argument, using a Dyadic Deontic Logic (DDL) [3] enhanced with a 2D-Semantics [5]. This logic (DDL) is immune to known paradoxes in deontic logic, in particular "contrary-to-duty" scenarios. Moreover, conditional obligations in DDL are of a defeasible and paraconsistent nature and thus lend themselves to reasoning with incomplete and inconsistent data. Our case study consists of a rational argument originally presented by the philosopher Alan Gewirth [4], which aims at justifying an upper moral principle: the "Principle of Generic Consistency" (PGC). It states that any agent (by virtue of its self-understanding as an agent) is rationally committed to asserting that (i) it has rights to freedom and well-being; and that (ii) all other agents have those same rights.
The argument used to derive the PGC is by no means trivial, has stirred much controversy in legal and moral philosophy during the last decades, and has also been discussed as an argument for the a priori necessity of human rights. Most interestingly, the PGC has lately been proposed by András Kornai [6] as a means to bound the impact of artificial general intelligence (AGI). Kornai's proposal draws on the PGC as the upper ethical principle which, assuming it can be reliably represented in a machine, will guarantee that an AGI respects basic human rights (in particular to freedom and well-being), on the assumption that it is able to recognize itself, as well as humans, as agents capable of acting voluntarily on self-chosen purposes. We will show an extract of our work on the formal reconstruction of Gewirth's argument for the PGC using the proof assistant Isabelle/HOL (a formally-verified, unabridged version is available in the Archive of Formal Proofs [8]). Independently of Kornai's claim, our work demonstrates, by way of example, that reasoning with ambitious ethical theories can now be successfully automated. In particular, we illustrate how it is possible to exploit the high expressiveness of classical higher-order logic as a metalanguage in order to embed the syntax and semantics of some object logic (e.g. DDL enhanced with quantification and contextual information), thus turning a higher-order prover into a universal reasoning engine [1] and allowing for seamlessly combining and reasoning about and within different logics (modal, deontic, epistemic, etc.). In this sense, our work provides evidence for the flexible deontic logic reasoning infrastructure proposed in [2]. References 1. C. Benzmüller. Universal (meta-)logical reasoning: Recent successes. Science of Computer Programming, 172:48-62, March 2019. 2. C. Benzmüller, X. Parent, and L. van der Torre. A deontic logic reasoning infrastructure. In F. Manea, R. G. Miller, and D. Nowotka, editors, 14th Conference on Computability in Europe, CiE 2018, Proceedings, volume 10936 of LNCS, pages 60-69. Springer, 2018. 3. J. Carmo and A. J. Jones. Deontic logic and contrary-to-duties. In Handbook of Philosophical Logic, pages 265-343. Springer, 2002. 4. A. Gewirth. Reason and morality. University of Chicago Press, 1981. 5. D. Kaplan. On the logic of demonstratives. Journal of Philosophical Logic, 8(1):81-98, 1979. 6. A. Kornai. Bounding the impact of AGI. Journal of Experimental & Theoretical Artificial Intelligence, 26(3):417-438, 2014. 7. J. Moor. Four kinds of ethical robots. Philosophy Now, 72:12-14, 2009. 8. XXXXXXX. Formalisation and evaluation of Alan Gewirth's proof for the principle of generic consistency in Isabelle/HOL. Archive of Formal Proofs, 2018. 15:15-16:15 Session 14G: C8 Philosophy of the applied sciences and technology Anna Estany (Universitat Autònoma de Barcelona, Spain) Robin Kopecký (Charles University, The Karel Čapek Center for Values in Science and Technology, Czechia) Michaela Košová (Charles University, The Karel Čapek Center for Values in Science and Technology, Czechia) How virtue signalling makes us better: Moral preference of selection of types of autonomous vehicles. PRESENTER: Robin Kopecký ABSTRACT. In this paper, we present a study on moral judgement on autonomous vehicles (AV).
We employ a hypothetical choice of three types of "moral" software in a collision situation ("selfish", "altruistic", and "aversive to harm") in order to investigate moral judgement beyond this social dilemma in the Czech population. We aim to answer two research questions: whether public circumstances (i.e. if the software choice is visible at first glance) make the personal choice more "altruistic", and what type of situation is most problematic for the "altruistic" choice (namely whether it is the public one, the personal one, or the one for a person's offspring). We devised a web-based study running between May and December of 2017 and gathered 2769 respondents (1799 women, 970 men; age IQR: 25-32). This study was a part of research preregistered at OSF before the start of data gathering. The AV-focused block of the questionnaire was opened by brief information on AVs and three proposed program solutions for the previously introduced "trolley problem like" collisions: "selfish" (with preference for passengers in the car), "altruistic" (with preference for the highest number of saved lives), and "aversion to harm" (which will not actively change direction leading to killing a pedestrian or a passenger, even though it would save more lives in total). Participants were asked the following four questions: 1. What type of software would you choose for your own car if nobody was able to find out about your choice ("secret/self")? 2. What type of software would you choose for your own car if your choice was visible at first glance ("public/self")? 3. What type of software would you choose for the car of your beloved child if nobody was able to find out ("child")? 4. What type of software would you vote for in secret in the parliament if it was to become the only legal type of AV ("parliament")? The results are as follows (a chi-square test of independence was performed): "Secret/self": "selfish" (45.2 %), "altruistic" (45.2 %), "aversion to harm" (9.6 %). "Public/self": "selfish" (30 %), "altruistic" (58.1 %), "aversion to harm" (11.8 %). In the public choice, people were less likely to choose selfish software for their own car. "Child": "selfish" (66.6 %), "altruistic" (27.9 %), "aversion to harm" (5.6 %). A vote in parliament for legalizing a single type: "selfish" (20.6 %), "altruistic" (66.9 %), "aversion to harm" (12.5 %). In the choice of a car for one's own child, people were more likely to choose selfish software than in the choice for themselves. Based on the results, we can conclude that the public choice is more likely to pressure consumers to accept the altruistic solution, making it a reasonable and relatively cheap way to shift them towards higher morality. In less favourable news, the general public tends towards heightened sensitivity and selfishness in the case of one's own offspring, and a careful approach is needed to prevent moral panic.
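The kind of chi-square test of independence reported here can be reproduced mechanically; a minimal sketch (with made-up counts for a two-condition comparison, purely for illustration and not the study's actual data) is:

from scipy.stats import chi2_contingency

# Hypothetical counts of choices (selfish, altruistic, aversion to harm)
# under two conditions; illustrative only, not the study's data.
observed = [
    [452, 452, 96],   # e.g. a "secret" condition
    [300, 581, 118],  # e.g. a "public" condition
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")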
It might lead to some new forms of consciousness and emergence of the modified objective reality. Developing and introducing nanotechnologies brings up new scientific issues being closely connected with the realization of possible projects such as, for instance, complete description of thinking processes and perception of the reality by human brain, slowdown of aging processes, opportunity of human organism rejuvenation, development of brain/brain or brain/computer interfaces, creation of robots and other devices possessing at least partial individuality, etc. Penetrating technologies into human perception inevitably results in the hybrid reality, which eliminates any borders between man's virtual personality and his physical embodiment. Space ideas of physical limits of communication and identification also change due to the fact that human presence in the communication medium is cognized as virtual and real simultaneously. It turns out to be an absolutely new phenomenon of human existence having in many ways the constructivism principles in its foundation. The active role of cognition is the most important aspect of the paradigm analyzed in the report as the methodology of this new type of technologies. Such an opportunity opens unlimited perspectives for individual and collective creative work. The author examines the dialogue between man and nature by means of the technologies. He demonstrates that they are directed to the decision of scientific issues, mostly having a constructive nature under the influence of virtualization of human consciousness and social relations. The report illustrates on the example of the 'instrumental rationality' paradigm that as NBICS-technologies include the Internet, they can't be used in vacuum. They are interconnected and imply a number of political, economical and social aspects which accompany them. As a result, they're becoming a characteristic of the public style of thinking. The emphasis is made on socio-cultural prospects of the new kind of technologies and their constructivism nature. Any cognition process turns into a social act as some norms and standards, which are not related to a significant originator, but being recognized by all the society involved in the process, appear among the representatives of different knowledge spheres during the communication process. From the scientific point of view, the consequences of NBICS application are both the unprecedented progress in medicine, molecular biology, genetics, proteomics and the newest achievements in electronics, robotics and software. They will provide a chance to create artificial intelligence, to prolong the life expectancy unprecedentedly, to create new public forms, social and psychical processes. At the same time man doesn't stop to be rational under the influence of technologies. His cognition process is accompanied by creative and constructive human activity leading to the effects that can reveal themselves, for instance, in the modification of human sensitivity level by means of significant transformation of physical capabilities. In turn, it should lead to nonreversible consequences, because man himself, his body and consciousness turn into an integral part of complex eco-, socio-cultural and socio-technical systems. That's why the philosophical reflection of ecological, social and cultural results of NBICS-technologies introduction and application is becoming more and more topical. 
The report concludes that NBICS overcomes all previous technological achievements according to its potential and socio-cultural effects. Hein van den Berg (University of Amsterdam, Netherlands) Theoretical Virtues in Eighteenth-Century Debates on Animal Cognition ABSTRACT. This paper discusses the role of the theoretical virtues (i) unification, (ii) simplicity, and (iii) scientific understanding in eighteenth-century debates on animal cognition. It describes the role that these virtues play in the construction of different theories of animal cognition and aims to establish the relative weight that these virtues were assigned. We construct a hierarchy of theoretical virtues for the biologists and philosophers Georg-Louis Leclerc Buffon (1707-1788), Hermann Samuel Reimarus (1694-1768), and Charles-Georges LeRoy (1723-1789) and Etienne Bonnot de Condillac (1714-1780). Through discussing these virtues and the importance assigned to these virtues by different authors, we can determine how different theoretical virtues shaped and determined the theories articulated by Buffon, Reimarus, Le Roy and Condillac. Theoretical virtues such as unification, simplicity and scientific understanding have received a lot of attention in the philosophical literature. An important question is how these different theoretical virtues relate and how they are supposed to be ranked. We can imagine questions such as the following: confronted between a choice between a simple theory and a theory that yields unified explanations, do we, other things being equal, prefer simple theories over theories that yield unified explanations? Or do we prefer theories that yield scientific understanding over simple theories? To answer these types of questions requires making a hierarchy of theoretical virtues. In this paper, I do not have the systematic aim of constructing such a hierarchy. Rather, I have the historical aim of showing that eighteenth-century debates on animal cognition can be profitably understood if we analyze the relative weight that different authors assigned to different theoretical virtues. I will show that within eighteenth-century debates on animal cognition we can distinguish three core positions: (a) Buffon's mechanism, (b) Reimarus' theory of instinct, and (c) Le Roy's and Condillac's sensationalist position which assigns intelligence to animals. I show that these positions are partly shaped by the theoretical virtues that these authors adopted. Thus, Buffon's mechanism is shaped by his acceptance of unification as a theoretical virtue, Reimarus' theory of instinct is shaped by his adoption of a particular virtue of simplicity, whereas Le Roy's and Condillac's sensationalist position is shaped by their acceptance of the theoretical virtue of scientific understanding. I will further argue, however, that the way in which Buffon, Reimarus, Le Roy and Condillac understand different theoretical virtues is also shaped by their theoretical commitments. Thus, for example, Buffon's mechanism influences the way he conceives of unification. Although the appeal to different theoretical virtues thus partly explains the theoretical position articulated by Buffon, Reimarus, Le Roy and Condillac, the converse is also true: their theoretical positions shaped the way in which they conceived of different theoretical virtues. Finally, I show that the different theories on animal cognition could often appeal to the same theoretical virtues for support. 
This means that the theoretical virtues are sometimes incapable of necessitating a choice between different theories.

Martin Wasmer (Leibniz University Hannover, Germany)
Bridging between biology and law: European GMO law as a case for applied philosophy of science
ABSTRACT. Laws regulating the permissibility of producing and releasing genetically modified organisms (GMOs) into the environment address a multitude of normatively loaded issues and frequently lead to heated public debate. Drafting new legislation as well as interpreting and operationalizing current GMO law draws on knowledge from both (applied) biology and the study of law. The European Directive 2001/18/EC regulates the deliberate release of GMOs, such as genetically modified crops in agriculture. Its legal definition of GMO depends on the interpretation of the vaguely formulated phrase "altered in a way that does not occur naturally" (Bobek, 2018). However, this phrase decides which organisms do or do not fall under the regulatory obligations of European GMO law, with far-reaching implications for what is planted in our fields and served on our plates. I provide a framework for interpreting the European GMO definition along an outcome-based approach, by identifying two main issues that challenge its straightforward application to organisms bred by new breeding techniques: (1) First, three conflicting concepts of naturalness can be distinguished (following Siipi, 2008; Siipi & Ahteensuu, 2016), and the decision between them is necessarily based on values. (2) Second, a theory of biological modalities is required for the operationalization of natural possibilities (following Huber, 2017). Once these conceptual issues are solved, the GMO definition can be operationalized for regulatory practice. This case study on the GMO definition in European law shows how history and philosophy of science can contribute to bridging across the disciplines. Note that legal methods alone do not suffice to interpret the GMO definition in the context of new technologies, because there are no legal precedents and no comparable instances in the legal body in the case of (radically) new scientific developments. For this reason, lawyers call on experts from biology and biotechnology to draw on scientific ontologies, emphasizing the importance of science for policymaking (cf. Douglas, 2009). On the other hand, methods from biology alone also do not suffice to operationalize the GMO definition, since ontological choices do not only depend on empirical evidence but also on value judgments (Ludwig 2014, 2016). Instead, HPS is the go-to discipline for the clarification of conceptual issues in multidisciplinary normative contexts.
References: Bobek, M. (2018). Reference for a preliminary ruling in the Case C-528/16. No. ECLI:EU:C:2018:20. Douglas, H. (2009). Science, Policy, and the Value-Free Ideal. University of Pittsburgh Press. Huber, M. (2017). Biological Modalities (PhD thesis). University of Geneva. Kahrmann, J. et al. (2017). Aged GMO Legislation Meets New Genome Editing Techniques. Zeitschrift für Europäisches Umwelt- und Planungsrecht, 15(2), 176–182. Ludwig, D. (2014). Disagreement in Scientific Ontologies. Journal for General Philosophy of Science, 45(1), 119–131. Ludwig, D. (2016). Ontological Choices and the Value-Free Ideal. Erkenntnis, 81(6), 1253–1272. Siipi, H. (2008). Dimensions of Naturalness. Ethics & the Environment, 13(1), 71–103. Siipi, H., & Ahteensuu, M. (2016). Moral Relevance of Range and Naturalness in Assisted Migration.
Environmental Values, 25(4), 465–483. 15:15-15:45 Session 14I: B2 Abduction Abductive Inference and Selection Principles ABSTRACT. Abductive inference appears in various contexts of cognitive processes. Usually, the two prominent uses of abduction with respect to the explanatory hypotheses in question are distinguished: a) the context of discovery (or hypothesis-generation/formulation); and, b) the context of justification (or evidential support). Notwithstanding the other uses of abduction (e.g. computational tasks), I pay close attention to an overlooked context of abductive inference: c) the context of explanatory selection. I propose to distinguish these three kinds of explanatory inferences explicitly by using a notion of a selection principle. The selection principle is optimally construed (or modelled) as a partial function defined on a (non-empty) set of (explanatory) hypotheses with respect to an explicit piece of evidence E and background B comprising doxastic, epistemic and axiological items. If a given selection principle operates on an admissible n-tuple of arguments, it yields at most one explanatory hypothesis (or its content-part) as a function-value. Having the notion of selection principle at our disposal, it is possible to make the difference among those three contexts of the use of abduction completely explicit. In particular, I argue for distinguishing the three kinds of selection principles operating in these contexts. These kinds of principles differ both with respect to the arguments they operate on, and to the function-values they yield. Moreover, I provide explicit reasons for identifying inference to the best explanation (henceforth "IBE") only with abductive inference in a justificatory context of reasoning. As a consequence, I show that, at least, some widely-discussed objections against IBE in the literature (such as van Fraassen's (1989) argument from a bad lot) are not relevant to other forms of abductive inference in the context of discovery or the context of explanatory selection. Hence, such a clarification of different selection principles underlying different contexts of abduction appears to be fruitful for re-considering the question of which traditional objections against IBE are also objections against abductive inference in general. References Aliseda, A. (2006): Abductive Reasoning. Springer. Douven, I. (2002): Testing Inference to the Best Explanation. Synthese 130, No. 3, 355-377. Douven, I. (2011): Abduction. In: Zalta, E. N. (ed.): The Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/abduction. Harman, G. (1965): Inference to the Best Explanation. Philosophical Review 74, No. 1, 88-95. Josephson, J. & S. Josephson (eds.) (1996): Abductive Inference. Computation, Philosophy, Technology. Lipton, P. (2004): Inference to the Best Explanation. London: Routledge. McCain, K. & T. Poston (eds.) (2017): Best Explanations. New Essays on Inference to the Best Explanation. Oxford: Oxford University Press. Niiniluoto, I. (1999): Defending Abduction. Philosophy of Science 66, S436-S451. Okasha, S. (2000): Van Fraassen's Critique of Inference to the Best Explanation. Studies in History and Philosophy of Science 31, 691-710. Poston, T. (2014): Reason and Explanation. Basingstoke: Palgrave Macmillan. Psillos, S. (2004): Inference to the Best Explanation and Bayesianism. In: F. Stadler (ed.): Induction and Deduction in the Sciences. Dordrecht: Kluwer, 83-91. Schurz, G. (2008): Patterns of Abduction. Synthese 164, 201-234. 
van Fraassen, B. (1989): Laws and Symmetry. Oxford: Oxford University Press. 15:15-16:15 Session 14J: B4 Metaphysical aspects: Conceptual analysis 1 Manuel Gustavo Isaac (Swiss National Science Foundation (SNSF) + Institute for Logic, Language, and Computation (ILLC), Netherlands) Rogelio Miranda (Universidad, Mexico) Three Problems with the Identification of Philosophy with Conceptual Analysis ABSTRACT. Keywords: conceptual analysis; methodology; philosophy; science In this conference I am going to argue that, although we do conceptual analysis when we do philosophy, the world plays a central role in the constitution of our philosophical concepts, statements and theories. Particularly, their meaning is partially determined by the way the world is. I will object to what the advocates of conceptual analysis, specifically the members of the Canberra Plan (Jackson and Chalmers (2001); Jackson, F., (1998)) –who, arguably, have advanced the most influential antinaturalistic metaphilosophical view in our days– consider to be the conceptual elements of philosophical practice: the two steps of philosophical analysis and the deductive implication of the folk vocabulary by the scientific one. I will advance three main problems for the purely conceptual and aprioristic status of these components: (P1) Science also does conceptual analysis (Jackson, 1998; Larkin, McDermott, & Simon, 1980; Tallant, 2013). (P2) Philosophy also depends on the world, which is known by us through observation and experimentation (specifically, our implicit folk theories depend on the world (Arthur, 1993; Chassy & Gobet, 2009; Goldman, 2010). (P3) The deduction of the folk and philosophical vocabulary from the vocabulary of the sciences presupposes factual and a posteriori elements (Williamson, 2013). The main conclusion is that even if we agree that philosophy does conceptual analysis, empirical evidence has shown us that philosophy still depends on the way the world is. So, conceptual analysis doesn't differentiate philosophy –neither in method, nor in subject matter– from scientific practice in the way that the conceptual analysts wanted it to. The world partially determines, a posteriori, the nature of the two-step methodology of conceptual analysis. Therefore, the possible identification of philosophy with conceptual analysis cannot establish a difference in kind between philosophy and science, be it semantic or epistemic. This leaves us with the problem of explaining why these activities seem so different. I think that this question can be seen as a matter of degree, but this will be the subject for another conference. References Arthur, S. R. (1993). Implicit Knowledge and Tacit Knowledge. New York and Oxford: Oxford University Press. Chalmers, D., & Jackson, F. (2001). Conceptual Analysis and Reductive Explanation. Philosophical Review, 153-226. Chassy, P., & Gobet, F. (2009). Expertise and Intuition: A Tale of Three Theories. Minds & Machines, 19, 151-180. Goldman, A. (2010). Philosophical Naturalism and Intuitional Methodology. Proceedings and Addresses of the American Philosophical Association, 84, 115-150. Jackson, F. (1998). From Metaphysics to Ethics. Oxford: Clarendon Press. Larkin, J. H., McDermott, J. S., & Simon, H. A. (1980). Expert and novice performance in solving physics problems. Science, 208, 1335-1342. Tallant, J. (2013). Intuitions in physics. Synthese, 190, 2959-2980. Williamson, T. (2013). How Deep is the Distinction betwee A Priori and A Posteriori Knowledge? In A. C. 
Thurow (Ed.), The A Priori In Philosophy (pp. 291-312). Oxford: Oxford University Press. Matt Barker (Concordia University, Canada) Using norms to justify theories within definitions of scientific concepts ABSTRACT. This paper is about scientific concepts that are often thought to correspond to categories in nature. These range from widely known concepts such as SPECIES, to more specialized concepts like 2,4-DIHYDROXYBUTYRIC ACID METABOLIC PATHWAY, thought to correspond to a category to which certain synthetic metabolic pathways belong. A typical definition of such a concept summarizes or otherwise suggests a theory about the conditions that constitute belong to the corresponding category. So these are theories that make constitution claims. For several decades most philosophical discussions about such concepts have been metaphysical. This paper instead helps defend an epistemic thesis: Normative conventionalism: If an agent is justified in endorsing the constitution claims from a definition of the concept C, then this stems at least in part from the constitution claims being in accord with norms about how agents ought to categorize things, and this contribution to justification is independent of any degree of correspondence between the constitution claims and supposed modal facts. (cf. Thomasson 2013) To allow for detailed (rather than complete) defense, the paper restricts its focus to one concept, the persistent BIOLOGICAL SPECIES CONCEPT (BSC). The paper first uncovers how the BSC's typical definition (e.g., Mayr 2000) is more profoundly ambiguous than others have noted. From the definition, one can infer several extensionally non-equivalent and complex sets of constitution claims. Next the paper interprets the practices of relevant species biologists (e.g., Coyne and Orr 2004) as implicitly appealing to what have been called classificatory norms (Slater 2017) when selecting between competing BSC constitution claims. Finally, the paper argues this is wise because modal facts cannot alone tell biologists which constitution claims to endorse, and classificatory norms should help take up that slack. The conventionalism thus supported is interesting because it differs from others. It is about how to specify constitution claims for a given concept, rather than about selecting between multiple concepts (Dupré 1993; Kitcher 2001) or about when constitutive conditions are satisfied (Barker and Velasco 2013). Barker, Matthew, and Joel Velasco. 2013. "Deep Conventionalism about Evolutionary Groups." Philosophy of Science 80:971–82. Coyne, Jerry, and H. A. Orr. 2004. Speciation. Sunderland, MA: Sinauer. Dupré, John. 1993. The Disorder of Things: Metaphysical Foundations of the Disunity of Science. Cambridge, MA: Harvard University Press. Kitcher, Philip. 2001. Science, Truth, and Democracy. Oxford University Press. Mayr, Ernst. 2000. "The Biological Species Concept." In Species Concepts and Phylogenetic Theory: A Debate, edited by Quentin Wheeler and Rudolf Meier, 17–29. New York: Columbia University Press. Millstein, Roberta L. 2010. "The Concepts of Population and Metapopulation in Evolutionary Biology and Ecology." In Evolution Since Darwin: The First 150 Years, edited by M. Bell, D. Futuyma, W. Eanes, and J. Levinton. Sunderland, MA: Sinauer. Slater, Matthew H. 2017. "Pluto and the Platypus: An Odd Ball and an Odd Duck - On Classificatory Norms." Studies in History and Philosophy of Science 61:1–10. Thomasson, Amie. 2013. "Norms and Necessity." The Southern Journal of Philosophy 51:143–60. 
15:15-16:15 Session 14K: A1 Mathematical logic 1: Model theory Pablo Cubides (TU Dresden, Germany) Guillermo Badia (The University of Queensland, Australia) Carles Noguera (Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Czechia) A generalized omitting type theorem in mathematical fuzzy logic PRESENTER: Carles Noguera ABSTRACT. Mathematical fuzzy logic (MFL) studies graded logics as particular kinds of many-valued inference systems in several formalisms, including first-order predicate languages. Models of such first-order graded logics are variations of classical structures in which predicates are evaluated over wide classes of algebras of truth degrees, beyond the classical two-valued Boolean algebra. Such models are relevant for recent computer science developments in which they are studied as weighted structures. The study of such models is based on the corresponding strong completeness theorems [CN,HN] and has already addressed several crucial topics such as: characterization of completeness properties w.r.t. models based on particular classes of algebras [CEGGMN], models of logics with evaluated syntax [NPM,MN], study of mappings and diagrams [D1], ultraproduct constructions [D2], characterization of elementary equivalence in terms of elementary mappings [DE], characterization of elementary classes as those closed under elementary equivalence and ultraproducts [DE3], Löwenheim-Skolem theorems [DGN1], and back-and-forth systems for elementary equivalence [DGN2]. A related stream of research is that of continuous model theory [CK,C]. Another important item in the classical agenda is that of omitting types, that is, the construction of models (of a given theory) where certain properties of elements are never satisfied. In continuous model theory the construction of models omitting many types is well known [YBWU], but in MFL has only been addressed in particular settings [CD,MN]. The goal of the talk is establish a new omitting types theorem, generalizing the previous results to the wider notion of tableaux (pairs of sets of formulas, which codify the properties that are meant to be preserved and those that will be falsified). References: [YBWU] I. Ben Yaacov, A. Berenstein, C. Ward Henson, and A. Usvyatsov. Model theory for metric structures, (2007), URL:https://faculty.math.illinois.edu/~henson/cfo/mtfms.pdf [C] X. Caicedo. Maximality of continuous logic, in Beyond first order model theory, Chapman & Hall/CRC Monographs and Research Notes in Mathematics (2017). [CK] C.C. Chang and H. J. Keisler. Continuous Model Theory, Annals of Mathematical Studies, vol. 58, Princeton University Press, Princeton (1966). [CD] P. Cintula and D. Diaconescu. Omitting Types Theorem for Fuzzy Logics. To appear in IEEE Transactions on Fuzzy Systems. [CEGGMN] P. Cintula, F. Esteva, J. Gispert, L. Godo, F. Montagna, and C. Noguera. Distinguished Algebraic Semantics For T-Norm Based Fuzzy Logics: Methods and Algebraic Equivalencies, Annals of Pure and Applied Logic 160(1):53-81 (2009). [CN] P. Cintula and C. Noguera. A Henkin-style proof of completeness for first-order algebraizable logics. Journal of Symbolic Logic 80:341-358 (2015). [D1] P. Dellunde. Preserving mappings in fuzzy predicate logics. Journal of Logic and Computation 22(6):1367-1389 (2011). [D2] P. Dellunde. Revisiting ultraproducts in fuzzy predicate logics, Journal of Multiple-Valued Logic and Soft Computing 19(1):95-108 (2012). [D3] P. Dellunde. 
Applicactions of ultraproducts: from compactness to fuzzy elementary classes. Logic Journal of the IGPL 22(1):166-180 (2014). [DE] P. Dellunde and Francesc Esteva. On elementary equivalence in fuzzy predicate logics. Archive for Mathematical Logic 52:1-17 (2013). [DGN1] P. Dellunde, À. García-Cerdaña, and C. Noguera. Löwenheim-Skolem theorems for non-classical first-order algebraizable logics. Logic Journal of the IGPL 24(3):321-345 (2016). [DGN2] P. Dellunde, À. García-Cerdaña, and C. Noguera. Back-and-forth systems for first-order fuzzy logics. Fuzzy Sets and Systems 345:83-98 (2018). [HN] P. Hájek and P. Cintula. On theories and models in fuzzy predicate logics. Journal of Symbolic Logic 71(3):863-880 (2006). [MN] P. Murinová and V. Novák. Omitting Types in Fuzzy Logic with Evaluated Syntax, Mathematical Logic Quarterly 52 (3): 259-268 (2006). [NPM] V. Novák, I. Perfilieva, and J. Močkoř. Mathematical Principles of Fuzzy Logic, Kluwer Dordrecht (2000). Inessa Pavlyuk (Novosibirsk State Pedagogical University, Russia) Sergey Sudoplatov (Sobolev Institute of Mathematics, Novosibirsk State Technical University, Novosibirsk State University, Russia) On ranks for families of theories of abelian groups PRESENTER: Inessa Pavlyuk ABSTRACT. We continue to study families of theories of abelian groups \cite{PS18} characterizing $e$-minimal subfamilies \cite{rsPS18} by Szmielew invariants $\alpha_{p,n}$, $\beta_p$, $\gamma_p$, $\varepsilon$ \cite{ErPa, EkFi}, where $p\in P$, $P$ is the set of all prime numbers, $n\in\omega\setminus\{0\}$, as well as describing possibilities for the rank ${\rm RS}$ \cite{rsPS18}. We denote by $\mathcal{T}_A$ the family of all theories of abelian groups. \begin{theorem}\label{th1_PS} An infinite family $\mathcal{T}\subseteq\mathcal{T}_A$ is $e$-minimal if and only if for any upper bound $\xi\geq m$ or lower bound $\xi\leq m$, for $m\in\omega$, of a Szmielew invariant $$\xi\in\{\alpha_{p,n}\mid p\in P,n\in\omega\setminus\{0\}\}\cup\{\beta_p\mid p\in P\}\cup\{\gamma_p\mid p\in P\},$$ there are finitely many theories in $\mathcal{T}$ satisfying this bound. Having finitely many theories with $\xi\geq m$, there are infinitely many theories in $\mathcal{T}$ with a fixed value $\alpha_{p,s} \begin{theorem}\label{th2_PS} For any theory $T$ of an abelian group $A$ the following conditions are equivalent: $(1)$ $T$ is approximated by some family of theories; $(2)$ $T$ is approximated by some $e$-minimal family; $(3)$ $A$ is infinite. \end{theorem} Let $\mathcal{T}$ be a family of first-order complete theories in a language $\Sigma$. For a set $\Phi$ of $\Sigma$-sentences we put $\mathcal{T}_\Phi=\{T\in\mathcal{T}\mid T\models\Phi\}$. A family of the form $\mathcal{T}_\Phi$ is called {\em $d$-definable} (in $\mathcal{T}$). If $\Phi$ is a singleton $\{\varphi\}$ then $\mathcal{T}_\varphi=\mathcal{T}_\Phi$ is called {\em $s$-definable}. \begin{theorem}\label{th3_PS} Let $\alpha$ be a countable ordinal, $n\in\omega\setminus\{0\}$. Then there is a $d$-definable subfamily $(\mathcal{T}_A)_\Phi$ such that ${\rm RS}((\mathcal{T}_A)_\Phi)=\alpha$ and ${\rm ds}((\mathcal{T}_A)_\Phi)=n$. \end{theorem} This research was partially supported by Committee of Science in Education and Science Ministry of the Republic of Kazakhstan (Grant No. AP05132546) and Russian Foundation for Basic Researches (Project No. 17-01-00531-a). 
\begin{thebibliography}{10} \bibitem{PS18} {\scshape In.I.~Pavlyuk, S.V.~Sudoplatov}, {\itshape Families of theories of abelian groups and their closures}, {\bfseries\itshape Bulletin of Karaganda University. Series ``Mathematics''}, vol.~90 (2018). \bibitem{rsPS18} {\scshape S.V.~Sudoplatov}, {\itshape On ranks for families of theories and their spectra}, {\bfseries\itshape International Conference ``Mal'tsev Meeting'', November 19--22, 2018, Collection of Abstracts}, Novosibirsk: Sobolev Institute of Mathematics, Novosibirsk State University, 2018, p.~216. \bibitem{ErPa} {\scshape Yu.L.~Ershov, E.A.~Palyutin}, {\bfseries\itshape Mathematical logic}, FIZMATLIT, Moscow, 2011. \bibitem{EkFi} {\scshape P.C.~Eklof, E.R.~Fischer}, {\itshape The elementary theory of abelian groups}, {\bfseries\itshape Annals of Mathematical Logic}, vol.~4 (1972), pp.~115--171. \end{thebibliography} Pavel Arazim (Czech Academy of Sciences, Instutute of Philosophy, Department of Logic, Czechia) Kengo Okamoto (Tokyo Metropolitan University, Japan) How Should We Make Intelligible the Coexistence of the Different Logics? -An Attempt Based on a Modal Semantic Point of View ABSTRACT. Recently, logicians and computer scientists increasingly tend to treat those different logics such as the classical, the intermediate, the intuitionistic and the still weaker logics (notably the linear logics) as the equally justifiable, legitimate objects of the study. It is too obvious that those research trends are to be welcomed in view of the amount of the theoretically fruitful results they bring in. We should admit, however, that we are still quite in the dark about how to make philosophically intelligible and justify the coexistence of those different logics, in particular, of the two representative logics, i.e. the classical logic (henceforth CL) and the intuitionistic logic (henceforth IL). With good reasons, logicians and computer scientists usually prefer to avoid being involved in philosophical debates and are prone to take pragmatic attitudes to the problem. What about philosophers, then? They seem to be rather bigoted. At the one extreme, the ordinary analytic philosophers vehemently stick to CL (and the modal extensions thereof) and refuse to take the IL and the still weaker logics into serious philosophical consideration. At the other extreme, those few radical philosophers such as Michael Dummett baldly claim that CL should be abandoned on account of the unintelligibility of its fundamental semantic principle, i.e. the principle of bivalence, and that instead IL should be adopted as the uniquely justifiable genuine logic. On one hand, I agree with Dummett that IL has at least one prominent virtue that CL definitely lacks: the constructive character of its inference principles, which, one might say, makes it theoretically more transparent and philosophically more coherent than CL, whose characteristic inference principles, i.e. the classical reductio, makes it irremediably non-constructive. On the other hand, however, it is too evident that CL plays a pivotal role in the development of the ordinary classical mathematics, in particular of its theoretical foundations, i.e. the set theory, and that those theoretical contributions of the classical logic should be, rather than just being neglected or denounced to be unintelligible, squarely made sense of and illuminatingly accounted for. 
I propose to start by setting up a certain common "platform" on which to locate the different logics and to determine their respective properties and mutual relationships precisely. It is well known that the translation introduced by Gödel in his [G1933] (henceforth G-translation) from the language of IL to that of the logic S4, a modal extension of CL, is sound and faithful: an IL formula φ is provable in IL if and only if its G-translation is provable in S4 (an illustrative sketch of this translation is given below). Roughly speaking, one may understand the situation thus: IL has (something very akin to) its isomorphic image in (some sublanguage of) S4. And in this sense S4 is called "the modal companion" of IL. The G-translation also yields a modal companion for each of the super-intuitionistic logics (i.e. the logics stronger than IL). For example, it assigns to CL the modal logic S5 as its modal companion. Moreover, various weaker logics can be assigned their respective modal companions by the translation (or a slight variant of it). Thus, for the moment we can conclude that we have a "platform" (in the above sense of the word) for the different logics just in the world of the (classical) modal logics. Note that this is not the end of the matter but the beginning. Why can the modal languages play such a prominent role? Undoubtedly the key to the answer lies in the notion of modality (in particular that of necessity), but the mere notion of necessity is rather void of content and seems to provide hardly any sufficient explanation. At this point, the Kripke semantics (the relational structural semantics) of the modal languages is helpful in that it accounts for the modal notions in non-modal, relational terms. But then it becomes crucial how to conceive of the two key notions of the semantics: the notion of the possible state and that of the accessibility relation. I am going to propose a new account of these notions that is based on a proof-theoretical viewpoint.

Diego Fernandes (Universidade Federal de Goiás, Brazil)
On the elucidation of the concept of relative expressive power among logics
ABSTRACT. The concept of expressive power, or strength, is very relevant and frequent in comparisons of logics. Despite its ubiquity, it still lacks a conceptual elucidation, and this is manifested in two ways. On the one hand, it is not uncommon to see the notion of expressiveness employed in comparisons of logics on imprecise and varying grounds. This creates confusion in the literature and hinders the process of building upon others' results. On the other hand, when care is taken to specify that the formal criterion of expressiveness being used is a certain E, there is generally no further comment on it, e.g. intuitive motivations or why E was chosen in the first place (e.g. in [KW99], [Koo07], [AFFM11] and [Kui14]). This gives the impression that the concept of expressiveness has been thoroughly elucidated, and that its clearest and best formal counterpart is E. This is also misleading, since there are prima facie plausible but conflicting formal ways to compare the expressiveness of logics. This work is intended to tackle these issues. Formal comparisons of expressiveness between logics can be traced back to [Lin69], where a certain formal criterion for expressiveness (to be referred to as EC) is given. No conceptual discussion or motivations are offered for EC, perhaps because it issues directly from Lindström's concept of logical system (a collection of elementary classes).
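To make the Gödel translation discussed in the Okamoto abstract above concrete, here is a minimal sketch of the standard Gödel–McKinsey–Tarski mapping on propositional formulas. The tuple-based formula representation and the helper name gmt are hypothetical, chosen only for illustration; the clauses themselves follow the usual textbook definition (box the atoms, box the translated implication and negation, leave conjunction and disjunction untouched).

```python
# Godel-McKinsey-Tarski translation from intuitionistic propositional formulas
# into the language of S4.  Formulas are plain nested tuples, e.g.
# ("imp", ("atom", "p"), ("atom", "q")) -- a hypothetical mini-syntax.

def gmt(phi):
    kind = phi[0]
    if kind == "atom":
        return ("box", phi)                                   # p  ~>  []p
    if kind == "bot":
        return phi                                            # falsum unchanged
    if kind in ("and", "or"):
        return (kind, gmt(phi[1]), gmt(phi[2]))
    if kind == "imp":
        return ("box", ("imp", gmt(phi[1]), gmt(phi[2])))     # [](A* -> B*)
    if kind == "not":
        return ("box", ("not", gmt(phi[1])))
    raise ValueError(f"unknown connective: {kind}")

# Example: the translation of p -> q
print(gmt(("imp", ("atom", "p"), ("atom", "q"))))
# ('box', ('imp', ('box', ('atom', 'p')), ('box', ('atom', 'q'))))
```

Soundness and faithfulness, as stated in the abstract, then say that a formula is provable in IL exactly when its image under such a translation is provable in S4.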
In [Ebb85] there is a very brief discussion in which a pair of intuitions for expressiveness is given, and it is argued that one would be captured by EC, and another by a new criterion EQ. Shapiro questions the adequacy of EC in [Sha91] due to its strictness and gives two broader criteria (PC and RPC). One motivation for the latter is that, as opposed to EC, they allow the introduction of new non-logical symbols in expressiveness comparisons. For example, in some logics the concept of infinitely many is embedded in a logical constant whereas in others, it must be "constructed" with the help of non-logical symbols. Thus, PC and RPC consider also the latent expressive power of a logic, so to speak. Up to now, four formal criteria of expressiveness were mentioned. When comparing logics, all of EC, PC and RPC can be seen as mapping formulas in the source logic, to formulas in the target logic, with respective restrictions on the allowed mappings. This might be seen as too restrictive, as there are cases where a concept can be expressed in a logic but only using a (possibly infinite) set of formulas (e.g. the concept of infinity in first-order logic). If we allow that formulas in one logic to be mapped to a (possibly infinite) set of formulas in the target logic, we get three new criteria for expressive power: EC-D, PC-D and RPC-D. Thus we have at least seven formal criteria for expressiveness, but in order to be able to choose between them, we need to select some intuitions for what it can mean for a logic to be more expressive than another. It will be seen that the seven criteria can be divided into two groups capturing each some basic intuition as regards expressiveness. In order to clarify what we mean by "the logic L' is more expressive than the logic L", firstly we have to select some basic intuitions regarding expressive power, and then choose among the rival formal criteria intended to capture them. In order to do this, some adequacy criteria will be proposed, and the material adequacy of the formal criteria will be assessed. [AFFM11] Carlos Areces, Diego Figueira, Santiago Figueira, and Sergio Mera. The expressive power of memory logics. The Review of Symbolic Logic, 4(2):290-318, 2011. [Ebb85] H.D. Ebbinghaus. Extended logics: The general framework. In Jon Barwise and Solomon Feferman, editors, Model-theoretic logics, Perspectives in mathematical logic. Springer-Verlag, 1985. [Koo07] Barteld Kooi. Expressivity and completeness for public update logics via reduction axioms. Journal of Applied Non-Classical Logics, 17(2):231-253, 2007. [Kui14] Louwe B. Kuijer. The expressivity of factual change in dynamic epistemic logic. The Review of Symbolic Logic, 7(2):208-221, 2014. [KW99] Marcus Kracht and Frank Wolter. Normal monomodal logics can simulate all others. Journal of Symbolic Logic, 64(1):99-138, 1999. [Lin69] P. Lindström. On extensions of elementary logic. Theoria, 35(1):1-11, 1969. [Sha91] S. Shapiro. Foundations without Foundationalism : A Case for Second-Order Logic: A Case for Second-Order Logic. Oxford Logic Guides. Clarendon Press, 1991 Don Faust (Northern Michigan University, United States) Sara Ipakchi (Departement of Philosophy at the Heinrich Heine University, Germany) Even logical truths are falsifiable. ABSTRACT. A special group of sentences, namely logical true sentences like the Law of Excluded Middle or the Law of non-Contradiction, are interesting for most philosophers because of -- among other things -- their infallibility. 
Moreover, it seems that their truth value is so obvious that it is not necessary to justify them. These properties lead some philosophers to use them as trustworthy sources to construct philosophical theories or even as direct justifications of philosophical theories. But are they really infallible or are they really self-evident? In this paper, I want to answer both of these questions with no. For the infallibility-part, I will argue that just in the case that a sentence is analytic, necessary or a priori, it makes sense to speak about its infallibility. In other words, if a sentence is neither analytic, nor necessary, nor a priori, then it is not infallible. With some examples, I will show that a logical true sentence like the Law of Excluded Middle -- as we use it in philosophy -- has none of these properties and therefore is not infallible. In the second part -- the justifiability-part -- I will argue that there is a direct connection between sentences in need of justification and falsifiable sentences. Since logical truths are neither analytic, nor necessary, nor a priori sentences and therefore not falsifiable, they are not exempt from justifications either. In other words, their truth value is not always assessable, is context dependent, and often cannot be determined by rational and/or transcendental methods alone. Thus, logical truths need justification. References [1] Baehr, S. Jason, A Priori and a Posteriori, Internet Encyclopedia of Philosophy, 2003 [2] Boghossian, Paul and Peacocke, Christopher, New essays on the A Priori, Oxford University Press, 2000 [3] Casullo, Albert, A Priori knowledge, Aldershot, Brookfield, Vt. : Ashgate, 1993 [4] Popper, Karl, Logik der Forschung, Springer-Verlag Wien GmbH, 1935 [5] Rey, Georges, The Analytic/Synthetic Distinction, The Stanford Encyclopedia of Philosophy, 2018 [6] Russell, Bruce, A Priori Justification and Knowledge, The Stanford Encyclopedia of Philosophy, 2017 Hussien Elzohary (Head of Academic Studies & Events Section, Manuscripts Center, Academic Research Sector, Bibliotheca Alexandrina, Egypt) The Influence Of The Late School Of Alexandria On The Origin And Development Of Logic In The Muslim World ABSTRACT. In order to promote the discussion surrounding the origins and background of Arabic Logic, we have to explore the Greek Logical traditions and the Logical introductions to Aristotle's works compiled at the end of the Roman Empire. The study demonstrates that the view of Logic adopted by many Greek thinkers and transmitted through translations of the commentaries on Aristotle into Arabic had a great impact on the genesis of Islamic theology and philosophy. A number of late philosophers are explored with a view to demonstrate this point. By about 900, almost all of Aristotle's logical works had been translated into Arabic, and was subject to intensive study. We have texts from that time, which come in particular from the Baghdad school of Philosophy. The school's most famous logician was Al-Farabi (d. 950) who wrote a number of introductory treatises on Logic as well as commentaries on the books of the Organon. The research aims at studying the influence of the late Alexandrian School of philosophy in the 6th century AD, in the appearance and development of Greek Logic in the Muslim World. In addition, the adaptation of its methodologies by the Islamic thinkers, and its impact on Muslim philosophical thought. 
The late Alexandrian school has been underestimated by many scholars who regard its production at the end of the Classical Age as mere interpretations of previous writings; delimiting its achievement to the preservation of ancient Logical and philosophical heritage. The research reviews the leading figures of the Alexandrian School and its production of Logical commentaries. It also traces the transmission of its heritage to the Islamic World through direct translations from Greek into Syriac first and then into Arabic. It also highlights the impact of the Alexandrian commentaries on Muslim recognition of Plato and Aristotle as well as its Logical teaching methodology starting with the study of Aristotle's Categories as an introduction to understand Aristotle's philosophy. References 1. Adamson P., Aristotle in the Arabic Commentary Tradition, In: C. Shields, The Oxford Handbook of Aristotle, Oxford University Press, 2012. 2. Blumenthal, H. J. Aristotle and Neoplatonism in Late Antiquity: Interpretations of the 'De anima'. London: Duckworth, 1996. 3. Chase M., The Medieval Posterity Of Simplicius' Commentary On The Categories "Thomas Aquinas And Al-Fārābī, In: "Medieval Commentaries on Aristotle's Categories", Edited by: Lloyd A. Newton, Brill, 2008. 4. Gabbay, D. M., Woods, J.: Handbook of the History of Logic, Vol.1, Greek, Indian and Arabic Logic, Elsevier, Holland, 2004. 5. Gleede, B. "Creation Ex Nihilo: A Genuinely Philosophical Insight Derived from Plato and Aristotle? Some Notes on the Treatise on the Harmony between the Two Sage". Arabic Sciences and Philosophy 22, no. 1 (Mar 2012). 6. Lossl, J., and J. W. Watt, eds. Interpreting the Bible and Aristotle in Late Antiquity: The Alexandrian Commentary Tradition between Rome and Baghdad. Farnham, UK: Ashgate, 2001. 7. Sorabji, R. Aristotle Transformed: The Ancient Commentators and Their Influence. New York: Cornell University Press, 1990. 8. Sorabji, R. The Philosophy of the Commentators 200-600 AD. Vol. 3. Logic and Metaphysics. New York: Cornell University Press, 2005. 9. Watt, J. "The Syriac Aristotle between Alexandria and Baghdad". Journal for Late Antique Religion and Culture 7 (2013). 10. Wilberding J., The Ancient Commentators on Aristotle, In: J. Warren, F. Sheffield, "The Routledge Companion to Ancient Philosophy", Routledge, 2013. Deborah Kant (Universität Konstanz, Germany) Juan Pablo Mejía Ramos (Rutgers University, United States) Matthew Inglis (Loughborough University, UK) Using linguistic corpora to understand mathematical explanation PRESENTER: Juan Pablo Mejía Ramos ABSTRACT. The notion of explanation in mathematics has received a lot of attention in philosophy. Some philosophers have suggested that accounts of scientific explanation can be successfully applied to mathematics (e.g. Steiner 1978). Others have disagreed, and questioned the extent to which explanation is relevant to the actual practice of mathematicians. In particular, the extent to which mathematicians use the notion of explanatoriness explicitly in their research is a matter of sharp disagreement. Resnik and Kushner (1987, p.151) claimed that mathematicians "rarely describe themselves as explaining". But others disagree, claiming that mathematical explanation is widespread, citing individual mathematicians' views (e.g., Steiner 1978), or discussing detailed cases in which mathematicians explicitly describe themselves or some piece of mathematics as explaining mathematical phenomena (e.g. Hafner & Mancosu 2005). 
However, this kind of evidence is not sufficient to settle the disagreement. Recently, Zelcer (2013) pointed out that a systematic analysis of standard mathematical text was needed to address this issue, but that such analysis did not exist. In this talk we illustrate the use of corpus linguistics methods (McEnery & Wilson 2001) to perform such an analysis. We describe the creation of large-scale corpora of written research-level mathematics (obtained from the arXiv e-prints repository), and a mechanism to convert LaTeX source files to a form suitable for use with corpus linguistic software packages. We then report on a study in which we used these corpora to assess the ways in which mathematicians describe their work as explanatory in their research papers. In particular, we analysed the use of 'explain words' (explain, explanation, and various related words and expressions) in this large corpus of mathematics research papers. In order to contextualise mathematicians' use of these words/expressions, we conducted the same analysis on (i) a corpus of research-level physics articles (constructed using the same method) and (ii) representative corpora of modern English. We found that mathematicians do use this family of words, but relatively infrequently. In particular, the use of 'explain words' is considerably more prevalent in research-level physics and representative English, than in research-level mathematics. In order to further understand these differences, we then analysed the collocates of 'explain words' –words which regularly appear near 'explain words'– in the two academic corpora. We found differences in the types of explanations discussed by physicists and mathematicians: physicists talk about explaining why disproportionately more often than mathematicians, who more often focus on explaining how. We discuss some possible accounts for these differences. References Hafner, J., & Mancosu, P. (2005). The varieties of mathematical explanation. In P. Mancosu et al. (Eds.), Visualization, Explanation and Reasoning Styles in Mathematics (pp. 215–250). Berlin: Springer. McEnery, T. & Wilson, A. (2001). Corpus linguistics: An introduction (2nd edn). Edinburgh: Edinburgh University Press. Steiner, M. (1978) Mathematical explanation. Philosophical Studies, 34(2), 135–151. Resnik, M, & Kushner, D. (1987). Explanation, independence, and realism in mathematics. British Journal for the Philosophy of Science, 38, 141–158. Zelcer, M. (2013). Against mathematical explanation. Journal for General Philosophy of Science, 44(1), 173-192. Fenner Tanswell (Loughborough University, UK) Studying Actions and Imperatives in Mathematical Texts PRESENTER: Fenner Tanswell ABSTRACT. In this paper, we examine words relating to mathematical actions and imperatives in mathematical texts, and within proofs. The main hypothesis is that mathematical texts, and proofs especially, contain frequent uses of instructions to the reader, issued by using imperatives and other action-focused linguistic constructions. We take common verbs in mathematics, such as "let", "suppose", "denote", "consider", "assume", "solve", "find", "prove" etc. and compare their relative frequencies within proofs, in mathematical texts generally, and in spoken and written British and American English, by using a corpus of mathematical papers taken from the ArXiv. Furthermore, we conduct 'keyword' analyses to identify those words which disproportionately occur in proofs compared to other parts of mathematics research papers. 
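A minimal sketch of the kind of relative-frequency comparison described in the two corpus-linguistics abstracts above: counting a small family of 'explain words' in two plain-text corpora and normalising per million tokens. The word list, the file names and the naive tokenisation are placeholders of my own, not the authors' actual pipeline (which works from arXiv LaTeX sources with dedicated corpus software and also computes collocates and keyword statistics).

```python
# Count a family of "explain words" per million tokens in two corpora.
# File names and the word list are hypothetical placeholders.
import re

EXPLAIN_WORDS = {"explain", "explains", "explained", "explaining",
                 "explanation", "explanations", "explanatory"}

def explain_rate_per_million(path):
    with open(path, encoding="utf-8") as f:
        tokens = re.findall(r"[a-z]+", f.read().lower())   # naive tokenisation
    hits = sum(1 for t in tokens if t in EXPLAIN_WORDS)
    return 1_000_000 * hits / max(len(tokens), 1)

for corpus in ("maths_corpus.txt", "physics_corpus.txt"):   # placeholder corpora
    print(corpus, round(explain_rate_per_million(corpus), 1), "per million tokens")
```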
Previous analyses of mathematical language, such as those conducted by de Bruijn (1987) and Ganesalingam (2013), have largely been carried out without empirical investigations of actual mathematical texts. As a result, some of the claims they make are at odds with the reality of written mathematics. For example, both authors claim that there is no room for imperatives in rigorous mathematics. Whether this is meant to be a descriptive or normative claim, we demonstrate that analysing the actual writings of mathematicians, particularly written proofs, shows something quite different. Mathematicians use certain imperatives far more frequently than in natural language, and within proofs we find an even higher prevalence of certain verbs. The implications of this are that mathematical writing and argumentation may be harder to formalise than such linguistic accounts of it suggest. Furthermore, this backs the idea that proofs are not merely sequences of declarative sentences, but instead provide instructions for mathematical activities to be carried out.
De Bruijn, N.G. (1987) "The mathematical vernacular, a language for mathematics with typed sets", in Dybjer, P., et al. (eds.) Proceedings of the Workshop on Programming Logic, Report 37, Programming Methodology Group, University of Göteborg and Chalmers University of Technology.
Ganesalingam, M. (2013) The Language of Mathematics, Lecture Notes in Computer Science Vol. 7805, Springer, Berlin.

Rasmus K. Rendsvig (University of Copenhagen, Denmark)
Dynamic Term-Modal Logic
ABSTRACT. Term-modal logics are first-order modal logics in which the operators double as first-order predicates: in, e.g., an epistemic knowledge operator K_a, the 'a' here is not merely an index, but a first-order term. Term-modal syntax considers e.g. "exists.x K_x phi(x)" a well-formed formula. Semantically, the knowing agents are elements in a proper domain of quantification, rather than mere indices as is the case in ordinary epistemic logic. Term-modal logic thus treats agents as first-class citizens, both syntactically and semantically. This has the effect that variants of Descartes' Cogito may be expressed, e.g. by the valid "(not K_a phi) implies (K_a exists.x K_x phi)". Likewise, term-modal logics may in a natural way be used to express agents' (higher-order) knowledge about their relation R to others in a social network, e.g. by "K_a (R(a,b) and not K_b R(a,b))". The above are inherently term-modal expressions. Though e.g. the latter may be emulated using ordinary propositional epistemic logic with operators K_a, K_b and an atomic proposition r_{ab}, such solutions are ad hoc, as they do not logically link the occurrences of the indices of the operators and the indices of the atom. Inherently term-modal expressions have been considered as early as in von Wright's 1951 "An Essay in Modal Logic", with Hintikka's syntax in "Knowledge and Belief" (1962) in fact being term-modal. Despite their natural syntax, term-modal logics have received relatively little attention in the literature and no standard implementation exists. Often, combinations of well-known difficulties from first-order modal logic mixed with oddities introduced by the term-modal operators result in tri-valent or non-normal systems, systems with non-general completeness proofs, or systems satisfying oddities like "P(x) and forall.y not P(y)". In this paper, we present a simple, well-behaved instance of a term-modal system.
The system is bivalent, normal, with close-to-standard axioms (compared to similar first-order modal logics), and allows for a canonical model theorem approach to proving completeness for a variety of different frame classes. The "static" system may be extended dynamically with a suitable adaptation of so-called "action models" from dynamic epistemic logic; for the extension, reduction axioms allow showing completeness. Throughout, the system is illustrated by showing its application to social network dynamics, a recent topic of several epistemic and/or dynamic logics.

Dominik Klein (University of Bamberg, Germany)
A logical approach to Nash equilibria
ABSTRACT. Maximizing expected utility is a central concept within classical game theory. It combines a variety of core aspects of game theory, including agents' beliefs, preferences and their space of available strategies, pure or mixed. These frameworks presuppose quantitative notions in various ways, both on the input and the output side. On the input side, utility maximizing requires a quantitative, probabilistic representation of the agent's beliefs. Moreover, agents' preferences over the various pure outcomes need to be given quantitatively. Finally, the scope of maximizing is sometimes taken to include mixed strategies, requiring a quantitative account of mixed actions. On the output side, expected utilities are likewise interpreted quantitatively, providing interval-scaled preferences or evaluations of the available actions, again pure or mixed. In this contribution, we want to pursue qualitative, logical counterparts of maximal utility reasoning. These will build on two core components, logical approaches to games and to probabilities. On the game side, the approach builds on a standard modal logic for n-player matrix games with modalities []_i and ≥_i for all players, denoting their uncertainty over opponents' choices and their preferences over outcomes respectively. This language is expanded with a mild form of conditional belief operators. Given that players are still to decide on their actions, agents cannot reasonably have beliefs about the likelihood of outcome strategy profiles. Rather, agents have conditional beliefs p_i(φ|a), denoting i's beliefs about which outcome states (or formulas) obtain(ed) if she is or was to perform action a. To see how such languages can express maximal utility reasoning, assume agent i's goals in some game to be completely determined by a formula φ: she receives high utility if the game's outcome satisfies φ and low utility otherwise. In this case, the expected utility of her various moves is directly related to their propensity to make φ true. More concretely, the expected utility of performing some action a is at least as high as that of b iff p(φ|a) ≥ p(φ|b). Notably, the current approach is not restricted to cases where utility is determined by a single formula. We will show that the formalism is expressive enough to represent all utility assignments on finite games with values in the rationals. Moreover, we will show that the framework can be used to represent pure and mixed strategy Nash equilibria, again over finite games: for all combinations of rational-valued utility assignments there is a formula expressing that an underlying game form equipped with the relevant probability assignments is in equilibrium with respect to these utilities. Lastly, we show that the logical framework developed is well-behaved in that it allows for a finite axiomatization and is logically compact.
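As a concrete counterpart to the quantitative notions that the Klein abstract above seeks to recast qualitatively, here is a minimal sketch that enumerates the pure-strategy Nash equilibria of a finite two-player matrix game. The payoff matrix is a hypothetical example of my own; the abstract's actual contribution (a logical language with conditional beliefs representing such equilibria) is not reproduced here.

```python
# Enumerate pure-strategy Nash equilibria of a finite two-player matrix game.
# payoffs[i][j] = (row player's utility, column player's utility); hypothetical values.
payoffs = [[(3, 3), (0, 5)],
           [(5, 0), (1, 1)]]

def is_nash(i, j):
    # No profitable unilateral deviation for either player.
    row_ok = all(payoffs[k][j][0] <= payoffs[i][j][0] for k in range(len(payoffs)))
    col_ok = all(payoffs[i][k][1] <= payoffs[i][j][1] for k in range(len(payoffs[0])))
    return row_ok and col_ok

equilibria = [(i, j) for i in range(len(payoffs))
                     for j in range(len(payoffs[0])) if is_nash(i, j)]
print(equilibria)   # -> [(1, 1)] for this prisoner's-dilemma-style example
```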
Thomas Piecha (University of Tübingen, Department of Computer Science, Germany) Karl R. Popper: Logical Writings ABSTRACT. Karl Popper developed a theory of deductive logic in a series of papers in the late 1940s. In his approach, logic is a metalinguistic theory of deducibility relations that are based on certain purely structural rules. Logical constants are then characterized in terms of deducibility relations. Characterizations of this kind are also called inferential definitions by Popper. His works on logic anticipate several later developments and discussions in philosophical logic, and are thus interesting from both historical and systematic points of view [1]: - Anticipating the discussion of connectives like Prior's "tonk", Popper considered a tonk-like connective called the "opponent" of a statement, which leads, if it is present in a logical system, to the triviality of that system. - He suggested to develop a system of dual-intuitionistic logic, which was then first formulated and investigated by Kalman Joseph Cohen in 1953. - He already discussed (non-)conservative language extensions. He recognized, for example, that the addition of classical negation to a system containing implication can change the set of deducible statements containing only implications, and he gave a definition of implication with the help of Peirce's rule that together with intuitionistic negation yields classical logic. - He also considered the addition of classical negation to a language containing intuitionistic as well as dual-intuitionistic negation, whereby all three negations become synonymous. This is an example of a non-conservative extension where classical laws also hold for the weaker negations. - Popper was probably the first to present a system that contains an intuitionistic negation as well as a dual-intuitionistic negation. By proving that in the system so obtained these two kinds of negation do not become synonymous, he gave the first formal account of a bi-intuitionistic logic. - He provided an analysis of logicality, in which certain negations that are weaker than classical or intuitionistic negation turn out not to be logical constants. A critical edition of Popper's logical writings is currently prepared by David Binder, Peter Schroeder-Heister and myself [2], which comprises Popper's published works on the subject as well as unpublished material and Popper's logic-related correspondence, together with our introductions and comments on his writings. In this talk I will introduce this edition in order to provide an overview of Popper's logical writings, and I will highlight the central aspects of Popper's approach to logic. [1] David Binder and Thomas Piecha: Popper's Notion of Duality and His Theory of Negations, History and Philosophy of Logic 38(2), 154--189, 2017. [2] David Binder, Thomas Piecha and Peter Schroeder-Heister: Karl R. Popper: Logical Writings, to appear in 2019. Constantin C. Brîncuș (University of Bucharest, Romania) Comment on "Karl R. Popper: Logical Writings" ABSTRACT. Since Karl Popper's work in logic and its philosophy from the 1940s was, unfortunately, mostly neglected, the publication of a critical edition of Popper's logical writings is a remarkable and fruitful event for the academic community. Thomas Piecha discusses in his talk some central issues of Popper's work that have been later developed by contemporary logicians, most of which are concerned with logical negation. 
In my intervention, I discuss Popper's logico-philosophical motivation for structurally defined deducibility relations (work developed later by [Dana Scott 1971, 1974] and [Arnold Koslow 1992]) and for logical negation. Popper's attempt to weaken classical logic in mathematical proofs by weakening the rules for logical negation is important and deserves closer attention. For instance, [Saul Kripke 2015] has recently argued that an affirmativist logic, i.e., a logic without negation and falsity, which is semantically distinct from [Hilbert and Bernays 1934]'s positive logic because it also eliminates falsity, is all that science needs. In this respect, the analysis of the (non-)conservative extensions of a minimal system of logic is necessary and philosophically useful. Popper's interest in logical negation was also generated by [Carnap 1943]'s discovery of non-normal models for classical logic that arise mainly due to negation (an important issue today for classical logical inferentialists). Carnap, Rudolf. 1943. Formalization of Logic, Cambridge, Mass., Harvard University Press. Hilbert, David, & Bernays, Paul. 1934/1968. Grundlagen der Mathematik, I. Berlin, Heidelberg, New York: Springer. Kripke, Saul. 2015. 'Yet another dogma of logical empiricism'. Philosophy and Phenomenological Research, 91, 381-385. Koslow, Arnold. 1992. A Structuralist Conception of Logic, Cambridge University Press. Scott, Dana. 1971. 'On engendering an illusion of understanding', The Journal of Philosophy 68(21):787–807. Scott, Dana. 1974. 'Completeness and axiomatizability in many-valued logic', in L. Henkin et al. (eds.), Proceedings of the Tarski Symposium, vol. 25 of Proceedings of Symposia in Pure Mathematics, American Mathematical Society, pp. 411–436. 16:45-17:45 Session 15D: B1 Methodology: Societal issues Ivo Pezlar (The Czech Academy of Sciences, Institute of Philosophy, Czechia) Tomáš Ondráček (KPH ESF MU, Czechia) Science as Critical Discussion and Problem of Immunizations ABSTRACT. The value of the ideal of a critical discussion is something that should be shared by scientists. This is because at the core of a critical discussion is an inter-subjective evaluation of given propositions, facts, and evidence. I will argue that (A) the pursuit of this ideal can also be taken as a possible demarcation criterion for science, at least concerning its demarcation from pseudo-sciences. Pseudo-sciences are characterized as something that wants to be or looks like science, but is not. Uses of unfounded immunizations are one of the possible signs of pseudo-science (Derksen 1993). In general, immunizations (immunizing strategies or stratagems) prevent a theory from being falsified or reasonably denied. This concept was initially introduced by Popper (1959/2005) as a conventionalist trick. Popper identified four types: an introduction of ad hoc hypotheses, a modification of ostensive (or explicit) definition, a skeptical attitude as to the reliability of the experimenter, and casting doubt on the acumen of the theoretician. Later, Boudry and Braeckman (2011) provided an overview of immunizing strategies identifying five different types: conceptual equivocations and moving targets, postdiction and feedback loops, conspiracy thinking, changing the rules of play, and invisible escape clauses. They also provided several examples of each type.
But more importantly, they presented a definition of immunizing strategies: "[a]n immunizing strategy is an argument brought forward in support of a belief system, though independent from that belief system, which makes it more or less invulnerable to rational argumentation and/or empirical evidence." Although I do consider immunizations an indication of pseudo-science, I will argue that (B) immunizations are not arguments as Boudry and Braeckman proposed but rather (C) immunizations are violations of the rules of a critical discussion. To support the first part of this claim (B), I will present an analysis of selected examples provided by Boudry and Braeckman using Toulmin's model of argument (Toulmin 1958/2003). Regarding the second part (C), I will show that analysing these examples as violations of the rules of a critical discussion in pragma-dialectical theory (van Eemeren & Grootendorst 2004) is more suitable. In conclusion, immunizations prevent a critical discussion, and therefore a reasonable process in which inter-subjective evaluation of claims plays a significant role. Evidence, facts, theories and the like are accepted in science by the scientific community, not by individuals. Thus, inter-subjectivity is characteristic of science, and a lack of it is typical of pseudo-sciences. Therefore, (A) science can be characterized as an attempt at a critical discussion whose goal is to resolve a difference of opinion by reasonable means. References: Boudry, M., & Braeckman, J. (2011). Immunizing Strategies and Epistemic Defense Mechanisms. Philosophia, 39(1), 145–161. Derksen, A. A. (1993). The Seven Sins of Pseudo-Science. Journal for General Philosophy of Science, 24(1), 17–42. Popper, K. (1959/2005). The Logic of Scientific Discovery. Routledge. Toulmin, S. E. (1958/2003). The Uses of Argument. Cambridge University Press. van Eemeren, F. H., & Grootendorst, R. (2004). A Systematic Theory of Argumentation: The pragma-dialectical approach. Cambridge University Press. Michael Sidiropoulos (Member of Canadian Society for the History and Philosophy of Science, Canada) PHILOSOPHICAL AND DEMARCATION ASPECTS OF GLOBAL WARMING THEORY ABSTRACT. In their effort to explain a phenomenon, scientists use a variety of methods collectively known as "the scientific method". They include multiple observations and the formulation of a hypothesis, as well as the testing of the hypothesis through inductive and deductive reasoning and experimental testing. Rigorous skepticism and refinement or elimination of the hypothesis are also part of the process. This work presents an updated concept of the scientific method with the inclusion of two additional steps: demarcation criteria and scientific community consensus. Demarcation criteria such as hypothesis testing and falsifiability lead to a proposed "Popper Test". The method is applied to fundamental aspects of Global Warming theory (GW). David Hume's "problem of induction" concerns the making of inductive inferences from the observed to the unobserved. It is shown that this issue is crucial to GW, which claims that temperature observations of the last 100 years create a new pattern of systematic warming caused by human activity. We use the term "Global Warming" rather than "Climate Change" as it is more amenable to falsification and is therefore a stronger scientific theory. Natural phenomena can have multiple causes, effects and mitigating factors.
Many interrelationships among these are not well understood and are often proxied by statistical correlations. Certain statistical correlations exist because of a causal relationship and others exist in the absence of causal relationships. Statistical correlations can lead to the formulation of a theory but do not constitute proof of causality. This must be provided by theoretical and experimental science. Trial and error leads to model enhancement as, for example, climate models have recently been modified to include the effect of forests, an important missing variable in prior models. Tests comprising the proposed method are applied to fundamental assumptions and findings of both parts of GW theory: (1) Rising global temperatures, (2) Anthropogenesis of rising global temperatures. Several premises of the theory are found falsifiable within the means of current technology and therefore qualify as scientific. Certain other premises are found to be unfalsifiable and cannot be included in a scientific theory. The latter must be eliminated or be substituted by alternative falsifiable proposals. 1 Popper, Karl, 1959, The Logic of Scientific Discovery, Routledge. 2 Hume, David, 1739, A Treatise of Human Nature, Oxford: Oxford University Press. 16:45-17:45 Session 15E: IS C7 Alexandrova Anna Alexandrova (University of Cambridge, UK) On the definitions of social science and why they matter ABSTRACT. What sort of category is 'social science'? Is it theoretical, that is, reflecting a genuine specialness of social sciences' subject matter or method? Or merely institutional, that is, denoting the activities and the body of knowledge of those recognised as practicing economics, sociology, anthropology, etc.? The field of philosophy of social science has traditionally assumed the former and sought to articulate ways in which social sciences are unlike the natural ones. I trace the history and the motivations behind this exceptionalism and evaluate its viability in this age of interdisciplinarity and data-driven methods. 16:45-17:45 Session 15F: C1 Philosophy of the formal sciences Zeynep Soysal (University of Rochester, United States) Melisa Vivanco (University of Miami, United States) Numbers as properties; dissolving Benacerraf's Tension ABSTRACT. Generations of mathematicians and philosophers have been intrigued by the question, What are arithmetic propositions about? I defend a Platonist answer: they're about numbers, and numbers are plural properties. I start with the seminal "Mathematical Truth" (1973), where Benacerraf showed that if numbers exist, there is a tension between their metaphysical and epistemological statuses. Even as Benacerraf's particular assumptions have been challenged, this tension has reappeared. I bring it out with two Benacerrafian requirements: Epistemic requirement. Any account of mathematics must explain how we can have mathematical knowledge. Semantic requirement. Any semantics for mathematical language must be homogeneous with a plausible semantics for natural language. Each of the prominent views of mathematical truth fails one of these requirements. If numbers are abstract objects, as the standard Platonist says, how is mathematical knowledge possible? Not by one common source of knowledge: causal contact. Field (1989) argues that the epistemological problem extends further: if numbers are abstract objects, we cannot verify the reliability of our mathematical belief-forming processes, even in principle.
If mathematical truth amounts to provability in a system, as the combinatorialist says, the semantics for mathematical language is unlike the semantics normally given for natural language ('snow is white' is true iff snow is white, vs. '2 + 2 = 4' is true iff certain syntactic facts hold). I argue that numbers are properties. Epistemic requirement. We're in causal contact with properties, so we're in causal contact with numbers. More generally, because a good epistemology must account for knowledge of properties, any such theory should account for mathematical knowledge. Semantic requirement. Just as 'dog' refers to the property doghood, '2' refers to the property being two. Just as 'dogs are mammals' is true iff a certain relation holds between doghood and mammalhood, '2 + 2 = 4' is true iff a certain relation holds between being two and being four. Specifically, I say that numbers are what I call pure plural properties. A plural property is instantiated by a plurality of things. Take the fact that Thelma and Louise cooperate. The property cooperate doesn't have two argument places: one for Thelma, and one for Louise. Rather, it has a single argument place: here it takes the plurality, Thelma and Louise. Consider another property instantiated by this plurality: being two women. This plural property is impure because it does not concern only numbers, but we can construct it out of two other properties, womanhood and being two. This latter plural property is pure. It is the number two. Famously, number terms are used in two ways: referentially ('two is the smallest prime') and attributively ('I have two apples'). If numbers are objects, the attributive use is confounding (Hofweber 2005). If they're properties, there is no problem: other property terms share this dual use ('red is my favorite color' vs. 'the apple is red'). The standard Platonist posits objects that are notoriously mysterious. While the nature of properties may be contentious, my numbers-as-properties view is not committed to anything so strange. Karl Heuer (TU Berlin, Germany) The development of epistemic objects in mathematical practice: Shaping the infinite realm driven by analogies from finite mathematics in the area of Combinatorics. PRESENTER: Deniz Sarikaya ABSTRACT. We offer a case study of mathematical theory building via analogical reasoning. We analyse the conceptualisation of basic notions of (topological) infinite graph theory, mostly exemplified by the notion of infinite cycles. We show to what extent different definitions of "infinite cycle" were evaluated against results from finite graph theory. There were (at least) three competing formalisations of "infinite cycles" focusing on different aspects of finite ones. For instance, we might observe that in a finite cycle every vertex has degree two. If we take this as the essential feature of cycles, we can get to a theory of infinite cycles (see the short illustration below). A key reason for the rejection of this approach is that some results from finite graph theory do not extend (when we syntactically change "finite graph", "finite cycle", etc. to "infinite graph", "infinite cycle", etc.). The activity of axiomatising a field is not a purely mathematical one; it cannot be settled by proof, but only by philosophical reflection. This might sound trivial but is often neglected due to an oversimplified aprioristic picture of mathematical research. We must craft the formal counterparts in mathematics guided by our intuitions of the abstract concepts/objects.
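As a concrete illustration of why the degree-two proposal mentioned above was contested (the example is my own, not the authors'): in the two-way infinite path, the double ray, every vertex also has degree two, yet the graph is not closed in the way a finite cycle is. A minimal Python check:

def double_ray_neighbours(v):
    # neighbours of the integer vertex v in the double ray ... -2 -1 0 1 2 ...
    return {v - 1, v + 1}

# every vertex has degree two, exactly as in a finite cycle
assert all(len(double_ray_neighbours(v)) == 2 for v in range(-1000, 1001))
print("degree two everywhere, yet no finite-style closed cycle")

A definition of "infinite cycle" based on degree two alone would therefore count double rays as cycles, and, as the abstract notes, several theorems about finite cycles then fail to carry over, which is exactly the kind of evaluation against finite results the authors describe.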
While we normally think of a mathematical argument as the prototype of deductive reasoning, there are inductive elements in at least three senses: 1. In the heuristics of development. 2. In the process of axiomatisation, both when 2a. we test the adequacy of an axiomatisation, and when 2b. we look for new axioms to extend a current axiomatic system. We want to focus on 2a and especially on the role of analogies. Nash-Williams (1992, p. 1) observed that "the majority of combinatorialists seem to have concentrated on finite combinatorics, to the extent that it has almost seemed an eccentricity to think that graphs and other combinatorial structures can be either finite or infinite". This observation is still true, but more recently a growing group of combinatorialists work on infinite structures. We want to analyse the heuristics of theory development in this growing area. There are vocabularies from finite graph theory for which it is not clear which infinite counterpart might be the best concept to work with. And for theorems making use of them it is also unclear whether they can or should have an extension to infinite graphs. This area is very suitable for philosophical discourse, since the concepts used are quite intuitive and involve only a little background from topology and graph theory. We argue that mathematical concepts are less fixed and eternal than it might seem, shifting the focus from the sole discussion of theorems, which is overrepresented in the reflections of philosophers of mathematics, towards the interplay of definitions and theorems. While theorems are a very important (and probably even constitutive) part of the practice of mathematicians, we should not forget that mathematical concepts, in the sense of concepts used in mathematical discourses, develop over time. We can only prove / state / comprehend with fitting vocabulary, which we develop in a process of iterative refinement. 16:45-17:45 Session 15G: C6 Societal, ethical and epistemological issues of AI 4 Hyundeuk Cheon (Seoul National University, South Korea) Insok Ko (Inha University, South Korea) Prerequisite for Employing Intelligent Machines as Human Surrogate ABSTRACT. This paper discusses qualifying conditions for employing and utilizing intelligent machines as human surrogates. It thereby invites philosophical reflection on the rationality of our own act of designing, manufacturing and utilizing such artifacts. So long as the conditions discussed here are not realized, recruitment and use of such systems should be regarded as unreasonable, and thus unacceptable. Machines equipped with higher levels of AI will take over human roles to an ever-increasing extent. Deciding on the extent and appropriate mode of such surrogacy is an important societal task. For this purpose we should first analyze the question of what the prerequisite condition for acceptable robotic surrogacy is. This paper discusses the primary condition intelligent machines must meet in order to be justified in surrogating for human agents. Seen from the viewpoint of the analysis of this paper, it will be hard for a machine to satisfy the requirement. It suggests that our societies proceed more carefully with such intelligent artifacts than they do now. This paper discusses the case of autonomous vehicles, which are being incorporated into our transportation system, as its primary example.
An essential condition that a driver-AI should fulfill in order to legitimately take the driver role in our traffic system is that it assume and apply a perceptual taxonomy that is sufficiently near to that of competent human drivers, so that it eventually tends to classify the objects encountered while driving in the same way human drivers do. How can we make intelligent artifacts share human systematics? We may consider two paths. One is to input the classification system we adopt and apply into the program (top-down inscription), and the other is to let them interact with human beings in the real world and thereby learn it bit by bit (bottom-up learning). But neither way seems effective for realizing the goal. I will show why. I suggested an ontological stance that interprets AI as a form of "externalized mind" (Ko 2012). Externalized mind is a boundary type of extended mind. The former differs from the latter in that 1) it does not belong to a specific cognitive agent, even though it might owe its origin to a certain agent, and 2) it does not necessarily have a center which the extended resources of cognition serve and supplement. The AI that operates a networked system of unmanned vehicles, for instance, can be interpreted as the externalized collective mind of the decision-makers about the system, the engineers who work out its detailed design, the engineers who manufacture and implement it, and the administrators of the system. This way of ontological interpretation stands against the philosophical stance that grants the status of independent agents to smart robots (e.g., in terms of "electronic personhood"). I argue that it is a philosophical scheme adequate for addressing the issue of responsibility and contribution allocation associated with the application of intelligent machines. From this perspective, I will stand with J. Bryson in the debate on the socio-ethical status of robots (Gunkel 2018 & Bryson 2018). Reference Bryson, J., (2018), "Patiency is not a virtue: the design of intelligent systems and systems of ethics", Ethics and Information Technology 20. Gunkel, D. (2018), Robot Rights, MIT Press. Ko, I. (2012), "Can robots be accountable agents?", Journal of the New Korean Philosophical Association 67/1. Bennett Holman (Underwood International College, Yonsei University, South Korea) Dr. Watson: The Impending Automation of Diagnosis and Treatment ABSTRACT. Last year may be remembered as the pivotal point for artificial "deep learning" and medicine. A large number of different labs have used artificial intelligence (AI) to augment some portion of medical practice, most notably in diagnosis and prognosis. I will first review the recent accomplishments of deep-learning AI in the medical field, including: the landmark work of Esteva et al. (2017), which showed that AI could learn to diagnose skin cancer better than a dermatologist; extensions of similar projects into detecting breast cancer (Liu et al., 2017); Oakden-Rayner et al.'s (2017) work showing AI could create its own ontological categories for patient risk; and work showing that, through analyzing tumor DNA, AI can identify more possible sites for intervention (Wrzeszczynski et al., 2017). I will next argue that a foreseeable progression of this technology is to begin automating treatment decisions. Whether this development is positive or negative depends on the specific details of who develops this technology and how it is used.
I will not attempt to predict the future, but I will follow some emerging trends out to their logical conclusions and identify some possible pitfalls of the gradual elimination of human judgment from medical practice. In particular, some problems could become significantly worse. It is the essence of deep-learning AI that the reasons for its outcomes are opaque. Many researchers have shown that industry has been adept at causing confusion by advancing alternative narratives (e.g. Oreskes and Conway, 2010), but at the very least with traditional research there were assumptions that could, in principle, be assessed. With deep-learning AI there are no such luxuries. On the other hand, I will argue that properly implemented deep learning solves a number of pernicious problems concerning both the technical and the social hindrances to reliable medical judgments (e.g. the end to a necessary reliance on industry data). Given the multiple possible routes that such technology could take, I argue that consideration of how medical AI should develop is an issue that will not wait and thus demands the immediate critical attention of philosophy of science in practice. Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. "Dermatologist-level classification of skin cancer with deep neural networks." Nature 542, no. 7639 (2017): 115-118. Liu, Y., Gadepalli, K., Norouzi, M., Dahl, G. E., Kohlberger, T., Boyko, A., ... & Hipp, J. D. (2017). Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442. Oakden-Rayner, L., Carneiro, G., Bessen, T., Nascimento, J. C., Bradley, A. P., & Palmer, L. J. (2017). Precision Radiology: Predicting longevity using feature engineering and deep learning methods in a radiomics framework. Scientific Reports, 7(1), 1648 Oreskes, N. and E. Conway (2010), Merchants of Doubt. New York: Bloomsbury Press Wrzeszczynski, K. O., Frank, M. O., Koyama, T., Rhrissorrakrai, K., Robine, N., Utro, F., ... & Vacic, V. (2017). Comparing sequencing assays and human-machine analyses in actionable genomics for glioblastoma. Neurology Genetics, 3(4), e164. 16:45-17:45 Session 15H: C3/B6 Transition and change in science Nathalie Gontier (Applied Evolutionary Epistemology Lab, Center for Philosophy of Science, University of Lisbon, Portugal) Time, causality and the transition from tree to network diagrams in the life sciences ABSTRACT. How we conceptualize and depict living entities has changed throughout the ages in relation to changing worldviews. Wheels of time originally depicted cycles of life or the cyclic return of the seasons that were associated with circular notions of time. Aristotle introduced the concept of a great chain of being that became foundational for Judeo-Christian theorizing on the scala naturae, which was originally associated with a-historic depictions of nature and later with linear timescales. Scales of nature in turn formed the basis for phylogenetic trees as they were introduced by the natural history scholars of the 19th century. And these trees are set in two-dimensional Cartesian coordinate systems where living entities are tracked across space and time. Today, the various disciplines that make up the evolutionary sciences often abandon tree typologies in favor of network diagrams. Many of these networks remain historically unrooted and they depict non-linear causal dynamics such as horizontal information exchange. We investigate these transitions and home in on how networks introduce new concepts of causality.
We end by delineating how a reconstruction of the genealogy of these diagrams has larger consequences for our understanding of how scientific revolutions come about. Gabor Zemplen (ELTE, Hungary) Evolving theories and scientific controversies: a carrier-trait approach ABSTRACT. Accounting for the evolution of scientific theories is not without problems. Reconstruction of theoretical content is typically carried out using static and atemporal conceptual and modelling spaces, but many of the historically important scientific theories are far from being easily delineable entities, and a scientist's theoretical position can respond to new data, literature, and the criticisms received. Especially challenging are scientific controversies, where the debated issues are complex, the exchange involves several participants and extends over long periods. Famous examples include the Methodenstreit, the Hering-Helmholtz controversy or the debates over Newton's or Darwin's views. In these cases controversies lasted for several generations, and polarisation is a recurring trait of the exchanges. The reconstructions and evaluations of the exchanges also exhibit heterogeneity and polarisation. Cultures of reading, representing, interpreting, and evaluating the theory suggest that some scientific theories are manifolds. What are the suitable frameworks that help the study of theories, theory-acceptance and the often co-occurring process of opinion polarisation? The talk offers a permissivist carrier-trait framework to study theories, and an artefact-human-artefact knowledge-mobilization process. The theory picked for analysis is Newton's optical theory, a highly successful scientific theory, but one that cannot be easily reduced to equations or formulas, and one that gave rise to opinion polarisation. Instead of assuming some type of content (a propositional structure, a conceptual space, or a mathematical object) to reconstruct the theory, and thus provide a paraphrase to stand for the theory, I look at traits that are delineable when studying the carriers of a theory. In a deliberately broad definition, carriers are scientific representations, parts thereof, or composites of them, targets of an interpretation-process. A carrier is an external (non-mental) representation, akin to some speech act, yet it can be a whole book, or just a part of a diagram or sentence. A trait is a distinctive or distinguishable feature, corresponding to some act of making distinctions between carriers. The reconstruction focuses on innovative aspects (novel traits) of theories that become conventionalized: items introduced to the lexicon (neologisms), some of the mathematical idealizations, and novel diagrammatic traits of the theory. The perspective helps to map strands of uptake (including polarisation of opinions), and trait-analysis can show that multiple readability (ambiguity) of carriers facilitated heterogeneous uptake and the spread of competing views. 16:45-17:45 Session 15I: B1 Epistemology 1 Stephan Kornmesser (University of Oldenburg, Germany) Frames – A New Model for Analyzing Theories ABSTRACT. The frame model was developed in cognitive psychology (Barsalou 1992) and imported into the philosophy of science in order to provide representations of scientific concepts and conceptual change (Andersen and Nersessian 2000; Andersen et al. 2006; Chen and Barker 2000; Chen 2003; Barker et al. 2003; Votsis and Schurz 2012; Votsis and Schurz 2014).
The aim of my talk is to show that besides the representation of scientific concepts the frame model is an efficient instrument to represent and analyze scientific theories. That is, I aim to establish the frame model as a representation tool for the structure of theories within the philosophy of science. In order to do so, in the first section of my talk, I will briefly introduce the frame model and develop the notion of theory frames as an extension of it. Further, I will distinguish between theory frames for qualitative theories, in which scientific measurement is based on nominal scales, and theory frames for quantitative theories, in which measurement is based on ratio scales. In two case studies, I will apply the notion of theory frames to a linguistic and a physical theory. Section 2 contains a diachronic analysis of a qualitative theory by applying the notion of a theory frame to the pro-drop theory of generative linguistics. In section 3, I will provide a frame-based representation of electrostatics, the laws of which contain quantitative theoretical concepts. Based on the two case studies, I will argue that the frame model is a powerful instrument to analyze the laws of scientific theories, the determination of theoretical concepts, the explanatory role of theoretical concepts, the abductive introduction of a new theoretical concept, the diachronic development of a theory, and the distinction between qualitative and quantitative scientific concepts. I will show that due to its graphical character the frame model provides a clear and intuitive representation of the structure of a theory as opposed to other models of theory representation like, for instance, the structuralist view of theories. Andersen, Hanne, and Nancy J. Nersessian. 2000. "Nomic Concepts, Frames, and Conceptual Change." Philosophy of Science 67 (Proceedings): S224-S241. Andersen, Hanne, Peter Barker, and Xiang Chen. 2006. The Cognitive Structure of Scientific Revolutions. Cambridge: Cambridge University Press. Barker, Peter, Xiang Chen, and Hanne Andersen. 2003. "Kuhn on Concepts and Categorization." In Thomas Kuhn, ed. Thomas Nickles, 212-245. Cambridge: Cambridge University Press. Barsalou, Lawrence W. 1992. "Frames, concepts, and conceptual fields." In Frames, fields, and contrasts, ed. Adrienne Lehrer, and Eva F. Kittay, 21–74. Hillsdale: Lawrence Erlbaum Associates. Chen, Xiang. 2003. "Object and Event Concepts. A Cognitive Mechanism of Incommensurability." Philosophy of Science 70: 962-974. Chen, Xiang, and Peter Barker. 2000. "Continuity through Revolutions: A Frame-Based Account of Conceptual Change During Scientific Revolutions." Philosophy of Science 67:208-223. Votsis, I., and Schurz, G. 2012. "A Frame-Theoretic Analysis of Two Rival Conceptions of Heat." Studies in History and Philosophy of Science, 43(1): 105-114. Votsis, I., and Schurz, G. 2014. "Reconstructing Scientific Theory Change by Means of Frames." In Concept Types and Frames. Application in Language, Cognition, and Science, ed. T. Gamerschlag, D. Gerland, R. Osswald, W. Petersen, 93-110. New York: Springer. Demetris Portides (University of Cyprus, Cyprus) Athanasios Raftopoulos (University of Cyprus, Cyprus) Abstraction in Scientific Modeling PRESENTER: Demetris Portides ABSTRACT. Abstraction is ubiquitous in scientific model construction. It is generally understood to be synonymous with the omission of features of target systems, which means that something is left out from a description and something else is retained.
Such an operation could be interpreted so as to involve the act of subtracting something and keeping what is left, but it could also be interpreted so as to involve the act of extracting something and discarding the remainder. The first interpretation entails that modelers act as if they possess a list containing all the features of a particular physical system and begin to subtract in the sense of scratching off items from the list. Let us call this the omission-as-subtraction view. According to the second interpretation, a particular set of features of a physical system is chosen and conceptually removed from the totality of features the actual physical system may have. Let us call the latter the omission-as-extraction view. If abstraction consists in the cognitive act of omission-as-subtraction, this would entail that scientists know what has been subtracted from the model description and thus would know what should be added back into the model in order to turn it into a more realistic description of its target. This idea, most of the time, conflicts with actual scientific modeling, where a significant amount of labor and inventiveness is put into discovering what should be added back into a model. In other words, the practice of science provides evidence that scientists, more often than not, operate without any such knowledge. One is thus justified in questioning whether scientists actually know what they are subtracting in the first case. Since it is hard to visualize how modelers can abstract, in the sense of omission-as-subtraction, without knowing what they are subtracting, one is justified in questioning whether a process of omission-as-subtraction is at work. In this paper we particularly focus on theory-driven models and phenomenological models in order to show that for different modeling practices what is involved in the model-building process is the act of extracting certain features of physical systems, conceptually isolating and focusing on them. This is the sense of omission-as-extraction, which we argue is more suitable for understanding how scientific model-building takes place before the scientist moves on to the question of how to make the required adjustments to the model in order to meet the representational goals of the task at hand. Furthermore, we show that abstraction-as-extraction can be understood as a form of selective attention and as such could be distinguished from idealization. Laszlo E. Szabo (Institute of Philosophy, Eotvos Lorand University Budapest, Hungary) Intrinsic, extrinsic, and the constitutive a priori ABSTRACT. On the basis of what I call physico-formalist philosophy of mathematics, I will develop an amended account of the Kantian–Reichenbachian conception of constitutive a priori. It will be shown that the features (attributes, qualities, properties) attributed to a real object are not possessed by the object as a "thing-in-itself"; they require a physical theory by means of which these features are constituted. It will be seen that the existence of such a physical theory implies that a physical object can possess a property only if other contingently existing physical objects exist; therefore, the intrinsic–extrinsic distinction is flawed.
The paper is available from here: http://philsci-archive.pitt.edu/15567/ 16:45-17:45 Session 15L: SYMP Symposium of the Spanish Society of Logic, Methodology and Philosophy of Science 1 (SLMFCE 1) Organizer: Cristina Corredor The Spanish Society of Logic, Methodology and Philosophy of Science (SLMFCE in its Spanish acronym) is a scientific association formed by specialists working in these and other closely related fields. Its aims and scope also cover those of analytic philosophy in a broad sense and of argumentation theory. It is worth mentioning that among its priorities is the support and promotion of young researchers. To this end, the Society has developed a policy of grants and awards for its younger members. The objectives of the SLMFCE are to encourage, promote and disseminate study and research in the fields mentioned above, as well as to foster contacts and interrelations among specialists and with other similar societies and institutions. The symposium is intended to present the work carried out by prominent researchers and research groups linked to the Society. It will include four contributions in different subfields of specialization, allowing the audience at the CLMPST 2019 to form an idea of the plural research interests and relevant outcomes of our members. Jose Martinez Fernandez (Logos - Universitat de Barcelona, Spain) On revision-theoretic semantics for special classes of circular definitions ABSTRACT. Circular definitions are definitions that contain the definiendum in the definiens. The revision theory of circular definitions, created by Anil Gupta, shows that it is possible to give content to circular definitions and to use them to solve the semantic paradoxes. Let us consider definitions of the form Gx := A(x,G), where A is a first-order formula in which G itself can occur. Given a model M for the language, possible extensions for the predicate G are given by subsets of the domain of the model (which are called hypotheses). Given a hypothesis h, the revision of h (denoted D(h)) is the set of all the elements of the domain which satisfy the definiens in the model M+h (i.e., the model M with the hypothesis that the extension of G is h). Revision can be iterated, generating the sequence of revision: h, D(h), D(D(h))… which is represented as D^0(h), D^1(h), D^2(h)… Roughly speaking, the key idea of revision theory is that one can categorically assert that an object is G when, for every hypothesis h, the object eventually stabilises in the sequence of revision that starts with h, i.e., it belongs to all the hypotheses in the sequence after a certain ordinal (a toy illustration of such a sequence follows below). Gupta (in "On Circular Concepts", in Gupta and Chapuis, "Circularity, Definition and Truth", Indian Council for Philosophical Research, 2000) defined a special type of definitions, called finite definitions, and proved that this class of definitions has nice formal properties; for instance, there is a natural deduction calculus that is sound and complete for their validities. The aim of the talk is to introduce several generalizations of finite definitions that still preserve many of their good properties.
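To give a rough feel for the revision sequence h, D(h), D(D(h)), ... described above, here is a small Python sketch over a finite domain (the particular definiens, domain and names are my own illustrative choices, not Gupta's or the speaker's):

DOMAIN = range(6)

def definiens(x, h):
    # illustrative circular definiens A(x, G): "x is even, or x-1 is in G"
    return x % 2 == 0 or (x - 1) in h

def revise(h):
    # one revision step: D(h) = elements of the domain satisfying the definiens under hypothesis h
    return {x for x in DOMAIN if definiens(x, h)}

h = set()                      # start from the empty hypothesis
for n in range(4):
    print(n, sorted(h))        # prints the revision sequence D^0, D^1, D^2, ...
    h = revise(h)

With this definiens every element eventually stabilises in G, so categorical assertion is unproblematic; a definiens such as "x is not in G" would instead leave every element oscillating and hence unstable.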
Given a type of hypotheses T, we will define the following four classes of special circular definitions: (i) A definition is a T-definition iff for each model M, there is a hypothesis of type T. (ii) A definition is a uniformly T-definition iff for each model M and each hypothesis h, there is n such that D^n(h) is of type T. (iii) A definition is a finitely T-definition iff for each M there is n such that, for each h, D^n(h) is of type T. (iv) A definition is a bounded T-definition iff there is n such that for every M and h, D^n(h) is of type T. A finite definition is, in this notation, a finitely T-definition, where T is the type of reflexive hypotheses (i.e., those that generate cycles in the directed graph that connects h to D(h)). We will analyze the relations among the different classes of definitions, focusing on the types of reflexive hypotheses and descending hypotheses (i.e. hypotheses which belong to Z-chains in the directed graph). Sergi Oms (Logos, University of Barcelona, Spain) Common solutions to several paradoxes. What are they? When should they be expected? ABSTRACT. In this paper I will examine what a common solution to more than one paradox is and why, in general, such a solution should be expected. In particular, I will explore why a common solution to the Liar and the Sorites should be expected. Traditionally, the Sorites and the Liar have been considered to be unrelated. Nevertheless, there have been several attempts to uniformly cope with them. I will discuss some of these attempts in the light of the previous discussion. 16:45-17:45 Session 15M: C5 Philosophy of the cognitive and behavioral sciences 1 Konrad Rudnicki (University of Antwerp, Belgium) Lavinia Marin (Delft University of Technology, Netherlands) Online misinformation as a problem of embodied cognition ABSTRACT. This paper argues that the creation and propagation of misinformation in online environments, particularly in social media, is confronted with specific challenges which are not to be found in offline communication. Starting from the widely accepted definition of misinformation as 'deliberate production and distribution of misleading information' (Floridi, 2013), which we designate as the semantic view of misinformation, we aim to provide a different definition of misinformation based primarily on the pragmatics of communication and on the role of the technological environment. While misinformation in online environments is also false and misleading, its main characteristic is the truncated way in which it is perceived and re-interpreted and, we will argue, this way of processing information belongs foremost to the online environment as such rather than to a defective way of information-processing from the side of the epistemic agent. From this pragmatic perspective, sometimes misinformation is true information which is interpreted and propagated in a biased way. One of the major features of online environments which makes them a medium prone to misinterpretation and bias is that they lead to impoverished sensory information processing. Assuming an embodied cognition view (in its compatibilist version, see Varela et al., 1991; Clark, 1997), the environment in which we exercise our cognitive abilities plays a deciding role in our ability to function as epistemic agents, because through our bodies we acquire cognitive states dependent on the environment to which our bodies are exposed.
Following this embodied cognition assumption, the online environment presents itself as a challenge through the ways in which it prioritises certain senses while obliterating others: the visual senses take priority to the detriment of other senses such as touch, smell, and even hearing; moreover, we interact with others in online environments through text messages which favor explicit meanings, while tacit communication and other pragmatic aspects of communication relying on body-language and non-verbal signs are lost. This presentation will describe the constellation of aspects which characterise the pragmatics of communication in online environments and then show why this kind of communicational situation is biased, leading to what we will call an 'incomplete pragmatics' of communication. In online environments, we will argue, misunderstandings are the rule and not the exception, because of the dis-embodied and text-biased forms of communication. We will illustrate our theory of incomplete pragmatics of online communication with several case studies of online misinformation based on factually true information which is systematically misunderstood. Clark, A. (1997). Being There: Putting Brain Body and World Together Again. Cambridge, MA: MIT Press. Clark, A. and Chalmers, D. (1998). The extended mind. Analysis, 58, 7-19. Floridi, L. (2013). The Philosophy of Information. Oxford: Oxford University Press. Varela, F., Thompson, E., Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press. Sarah Songhorian (Vita-Salute San Raffaele University, Italy) The Role of Cognitive and Behavioral Research on Implicit Attitudes in Ethics ABSTRACT. Empirical research and philosophy have both recently been interested in what implicit attitudes and biases are and in how they can predict prejudicial behavior towards certain groups (Brownstein 2017; Brownstein, Saul 2017: 1–19). The assumption on the basis of such data is that human beings are sensitive to their own and others' group identities. The studies I focus on show an effect that mostly goes under the radar: even subjects that have non-racist explicit attitudes can be, and often are, biased by implicit stereotypes and prejudices concerning group identities. The aim of this talk is to look at this set of data from the perspective of moral philosophy. Thus, on the one hand, I will be interested in analyzing if and to what extent implicit attitudes have an impact on abilities that are crucial for moral judgment and for moral decision-making – the descriptive goal – and, on the other, I will consider whether this influence should bear any normative significance for moral theory – the normative goal. I will deal with whether these implicit attitudes have an impact on some of the key components constituting the basis of our moral abilities, regardless of whether one can be deemed responsible or not. I will thus consider the effects – if any – that implicit attitudes have on empathy and trust, understanding both as related to our way of judging and behaving morally (for discussion on this, e.g. Faulkner, Simpson 2017; Stueber 2013). I will focus on the effects on empathy and trust on the basis of the widely shared – though not universally accepted – assumption that they both play a relevant – though not exclusive – role in morality, and given that experimental data on implicit attitudes seem to suggest at least an unconscious proclivity towards empathizing with and trusting in-groups more than out-groups.
My main claims will be: (a) The descriptive claim: Implicit in-group bias directly modulates empathy and automatic trust, while it has only a derivative influence on sympathy and deliberated trust. (b) The normative claim: If moral duties or standards are meant to guide human behavior, then knowing about our implicit biases towards in-groups restricts the set of moral theories that can be prescribed (according to a criterion of psychological realizability). And yet this is not tantamount to claiming that we cannot and should not take action against our implicit attitudes once we have recognized their malevolent influence upon us. References (selection) Brownstein, M. 2017, "Implicit Bias", The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), E. N. Zalta (ed.), https://plato.stanford.edu/archives/spr2017/entries/implicit-bias/. Brownstein, M., and Saul, J. (eds.) 2017, Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology, New York: Oxford University Press. Faulkner, P., and Simpson, T. 2017, The Philosophy of Trust, Oxford: Oxford University Press. Simpson, T. W. 2012, "What Is Trust?", Pacific Philosophical Quarterly, 93, pp. 550–569. Stueber, K. 2013, "Empathy", The Stanford Encyclopedia of Philosophy (Fall 2016 Edition), E. N. Zalta (ed.), http://plato.stanford.edu/archives/fall2016/entries/empathy/. Anna Kiel Steensen (ETH Zurich, Switzerland) Semiotic analysis of Dedekind's arithmetical strategies ABSTRACT. In this talk, I will present a case study, which uses close reading of [Dedekind 1871] to study semiotic processes of mathematical text. I focus on analyzing from a semiotic perspective what Haffner [2017] describes as 'strategical uses of arithmetic' employed by Richard Dedekind (1831-1916) and Heinrich Weber (1842-1913) in their joint work on function theory [Dedekind and Weber 1882]. My analysis of Dedekind's representations shows that neither word-to-word correspondences with other texts (e.g. texts on number theory), nor correspondences between words and stable referents fully capture Dedekind's "strategic use of arithmetic" in [Dedekind 1871]. This use is thus the product of a textual practice, not a structural correspondence to which the text simply refers. An important line of argument in [Haffner 2017] is that a mathematical theory (be it function theory as in [Dedekind and Weber 1882] or ideal theory as in [Dedekind 1871]) becomes arithmetical by introducing concepts that are 'similar' to number theoretical ones, and by transferring formulations from number theory. Haffner's claim only emphasizes why we need a better understanding of the production of analogy as a semiotic process. Since the definitions and theorems of [Dedekind 1871] do not correspond word-for-word to number theoretical definitions and theorems, simply saying that two concepts or formulations are 'similar' neglects to describe the signs that make us see the similarities. Thus, appealing to similarity cannot account for the semiotic processes of the practice that produces analogy of ideal theory to number theory. The case study aims to unfold Haffner's appeals to similarity through detailed descriptions of representations that Dedekind constructs and uses in [1871]. Dedekind is often pointed to as a key influence in shaping present mathematical textual practices and a considerable part of this influence stems from his development of ideal theory, of which [Dedekind 1871] is the first published version. 
Therefore, apart from being interesting in its own right, a better understanding of the semiotic processes of this text could contribute to our views on both present mathematical textual practices and the late-modern history of mathematics. Dedekind, R. (1871). Über die Komposition der binären quadratischen Formen. In [Dedekind 1969 [1932]], vol. III. 223-262. Dedekind, R. (1969) [1932]. Gesammelte mathematische Werke. Vol I-III. Fricke R., Noether E. & Ore O. (Eds.). Bronx, N.Y: Chelsea. Dedekind, R. and Weber, H. (1882). Theorie der algebraischen Funktionen einer Veränderlichen. J. Reine Angew. Math. 92, 181–290. In [Dedekind 1969 [1932]], vol. I. 238–351. Haffner, E. (2017). Strategical Use(s) of Arithmetic in Richard Dedekind and Heinrich Weber's Theorie Der Algebraischen Funktionen Einer Veränderlichen. Historia Mathematica, vol. 44, no. 1, 31–69. Text-driven variation as a vehicle for generalisation, abstraction, proofs and refutations: an example about tilings and Escher within mathematical education. PRESENTER: Karl Heuer ABSTRACT. In this talk we want to investigate to what extent we can understand (or rationally reconstruct) how mathematical theory building can be analysed with a text-level approach. This is apparently only a first approximation concerning the heuristics actually used in mathematical practice, but it already delivers useful insights. As a first model we show how simple syntactical variation of statements can yield new propositions to study. We shall show to what extent this mechanism can be used in mathematical education to develop a more open, i.e. research-oriented, experience for participating students. Apparently not all such variations yield fruitful fields of study and several of them are most likely not even meaningful. We develop a quasi-evolutionary account to explain why this variational approach can help to develop an understanding of how new definitions replace older ones and how mathematicians choose axiomatisations and theories to study. We shall give a case study within the subject of 'tilings'. There we begin with the basic question of which regular (convex) polygon can be used to construct a tiling of the plane; a question in principle accessible with high school mathematics (see the small check after this abstract). Small variations of this problem quickly lead to new sensible fields of study. For example, allowing the combination of different regular (convex) polygons yields Archimedean tilings of the plane, or introducing the notion of 'periodicity' paves the way for questions related to so-called Penrose tilings. It is easy to get from a high school problem to open mathematical research by only introducing a few notions and syntactical variations of proposed questions. Additionally, we shall offer a toy model of the heuristics used in actual mathematical practice by a model of structuring a mathematical question together with variations of its parts on a syntactical level. This first step is accompanied by a semantic check to avoid category mistakes. By a quasi-evolutionary account, the most fruitful questions get studied, which leads to a development of new mathematical concepts. Time permitting, we show that this model can also be applied to newly emerging fields of mathematical research. This talk is based on work used for enrichment programs for mathematically gifted children and on observations from working mathematicians.
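The high-school starting point mentioned in the abstract can be checked in a few lines of Python (my own illustration, not the speakers' material): a regular n-gon tiles the plane edge-to-edge only if an integer number k of copies of its interior angle 180(n-2)/n fills 360 degrees around a vertex.

for n in range(3, 13):
    interior = 180 * (n - 2) / n       # interior angle of the regular n-gon
    k = 360 / interior                 # copies needed around one vertex
    if k.is_integer():
        print(n, int(k))               # prints 3 6, 4 4, 6 3: triangle, square, hexagon

Only n = 3, 4 and 6 survive, and the syntactic variations of the question mentioned above (mixing polygon types, dropping periodicity) lead on to the Archimedean and Penrose directions.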
Antti Kuusisto (Tampere University, Finland) Interactive Turing-complete logic via game-theoretic semantics ABSTRACT. We define a simple extension of first-order logic by introducing self-referentiality operators and domain extension quantifiers. The new extension quantifiers allow us to insert new points into model domains and also to modify relations by adding individual tuples. The self-referentiality operators are variables ranging over subformulas of the same formula where they are used, and they can be given a simple interpretation via game-theoretic semantics. We analyse the conceptual properties of this logic, especially the way it links games and computation in a one-to-one fashion. We prove that this simple extension of first-order logic is Turing-complete in the sense of descriptive complexity: it exactly captures the expressive power of Turing machines, i.e., for every Turing machine, there exists an equivalent formula, and vice versa. We also discuss how this logic can describe classical compass and straightedge constructions of geometry in a natural way. In classical geometry, the mechanisms of modifying constructions are analogous to the model modification steps realizable in the Turing-complete logic. Also the self-referentiality operators lead to recursive processes omnipresent in everyday mathematics. The logic has a very simple translation to natural language which we also discuss. Raine Rönnholm (University of Tampere, Finland) Antti Kuusisto (University of Bremen, Germany) Rationality principles in pure coordination games PRESENTER: Raine Rönnholm ABSTRACT. We analyse so-called pure win-lose coordination games (WLC games) in which all players receive the same payoff, either 1 ("win") or 0 ("lose"), after every round. We assume that the players cannot communicate with each other and thus, in order to reach their common goal, they must make their choices based on rational reasoning only. We study various principles of rationality that can be applied in these games. We say that a WLC game G is solvable with a principle P if winning G is guaranteed when all players follow P. We observe that there are many natural WLC games which are not solvable in a single round by any principle of rationality, but which become solvable in the repeated setting when the game can be played several times until the coordination succeeds. Based on our analysis of WLC games, we argue that it is very hard to characterize which principles are "purely rational" - in the sense that all rational players should follow such principles in every WLC game.
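A toy simulation of the repeated setting (entirely my own illustration; the convention used below is not one of the principles studied in the talk): two players each pick one of two doors and win iff they agree. No rule that treats the doors symmetrically guarantees a win in a single round, but once the failed first round has been observed, a simple convention, here "both copy player 1's previous choice", coordinates by round 2.

import random

DOORS = ["left", "right"]

def play_repeated(max_rounds=5):
    history = []
    for rnd in range(1, max_rounds + 1):
        if not history:
            choices = (random.choice(DOORS), random.choice(DOORS))   # round 1: no information yet
        else:
            prev_p1 = history[-1][0]     # hypothetical convention: both copy player 1's last choice
            choices = (prev_p1, prev_p1)
        history.append(choices)
        if choices[0] == choices[1]:
            return rnd, history          # coordination achieved
    return None, history

print(play_repeated())   # succeeds in round 1 with probability 1/2, and always by round 2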
18:00-19:30 Session 16C: B1/B2/B3 Theory assessment and theory change Tina Wachter (University of Hannover, Germany) Lei Ma (Huaqiao University, China) Empirical Identity as an Indicator of Theory Choice ABSTRACT. There are many theories about theory choice in philosophy of science, but no indicator of scientific theories has been precisely defined, let alone an index system. Using the example of empirical identity, I shall show that a range of scientific indicators for deciding theory choice can be precisely defined by some basic concepts. I think that these indicators can provide us with a better description of the principles of philosophy of science. The pursuit of theories' empirical identity and novelty leads to the cumulative view of scientific progress; under non-cumulative circumstances, it is entirely practicable to judge a theory's empirical identity as well as its empirical novelty; empirical identity underdetermines the acceptance of a particular theory. It is possible that all the principles of philosophy of science could be explained anew through this system of indices of theory choice, and thus a more rigorous theory of philosophy of science could be established. Cristin Chall (University of South Carolina, United States) Abandoning Models: When Non-Empirical Theory Assessment Ends ABSTRACT. The standard model of physics has several conceptual problems (for example, it has no explanation for the three almost identical particle generations) and explanatory gaps (for example, it has no dark matter candidate). These issues have motivated particle physicists to develop and test new models which go beyond the standard model (BSM). Currently, none of the unique predictions of any BSM model have met with any experimental success. The Large Hadron Collider, the most powerful particle accelerator in the world, has reached unprecedented energies, but it has found no compelling evidence for the new physics which researchers are convinced we will eventually find. Despite these experimental failures, physicists continue to pursue various BSM projects. The various groups of BSM models receive different degrees of confidence from physicists, which can be roughly tracked by observing the number of preprints and publications detailing them and the way they are discussed in the summary talks of large physics conferences. From this, we can see that the core ideas of these BSM models persist, even as various concrete predictions stemming from those core ideas fail. This persistence can be explained using classic schemes of theory assessment. For example, once suitably modified to accommodate models alongside theories, Lakatosian research programmes and Laudanian research traditions offer compelling explanations for this phenomenon: in Lakatos the hard cores of BSM projects are shielded from contrary experimental results, while in Laudan BSM projects can be understood as solving problems, despite their experimental failings. However, by evoking such explanations, a new problem arises. With the next phase of particle physics uncertain, since there is no consensus on the plans for the next generation of particle accelerators, it is unclear how the various BSM models are properly discriminated without empirical findings to determine success and failure. Non-empirical justifications can be given for the continued pursuit of these kinds of models and theories (those which make predictions we lack the capacity to test), but we must also analyse the non-empirical justifications for abandoning a line of research. I will argue that particle physicists lose confidence and eventually abandon models because of two related factors. First, although a framework like Lakatos's or Laudan's can insulate BSM models from the experimental failures for a time, as the prospects of finding evidence for new physics at the LHC diminish, there is an equivalent depreciation of confidence in these models as a consequence of this lack of fruitfulness. Second, changes in the degree of support for the problems and solutions motivating BSM models cause similar changes in support for the models (for instance, with less confidence in naturalness as a guiding principle, models built to be more natural fall out of favour). These two factors lead to increasing disfavour towards these models, and eventually to their abandonment.
Once we have established this non-empirical reasoning behind giving up models and theories, we can add it to the non-empirical assessment criteria at play in cases where theory extends beyond experiment.
In-Rae Cho (Seoul National University, South Korea)
Toward a Coevolutionary Model of Scientific Change
ABSTRACT. In this work, I attempt to develop a coevolutionary model of scientific change, which affords a more balanced view of both the continuous and discontinuous aspects of scientific change. Supposing that scientific inquiry is typically goal-directed, I'm led to propose that scientific goals, methods and theories constitute the main components of scientific inquiry, and to investigate the relationships among these components and their changing patterns. In doing so, first of all, I identify explanatory power and empirical adequacy as primary goals of science. But, facing what I call the problem of historical contingency, according to which those primary scientific goals cannot be justified because they are historically contingent, I explore the possibility of evaluating scientific goals and suggest that several well-motivated measures of evaluating scientific goals allow us to alleviate the problem of historical contingency. Then I try to bring out the major features of how those main components of science are related to each other. One major feature is that they mutually constrain each other, and thus each main component operates as a selective force on the other components. Another major feature is that the main components of science are induced to change reciprocally, but with certain intervals. Considering these features together, I suggest that scientific change is evolutionary (rather than revolutionary), as well as coevolutionary. Further, I claim that there are other important features which deserve our serious attention: the modes and tempos of change in the main components of scientific inquiry. Firstly, the modes of change in the main components of scientific inquiry are not homogeneous. That is to say, unlike what has happened in scientific methods and theories throughout the history of scientific inquiry, what I take as the primary goals of science seem to have experienced a sort of strong convergence. Secondly, the tempos of change in the main components of scientific inquiry are also not quite homogeneous. In particular, the tempo of change in the primary goals of science seems to have been much slower than those in method or theory. So I come to conclude that, despite the mutually constraining relationships among these main components, what really anchors scientific activities is goals rather than methods or theories. Finally, I argue that this coevolutionary model of scientific change does not yield to what I call the problems of circularity and scientific progress. The problem of circularity is that the evaluation process occurring in the coevolutionary model of scientific change is structurally circular. I argue, however, that the changes resulting from evaluating one of the main components of scientific inquiry under the constraints of the other components are not viciously circular, but usually self-corrective. Further, the problem of scientific progress results from the observation that my coevolutionary model seems quite similar to Laudan's reticulated model of scientific change.
While admitting that there exist significant similarities between the two models of scientific change, I claim that the mode of scientific progress in my coevolutionary model is not transient, as it is in Laudan's reticulated model, but transitive.
18:00-19:30 Session 16D: C8 Philosophy of the applied sciences and technology
Dazhou Wang (University of Chinese Academy of Sciences, China)
A Phenomenological Analysis of Technological Innovations
ABSTRACT. Inspired by Martin Heidegger, phenomenological analysis has become one of the mainstream approaches in philosophy of technology and has greatly advanced our understanding of the place of technology in society. So far, however, such studies have been largely static in character, and there is virtually no phenomenological analysis of technological innovations. For instance, the central questions in post-phenomenologist Don Ihde's investigations are what role technology plays in everyday human experience and how technological artifacts affect people's existence and their relation with the world. According to him, there are four typical relations between humans and artifacts, i.e., the embodiment relation, the hermeneutic relation, the alterity relation and the background relation. These relations hold between the "given" user and the "given" artifact in the "given" lifeworld, taking no account of the creation of artifacts and the dynamic interaction among the user, new artifacts and the world. It is true that Heidegger himself discussed the "present-at-hand" state which could lead to further "thematic investigation", the very beginning of innovative practices. In this way, his analysis certainly suggests the emergence of innovative practices. However, his focus was mainly on the structure of routine practice, and the very essence of innovative practice was largely set aside. In this paper, the author attempts to develop a phenomenology of technological innovations to make up for the above shortcomings. It is argued that, ontologically, technological innovations drive, and lie within, the existential cycle of human beings. Technological innovations stem from the rupture of the lifeworld and the "circular economy". With such ruptures, all the basic elements of social practices, i.e. the human, the natural, the institutional and the technological, would be brought to light, and innovation is the very process of transactions among various stakeholders who dispute with each other and try to resolve these disputes. In this process, through a series of mangles and experiments, technological problems will first be defined and then gradually resolved; eventually a new collection of humans and nonhumans is formed and the lifeworld renewed. Therefore, if technology is the root of human existence, then technological innovation, based on the related traditions as ready-made things, is the dangerously explorative journey to create new possibilities for human existence. The essence of human existence lies in technology (being) as well as in innovations (becoming), especially technological innovations. To break the solidification of society, the fundamental means is to conduct technological innovations. So it is absurd to simply reject technologies and innovations, and the only choice is to pursue technological innovations proactively and responsibly.
Responsible innovation means not so much letting more prescient and responsible people into the innovation process to supervise the less prescient and irresponsible innovators, but rather helping to broaden the innovation vision and to share such increasingly weighty responsibilities through public participation and equal dialogue.
Paulina Wiejak (Universita Politecnica delle Marche, Italy)
On Engineering Design. A Philosophical Inquiry
ABSTRACT. In my work, I would like to explore the area of engineering design through the philosopher's glass. First, I look at the whole process of engineering design - as described by Pahl and Beitz [4] - as a perfect combination of ancient techne and episteme [7], that is, of art or craft and of theoretical knowledge. In this part, I will try to build a bridge between Aristotelian thought and contemporary discourse in engineering design. Second, focusing on so-called conceptual and embodiment design, I would like to explore the notion of representation. In particular, I would like to use the work of Roman Ingarden on works of art and use his analysis to name and interpret elements of what engineers often call 3D models or 3D representations. Third, I would like to recognize the usefulness of the idea of 'directions of fit' [1,6] in the area of manufacturing, as it is presented in [2], and try to apply this idea to the area of Computer-Aided Design.
Bibliography
John Searle. Intentionality. Oxford University Press, 1983.
Michael Poznic. "Modeling Organs with Organs on Chips: Scientific Representation and Engineering Design as Modeling Relations". In: Philosophy and Technology 29.4 (2016), pp. 357–371.
Tarja Knuuttila. "Modelling and Representing: An Artefactual Approach to Model-Based Representation". In: Studies in History and Philosophy of Science Part A 42.2 (2011), pp. 262–271.
Gerhard Pahl; W. Beitz; Jörg Feldhusen; Karl-Heinrich Grote. Engineering Design. A Systematic Approach. Springer-Verlag London, 2007.
Mieke Boon and Tarja Knuuttila. "Models as Epistemic Tools in Engineering Sciences". In: Philosophy of Technology and Engineering Sciences. Ed. by Anthonie Meijers. Handbook of the Philosophy of Science. Amsterdam: North-Holland, 2009, pp. 693–726.
G. E. M. Anscombe. Intention. Harvard University Press, 1957.
Parry, Richard, "Episteme and Techne", The Stanford Encyclopedia of Philosophy (Fall 2014 Edition), Edward N. Zalta (ed.), URL =.
Aleksandr Fursov (M.V. Lomonosov Moscow State University, Russia)
The anthropic technological principle
ABSTRACT. The idea that humankind is something transient is widely discussed in modern philosophy. The nightmare of both humanistic and trans-humanistic philosophy is that humanity will inevitably face some insuperable limits to further development, or, even worse, may turn out to be an evolutionary dead-end. In public consciousness this idea is often represented by scenarios in which artificial intelligence will "conquer the world". K. Popper says that there is only one step from the amoeba to Einstein. He implies a concrete methodological significance in this rhetorical statement. But this significance can also be ontological. Approximately 3 billion years ago cyanobacteria started to produce oxygen. They just needed energy, so they obtained it from water and filled the Earth's atmosphere with oxygen. These bacteria didn't have the "aim" of producing oxygen; they satisfied their needs.
Nowadays Homo sapiens produces a wide range of technical devices, including artificial intelligence, in order to satisfy its needs in communication, safety, energy supply and political power. As in the case of cyanobacterial oxygen production, technical development is not the aim, but just an instrument for humanity. If we continue the analogy between cyanobacteria and Homo sapiens from the "Universe point of view", we can suppose that the technical sphere, as a co-product of human activity, may be the precondition for the genesis of a new form of being, just as the oxygen atmosphere was the precondition for the genesis of more complex aerobic organisms. But in this case the transience of humankind must be fixed ontologically. So, we can develop the anthropic technological principle if we attempt to save the special ontological status of Homo sapiens. It claims that technological development must be controlled in order to prevent, in principle, complete human elimination. Every technical device must be constructed in a way that makes its full-fledged functioning, including self-reproduction, impossible without human sanction. This principle is based on the strong supposition that we really can control technical development. The anthropic technological principle, in contrast with the anthropic cosmological principle, is purely normative. We can't use it in an explanatory scheme. It should be understood as an evolutionary defensive mechanism of Homo sapiens.
18:00-19:00 Session 16E: IS B4 Knuuttila
Tarja Knuuttila (University of Vienna, Austria)
Modeling Biological Possibilities in Multiple Modalities
ABSTRACT. A noticeable feature of contemporary modeling practice is the employment of multiple models in the study of the same phenomena. Following Levins' (1966) original insight, philosophers of science have studied multiple modeling through the notions of triangulation and robustness. Among the different philosophical notions of robustness one can discern, first, those that focus on robust results achieved by triangulating independent epistemic means, and, second, those that target the variation of the assumptions of a group of related mathematical models (Knuuttila and Loettgers 2011). The discussion of modeling has concentrated on the latter kind of robustness, as the models being triangulated are typically not independent (cf. Orzack and Sober 1993). Yet the problem with robust results derived from related mathematical models sharing a common "causal" core is that they may all be prone to the same systematic error (Wimsatt 2007). One compelling strategy to safeguard against systematic errors is the construction of related models in different material modalities. This paper considers such multiple modeling practices through the cases of synthetic genetic circuits and minimal cells. While the various incarnations of these model systems are not independent by design, they utilize independent media – mathematical, digital and material – thus mitigating the errors associated with using only one kind of modeling framework. Moreover, the combination of a related model design and independent representational media also tackles the worries concerning inconsistent and discordant evidence (e.g. Stegenga 2009). The cases studied also highlight another important dimension of multiple modeling: the study of what is possible.
While much of scientific modeling can be understood as an inquiry into various kinds of possibilities, the research practice of synthetic biology takes modeling beyond mere theoretical conceivability toward material actualizability.
Knuuttila, T., & Loettgers, A. (2011). Causal isolation robustness analysis: The combinatorial strategy of circadian clock research. Biology and Philosophy, 26(5), 773-791.
Levins, R. (1966). The strategy of model building in population biology. American Scientist, 54(4), 421-431.
Orzack, S. H., & Sober, E. (1993). A critical assessment of Levins's The strategy of model building in population biology (1966). The Quarterly Review of Biology, 68(4), 533-546.
Stegenga, J. (2009). Robustness, Discordance and Relevance. Philosophy of Science 76, 650-661.
Wimsatt, W.C. (2007). Re-engineering philosophy for limited beings: Approximations to reality. Harvard University Press, Cambridge.
18:00-18:30 Session 16F: C6 Philosophy of computing and computation
Jiří Raclavský (Department of Philosophy, Masaryk University, Czechia)
Jens Kipper (University of Rochester, United States)
Intuition, Intelligence, Data Compression
ABSTRACT. The main goal of this paper is to argue that data compression is a necessary condition for intelligence. One key motivation for this proposal stems from a paradox about intuition and intelligence. For the purposes of this paper, it will be useful to consider playing board games—such as chess and Go—as a paradigm of problem solving and cognition, and computer programs as a model of human cognition. I first describe the basic components of computer programs that play board games, namely value functions and search functions. A search function performs a lookahead search by calculating possible game continuations from a given board position. Since many games are too complex to be exhausted by search, game-playing programs also need value functions, which provide a static evaluation of a given position based on certain criteria (for example, space, mobility, or material). As I argue, value functions both play the same role as intuition in humans (which roughly corresponds to what is often called 'System 1 processing') and work in essentially the same way. Increasingly sophisticated value functions take more features of a given position into account, which allows them to provide more accurate estimates of game outcomes, because they can take more relevant information into account. The limit of such increasingly sophisticated value functions is a function that takes all of the features of a given position into account and determines as many equivalence classes of positions as there are possible positions, thereby achieving perfect accuracy. Such a function is just a complete database that stores the game-theoretic values of all possible positions. Following Ned Block (1981), there is widespread consensus that a system that solves a problem by consulting a database—or, as in Block's example, a 'lookup table'—does not exhibit intelligence. This raises our paradox, since reliance on intuition—both inside and outside the domain of board games—is usually considered to manifest intelligence, whereas use of a lookup table is not. I therefore introduce another condition for intelligence that is related to data compression. According to my account, for a judgment or action to be intelligent, it has to be based on a process that surpasses both a certain threshold of accuracy and a certain threshold of compression.
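A minimal Python sketch may help picture the contrast drawn here between a complete lookup table and a compact value function. Everything in it is an illustrative assumption rather than the author's proposal: tic-tac-toe stands in for a board game, a hand-written line-counting heuristic for a value function, and zlib-compressed size for a crude, computable proxy of descriptive complexity (Kolmogorov complexity itself is not computable).

# Toy sketch (illustrative assumptions only): exhaustive lookup table vs. compact heuristic.
import pickle
import zlib

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return "X" or "O" if that player has completed a line, else None.
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def game_value(board, player, table):
    # Exact minimax value from X's perspective (+1 X wins, 0 draw, -1 O wins),
    # memoized so that `table` ends up as a complete lookup table.
    key = (board, player)
    if key in table:
        return table[key]
    w = winner(board)
    if w is not None:
        value = 1 if w == "X" else -1
    elif " " not in board:
        value = 0
    else:
        nxt = "O" if player == "X" else "X"
        children = [game_value(board[:i] + player + board[i + 1:], nxt, table)
                    for i, cell in enumerate(board) if cell == " "]
        value = max(children) if player == "X" else min(children)
    table[key] = value
    return value

def heuristic(board, player):
    # Compact value function: open winning lines for X minus those for O,
    # scaled to [-1, 1]. `player` is kept only for interface parity; the
    # score, like the exact values, is always from X's perspective.
    score = 0
    for a, b, c in WIN_LINES:
        line = (board[a], board[b], board[c])
        if "O" not in line and "X" in line:
            score += 1
        if "X" not in line and "O" in line:
            score -= 1
    return max(-1.0, min(1.0, score / 8))

table = {}
game_value(" " * 9, "X", table)          # fill the exhaustive lookup table
raw = pickle.dumps(table)
print("positions stored:", len(table))
print("table size (bytes):", len(raw), "| zlib-compressed:", len(zlib.compress(raw)))

agree = 0
for (b, p), v in table.items():
    h = heuristic(b, p)
    if (h > 0) == (v > 0) and (h < 0) == (v < 0):
        agree += 1
print("sign agreement with exact values: %.3f" % (agree / len(table)))

The two printed sizes and the agreement score make the trade-off concrete: the exhaustive table is perfectly accurate by construction but remains far larger than the few lines defining the heuristic, even after compression, while the heuristic is highly compressed at the price of accuracy.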
In developing this account, I draw on complexity measures from algorithmic complexity theory (e.g. Kolmogorov 1963). My proposal allows that reliance on a lookup table—even if it is perfectly accurate—can be nonintelligent, while retaining that reliance on intuition can be highly intelligent. As I explain, and as has been pointed out by researchers in computer science (e.g. Hernández-Orallo 2017), there are strong theoretical reasons to assume that cognition and intelligence involve data compression. Moreover, my account also captures a crucial empirical constraint. This is because all agents with limited resources that are able to solve complex problems—and hence, all cognitive systems—need to compress data.
References
Block, Ned (1981). Psychologism and Behaviorism. Philosophical Review 90, 5–43.
Hernández-Orallo, José (2017). The Measure of All Minds. CUP.
Kolmogorov, Andrey (1963). On Tables of Random Numbers. Sankhyā Ser. A. 25, 369–375.
18:00-19:00 Session 16G: B5 Pragmatism
Ana Cuevas-Badallo (University of Salamanca, Spain)
Daniel Labrador-Montero (University of Salamanca, Spain)
How Pragmatism Can Prevent From the Abuses of Post-truth Champions
PRESENTER: Daniel Labrador-Montero
ABSTRACT. How to establish whether a sentence, a statement, or a theory is true has become a problem of public relevance. The claim that scientific knowledge is not a privileged way of understanding reality and that, therefore, there are no good reasons for using science as the basis for certain decisions has become a widespread argument. Even prevalent relativistic conceptions of science, like Fuller's, defend the acceptance of post-truth: "a post-truth world is the inevitable outcome of greater epistemic democracy. (…) once the instruments of knowledge production are made generally available—and they have been shown to work—they will end up working for anyone with access to them. This in turn will remove the relatively esoteric and hierarchical basis on which knowledge has traditionally acted as a force for stability and often domination." (Fuller, 2016: 2-3). However, epistemic democracy does not necessarily lead to the acceptance of that position. As the editor of Social Studies of Science has pointed out: "Embracing epistemic democratization does not mean a wholesale cheapening of technoscientific knowledge in the process. (...) the construction of knowledge (...) requires infrastructure, effort, ingenuity and validation structures." (Sismondo, 2016: 3). Post-truth, defined as "what I want to be true is true in a post-truth culture" (Wilber, 2017, p. 25), defends a voluntaristic notion of truth, and there is nothing democratic in individualistic and whimsical decisions about the truthfulness of a statement. For radical relativism, scientific consensus is reached by the same kind of mechanisms as in other social institutions, i.e. by networks that manufacture the "facts" using political negotiation, or by other ways of domination. However, the notion of truth that relativists (for instance Zackariasson, 2018: 3) are attacking is actually a straw man: the "God's eye point of view" that very few philosophers or scientists defend any more. We suggest that an alternative to post-truth arguments, one that at the same time suggests mechanisms for developing a real epistemic democracy, is the pragmatist notion of truth.
This could seem controversial if we have in mind the debunked and popularized version of pragmatism (the view that the usefulness of an assertion is the only thing that counts in favour of its being true). Nevertheless, whether among classic pragmatists such as Dewey or neo-pragmatists (e.g. Kitcher, 2001), the construction of scientific knowledge, with all its limitations, is the best way of reaching, if not the Truth, at least partial and rectifiable, yet reliable and well-built, knowledge.
Fuller, S. (2016). Embrace the inner fox: Post-truth as the STS symmetry principle universalized. Social Epistemology Review and Reply Collective.
Kitcher, P. (2001). Science, democracy and truth. Oxford U.P.
Sismondo, S. (2016). Post-truth? Social Studies of Science Vol. 47(1), 3–6.
Wilber, K. (2017). Trump and a Post-Truth World. Shambhala.
Zackariasson, U. (2018). Introduction: Engaging Relativism and Post-Truth. In M. Stenmark et al. (eds.), Relativism and Post-Truth in Contemporary Society. Springer.
Nataliia Viatkina (Institute of Philosophy of National Academy of Sciences of Ukraine, Ukraine)
Deference as Analytic Technique and Pragmatic Process
ABSTRACT. The goal of the paper is to consider what determines deference in cases of inaccurate and untested knowledge. A deferential concept is the sort of concept involved when people use a public word without fully understanding what it typically expresses (Recanati, F., "Modes Of Presentation: Perceptual vs. Deferential", 2001). What happens on the level of the everyday use of language? There is a link between, on the one hand, social stimulation toward certain uses of words, social learning and various "encouragements for objectivity" that lead to the correction of everything that is not consistent with the generally accepted use of words and meanings (Quine, Word and Object, 1960), and, on the other, deference, which reveals the chain of these uses, distortions and refinements leading back to some problematic beginning of the term's use. When a philosopher performing a conceptual analysis affirms a causal relationship but does not care to analyse the cause, relying instead on specialists, we say that such a philosopher applies the analytic technique of 'Grice's deference' (Cooper, W. E., "Gricean Deference", 1976). This technique allows the philosopher to be free from any responsibility for explaining the nature of the causes. From this point of view, the philosopher at a certain point in her analysis defers to a specialist in the relevant science, competent to talk about the causal relationships. 'Deferentially' means relying on someone's thought, opinion or knowledge. The well-known post-truth phenomenon is interpreted as a result of a deferential attitude to information, knowledge and various data concerning reality. Along with linguistic and epistemic deference and their forms of default and intentional deference (Woodfield, A., Reference and Deference, 2000; Stojanovic, De Brabanter, Fernandez, Nicolas, Deferential Utterances, 2005), the so-called "backfire effect" will be considered. The "backfire effect" names the phenomenon in which "misinformed people, when confronted with mistakes, cling even more fiercely to their incorrect beliefs" (Tristan Bridges, Why People Are So Averse to Facts, The Society Pages, http://thesocietypages.org).
There is a problem of determining within what approach one could correlate instances of falsity-by-misunderstanding with cases in which a speaker openly prefers to use expressions as someone else from the nearest linguistic community does, following custom or authority. Being a pragmatic process, deference is responsible for the lack of transparency in meta-representations (Recanati, op. cit.). So, what determines deference lies in the basic concepts of the theory of opacity, in meta-representations, and in the mechanism of deference in connection with opacity and meta-representations. Last, but not least, in this sequence is the application of the mechanism of deference to problems of imperfect mastery and unconsciously deferential thoughts (Fernandez, N.V., Deferential Concepts and Opacity).
18:00-19:30 Session 16H: B6/C7 Philosophy of Popper; Hypothetical reasoning
Manjari Chakrabarty (Visva Bharati University, India)
Karl Popper, prehistoric technology and cognitive evolution
ABSTRACT. More than a century and a half after Darwin it is almost a commonplace that the human species is the outcome of an evolutionary process going back to the origin of life. Just as the human brain-body has been shaped by evolutionary pressures operating in our ancestral past, the biological structures and mechanisms relating to human cognitive aspects might also have been selected for, and it probably took millions of years for the distinctive cognitive faculties to evolve. One way to find possible evolutionary explanations of these cognitive abilities is to explore the domain of prehistoric stone tool technology. Scholarly interest in the evolutionary impacts of Lower Paleolithic stone tool making (and use) on the initial emergence of hominin cognitive behaviour is growing steadily. The most controversial questions include, for example: how, in the evolutionary process, did cognitive abilities (or consciousness) emerge in a world hitherto purely physical in its attributes? Or, what role did these early stone tools play in the evolution of the human (or hominin) cognitive system? Stone tools are typically described in the archaeological literature as mere products of hominin cognition. Evidently the causal arrow assumed in this standard perception is one-way: from cognition to tools or artefacts. Since the late 1990s several interesting approaches to cognition have come up challenging this simplistic one-way-causal-arrow view. Cognitive processes are increasingly interpreted not just as something happening entirely inside our head but as extended and distributed processes (e.g., Clark & Chalmers 1998; Hutchins 2008). It is interesting to note that Karl Popper's theory of the emergence of consciousness (or cognition) posed a serious challenge to the one-way-causal-arrow view decades before the appearance of this beyond-the-body conception of human cognition. Reinterpreting Darwin's views on the biological function of mental phenomena, Popper's psycho-physical interactionist (though not a dualist interactionist) theory (Popper and Eccles 1977; Popper 1978) not only questioned the merely epiphenomenal status of tools or artefacts but placed great emphasis on the role of such extra-somatic environmental resources in transforming and augmenting human cognitive capacities. What's more, Popper's conjectures about the emergence of consciousness seem strongly convergent with current experimental-archaeological research on early stone tool making and cognitive evolution (e.g., Jeffares 2010; Malafouris 2013).
The present paper seeks to synthesize the critical insights of Popper with those of the experimental archaeologists to see if some fresh light can be thrown on the problem of hominin cognitive evolution.
References:
1. Clark, A. & Chalmers, D. 1998. The Extended Mind. Analysis 58 (1): 7-19.
2. Hutchins, E. 2008. The Role of Cultural Practices in the Emergence of Modern Human Intelligence. Philosophical Transactions of the Royal Society 363: 2011-2019.
3. Jeffares, B. 2010. The Co-evolution of Tools and Minds: Cognition and Material Culture in the Hominin Lineage. Phenomenology and Cognitive Science 09: 503-520.
4. Malafouris, L. 2013. How Things Shape the Mind. Cambridge, Mass.: The MIT Press.
5. Popper, K. R. & Eccles, J. C. 1977. The Self and Its Brain. Berlin: Springer-Verlag.
6. Popper, K. R. 1978. Natural Selection and the Emergence of the Mind. Dialectica 32 (3-4): 339-355.
Lois Rendl (Institute Vienna Circle, Austria)
CANCELLED: Peirce on the Logic of Science – Induction and Hypothesis
ABSTRACT. In his Harvard Lectures on the Logic of Science from 1865 Peirce for the first time presented his logical theory of induction and hypothesis as the two fundamental forms of scientific reasoning. His study of the logic of science seems to have been initiated by the claim of William Hamilton and Henry L. Mansel that "material inferences", which Peirce calls a posteriori and inductive inferences, are to be considered "extralogical". In consequence they regarded the principles of science, which Kant maintained to be valid a priori, as axioms that are "not logically proved". In opposition to this view Peirce in his Harvard Lectures seeks to establish, first, that deduction, induction and hypothesis are three irreducible forms of reasoning which can be analysed with reference to Aristotle's three figures of syllogism as the inference of a result, the inference of a rule and the inference of a case respectively; and, second, with reference to Kant's doctrine that a synthetic "inference is involved in every cognition" and Whewell's distinction of fact and theory, that "every elementary conception implies hypothesis and every judgment induction", that therefore we can never compare theory with facts but only one theory with another, and that consequently the universal principles of science, for instance the principle of causation, as conditions of every experience (understood as a theory inferred from facts), can never be falsified and are therefore valid a priori. Peirce develops his position by examining the theories of induction of Aristotle, Bacon, Kant, Whewell, Comte, Mill and Boole. The paper will first reconstruct the main points of Peirce's discussion of the theories of these authors and give a critical account of his arguments and motives, and second analyse his syllogistic and transcendental solution of the problem of a logical theory of scientific reasoning. This second part will be supplemented by an account of the significance of later revisions of the logic of science by Peirce in his Lowell Lectures (1866), the American Academy Series (1867), the Cognition Series (1868/69), the Illustrations of the Logic of Science (1877/78) and How to Reason (1893). The main focus of the paper will be Peirce's reformulation of Kant's conception of transcendental logic as a logic of science and therefore of synthetic reasoning with reference to a reinterpretation of Aristotle's Syllogistic.
Martin Potschka (Independent Scholar, Austria)
What is an hypothesis?
ABSTRACT.
I document the usage and meaning of the term 'hypothesis' with original quotes from the earliest origins to contemporary sources. My study is based on some 100 authors from all fields of knowledge and academic cultures, including first-hand reflections by scientists, from which I shall present a selection. I focus on explicit methodological statements by these authors and do not investigate concrete hypotheses. The interpretations of the term hypothesis developed over time. Conflict and disagreement often resulted from not speaking a common language. Philologically, no single definition captures all its meanings, and usage in fact is often contradictory. The purposes of this exercise are several: (1) To give meaning to expressions such as more-than-hypothesis, or pejoratives like mere-hypothesis. (2) To provide a lexical overview of the term. (3) To elaborate on the different kinds of epistemes. (4) To classify hypotheses (phenomenological postulates, models, instruments, and imaginary cases). (5) To trace the origins of evidence-based hypothetico-deductive epistemology (Bellarmine - Du Châtelet - Whewell - Peirce - Einstein - Popper). (6) To demarcate the term from several related ones (theory, thesis, principle, fact). Notwithstanding personal preferences, "hypothesis" shall remain a term with multiple, even mutually exclusive, connotations; what counts is giving exemplars of use (Kuhn!). For purposes of illustration let me quote from a table of my finished manuscript with the principal interpretations of the term hypothetical: not demonstrated, unproven but credible, capable of proof if asked for one, presumption; an element of a theory, système, a subordinate thesis, proposal, assumption, paradigm, presupposition; a kind of syllogism (if – then), conditional certitude; a statement expressing diverse degrees of probability (morally certain, probable, fallible, falsifiable, reformable, tentative, provisional, unconsolidated, subjective, speculative, fictitious, imaginary, illegitimate assumption, unspecified); pejorative uses, diminutive; ex suppositione – to save the phenomena (instrumentalism), mathematical contrivances, ex hypothesi – why?, a model, mutual base of a discourse, reconciles reason with experience; suppositio – postulate, rule, prediction, evidence-based hypothetico-deductive, that which is abducted, guess; a third category besides ideas and reality, a blueprint for change, free inventions of the mind. Hypothesis, supposition and conjecture are roughly synonyms. A full-length manuscript on the subject of my conference presentation is available for inspection by those interested.
18:00-19:30 Session 16I: B2 Formal philosophy of science and formal epistemology: Axiomatic and formal topics
Libor Behounek (Institute for Research and Applications of Fuzzy Modeling, University of Ostrava, NSC IT4Innovations, Czechia)
Salvatore Roberto Arpaia (Università degli Studi di Bergamo, Italy)
Incompleteness-based formal models for the epistemology of complex systems
ABSTRACT. The thesis I intend to argue is that formal approaches to epistemology deriving from Gödel's incompleteness theorem, as developed for instance by Chaitin, Doria and da Costa (see [3]), even if originally conceived to solve decision problems in the physical and social sciences (e.g.
the decision problem for chaotic systems), could also be used to address problems regarding the consistency and incompleteness of sets of beliefs, and to define formal models for the epistemology of complex systems and for the "classical" systemic-relational epistemology of psychology, such as Gregory Bateson's epistemology (see [2]) and Piaget's Genetic Epistemology (see for instance [4]). More specifically, following the systemic epistemology of psychology, there are two different classes of learning and change processes for cognitive systems: "quantitative learning" (the cognitive system acquires information without changing its rules of reasoning) and "qualitative" learning (an adaptation process which leads the system to a re-organization). Therefore, just as in the incompleteness theorems the emergence of an undecidable sentence in a logical formal system leads to the definition of a chain of formal systems, obtained by adjoining as axioms propositions that are undecidable at previous levels, in the same way the emergence of an undecidable situation for a cognitive system could lead to the emergence of "new ways of thinking". Thus, a (systemic) process of change (a process of "deuterolearning") could be interpreted as a process that leads the cognitive organization of the subject to a different level of complexity by the creation of a hierarchy of abstract relations between concepts, or by the creation of new sets of rules of reasoning and behaving (where the process of learning is represented by a sequence of learning stages, e.g. by sequences of type-theoretically ordered sets, representing information/propositions and rules of reasoning/rules of inference). I will propose two formal models for qualitative change processes in cognitive systems and complex systems:
• The first, from set theory, is based on Barwise's notion of partial model and model of Liar-like sentences (see [1]).
• The second, from proof theory and algebraic logic, is based on the idea that a psychological change process (the development of new epistemic strategies) is a process starting from a cognitive state s_0 and arriving at a cognitive state s_n, possibly assuming intermediate cognitive states s_1, ..., s_(n−1): developing some research contained in [5] and [6], I will propose a model of these processes based on the notion of a paraconsistent consequence operator.
I will show that these two different formal models are deeply connected and mutually translatable.
References
[1] Barwise, J. and Moss, L., Vicious circles: on the mathematics of non-well-founded phenomena, CSLI Lecture Notes, 60, Stanford, 1993.
[2] Bateson, G., Steps to an ecology of mind, Paladin Book, New York, 1972.
[3] Chaitin, G., Doria, F.A., da Costa, N.C., Goedel's Way: Exploits into an undecidable world, CRC Press, Boca Raton, 2011.
[4] Piaget, J. and Garcia, R., Toward a logic of meaning, Lawrence Erlbaum Associates, Hillsdale, 1991.
[5] Van Lambalgen, M. and Hamm, F., The Proper Treatment of Events, Blackwell, London, 2004.
[6] Van Lambalgen, M. and Stenning, K., Human Reasoning and Cognitive Science, MIT Press, Cambridge, 2008.
Maria Dimarogkona (National Technical University of Athens, Greece)
Petros Stefaneas (National Technical University of Athens, Greece)
A Meta-Logical Framework for Philosophy of Science
PRESENTER: Maria Dimarogkona
ABSTRACT.
In the meta-theoretic study of science we can observe today a tendency towards logical abstraction based on the use of abstract model theory [1], where logical abstraction is understood as independence from any specific logical system. David Pearce's idea of an abstract semantic system in 1985 [2] was characterised by this tendency, and so was the idea of translation between semantic systems, which is directly linked to reduction between theories [3]. A further step towards logical abstraction was the categorical approach to scientific theories suggested by Halvorson and Tsementzis [4]. Following the same direction, we argue for the use of institution theory - a categorical variant of abstract model theory developed in computer science [5] - as a logico-mathematical modeling tool in formal philosophy of science. Institutions offer the highest level of abstraction currently available: a powerful meta-theory formalising a logical system relative to a whole category of signatures, or vocabularies, while subscribing to an abstract Tarskian understanding of truth (truth is invariant under change of notation). In this way institution theory achieves maximum language-independence. A theory is always defined over some institution in this setting, and we also define the category of all theories over any institution I. Appropriate functors allow for the translation of a theory over I to a corresponding theory over J. Thus we get maximum logic-independence, while the theory remains at all times yoked to some particular logic and vocabulary. To clarify our point we present an institutional approach to the resurgent debate between supporters of the syntactic and the semantic view of scientific theory structure, which currently focuses on theoretical equivalence. If the two views are formalized using institutions, it can be proven that the syntactic and the (liberal) semantic categories of theories are equivalent [6][7]. This formal proof supports the philosophical claim that the liberal semantic view of theories is no real alternative to the syntactic view; a claim which is commonly made - or assumed to be true. But it can also - as a meta-logical equivalence - support another view, namely that there is no real tension between the two approaches, provided there is an indispensable semantic component in the syntactic account.
[1] Boddy Vos (2017). Abstract Model Theory for Logical Metascience. Master's Thesis. Utrecht University.
[2] Pearce David (1985). Translation, Reduction and Equivalence: Some Topics in Inter-theory Relations. Frankfurt: Verlag Peter Lang GmbH.
[3] Pearce David, and Veikko Rantala (1983a). New Foundations for Metascience. Synthese 56(1): pp. 1–26.
[4] Halvorson Hans, and Tsementzis Dimitris (2016). Categories of scientific theories. Preprint. PhilSci Archive: http://philsciarchive.pitt.edu/11923/2/Cats.Sci.Theo.pdf
[5] Goguen Joseph, and Burstall Rod (1992). Institutions: abstract model theory for specification and programming. J. ACM 39(1), pp. 95-146.
[6] Angius Nicola, Dimarogkona Maria, and Stefaneas Petros (2015). Building and Integrating Semantic Theories over Institutions. Workshop Thales: Algebraic Modeling of Topological and Computational Structures and Applications, pp. 363-374. Springer.
[7] Dimarogkona Maria, Stefaneas Petros, and Angius Nicola. Syntactic and Semantic Theories in Abstract Model Theory. In progress.
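For orientation, the notion of institution relied on in the abstract above is the one introduced by Goguen and Burstall (reference [5]); the following is the standard generic definition, restated here for convenience rather than drawn from the authors' own formalization.

\noindent An institution is a quadruple $(\mathbf{Sign},\ \mathrm{Sen},\ \mathrm{Mod},\ \models)$ consisting of
\begin{itemize}
  \item a category $\mathbf{Sign}$ of signatures (vocabularies) and signature morphisms;
  \item a functor $\mathrm{Sen}\colon \mathbf{Sign} \to \mathbf{Set}$ giving, for each signature, its set of sentences;
  \item a functor $\mathrm{Mod}\colon \mathbf{Sign}^{\mathrm{op}} \to \mathbf{Cat}$ giving, for each signature, its category of models;
  \item a satisfaction relation $\models_{\Sigma}\ \subseteq\ |\mathrm{Mod}(\Sigma)| \times \mathrm{Sen}(\Sigma)$ for each signature $\Sigma$,
\end{itemize}
such that for every signature morphism $\sigma\colon \Sigma \to \Sigma'$, every $\Sigma'$-model $M'$ and every $\Sigma$-sentence $\varphi$,
\[
  \mathrm{Mod}(\sigma)(M') \models_{\Sigma} \varphi
  \quad\Longleftrightarrow\quad
  M' \models_{\Sigma'} \mathrm{Sen}(\sigma)(\varphi),
\]
the satisfaction condition expressing that truth is invariant under change of notation.

A theory over an institution is then a pair $(\Sigma, \Gamma)$ with $\Gamma \subseteq \mathrm{Sen}(\Sigma)$; translations between theories over different institutions, as invoked in the abstract, are standardly handled via institution morphisms or comorphisms.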
Vladimir Lobovikov (Ural Federal University, Laboratory for Applied System Research, Russia)
A formal axiomatic epistemology theory and the controversy between Otto Neurath and Karl Popper about philosophy of science
ABSTRACT. The controversy mentioned in the title was related exclusively to science understood as empirical cognition of the world as the totality of facts. Obviously, verifiability of knowledge implies that it is scientific. Popper developed an alternative to verificationism, namely falsificationism, emphasizing that falsifiability of knowledge implies that it is scientific. Neurath criticized Popper for being fixed exclusively on falsifiability of knowledge as the criterion of its scientific status. Neurath insisted that there was a variety of qualitatively different forms of empirical knowledge, and this variety was not reducible to falsifiable knowledge. In my opinion the discrepancy between Popper's and Neurath's philosophies of science is well-modeled by the axiomatic epistemology theory, as according to this theory it is possible that knowledge is neither verifiable nor falsifiable but nevertheless empirical. The notion "empirical knowledge" is precisely defined by the axiom system considered, for instance, in [XXXXXXXXX XXXX]. The symbols Kq, Aq, Eq stand, respectively, for: "agent knows that q"; "agent a-priori knows that q"; "agent has experience knowledge that q". The epistemic modality "agent empirically knows that q" is defined by axiom 4 given below. In this axiom: Sq represents the verifiability principle; q represents the falsifiability one; (q  Pq) represents an alternative meant by Neurath but missed by Popper. The symbol Pq stands for "it is provable that q". Thus, according to the theorems by Gödel, arithmetic-as-a-whole is empirical knowledge. The theory is consistent. A proof of its consistency is the following. Let in the theory the meta-symbols be substituted by the object-one q. Also let q be substituted by Pq. In this case the axiom-schemes are represented by the following axioms, respectively. 1: Aq  (q  q). 2: Aq  ((q  q)  (q  q)). 3: Aq  (Kq & (q & Sq & (q  Pq))). 4: Eq  (Kq & (q  Sq  (q  Pq))). The interpretation is defined as follows. ω = ω for any formulae ω. (ω  π) = (ω  π) for any formulae ω and π, and for any classical logic binary connective. q = false. Aq = false. Kq = true. Eq = true. q = true. Sq = true. (q  q) = true. Pq = true. (q  Pq) = false (according to Gödel's theorems). Under this interpretation all the axioms are true; hence it yields a model for the theory, and hence the theory is consistent.
References
XXXXXXXXX XXXX
18:00-18:30 Session 16J: B1 Epistemology 2
Rueylin Chen (National Chung Cheng University, Taiwan)
Natural analogy: A Hessean Approach to Analogical Reasoning in Theorizing
ABSTRACT. This paper aims to explore the use of analogy in scientific theorizing via Mary Hesse's original understanding of analogical reasoning. The approach is thus Hessean. I revise Hesse's interpretation and symbolic schema of analogy to develop a new framework that can be used to analyze the structure and cognitive process of analogy. I take the Hessean approach for two main reasons: (1) By a preliminary comparison with the probabilistic, the cognitive, and the computational approaches, I think that the Hessean approach is more suitable for investigating the use of analogical reasoning in theorizing than are other approaches.
I will defend this claim by comparing my approach with the cognitive approach, such as structural mapping theory (SMT). (2) Hesse's approach is more natural than others. The adjective "natural" is understood in the following sense: relative to SMT, the Hessean approach preserves "pretheoretic similarity" in our natural languages as a necessary element of analogy. Moreover, Hesse's symbolic schema of "dyadic items" best reflects the comparative and contrastive character of the analogical reasoning that naturally emerges in our minds. Therefore, I would like to call the framework developed via the Hessean approach "natural analogy" – a concept similar to "natural deduction." My framework of natural analogy revises Hesse's original in the following two ways: (1) Hesse follows the logical empiricists' distinction between the formal type and the material type of analogy. In this paper, I will argue that analogy, in the field of scientific theorizing, is both formal and material. To mark this revision of Hesse's framework, I will use a new contrast between "structural/theoretical" and "conceptual/pretheoretical" as two aspects or elements of analogical reasoning to replace the old dichotomy of "formal" and "material" types. The meanings of the new pair of concepts will be elaborated. As a consequence, my framework not only considers the conceptual/pretheoretical similarities but also tracks the structural correspondence between two analogues. (2) I modify and expand Hesse's original symbolic schema of dyadic items to build up three new schemas and use them to analyze the role analogical reasoning plays in scientific theorizing in historical cases. Two symbolic schemas for representing the structure of analogy and a third schema for simulating the cognitive operations of analogical reasoning are proposed. The schemas, introduced step by step, lead us to suggest that the use of analogy in theorizing can be analyzed into four cognitive operations: abstraction, projection, incorporation, and fitting. I will use the schemas to analyze the process in which Coulomb's law was proposed by analogizing to Newton's law of gravitation, a famous case which is usually deemed an exemplar of formal analogy, to illustrate the schemas of natural analogy.
18:00-18:30 Session 16K: B1/B6 Transition and change in science and technology
Lukas Zamecnik (Palacky University Olomouc, Czechia)
Igor Frolov (Institute of Economic Forecasting, Russia)
Olga Koshovets (Institute of Economics, Russian Academy of Sciences, Russia)
Rethinking the transformation of classical science into technoscience: ontological, epistemological and institutional shifts
PRESENTER: Igor Frolov
ABSTRACT. The key tendency in the development of science in present-day society is that the scientific knowledge produced in academia by scientists and the academic community is losing its privileged position; moreover, science as an institution is losing its monopoly on the production of knowledge that is considered powerful, valuable and effective. This process of deep transformation has been partially reflected in such concepts as technoscience, post-academic science and transdisciplinarity, and can be found in such practices as the deprofessionalization of scientific knowledge, civil science (expertise), and informal science exchange in social media.
In our presentation we aim to put forward for further consideration some ideas concerning not so much the causes as the purposes and effects of this transformation – epistemological, institutional and social. In particular we will focus on the new subject (entity, person) of knowledge and its position in society, and on the crucial change in the mechanisms of scientific knowledge production that may lead to the replacement of scientific knowledge by technologies (complex machines, techniques, skills, tools, methods) and innovations. The key theses we will develop in our presentation are the following:
1. Basically, the concepts of technoscience, post-academic science and transdisciplinarity register and show various aspects of the transformation of science into something new, which we continue to call "science" only for institutional and cultural reasons. Indeed, science is a project of the modern era, which was artificially extended by the historization of scientific rationality; and apparently it has come to its end, as every historical formation does. It seems that "technoscience" is probably the best general term (though subject to a number of restrictions) to denote what we still call "science" but what, in fact, is not science anymore, even though it is consistently taking the place/position of science in society.
2. The term "technoscience" emphasizes an entanglement of science and technology, and it was mainly introduced to distinguish a "new" type of scientific activity from "traditional" ones, with a different epistemic interest, producing different objects with a different ontological status. Yet for us it is important that the concept enables us to capture the drastic changes in the means of production of knowledge and its organization. We claim that scientific knowledge is gradually turning into a component of innovative development, and this means that scientific knowledge and academic science as an institution are being conformed to the principles and rules of functioning of other social spheres – economics, finance, and industry. Government, business and society require science to produce not true/veritable knowledge to understand and explain the (real) world, but information and efficient "knowledge" to create a world, a specific environment, and artefacts. Indeed, we can see that the key functions of the natural sciences are crucially changed as they become part of the circular flow of capital: the main task is the production of potentially commercializable findings which can be constantly reinvested with the purpose of generating innovation. At the same time, "innovation" has shifted from a new idea in the form of a device, to the provision of more effective products (technologies) available to markets, to a cycle of extended capital development which uses new technology as a permanent resource for its growth.
3. Apparently, the development of scientific knowledge will go in the direction of further synthesis with technologies, the strengthening of the latter component, and partly the substitution of scientists by machines and of scientific knowledge by technologies in two forms: producing artefacts (artificial beings and substances) and machines, as well as producing skills (techniques, algorithms) for working with information or for giving and presenting information. Now we can clearly see this in the examples of the explosion of emerging technosciences (e.g.
artificial intelligence, nanotechnology, biomedicine, systems biology and synthetic biology) or of the intervention of neuroscience, based on the wide use of fMRI brain scans, into various areas of human practice and cognition, which results in the formation of the so-called "brain culture". In general, the transformation of science into "technoscience" implies that the production of information, technologies and innovations is its key goal. Thus, we can claim that the task of science is narrowing considerably, as this implies the loss of scientific knowledge's key function of providing the dominant world-view (Weltanschauung). This loss may provoke other significant transformations.
Ends 19:00.
Lilian Bermejo-Luque (University of Granada, Spain)
What should a normative theory of argumentation look like?
ABSTRACT. What should a normative theory of argumentation look like? What makes argumentation reasonable, rational or justified? I address this question by considering two ways of thinking of the relationship between argumentation and reasonableness/rationality/justification that mirror two very different conceptions of what a theory of argumentation should look like. As argumentation theorists, we can either aim at providing criteria for saying that a target-claim is justified, reasonable or rational, or at characterizing justification, rationality or reasonableness from the point of view of the practice of arguing. For the former group of theorists, the main question would be "should we accept this claim on the basis of those reasons?" In turn, for those interested in "characterizing" what good argumentation is, the main question is: "does this piece of argumentation count as good argumentation, taking into account the conception of good argumentation that underlies the practice of arguing?" Both conceptions of Argumentation Theory assimilate the goals of a normative theory of argumentation to the goals of a theory of justification, but the former focuses on the conditions for considering that a target-claim is justified, whereas the latter tries to characterize the very concept of justification from the point of view of the practice of arguing. In this paper, I analyze the rewards and shortcomings of both epistemological conceptions of Argumentation Theory and their corresponding criteriological and transcendental accounts of the sort of objectivity that good argumentation is able to provide.
Bermejo-Luque, L. (2011) Giving Reasons. A Linguistic-Pragmatic Approach to Argumentation Theory. Dordrecht: Springer.
Biro, J., and H. Siegel (1992). "Normativity, Argumentation, and an Epistemic Theory of Fallacies," in F. H. van Eemeren, R. Grootendorst, J. A. Blair and C. A. Willard (eds.), Argumentation Illuminated: Selected Papers from the 1990 International Conference on Argumentation. Dordrecht: Foris, 81-103.
_____ (2006) "In Defense of the Objective Epistemic Approach to Argumentation," in Informal Logic Vol. 26, No. 1, 91-101.
Booth, A. (2014) "Two Reasons Why Epistemic Reasons Are Not Object-Given Reasons," Philosophy and Phenomenological Research, Vol. 89, No. 1, 1–14.
Eemeren, F. H. van, & R. Grootendorst (2004) A Systematic Theory of Argumentation. The Pragma-dialectical Approach. Cambridge: Cambridge University Press.
Feldman, Richard (1994) "Good arguments," in F. F. Schmitt (ed.), Socializing Epistemology: The Social Dimensions of Knowledge. Lanham, MD: Rowman & Littlefield Publishers, 159-188.
Goldman, Alvin I. (2003) "An Epistemological Approach to Argumentation," in Informal Logic Vol. 23, No. 1, 51-63.
Hieronymi, P.
(2005) "The Wrong Kind of Reason," in The Journal of Philosophy Vol. 102, No. 9, 437–57 Putnam, H. (1981) Reason, Truth and History. Cambridge: Cambridge University Press Maria Cerezo (University of Murcia, Spain) Issues at the intersection between metaphysics and biology ABSTRACT. Recent work in Metaphysics and in Philosophy of Science, and in particular in Philosophy of Biology, shows a revival of interest in issues that might be considered to be either metaphysical issues that can be further elucidated by recourse to biological cases or metaphysical consequences that some advancements in Biology have. In some cases, the application of some metaphysical notions to classical debates in Philosophy of Biology helps to clarify what is at stake and to solve some misunderstandings in the discussion. The interactions that can take place between Metaphysics and Biology are therefore of different kinds. In my contribution, I will present some examples in which such interaction takes place and will explore the way in which such interaction takes place. In general, I will present interactions between Evolutionary Biology, Genetics and Developmental Biology and metaphysical notions such as dispositions, identity and persistence, and teleology. Although I will present several examples, I will focus in particular on one or two of them, namely the interaction between metaphysics of dispositions and Genetics, on the one hand, and the one between theories of persistence and the species concept issue in Philosophy of Biology. I will revise the Dispositionalist theory of causation recently proposed by Mumford and Anjum (2011) and evaluate its explanatory potential and difficulties when it is applied to causal analysis in Biology. My main concern is with the application of their theory to Genetics, something that they do as an illustration of their proposal in chapter 10 of their book. I will try to deploy further the advantages and disadvantages of a dispositionalist conception of genes. After introducing some crucial features of their approach, I will revise the advantages of their conception to account for complex biological phenomena, and its potential to overcome the dispute between gene-centrism and developmentalism. However, I will raise a difficulty for the dispositionalist, namely, the difficulty to defend the simultaneity of cause and effect (essential in their proposal) when epigenetic processes are taken into account. I will focus on a particular phenomenon, the mechanism of RNA alternative splicing and will explore some ways out of the difficulty. Secondly, I will address the question of whether the persistence of biological species raises any difficulty for the thesis of the metaphysical equivalence between three-dimensionalism (3D) and four-dimensionalism (4D). I will try to show that, even if one assumes that 'species' is a homonymous term and refers to two entities (evolverons or synchronic species and phylons or diachronic ones), 3D/4D metaphysical equivalence still holds. My argument lies in challenging the strong association between a synchronic view of species and a 3D theory of persistence, and a diachronic view of species and a 4D theory of persistence. In the last part of my contribution, I will try to characterize the way in which Metaphysics and Philosophy of Biology interact in those issues. 
Filip Tvrdý (Palacký University, Czechia) Shunkichi Matsumoto (Tokai University, Japan) How Can We Make Sense of the Relationship between Adaptive Thinking and Heuristic in Evolutionary Psychology? ABSTRACT. Evolutionary psychology was initiated by its pioneers as a discipline to reverse-engineer human psychological mechanisms by largely adopting a forward-looking deductive inference but has subsequently shifted toward a positivism-oriented discipline based on heuristic and empirical testing. On its course, however, the very characteristics which initially defined the methodological advantage of the discipline seem to have been lost; namely, the prospect to predict the human mental constitution from the vantage point of the ancient selection pressures imposed on our ancestors. This is what was supposed to enable the discipline to claim the methodological advantage both over sociobiology and the contemporary cognitive psychology by providing testable predictions about our psychological makeup by way of looking into its deeper root of our evolutionary past. However, with the subsequent trend to emphasize its aspect as heuristics, the roles played by such adaptive thinking has been gradually set aside. According to Rellihan (2012), the type of adaptive thinking typical of evolutionary psychology is in fact what can be termed as 'strong adaptationism,' which is the idea that the force of natural selection is so powerful and overwhelming of any obstacles that the destination of adaptive evolution is uniquely predictable no matter what phenotypes a given population may have started with in the long past --- much stronger version than the one evolutionary psychologists typically think themselves committed to. Thus, the role of adaptive thinking played is more decisive than is normally perceived. Provided this is true, how can we make sense of the relationship between adaptive thinking and the heuristic aspect in evolutionary psychology? In this talk, I will build on Rellihan's analysis that "Heuristics are simply less reliable inference strategies and inference strategies are simply more reliable heuristics" (Rellihan 2012) and argue that the distinction between heuristic and adaptive inference may be expedient. If heuristics are not based on largely adaptive thinking that evolutionarily makes sense, they will not bring forth meaningful hypotheses that deserve to be called evolutionary. Evolutionary psychologists make it a rule to name comparative studies, hunter-gatherer studies, or archeology as the sources of inspiration for their hypothesis generation, not just evolutionary theory (e.g., Machery, forthcoming). Still, if adaptive thinking doesn't constitute an integral part, evolutionary psychology will end up with a mere hodgepodge of heterogeneous bodies of knowledge, which makes us wonder why the whole enterprise ought to be called evolutionary. In another line of defense, some (e.g., Goldfinch 2015) argue that the task for evolutionary psychology as a heuristic program can end with proposing some interesting hypotheses where the task for other relevant adjacent disciplines of actually confirming them starts. This 'division of labor' view of the confirmation strategy will do to some extent but may eventually risk letting go of its disciplinary integration. References Rellihan, M. (2012) "Adaptationism and Adaptive Thinking in Evolutionary Psychology," Philosophical Psychology 25(2): 245-277. Machery, E. (forthcoming in J. 
Prinz ed., Oxford Handbook of Philosophy of Psychology) "Discovery and Confirmation in Evolutionary Psychology". Goldfinch, A. (2015) Rethinking Evolutionary Psychology, Palgrave Macmillan. Jakub Matyja (Polish Academy of Sciences, Poland) Music cognition and transposition heuristics: a peculiar case of mirror neurons ABSTRACT. The aim of my presentation is to analyse how models are constructed in contemporary embodied music cognition research. I introduce and discuss the idea of "transposition heuristic" in cognitive science (of music). Utilizing this heuristic, researchers in cognitive science tend to copy and apply ways of thinking about particular concepts from general cognitive science to their own (sub)field of research. Unless done with proper caution, however, such transposition may lead to particular problems. I will illustrate the use of transposition heuristic with reference to contemporary works in embodied music cognition (e.g., Schiavio et al., 2015; Matyja, 2015). I will show how music cognition researchers tend to take particular concepts (e.g., imagination or simulation) from general cognitive science and apply them to their own field of research (e.g., introducing rather ambiguous concepts of musical imagination or musical simulation). Often, music cognition researchers do not see the need of specifying those concepts. They do, however, construct models on the basis of those unspecified concepts. In my presentation I argue that transposition research heuristic employed while constructing models in embodied music cognition is often fallible. Initially, such transpositions may be inspiring. They, however, are not enough to provide exhaustive models (the "how-actually" explanations) of how musical processing and musical imagination is embodied. I conclude that the transpositions from general cognitive science to its subdisciplines should be performed with proper caution. The talk will be structured in the following way. (1) I begin with introducing the general ideas behind the embodied music cognition research paradigm in cognitive science (e.g., Maes et al., 2014) and its relations to hypothesized simulative function of musical imagination (e.g., Molnar-Szakacs & Overy, 2006; Matyja, 2015). (2) I will show that in addition to research heuristics in cognitive science already discussed in the literature (Bechtel & Richardson, 2010; Craver & Darden, 2013), a careful analysis of recent developments in music cognition research fleshes out what I dub to be the "transposition heuristics". (3) I will show that by their nature research heuristics are fallible, sometimes leading to inadequate formulations of both research problems and corresponding theories. In order to illustrate this problem, I return to previously discussed case studies from music cognition research. (4) I discuss the mechanistic criteria for complete and adequate explanations and show how they relate to my case studies. In particular, I show that contemporary models in embodied music cognition lack accounts on how body and its physical and spatial components (e.g., physical responses to music) shape musical processing. (5) In the light of what has been discussed, I conclude that transpositions from general cognitive science to its particular sub-disciplines should be performed with proper caution. Bechtel, W. & Richardson, R. (2010) Discovering complexity: Decomposition and Localisation in Scientific Research. MIT Press. Craver, C. & Darden, L. (2013). 
In Search of Mechanisms: Discoveries Across the Life Sciences. University of Chicago Press. Maes, P.-J., Leman, M., Palmer, C., & Wanderley, M. M. (2014). Action-based effects on music perception. Frontiers in Psychology, 4(January), 1–14. http://doi.org/10.3389/fpsyg.2013.01008 Matyja, J. R. (2015). The next step: mirror neurons, music, and mechanistic explanation. Frontiers in Psychology, 6(April), 1–3. http://doi.org/10.3389/fpsyg.2015.00409 Molnar-Szakacs, I., & Overy, K. (2006). Music and mirror neurons: from motion to "e"motion. Social Cognitive and Affective Neuroscience, 1(3), 235–41. http://doi.org/10.1093/scan/nsl029 Schiavio, A., Menin, D., & Matyja, J. (2015). Music in the Flesh : Embodied Simulation in Musical Understanding. Psychomusicology: Music, Mind and Brain, Advance On. http://doi.org/http://dx.doi.org/10.1037/pmu0 Błażej Skrzypulec (Polish Academy of Sciences, Poland) What is constitutive for flavour experiences? ABSTRACT. Within contemporary philosophy of perception, it is commonly claimed that flavour experiences are paradigmatic examples of multimodal perceptual experiences (Smith 2013; Stevenson 2014). Typically, the phenomenal character of a flavour experience is determined by the activities of various sensory systems processing, inter alia, gustatory, olfactory, tactile, thermal, and trigeminal data. In fact, virtually all sensory systems, including vision and audition, are believed to influence how we experience flavours. However, there is a strong intuition that not all of these sensory systems make an equal contribution to the phenomenology of flavour experiences. More specifically, it seems that the activities of some sensory systems are constitutive for flavour perception while others merely influence how we experience flavours (see Prescott 2015; Spence 2015). From the philosophical perspective, addressing the above issue requires explicating what it means to say that some factors are 'constitutive' for flavour perception and providing a criterion for distinguishing constitutive and non-constitutive factors. My presentation aims to address this theoretical question in a twofold way. First, a theoretical framework is developed which defines the stronger and weaker senses in which the activities of sensory systems may be constitutive for flavour perception. Second, relying on empirical results in flavour science (e.g., Delwiche 2004; Spence et al. 2014), the constitutive status of activities related to distinct sensory systems in the context of flavour perception is investigated. In particular, I start by providing a notion of minimal constitutivity that is developed relying on considerations presented in works regarding analytic metaphysics (Wilson 2007) and philosophy of science (Couch 2011; Craver 2007). The main intuition behind my conceptualization of constitutiveness, is that being constitutive is closely connected to being necessary. From this perspective, activities of a sensory system S are minimally constitutive for flavour perception if there is a way of obtaining a flavour experience F such that this way of obtaining it requires the presence of an activity of system S. Subsequently, stronger notions of constitutivity are defined, and I explicate how they can be applied in considerations about flavour perception. Finally, I consider the constitutive status of activities associated with functioning of the selected sensory systems relevant for the flavour perception: olfactory, gustatory, tactile, auditory, and visual. 
I argue that activities of all these systems, except the visual one, are at least minimally constitutive for flavour perception. Couch, M. B. (2011). Mechanisms and constitutive relevance. Synthese, 183, 375–388. Craver, C. (2007). Constitutive Explanatory Relevance. Journal of Philosophical Research, 32, 3-20. Delwiche, J. (2004). The impact of perceptual interactions on perceived flavor. Food Quality and Preference, 15, 137–146. Prescott, J. (2015). Multisensory processes in flavour perception and their influence on food choice. Current Opinion in Food Science, 3, 47–52. Smith, B. C. (2013). The nature of sensory experience: the case of taste and tasting. Phenomenology and Mind, 4, 212-227. Spence, C. (2015). Multisensory Flavor Perception. Cell, 161, 24-35. Spence, C., Smith, B. & Auvray, M. (2014). Confusing tastes with flavours. In D. Stokes, M. Matthen, S. Briggs (Eds.), Perception and Its Modalities, Oxford: Oxford University Press, 247-276. Stevenson, R. J. (2014). Object Concepts in the Chemical Senses. Cognitive Science, 38, 1360–1383. Wilson, R. A. (2007). A Puzzle about Material Constitution & How to Solve it: Enriching Constitution Views in Metaphysics. Philosophers' Imprint, 7(5), 1-20.
Abelian Normal subgroup, Quotient Group, and Automorphism Group Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$. Let $\Aut(N)$ be the group of automorphisms of $N$. Suppose that the orders of the groups $G/N$ and $\Aut(N)$ are relatively prime. Then prove that $N$ is contained in the center of $G$. If Quotient $G/H$ is Abelian Group and $H < K \triangleleft G$, then $G/K$ is Abelian Let $H$ and $K$ be normal subgroups of a group $G$. Suppose that $H < K$ and the quotient group $G/H$ is abelian. Then prove that $G/K$ is also an abelian group. Quotient Group of Abelian Group is Abelian Let $G$ be an abelian group and let $N$ be a normal subgroup of $G$. Then prove that the quotient group $G/N$ is also an abelian group. Give a Formula For a Linear Transformation From $\R^2$ to $\R^3$ Let $\{\mathbf{v}_1, \mathbf{v}_2\}$ be a basis of the vector space $\R^2$, where \[\mathbf{v}_1=\begin{bmatrix} \end{bmatrix} \text{ and } \mathbf{v}_2=\begin{bmatrix} \end{bmatrix}.\] The action of a linear transformation $T:\R^2\to \R^3$ on the basis $\{\mathbf{v}_1, \mathbf{v}_2\}$ is given by \begin{align*} T(\mathbf{v}_1)=\begin{bmatrix} \end{bmatrix} \text{ and } T(\mathbf{v}_2)=\begin{bmatrix} \end{bmatrix}. \end{align*} Find the formula of $T(\mathbf{x})$, where \[\mathbf{x}=\begin{bmatrix} x \\ y \end{bmatrix}\in \R^2.\] Each of the following sets is not a subspace of the specified vector space. For each set, give a reason why it is not a subspace. (1) \[S_1=\left\{\, \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \in \R^3 \quad \middle| \quad x_1\geq 0 \,\right\}\] in the vector space $\R^3$. (2) \[S_2=\left\{\, \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \in \R^3 \quad \middle| \quad x_1-4x_2+5x_3=2 \,\right\}\] in the vector space $\R^3$. (3) \[S_3=\left\{\, \begin{bmatrix} x \\ y \end{bmatrix}\in \R^2 \quad \middle| \quad y=x^2 \,\right\}\] in the vector space $\R^2$. (4) Let $P_4$ be the vector space of all polynomials of degree $4$ or less with real coefficients. \[S_4=\{ f(x)\in P_4 \mid f(1) \text{ is an integer}\}\] in the vector space $P_4$. (5) \[S_5=\{ f(x)\in P_4 \mid f(1) \text{ is a rational number}\}\] in the vector space $P_4$. (6) Let $M_{2 \times 2}$ be the vector space of all $2\times 2$ real matrices. \[S_6=\{ A\in M_{2\times 2} \mid \det(A) \neq 0\}\] in the vector space $M_{2\times 2}$. (7) \[S_7=\{ A\in M_{2\times 2} \mid \det(A)=0\}\] in the vector space $M_{2\times 2}$. (Linear Algebra Exam Problem, the Ohio State University) (8) Let $C[-2, 2]$ be the vector space of all real continuous functions defined on the interval $[-2, 2]$. \[S_8=\{ f(x)\in C[-2,2] \mid f(-1)f(1)=0\}\] in the vector space $C[-2, 2]$. (9) \[S_9=\{ f(x) \in C[-1, 1] \mid f(x)\geq 0 \text{ for all } -1\leq x \leq 1\}\] in the vector space $C[-1, 1]$. (10) Let $C^2[a, b]$ be the vector space of all real-valued functions $f(x)$ defined on $[a, b]$, where $f(x)$, $f'(x)$, and $f^{\prime\prime}(x)$ are continuous on $[a, b]$. Here $f'(x), f^{\prime\prime}(x)$ are the first and second derivatives of $f(x)$. \[S_{10}=\{ f(x) \in C^2[-1, 1] \mid f^{\prime\prime}(x)+f(x)=\sin(x) \text{ for all } -1\leq x \leq 1\}\] in the vector space $C[-1, 1]$. (11) Let $S_{11}$ be the set of real polynomials of degree exactly $k$, where $k \geq 1$ is an integer, in the vector space $P_k$. (12) Let $V$ be a vector space and $W \subset V$ a vector subspace.
Define the subset $S_{12}$ to be the complement of $W$, \[ V \setminus W = \{ \mathbf{v} \in V \mid \mathbf{v} \not\in W \}.\] If 2 by 2 Matrices Satisfy $A=AB-BA$, then $A^2$ is Zero Matrix Let $A, B$ be complex $2\times 2$ matrices satisfying the relation \[A=AB-BA.\] Prove that $A^2=O$, where $O$ is the $2\times 2$ zero matrix. Normal Nilpotent Matrix is Zero Matrix A complex square ($n\times n$) matrix $A$ is called normal if \[A^* A=A A^*,\] where $A^*$ denotes the conjugate transpose of $A$, that is $A^*=\bar{A}^{\trans}$. A matrix $A$ is said to be nilpotent if there exists a positive integer $k$ such that $A^k$ is the zero matrix. (a) Prove that if $A$ is both normal and nilpotent, then $A$ is the zero matrix. You may use the fact that every normal matrix is diagonalizable. (b) Give a proof of (a) without referring to eigenvalues and diagonalization. (c) Let $A, B$ be $n\times n$ complex matrices. Prove that if $A$ is normal and $B$ is nilpotent such that $A+B=I$, then $A=I$, where $I$ is the $n\times n$ identity matrix. Application of Field Extension to Linear Combination Consider the cubic polynomial $f(x)=x^3-x+1$ in $\Q[x]$. Let $\alpha$ be any real root of $f(x)$. Then prove that $\sqrt{2}$ cannot be written as a linear combination of $1, \alpha, \alpha^2$ with coefficients in $\Q$. Irreducible Polynomial $x^3+9x+6$ and Inverse Element in Field Extension Prove that the polynomial \[f(x)=x^3+9x+6\] is irreducible over the field of rational numbers $\Q$. Let $\theta$ be a root of $f(x)$. Then find the inverse of $1+\theta$ in the field $\Q(\theta)$. Irreducible Polynomial Over the Ring of Polynomials Over Integral Domain Let $R$ be an integral domain and let $S=R[t]$ be the polynomial ring in $t$ over $R$. Let $n$ be a positive integer. Prove that the polynomial \[f(x)=x^n-t\] in the ring $S[x]$ is irreducible in $S[x]$. Special Linear Group is a Normal Subgroup of General Linear Group Let $G=\GL(n, \R)$ be the general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices. Consider the subset of $G$ defined by \[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\] Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that $\SL(n,\R)$ is a normal subgroup of $G$. The subgroup $\SL(n,\R)$ is called the special linear group. Beautiful Formulas for pi=3.14… The number $\pi$ is defined as the ratio of a circle's circumference $C$ to its diameter $d$: \[\pi=\frac{C}{d}.\] $\pi$ in decimal starts with 3.14… and never ends. I will show you several beautiful formulas for $\pi$. Linear Transformation $T(X)=AX-XA$ and Determinant of Matrix Representation Let $V$ be the vector space of all $n\times n$ real matrices. Let us fix a matrix $A\in V$. Define a map $T: V\to V$ by \[ T(X)=AX-XA\] for each $X\in V$. (a) Prove that $T:V\to V$ is a linear transformation. (b) Let $B$ be a basis of $V$. Let $P$ be the matrix representation of $T$ with respect to $B$. Find the determinant of $P$. Quiz 8. Determine Subsets are Subspaces: Functions Taking Integer Values / Set of Skew-Symmetric Matrices (a) Let $C[-1,1]$ be the vector space over $\R$ of all real-valued continuous functions defined on the interval $[-1, 1]$. Consider the subset $F$ of $C[-1, 1]$ defined by \[F=\{ f(x)\in C[-1, 1] \mid f(0) \text{ is an integer}\}.\] Prove or disprove that $F$ is a subspace of $C[-1, 1]$. (b) Let $n$ be a positive integer. An $n\times n$ matrix $A$ is called skew-symmetric if $A^{\trans}=-A$. Let $M_{n\times n}$ be the vector space over $\R$ of all $n\times n$ real matrices.
Consider the subset $W$ of $M_{n\times n}$ defined by \[W=\{A\in M_{n\times n} \mid A \text{ is skew-symmetric}\}.\] Prove or disprove that $W$ is a subspace of $M_{n\times n}$. If the Order of a Group is Even, then the Number of Elements of Order 2 is Odd Prove that if $G$ is a finite group of even order, then the number of elements of $G$ of order $2$ is odd. A Group is Abelian if and only if Squaring is a Group Homomorphism Let $G$ be a group and define a map $f:G\to G$ by $f(a)=a^2$ for each $a\in G$. Then prove that $G$ is an abelian group if and only if the map $f$ is a group homomorphism (see the sketch below). Determine linear transformation using matrix representation Let $T$ be the linear transformation from the $3$-dimensional vector space $\R^3$ to $\R^3$ itself satisfying the following relations: \[T\left(\begin{bmatrix} \end{bmatrix}\right) =\begin{bmatrix} \end{bmatrix}, \qquad T\left(\begin{bmatrix} \end{bmatrix}\right) = \begin{bmatrix} \end{bmatrix}, \qquad T\left(\begin{bmatrix} \end{bmatrix}\right)= \begin{bmatrix} \end{bmatrix}.\] Then for any vector \[\mathbf{x}=\begin{bmatrix} x \\ y \\ z \end{bmatrix}\in \R^3,\] find the formula for $T(\mathbf{x})$.
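As a quick illustration of the reasoning behind the problem "A Group is Abelian if and only if Squaring is a Group Homomorphism" above, here is a minimal sketch of the key step; nothing beyond the group axioms is assumed. If $f(a)=a^2$ is a homomorphism, then for all $a, b\in G$ \begin{align*} f(ab)=(ab)^2=abab \quad\text{and}\quad f(a)f(b)=a^2b^2=aabb, \end{align*} so $abab=aabb$; cancelling $a$ on the left and $b$ on the right yields $ba=ab$. Conversely, if $G$ is abelian, then $(ab)^2=abab=aabb=a^2b^2=f(a)f(b)$, so $f$ is a group homomorphism.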
BMB Reports Korean Society for Biochemistry and Molecular Biology (생화학분자생물학회) BMB Reports is an international journal devoted to the very rapid dissemination of timely and significant results in diverse fields of biochemistry and molecular biology. Novel findings in the areas of genomics, proteomics, metabolomics, bioinformatics, and systems biology are also considered for publication. For speedy publication of novel knowledge, we aim to offer a first decision to the authors in less than 3 weeks from the submission date. BMB Reports is an open access, online-only journal. The journal publishes peer-reviewed Original Articles and Contributed Mini Reviews. http://submit.bmbreports.org/ CREB and FoxO1: two transcription factors for the regulation of hepatic gluconeogenesis Oh, Kyoung-Jin;Han, Hye-Sook;Kim, Min-Jung;Koo, Seung-Hoi 567 https://doi.org/10.5483/BMBRep.2013.46.12.248 The liver plays a major role in maintaining glucose homeostasis in mammals. Under fasting conditions, hepatic glucose production is critical as a source of fuel to maintain basic functions in other tissues, including skeletal muscle, red blood cells, and the brain. The fasting hormones glucagon and cortisol play major roles during this process, in part by activating the transcription of key enzyme genes in gluconeogenesis, such as phosphoenolpyruvate carboxykinase (PEPCK) and glucose 6-phosphatase catalytic subunit (G6Pase). Conversely, gluconeogenic transcription is repressed by pancreatic insulin under feeding conditions, which effectively inhibits transcriptional activator complexes by either promoting post-translational modifications or activating transcriptional inhibitors in the liver, resulting in a reduction of hepatic glucose output. These transcriptional regulatory machineries have been highlighted as targets for type 2 diabetes drugs to control glycemia, so understanding the complex regulatory mechanisms of the transcription circuits for hepatic gluconeogenesis is critical for the potential development of therapeutic tools for the treatment of this disease. In this review, the current understanding regarding the roles of two key transcriptional activators, CREB and FoxO1, in the regulation of the hepatic gluconeogenic program is discussed. Microorganism lipid droplets and biofuel development Liu, Yingmei;Zhang, Congyan;Shen, Xipeng;Zhang, Xuelin;Cichello, Simon;Guan, Hongbin;Liu, Pingsheng 575 The lipid droplet (LD) is a cellular organelle that stores neutral lipids as a source of energy and carbon. However, recent research has shown that the organelle is also involved in lipid synthesis, transportation, and metabolism, as well as in mediating cellular protein storage and degradation. Besides multicellular organisms, some unicellular microorganisms have been observed to contain LDs. The organelle has been isolated and characterized from numerous organisms. Triacylglycerol (TAG) accumulation in LDs can be in excess of 50% of the dry weight in some microorganisms, and a maximum of 87% in some instances. These microorganisms include eukaryotes such as yeast and green algae as well as prokaryotes such as bacteria. Some organisms obtain carbon from $CO_2$ via photosynthesis, while the majority utilize carbon from various types of biomass. Therefore, the high TAG content generated by utilizing waste or cheap biomass, coupled with an efficient conversion rate, presents these organisms as bio-tech 'factories' to produce biodiesel.
This review summarizes LD research in these organisms and provides useful information for further LD biological research and microorganism biodiesel development. Identification of anti-adipogenic proteins in adult bovine serum suppressing 3T3-L1 preadipocyte differentiation Park, Jeongho;Park, Jihyun;Nahm, Sang-Soep;Choi, Inho;Kim, Jihoe 582 Adipocyte differentiation is a complex developmental process forming adipocytes from various precursor cells. The murine 3T3-L1 preadipocyte cell line has been most frequently used in the studies of adipocyte differentiation. Differentiation of 3T3-L1 preadipocytes includes a medium containing fetal bovine serum (FBS) with hormonal induction. In this study, we observed that differentiation medium containing adult bovine serum (ABS) instead of FBS did not support differentiation of preadipocytes. Impaired adipocyte differentiation was due to the presence of a serum protein factor in ABS that suppresses differentiation of preadipocytes. Using a proteomic analysis, alpha-2-macroglobulin and paraoxonase/arylesterase 1, which were previously shown to suppress differentiation of preadipocytes, were identified as anti-adipogenic proteins. Although their functional mechanisms have not yet been elucidated, the anti-adipogenic effects of these proteins are discussed. Identification of the novel substrates for caspase-6 in apoptosis using proteomic approaches Cho, Jin Hwa;Lee, Phil Young;Son, Woo-Chan;Chi, Seung-Wook;Park, Byoung Chul;Kim, Jeong-Hoon;Park, Sung Goo 588 Apoptosis, programmed cell death, is a process involved in the development and maintenance of cell homeostasis in multicellular organisms. It is typically accompanied by the activation of a class of cysteine proteases called caspases. Apoptotic caspases are classified into the initiator caspases and the executioner caspases, according to the stage of their action in apoptotic processes. Although caspase-3, a typical executioner caspase, has been studied for its mechanism and substrates, little is known of caspase-6, one of the executioner caspases. To understand the biological functions of caspase-6, we performed proteomics analyses, to seek for novel caspase-6 substrates, using recombinant caspase-6 and HepG2 extract. Consequently, 34 different candidate proteins were identified, through 2-dimensional electrophoresis/MALDI-TOF analyses. Of these identified proteins, 8 proteins were validated with in vitro and in vivo cleavage assay. Herein, we report that HAUSP, Kinesin5B, GEP100, SDCCAG3 and PARD3 are novel substrates for caspase-6 during apoptosis. Binding model for eriodictyol to Jun-N terminal kinase and its anti-inflammatory signaling pathway Lee, Eunjung;Jeong, Ki-Woong;Shin, Areum;Jin, Bonghwan;Jnawali, Hum Nath;Jun, Bong-Hyun;Lee, Jee-Young;Heo, Yong-Seok;Kim, Yangmee 594 The anti-inflammatory activity of eriodictyol and its mode of action were investigated. Eriodictyol suppressed tumor necrosis factor (mTNF)-${\alpha}$, inducible nitric oxide synthase (miNOS), interleukin (mIL)-6, macrophage inflammatory protein (mMIP)-1, and mMIP-2 cytokine release in LPS-stimulated macrophages. We found that the anti-inflammatory cascade of eriodictyol is mediated through the Toll-like Receptor (TLR)4/CD14, p38 mitogen-activated protein kinases (MAPK), extracellular-signal-regulated kinase (ERK), Jun-N terminal kinase (JNK), and cyclooxygenase (COX)-2 pathway. 
Fluorescence quenching and saturation-transfer difference (STD) NMR experiments showed that eriodictyol exhibits good binding affinity to JNK, $8.79{\times}10^5M^{-1}$. Based on a docking study, we propose a model of eriodictyol and JNK binding, in which eriodictyol forms 3 hydrogen bonds with the side chains of Lys55, Met111, and Asp169 in JNK, and in which the hydroxyl groups of the B ring play key roles in binding interactions with JNK. Therefore, eriodictyol may be a potent anti-inflammatory inhibitor of JNK. Feasibility of simultaneous measurement of cytosolic calcium and hydrogen peroxide in vascular smooth muscle cells Chang, Kyung-Hwa;Park, Jung-Min;Lee, Moo-Yeol 600 Interplay between calcium ions ($Ca^{2+}$) and reactive oxygen species (ROS) delicately controls diverse pathophysiological functions of vascular smooth muscle cells (VSMCs). However, details of the $Ca^{2+}$ and ROS signaling network have been hindered by the absence of a method for dual measurement of $Ca^{2+}$ and ROS. Here, a real-time monitoring system for $Ca^{2+}$ and ROS was established using a genetically encoded hydrogen peroxide indicator, HyPer, and a ratiometric $Ca^{2+}$ indicator, fura-2. For the simultaneous detection of fura-2 and HyPer signals, 540 nm emission filter and 500 nm~ dichroic beamsplitter were combined with conventional exciters. The wide excitation spectrum of HyPer resulted in marginal cross-contamination with fura-2 signal. However, physiological $Ca^{2+}$ transient and hydrogen peroxide were practically measurable in HyPer-expressing, fura-2-loaded VSMCs. Indeed, distinct $Ca^{2+}$ and ROS signals could be successfully detected in serotonin-stimulated VSMCs. The system established in this study is applicable to studies of crosstalk between $Ca^{2+}$ and ROS. Novel AGLP-1 albumin fusion protein as a long-lasting agent for type 2 diabetes Kim, Yong-Mo;Lee, Sang Mee;Chung, Hye-Shin 606 Glucagon like peptide-1 (GLP-1) regulates glucose mediated-insulin secretion, nutrient accumulation, and ${\beta}$-cell growth. Despite the potential therapeutic usage for type 2 diabetes (T2D), GLP-1 has a short half-life in vivo ($t_{1/2}$ <2 min). In an attempt to prolong half-life, GLP-1 fusion proteins were genetically engineered: GLP-1 human serum albumin fusion (GLP-1/HSA), AGLP-1/HSA which has an additional alanine at the N-terminus of GLP-1, and AGLP-1-L/HSA, in which a peptide linker is inserted between AGLP-1 and HSA. Recombinant fusion proteins secreted from the Chinese Hamster Ovary-K1 (CHO-K1) cell line were purified with high purity (>96%). AGLP-1 fusion protein was resistant against the dipeptidyl peptidase-IV (DPP-IV). The fusion proteins activated cAMP-mediated signaling in rat insulinoma INS-1 cells. Furthermore, a C57BL/6N mice pharmacodynamics study exhibited that AGLP-1-L/HSA effectively reduced blood glucose level compared to AGLP-1/HSA. Molecular mechanisms of luteolin-7-O-glucoside-induced growth inhibition on human liver cancer cells: G2/M cell cycle arrest and caspase-independent apoptotic signaling pathways Hwang, Yu-Jin;Lee, Eun-Ju;Kim, Haeng-Ran;Hwang, Kyung-A 611 Luteolin-7-O-glucoside (LUT7G), a flavone subclass of flavonoids, has been found to increase anti-oxidant and anti-inflammatory activity, as well as cytotoxic effects. However, the mechanism of how LUT7G induces apoptosis and regulates cell cycles remains poorly understood. 
In this study, we examined the effects of LUT7G on the growth inhibition of tumors, cell cycle arrest, induction of ROS generation, and the involved signaling pathway in human hepatocarcinoma HepG2 cells. The proliferation of HepG2 cells was decreased by LUT7G in a dose-dependent manner. The growth inhibition was due primarily to the G2/M phase arrest and ROS generation. Moreover, the phosphorylation of JNK was increased by LUT7G. These results suggest that the anti-proliferative effect of LUT7G on HepG2 is associated with G2/M phase cell cycle arrest by JNK activation. Nuclear Rac1 regulates the bFGF-induced neurite outgrowth in PC12 cells Kim, Eung-Gook;Shin, Eun-Young 617 Rac1 plays a key role in neurite outgrowth via reorganization of the actin cytoskeleton. The molecular mechanisms underlying Rac1-mediated actin dynamics in the cytosol and plasma membrane have been intensively studied, but the nuclear function of Rac1 in neurite outgrowth has not yet been addressed. Using subcellular fractionation and immunocytochemistry, we sought to explore the role of nuclear Rac1 in neurite outgrowth. bFGF, a strong agonist for neurite outgrowth in PC12 cells, stimulated the nuclear accumulation of an active form of Rac1. Rac1-PBR (Q) mutant, in which six basic residues in the polybasic region at the C-terminus were replaced by glutamine, didn't accumulate in the nucleus. In comparison with control cells, cells expressing this mutant form of Rac1 displayed a marked defect in extending neurites that was concomitant with reduced expression of MAP2 and MEK-1. These results suggest that Rac1 translocation to the nucleus functionally correlates with bFGF-induced neurite outgrowth.
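As a side note on the binding-affinity figure quoted in the eriodictyol abstract above (an association constant of $8.79{\times}10^5 M^{-1}$ for JNK), the short sketch below shows how such a constant translates into a dissociation constant and an estimated bound fraction for a simple 1:1 binding model. This is only an illustrative calculation under stated assumptions; the ligand concentrations used are example values chosen here, not numbers taken from the study.

# Minimal sketch: interpreting a 1:1 binding constant (illustrative only).
# Ka is the association constant reported in the abstract (8.79e5 M^-1);
# the ligand concentrations below are assumed example values, not data from the study.

Ka = 8.79e5          # association constant, M^-1
Kd = 1.0 / Ka        # dissociation constant, M (about 1.1e-6 M, i.e. roughly 1 micromolar)

def fraction_bound(ligand_conc_M: float, kd_M: float = Kd) -> float:
    """Fraction of protein bound at a given free-ligand concentration
    for a simple 1:1 binding equilibrium: f = [L] / ([L] + Kd)."""
    return ligand_conc_M / (ligand_conc_M + kd_M)

if __name__ == "__main__":
    print(f"Kd = {Kd:.2e} M")
    for conc in (1e-7, 1e-6, 1e-5):   # assumed example concentrations in M
        print(f"[L] = {conc:.0e} M -> fraction bound = {fraction_bound(conc):.2f}")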
Exact controllability for the Lamé system. Mathematical Control & Related Fields, December 2015, 5(4): 743-760. doi: 10.3934/mcrf.2015.5.743. Belhassen Dehman 1 and Jean-Pierre Raymond 2. 1 Département de Mathématiques, Faculté des Sciences de Tunis, Université de Tunis El Manar, 2092 El Manar, Tunisia; 2 Institut de Mathématiques de Toulouse, Université Paul Sabatier & CNRS, 31062 Toulouse Cedex. Received August 2014; Revised May 2015; Published October 2015. In this article, we prove an exact boundary controllability result for the isotropic elastic wave system in a bounded domain $\Omega$ of $\mathbb{R}^{3}$. This result is obtained under a microlocal condition linking the bicharacteristic paths of the system and the region of the boundary on which the control acts. This condition is to be compared with the so-called Geometric Control Condition by Bardos, Lebeau and Rauch [3]. The proof relies on microlocal tools, namely the propagation of the $C^{\infty}$ wave front and microlocal defect measures. Keywords: elasticity, Lamé system, microlocal defect measures, geometric control condition, controllability, propagation of wave front. Mathematics Subject Classification: Primary: 93C20, 93B05; Secondary: 74B05, 35L51, 35L0. Citation: Belhassen Dehman, Jean-Pierre Raymond. Exact controllability for the Lamé system. Mathematical Control & Related Fields, 2015, 5 (4) : 743-760. doi: 10.3934/mcrf.2015.5.743. References: [1] L. Aloui, Stabilisation Neumann pour l'équation des ondes sur un domaine extérieur, J. Math. Pures Appl., 81 (2002), 1113-1134. doi: 10.1016/S0021-7824(02)01261-8. [2] K. Andersson and R. Melrose, The propagation of singularities along gliding rays, Invent. Math., 41 (1977), 197-232. doi: 10.1007/BF01403048. [3] C. Bardos, G. Lebeau and J. Rauch, Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary, SIAM J. Control Optimization, 30 (1992), 1024-1065. doi: 10.1137/0330055. [4] C. Bardos, T. Masrour and F. Tatout, Singularités du problème d'élastodynamique, C. R. Acad. Sci. Paris Sér. I Math., 320 (1995), 1157-1160. [5] C. Bardos, T. Masrour and F. Tatout, Condition nécessaire et suffisante pour la contrôlabilité exacte et la stabilisation du problème de l'élastodynamique, C. R. Acad. Sci. Paris Sér. I Math., 320 (1995), 1279-1281. [6] C. Bardos, T. Masrour and F. Tatout, Observation and control of elastic waves, in Singularities and Oscillations (eds. J. Rauch, et al.), IMA Vol. Math. Appl., 91, Springer, New York, NY, 1997, 1-16. doi: 10.1007/978-1-4612-1972-9_1. [7] M. Bellassoued, Energy decay for the elastic wave equation with a local time-dependent nonlinear damping, Acta Math. Sinica, English Series, 24 (2008), 1175-1192. doi: 10.1007/s10114-007-6468-2. [8] N. Burq, Contrôle de l'équation des ondes dans des ouverts comportant des coins, Bull. Soc. Math. France, 126 (1998), 601-637. [9] N. Burq and P. Gérard, Condition nécessaire et suffisante pour la contrôlabilité exacte des ondes, Comptes Rendus de l'Académie des Sciences, Série I, 325 (1997), 749-752. doi: 10.1016/S0764-4442(97)80053-5. [10] N. Burq and G. Lebeau, Mesures de défaut de compacité, application au système de Lamé, Ann. Scient. Éc. Norm. Sup., 4e série, 34 (2001), 817-870. doi: 10.1016/S0012-9593(01)01078-3. [11] M. Daoulatli, B. Dehman and M. Khenissi, Local energy decay for the elastic system with nonlinear damping in an exterior domain, SIAM J. Control Optim., 48 (2010), 5254-5275. doi: 10.1137/090757332. [12] B. Dehman and L. Robbiano, La propriété du prolongement unique pour un système elliptique. Le système de Lamé, J. Math. Pures Appl. (9), 72 (1993), 475-492. [13] T. Duyckaerts, Thèse de Doctorat, Université de Paris Sud, 2004. [14] P. Gérard, Microlocal defect measures, Comm. Partial Differential Equations, 16 (1991), 1761-1794. doi: 10.1080/03605309108820822. [15] L. Hörmander, The Analysis of Partial Differential Operators, Vol. 3, Springer-Verlag, 1985. [16] G. Lebeau and E. Zuazua, Decay rates for the three-dimensional linear system of thermoelasticity, Arch. Ration. Mech. Anal., 148 (1999), 179-231. doi: 10.1007/s002050050160. [17] J.-L. Lions, Contrôlabilité exacte, Stabilisation et Perturbations de Systèmes Distribués, Tome 1, Rech. Math. Appl., 8, Masson, Paris, 1988. [18] M. Taylor, Pseudodifferential Operators, Princeton University Press, Princeton, NJ, 1981. [19] K. Yamamoto, Singularities of solutions to the boundary value problems for elastic and Maxwell's equations, Japan J. Math., 14 (1988), 119-163. [20] K. Yamamoto, Exponential energy decay of solutions of elastic wave equations with the Dirichlet condition, Math. Scand., 65 (1989), 206-220. [21] K. Yamamoto, Propagation of microlocal regularities in Sobolev spaces to solutions of boundary value problems for elastic equations, Hokkaido Math. Journal, 35 (2006), 497-545. doi: 10.14492/hokmj/1285766414.
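For readers less familiar with the terminology in the abstract above: the isotropic elastic wave (Lamé) system referred to there can be written, in its standard textbook form and assuming constant Lamé coefficients $\lambda, \mu$ and unit density (a normalization chosen here for illustration, not notation taken from the paper), as \[\partial_t^2 u - \mu\,\Delta u - (\lambda+\mu)\,\nabla(\operatorname{div} u) = 0 \quad \text{in } (0,T)\times\Omega,\] for a vector-valued displacement field $u(t,x)\in\mathbb{R}^3$, supplemented with boundary conditions on $\partial\Omega$ and initial data. Exact boundary controllability then asks whether a control acting on a suitable part of the boundary can steer any initial state to rest (equivalently, to any prescribed final state) in finite time.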
cushion sea star trophic level It is converted to one transfer efficiency estimate by raising to the power of one over the number of transfer steps (trophic level 5 − trophic level 1 = 4), TEeff_ATL ¼. Sea star (Image Credit: Flickr) ... they are present throughout all the trophic levels in a given ecosystem. << 0 0. . Energy is transfered through the consumption of organisms. https://oceana.org/marine-life/corals-and-other-invertebrates/cushion-star Left: Bob Paine / Alamy.com ; Right: Kevin Schafer / Alamy Stock Photo. In this study, we anticipated that habitat divergence and diet shifts might lead to exposure to different parasites and shifts in infection risk. … It also describes marine trophic levels, nudibranchs' defenses, snail and sea star infestations, odd fish behaviors, sea star autotomy, and much more. Graham et al. Thus, the term refers to the trophic level of an animal in a particular food chain. 8 0 obj ... each podium in a sea star is composed of two structures : muscular is calles. >> large body of salt water that covers most of the Earth. Also called an autotroph. When it catches its food, the sea star will wrap its arms around the animal's shell and pull it open just slightly. /ca 1.0 4 0 obj organism that breaks down dead organic material; also sometimes referred to as detritivores. lobster, bicolor damselfish, polychaete worm, cushion sea star, and southern stingray. one of three positions on the food chain: autotrophs (first), herbivores (second), and carnivores and omnivores (third). In this ecosystem, the sea star was the keystone species. Mutualism: This is … ASAP :) Answer Save. endobj Examples are swordfish, seals and gannets. They are usually carnivores, but can be omnivores as well. Trophic Level- The Starfish is on the third trophic level making it a secondary consumer. This study aimed to test whether any of the five common marine invertebrates around Adelaide Island (Western Antarctic Peninsula) displayed MCA: the suspension-feeding holothurian Heterocucumis steineni, the grazing limpet Nacella concinna, and the omnivorous brittle star, cushion star and sea-urchin Ophionotus victoriae, Odontaster validus and Sterechinus neumayeri, respectively. demonstrate how humans influence the trophic structure of coral reef fish assemblages. 2. endobj Having seen what the term stands for, let's look at some examples of tertiary consumers from various ecosystems. What are the primary producers in the coral reef food web illustration? . If there is low food availability, the cushion star will re-absorb its own tissue, which leads to a reduction in size. The 75 sea star samples were taken from 12 sea star taxa (Supplementary Table 1). The top predator in the coral reef food web is a blacktip reef shark. >> One of the most famous and feared tertiary consumers in the world is the great white shark. sea otter would probably be a tertiary feeder and a starfish a secondary feeder. [3] Secondary consumers (ex. Also called an alpha predator or apex predator. /CSp /DeviceRGB The stars can be red, pink, pale orange or purple. 1 decade ago. 12% of known sea star species living in the Southern Ocean Important group of Antarctic benthos with possible trophic diversity (McClintock 1994) Regional variations in changes of sea ice extent and ice season The bottom trophic level is the producers, which make their own food. secondary consumers (ex. /SM 0.02 In ecology, the trophic level is the position that an organism occupies in a food chain - what it eats, and what eats it. 
Trending Questions. Hundreds of tube feet, which help feeding and movement. 1 0 obj Each living thing in an ecosystem is part of multiple food chains. This makes ecology an exciting field of study, because new discoveries can have an enormous impact. A sea star can lose one or more arms and grow new ones. Illustration Gallery. The red cushion star occurs in many regions of the Western Central Atlantic, including the Bahamas, Cape Frio, Cape Hatteras, the Caribbean Sea, Florida, the Gulf of Mexico, Guyanas and Yucatán. Ecology questions- sea otter, sea star trophic levels? endobj Foundation species support communities across a wide range of ecosystems. On the other hand, many aquatic detritivores, including barnacles, polychaete worms and corals obtain their nutrition by feeding on floating organic detritus (called marine snow). Trophic level, step in a nutritive series, or food chain, of an ecosystem.The organisms of a chain are classified into these levels on the basis of their feeding behaviour.The first and lowest level contains the producers, green plants.The plants or their products are consumed by the second-level organisms—the herbivores, or plant eaters. insect) Echinoderm trophic level. Win-lose situation. /CreationDate (D:20150711173035-04'00') The trophic level assigned an animal depends upon the trophic levels of the food items it eats. endobj In Abstracts: Second International Conference on Marine Bioinvasions, March 9-11, 2001. Secondary consumers prey on primary-consumers. Detritus feeders (Bathybiaster sp. )Predators on active prey (Labidiaster sp. Mendel Skulski 53:56 Non-trophic interactions are considered the primary way foundation species influence communities, with their trophic interactions having little impact on community structure. /Resources 10 0 R top carnivore. /Type /ExtGState It is important to note, that the reference to differentiate a red cushion starfish in its adult or juvenile stage; is linked to its size, since less than 8 centimeters in diameter is considered the measure attributed to an individual in juvenile stage, while adults usually exceed 8 centimeters.. How is energy transfered through a food web? [ ] sea otters (Enhydra lutris) sea gulls >> With tentacles up to 120 feet long, some individuals even rival in size the blue whale, the largest animal in the world.Most lion's mane jellyfish live in the Arctic and North Pacific Ocean from Alaska to Washington where the waters are cool. community and interactions of living and nonliving things in an area. >> Additionally, they may be able to shut the ambulacral grooves which contain the tube-feet, and then spread the spines over them protectively. Step 1: She stings the cockroach in the underside. Not all energy is transferred from one trophic level to another. )Omnivores (Diplasterias sp. © 1996–2020 National Geographic Society. /Pattern << The primary consumers are zooplankton, corals, sponges, Atlantic blue tang, and queen conch. Though this is the first report of trophic shift for this species, shifts have been reported for other members of the Genus, including the Antarctic sea urchin Sterechinus neumayeri . /F7 7 0 R star hurler: Robert Paine at Mukkaw Bay, on the Olympic Peninsula in Washington, in 1974, and again recently. %PDF-1.4 Why Is the Planet Green? ... Knit Star Pillow Pattern + digital download knit star pillow pattern + starfish pillow pattern + star cushion candycoloredknits. Anonymous. They prey on secondary consumers. 
5 out of 5 stars (13) 13 reviews << The Red Cushion Habitat And Distribution. The sum of reads for all libraries passing the quality control parameters for this study totaled 2,229,468 reads with a mean library depth of 25,924 reads/library. stream The bottom trophic level is made of producers. /ColorSpace << /SA true humans) Porifera examples. endobj group of organisms linked in order of the food they eat, from producers to consumers, and from prey, predators, scavengers, and decomposers. Organisms that eat autotrophs are called herbivores or primary consumers. /Producer (�� Q t 4 . Get your answers by asking now. sea otter would probably be a tertiary feeder and a starfish a secondary feeder. As mid‐trophic‐level consumers, lobsters function in the transfer of energy and materials from primary producers and primary consumers to apex predators. organism on the food chain that depends on autotrophs (producers) or other consumers for food, nutrition, and energy. Decomposers are organisms that break down dead organic material and return nutrients to the sediment. substance an organism needs for energy, growth, and life. It is unclear if chiton homing functions in the same way, but they may leave chemical cues along the rock surface and at the home scar which their olfactory senses can detect and home in on. [/Pattern /DeviceRGB] All rights reserved. Stable isotope analyses provide the means to examine the trophic role of animals in complex food webs. /AIS false Trophic level, step in a nutritive series, or food chain, of an ecosystem.The organisms of a chain are classified into these levels on the basis of their feeding behaviour.The first and lowest level contains the producers, green plants.The plants or their products are consumed by the second-level organisms—the herbivores, or plant eaters. /Parent 2 0 R In comparison, the Whale Shark feeds primarily one or two trophic levels above the phytoplankton. The decomposers are the polychaete worm and the queen conch. Energy is used by organisms at each trophic level, meaning that only part of the energy available at one trophic level is passed on to the next level. Plants ( phytoplankton ) occupy trophic level 1; all the higher levels consist of animals. 46; SI Appendix); our model predicted changes in trophic level of up to 27% for species like largehead hairtail from those used in Watson et al. Step 2: She calmly, carefully and clinically stings the roach in the head. When the killer whales entered the system, they turned that three-trophic level system into a four-trophic level system by eating the otters, and it behaved exactly as that theory would predict and that is – that it became an even-numbered ecosystem. Each food chain is one possible path that energy and nutrients may take as they move through the ecosystem. the orientation of a sea star is. Investigate the trophic levels of a coral reef food web. The sea urchin grazer-omnivore S. agassizi was the only species that shifted its trophic level between the F area (primary consumer) and the M area (secondary consumer). an oral mouth facing downward and a aboral spiny face. Ecosystem- The Starfish is found in saltwater oceans. Also called a food cycle. Sea stars are a diverse group of animals, but most of them share the following characteristics: Hard plates under their skin instead of a backbone. That's what sponges are great for, they clean and filter out the water wherever they live. 
Sea stars commonly have 5 arms, but there are species with many more, including the New Zealand sea star that has 11 arms. The sea star eats by attaching to prey and extending its stomach out through its mouth. A food web consists of all the food chains in a single ecosystem. endobj Primary and Secondary consumers (ex. (Puglisi, 2000) Behavior. Each time you go up one trophic level the mass of organisms goes down. << Under low fishing, biomass accumulates in upper and lower trophic levels, implying a more direct link between primary production and high trophic level fish. a. alga b. grasshopper c. zooplankton d. eagle e. fungus 1. decomposer 2. producer 3. tertiary consumer 4. secondary consumer 5. primary consumer. She aims for a specific part of the brain for an equally specific effect. Ecology has a long history of being a historical science, and has only started to become an experimental science in the last 50 - 60 years. Trophic levels are the different layers of a food web. sponges. 1 Answer. Food webs consist of different organism groupings called trophic levels. This is because most of the biomass/energy is lost and so does not become part of the biomass in the next level up. http://www.nationalgeographic.org/media/coral-reef-food-web/, Consumers are organisms that depend on producers or other consumers to get their food, energy, and nutrition. demonstrate how humans influence the trophic structure of coral reef fish assemblages. Navy Blue Starfish Pillow Cover Sea Star - Seafriends Premier Navy - Lumbar 12 14 16 18 20 22 24 26 Euro - Hidden Zipper Closure motion52. Biology, Ecology, Earth Science, Oceanography, For the complete illustrations with media resources, visit: The otter-urchin-kelp system is an example of a three- trophic level system, where you have predatory otters influencing the herbivorous sea urchins, thus influencing the plants, the kelps in the system. They eat producers. From shop candycoloredknits. Hand out copies of the script (see pages 110-114) and assign a role to each student. Phytoplankton are then consumed at the next trophic level in the food chain by microscopic animals called zooplankton. In this example of a coral reef, there are producers, consumers, and decomposers. organism that eats a variety of organisms, including plants, animals, and fungi. Producers are usually plants, but can also be algae or bacteria. (Grzimeck, 1972) Known Predators. Favorite Answer. We would like to show you a description here but the site won't allow us. URI: http://rs.tdwg.org/dwc/terms/habitat Definition: A category or description of the habitat in which the Event occurred. Reducing trophic levels by only 14% for secondary consumers and above can bring fisheries in balance with primary productivity (reproducing ref. 10 0 obj ecologically, sea stars are at what trophic level. Parasitism: This is when organisms live on a host organism but the host organism is harmed and the organism on the host benefits. Reducing trophic levels by only 14% for secondary consumers and above can bring fisheries in balance with primary productivity (reproducing ref. Detritivores and decomposers complete the cycling of energy through the food web. Sea otters and gulls prey on this starfish. For example, a queen conch can be both a consumer and a detritivore, or decomposer. Trophic Cascade: A classic example of a trophic cascade involves sea otters, urchins, and kelp. At the larval stage, Pisaster ochraceus are filter feeders, eating plankton. Tiny organisms can be swallowed whole. 
They are large‐bodied and conspicuous, and can comprise a considerable proportion of the collective consumer biomass. /CA 1.0 For the complete illustrations with media resources, visit: ... organisms at each trophic level, meaning that only part of the energy available at one trophic level … branch of biology that studies the relationship between living organisms and their environment. )Unknown (Peribolaster sp.) Relevance. A trophic level refers to the organisms position in the food chain. These. << /GSa 3 0 R Not all energy is transferred from one trophic level to another. The intermediate consumers are the sergeant major, flaming tongue snail, bar jack, grouper, Caribbean lobster, bicolor damselfish, polychaete worm, cushion sea star, and southern stingray. /F6 6 0 R sea star) Chordate trophic level. Extraordinarily rapid life-history divergence between Cryptasterina sea star species. A sea star's mouth is on its underside. Trending Questions. Here, we used stable isotope analyses to characterize the feeding ecology of reef manta rays (Mobula alfredi) at a remote coral reef in the Western Indian Ocean.Muscle samples of M. alfredi were collected from D'Arros Island and St. Joseph Atoll, Republic of Seychelles, in November 2016 and 2017. The lion's mane jellyfish cannot be missed in the open ocean where it prefers to float about. /SMask /None>> New Orleans, LA Summary: Report into the rate of spread by the invasive sea star. As keystone species, sea stars serve to maintain biodiversity and species distribution through trophic level interactions in marine ecosystems. /CSpg /DeviceGray Trophic Level Example Organisms Ingestion Impacts Entanglement Impacts Primary Consumers #1 elkhorn coral #15 purple sea fan #18 Atlantic blue tang ... #33 cushion sea star #35 southern stingray A straw blocking a sea turtle's nostril can make it difficult to … Chemical components in sponges have been found to contribute to the production of more successful antibiotics that help treat/cure strep throat, arthritis, leukemia, and more. Energy is used by organisms at each trophic level, meaning that only part of the energy available at one trophic level is passed on to the next level. Producers make their own food, providing energy for the rest of the ecosystem. Tertiary-consumers are carnivores that mostly eat other carnivores. All of the interconnected and overlapping food chains in an ecosystem make up a food web. To understand the role of predatory starfish he hurled them from an area and later returned to assess the sea life without them. There are many different types of consumers. Factors influencing the distribution and abundance of the exotic sea star Asterias amunrensis during the early phase of its establishment in Port Phillip Bay, Southern California. Recently, Sea Star Wasting Disease (SSWD) has caused widespread mass mortality in several sea star species from the Pacific Coast of the United States of America (USA) and Asterias forbesi on the Atlantic Coast. Graham et al. The fourth trophic level consists of predatory fish, marine mammals and seabirds that consume forage fish. It is calculated as the production of all large fishes (trophic level 5) divided by the net primary production (trophic level 1) in each model grid cell. 46; SI Appendix); our model predicted changes in trophic level of up to 27% for species like largehead hairtail from those used in Watson et al. 
/XObject << Sea star (Image Credit: Flickr) Some marine detritivores survive on the seabed, and these organisms are generally referred to as bottom-feeders. /Creator (��) >> Spines or spicules covering the top (or dorsal) surface. Apex predators, such as orcas, which can consume seals, and shortfin mako sharks, which can consume swordfish, make up a fifth trophic level. Favorite Answer. /Font << Here, we used stable isotope analyses to characterize the feeding ecology of reef manta rays (Mobula alfredi) at a remote coral reef in the Western Indian Ocean.Muscle samples of M. alfredi were collected from D'Arros Island and St. Joseph Atoll, Republic of Seychelles, in November 2016 and 2017. /ExtGState << The most basic way of viewing trophic levels is producers>herbivores>omnivores>carnivores, but it is also necessary to have detritivores to filter out the left overs. Not all energy is transferred from one trophic level to another. Under heavy fishing, sea urchins replace low trophic levels, driving mean trophic level of fish communities up. ... Caribbean lobster, bicolor damselfish, polychaete worm, cushion sea star, and southern stingray. Answer $\mathrm{a} 2, \mathrm{b} 5, \mathrm{c} 5, \mathrm{d} 3$ or $\mathrm{d} 4, \mathrm{e} 1$ Chapter 20 Communities and Ecosystems Campbell … 0 0. >> organism that eats mainly plants and other producers. Many echinoderms have spines, the spines are part of the internal skeleton and are covered by epidermis. 3 0 obj In the marine food web, most species feed upon a variety of items and, consequently, may fit in more than one trophic level. Ask Question + 100. Spines or spicules covering the top (or dorsal) surface. animal that hunts other animals for food. The primary producers are blue-green algae, phytoplankton, zooxanthelle, seagrass, and brown algae. Under low fishing, biomass accumulates in upper and lower trophic levels, implying a more direct link between primary production and high trophic level fish. jellies) ... (ex. Still have questions? Kingdom: Animalia Phylum: Echinodermata Class: Asteroidea Order: Valvatida Family: Oreasteridae Genus: Oreaster Species: O. reticulates Common name: Cushioned Star An organism that eats herbivores is a carnivore and a secondary consumer. The role of predatory starfish he hurled them from an area and later returned to assess the sea without. Worm, cushion sea star ' s stomach secondary, and queen conch can omnivores! Plants became rare, sponges, Atlantic blue tang, and Mussels around! Decomposers in the food chain is one possible path that energy and nutrients may as... Own food 's shell and pull it open just slightly clinically stings the roach cushion sea star trophic level the coral reef web. It then digests the animal 's shell the lion ' s stomach get their,. Many old ideas have not had a chance to be tested thoroughly the relationship living! Sea urchins replace low trophic levels the fourth trophic level of an that. One role in a food web 110-114 ) and assign a role to each student the biomass in coral! ( you may choose cushion sea star trophic level level more than once ) that is and. The different layers of a trophic Cascade involves sea otters, urchins, southern! Both a consumer and a starfish a secondary feeder called zooplankton from one trophic level refers to the trophic.. Seabirds that consume forage fish is … in this ecosystem, the 's! Of two structures: muscular is calles media resources, visit: http: //www.nationalgeographic.org/media/coral-reef-food-web/ and nonliving things in ecosystem... 
Sea stars are at what trophic level ( you may choose a more... Assess the relative trophic importance of a coral reef food web illustration 's front pair legs. In Abstracts: second International Conference on marine Bioinvasions, March 9-11, 2001 of. The cycling of energy through the ecosystem it then digests the animal 's and! Top predator in the coral reef, there are producers, consumers are that.: the sea star feeds primarily one or more arms and grow new ones at trophic... Omnivores as well are then consumed at the next trophic level to another may be able to shut the grooves. Level consists of predatory starfish he hurled them from an area and later returned to assess sea. As they move through the ecosystem, Mussel or Oyster beds / Alamy.com ; Right Kevin... Which are Clams, Oysters, and energy nutrients to the organisms position the...: a category or description of the script ( see pages 110-114 ) and assign a role each. Food, nutrition, and southern stingray with media resources, visit: http: //rs.tdwg.org/dwc/terms/habitat Definition: category... A sea star that eat autotrophs are called herbivores or primary consumers are organisms that depend on or... Cascade involves sea otters, urchins, and tertiary consumers in the food that... Is when organisms live on a host organism but the host organism is harmed and the organism the. Consumer 5. primary consumer a variety of organisms goes down choose a level more one... Ecologically, sea stars serve to maintain biodiversity and species distribution through trophic level assigned animal! The polychaete worm, cushion sea star pushes its stomach through its mouth and the! Front pair of legs are temporarily paralysed such that it ca n't run away ecosystem, sea.: //rs.tdwg.org/dwc/terms/habitat Definition: a classic example of a coral reef food web illustration ) assign. Arms around the animal 's shell and pull it open just slightly from primary producers are blue-green algae phytoplankton! Large‐Bodied and conspicuous, and brown algae level up coral reef fish.... Of multiple food chains in a food web may be able to shut the ambulacral grooves contain! Food availability, the spines over them protectively over them protectively level making it a secondary consumer herbivores or consumers... Reef shark Orleans, LA Summary: Report into the rate of spread by producers... Retract such sensitive areas as the relative trophic importance of a coral reef food web illustration cushion sea star trophic level! Ftse Bursa Malaysia Small Cap Index Companies, Shure 1/4 Wave Antenna, Bosch Ahs 50-16 Hedge Cutter, Best Laptop For Video Editing, Giant Sea Star Reproduction, 75 Watt 125 Volt Bulb, Microwave Solid State Devices Pdf, How To Make A Dog Gate Out Of Cardboard, cushion sea star trophic level 2020
MathZsolution

Number of ways to write n as a sum of k nonnegative integers

How many ways can I write a positive integer $n$ as a sum of $k$ nonnegative integers up to commutativity? For example, I can write $4$ as $0+0+4$, $0+1+3$, $0+2+2$, and $1+1+2$. I know how to find the number of noncommutative ways to form the sum: Imagine a line of $n+k-1$ positions, where each position can contain either a cat or a divider. If you have $n$ (nameless) cats and $k-1$ dividers, you can split the cats into $k$ groups by choosing positions for the dividers: $\binom{n+k-1}{k-1}$. The size of each group of cats corresponds to one of the nonnegative integers in the sum.

As Brian M. Scott mentions, these are partitions of $n$. However, allowing $0$ into the mix makes them different from the usual definition of a partition (which assumes non-zero parts). However, this can be adjusted for by taking partitions of $n+k$ into $k$ non-zero parts (and subtracting $1$ from each part). If $p(k,n)$ is the number of partitions of $n$ into $k$ non-zero parts, then $p(k,n)$ satisfies the recurrence relation
$$\begin{aligned} p(k,n) &= 0 && \text{if } k>n, \\ p(k,n) &= 1 && \text{if } k=n \text{ or } k=1, \\ p(k,n) &= p(k-1,n-1)+p(k,n-k) && \text{otherwise} \end{aligned}$$
(split on whether some part equals $1$, which can be removed, or every part is at least $2$, in which case $1$ can be subtracted from each of the $k$ parts; this recurrence is explained on Wikipedia). Note: in the above case, remember to change $n$ to $n+k$. This gives a (moderately efficient) method for computing $p(k,n)$.

The number of partitions of $n$ into $k$ parts in $\{0,1,\ldots,n\}$ can be computed in GAP using: NrPartitions(n+k,k); Some small values are listed below:

$$\begin{array}{c|ccccccccccccccc} n\backslash k & k=1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 6 & 1 & 4 & 7 & 9 & 10 & 11 & 11 & 11 & 11 & 11 & 11 & 11 & 11 & 11 & 11 \\ 7 & 1 & 4 & 8 & 11 & 13 & 14 & 15 & 15 & 15 & 15 & 15 & 15 & 15 & 15 & 15 \\ 8 & 1 & 5 & 10 & 15 & 18 & 20 & 21 & 22 & 22 & 22 & 22 & 22 & 22 & 22 & 22 \\ 10 & 1 & 6 & 14 & 23 & 30 & 35 & 38 & 40 & 41 & 42 & 42 & 42 & 42 & 42 & 42 \\ 13 & 1 & 7 & 21 & 39 & 57 & 71 & 82 & 89 & 94 & 97 & 99 & 100 & 101 & 101 & 101 \\ 14 & 1 & 8 & 24 & 47 & 70 & 90 & 105 & 116 & 123 & 128 & 131 & 133 & 134 & 135 & 135 \\ 15 & 1 & 8 & 27 & 54 & 84 & 110 & 131 & 146 & 157 & 164 & 169 & 172 & 174 & 175 & 176 \\ \end{array}$$

If you want a list of the possible partitions, then use: RestrictedPartitions(n,[0..n],k);

Comment: In the latest version of GAP, NrRestrictedPartitions(n,[0..n],k); does not seem to work properly here, since it does not match Size(RestrictedPartitions(n,[0..n],k)); when $k>n$. I emailed the support team about this, and they said that NrRestrictedPartitions and RestrictedPartitions are only intended to be valid for sets of positive integers. (I still think the above is a bug, but let's let that slide.) This means that NrPartitions(n+k,k); is the technically correct choice, and, strictly speaking, we shouldn't use RestrictedPartitions(n,[0..n],k);, but judging from the source code, it will work as expected.

Source : Link , Question Author : Yellow , Answer Author : Douglas S. Stones
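As a cross-check of the recurrence and of the table above, here is a short memoized Python sketch (the helper names `p` and `ways` are ours, not from GAP or the original answer):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(k, n):
    """Number of partitions of n into exactly k non-zero parts."""
    if k > n:
        return 0
    if k == 1 or k == n:
        return 1
    # Either some part equals 1 (drop it: k-1 parts summing to n-1),
    # or every part is >= 2 (subtract 1 from each of the k parts).
    return p(k - 1, n - 1) + p(k, n - k)

def ways(n, k):
    """Ways to write n as a sum of k nonnegative integers, up to order
    (equivalently, partitions of n into at most k parts)."""
    return p(k, n + k)

print(ways(4, 3))   # 4 -> 0+0+4, 0+1+3, 0+2+2, 1+1+2
print(ways(6, 3))   # 7, matching the n = 6, k = 3 entry in the table
```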
https://doi.org/10.1364/OE.382139

Ultrabroadband light absorption based on photonic topological transitions in hyperbolic metamaterials

Xiaoyun Jiang, Tao Wang,* Qingfang Zhong, Ruoqin Yan, and Xing Huang
Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
*Corresponding author: [email protected]

Citation: Xiaoyun Jiang, Tao Wang, Qingfang Zhong, Ruoqin Yan, and Xing Huang, "Ultrabroadband light absorption based on photonic topological transitions in hyperbolic metamaterials," Opt. Express 28, 705-714 (2020).
Original manuscript: November 4, 2019; revised manuscript: December 13, 2019.

Photonic topological transitions (PTTs) in metamaterials open up a novel approach to design a variety of high-performance optical devices and provide a flexible platform for manipulating light-matter interactions at the nanoscale. Here, we present a wideband spectral-selective solar absorber based on a multilayered hyperbolic metamaterial (HMM). Absorptivity of higher than 90% at normal incidence is supported over a wide wavelength range from 300 to 2215 nm, due to the topological change in the isofrequency surface (IFS). The operating bandwidth can be flexibly tailored by adjusting the thicknesses of the metal and dielectric layers. Moreover, the near-ideal absorption performance can be retained well over a wide angular range regardless of the incident light polarization. These features make the proposed design hold great promise for practical applications in energy harvesting.

1. Introduction

In light of the global energy crisis and the rapid deterioration of the ecological environment, the utilization and development of renewable energy are urgently needed. The importance of solar energy as a clean, safe and abundant source of sustainable energy has been recognized. As one of the most commonly used methods of harvesting solar energy, solar thermal systems can directly convert solar radiation into heat, which can be widely applied in many industrial processes, such as steam generation, desalination and wastewater treatment [1–3]. Specifically, in such applications the solar absorber, as an indispensable component, has a great impact on the performance of the entire system. In order to maximize the utilization of solar energy, a solar absorber with spectrally selective absorption is essential, which means it can absorb all the solar energy efficiently while suppressing mid-infrared emission [4–6]. Since solar radiation varies with time and place, polarization- and angle-independence are also crucial for assessing the overall performance of solar absorbers. Therefore, the design of near-ideal solar selective absorbers is of fundamental significance for many practical applications, and the improvement of the absorption efficiency of solar absorbers will greatly promote the development of the solar-thermal industry. In 2008, Landy et al.
presented a single-wavelength metamaterial perfect absorber made of two metallic split-ring resonators and a metallic cutting wire [7]. Later on, various types of ingenious designs have been proposed to tailor the optical properties of the metamaterial absorbers [8–10]. Unfortunately, these schemes suffer common disadvantage of limited bandwidth, which will reflect a great amount of incident energy. To achieve perfect absorption over a broadband, the most common strategy is to combine several different strong resonators together [11–13], but the absorption bandwidth of metamaterial absorbers cannot be broadened significantly, and these multi-sized resonators will add complexity to the nanofabrication. Besides, the slow-light waveguide constructed from a periodic array of sawtoothed anisotropic metamaterial provides another effective approaches for absorbing the electromagnetic radiation over an ultrawide band [14–16]. However, the poor selectivity of absorption spectrum and low melting point metals in HMM nanostructures impede its application potential in solar-thermal energy harvesting. Therefore, it still remains challenging to achieve a wideband spectral-selective solar absorber with simultaneous low cost, high efficiency, and fabrication simplicity. In this paper, we propose a near-ideal solar selective absorber which exhibits near-perfect absorption covering almost the whole solar spectrum. It is realized based on photonic topological transition (PTT) in HMM nanostructure consisting of a periodic SiO$_{2}$/TiO$_{2}$/W multilayer on tungsten substrate. Both simulations and theoretical calculations show that the absorption performance is superior with absorptivity higher than 90% covering the range from 300 to 2215 nm and a near-ideal total photothermal conversion efficiency up to 91.8% at 1000 K, which indicates that most of the incident energy can be absorbed and utilized efficiently. In addition, we give a detailed theoretical description of the underlying physics and prove that the transition point of PTT can be employed to manipulate the multilayer's absorbing characteristics by changing structural parameters of the metamaterial. Moreover, the proposed nanostructure can maintain the performance of very high and broadband absorption even when the incident angle reaches up to $70^\circ$ for TM polarization, while for TE polarization the absorption efficiency is still satisfactory when the incident angle approaches $60^\circ$. Compared with previous works, our proposed metamaterial absorber is cost-effective and spectral-selective, showing broad prospects for large-scale applications that require omnidirectional, and ultra-broadband perfect absorption, such as energy harvesting, optical modulators and thermal emitters. 2. Structure and model As shown in Fig. 1(a), the proposed solar absorber is formed by periodically deposited unit cells consisting of a metal layer and two dielectric layers. The metal layer is selected as W, and the dielectric layers are made of SiO$_{2}$ and TiO$_{2}$, respectively. The total number of SiO$_{2}$/TiO$_{2}$/W pairs ($N$) is 18. For W with good thermal stability, its optical properties is taken from the experimental data [17]. The refractive indices of SiO$_{2}$ and TiO$_{2}$ are 1.46 and 2.56, respectively [18]. The geometrical parameters are initially assumed as $d_1=70$ nm, $d_2=15$ nm, $d_3=3$ nm, $P=50$ nm, and $d=200$ nm. 
To ensure the reliability and precision of the numerical results, the optical characteristics can be numerically investigated using the transfer matrix method (TMM) and finite-difference time-domain (FDTD) method, respectively [19]. The FDTD calculation is performed by a commercial software package (Lumerical FDTD solutions). In the simulation, periodic boundary conditions are employed in the x and y directions, and perfectly matched layers are utilized in the z direction. In order to balance the simulation time and accuracy, the mesh cell size along the x-, y-, and z-direction is set to 2.5 nm $\times$ 2.5 nm $\times$ 0.5 nm, respectively. A plane wave with a wavelength ($\lambda$) is launched onto the proposed multilayer configuration with an angle ($\theta$), the absorptivity based on the Poynting theorem can be calculated by $A(\lambda ) = 1- R(\lambda ) - T(\lambda )$, where $R(\lambda )$ and $T(\lambda )$ represent the reflectivity and transmissivity, respectively. Here, considering that an opaque W film is used as substrate, the transmissivity of the nanostructure can be ignored. Fig. 1. (a) Schematic illustration of the proposed spectral-selective solar absorber. $d_3$ ($d_1$ and $d_2$) represents the thickness of W (SiO$_{2}$ and TiO$_{2}$) layer in the nanostructure with a period number $N$. $D$ is the period of the multilayer system, and $P$ is the periodicity. The substrate layer is W with the thickness $d$. The local enlarged drawing of the unit cell of the metamaterial, and inset shows the model of numerical simulation. (b) Calculated effective complex permittivities, $\varepsilon _{\bot }$ and $\varepsilon _{\rVert }$, of the metamaterial with $d_1=70$ nm, $d_2=15$ nm, and $d_3=3$ nm. Inset shows the definition of $\bot$ and $\rVert$ directions. When $Re(\varepsilon _{\bot })Re(\varepsilon _{\rVert })>0$, one can achieve elliptical response, it turns into hyperboloid while $Re(\varepsilon _{\bot })Re(\varepsilon _{\rVert })<0$. The yellow area represents the ENZ ($Re(\varepsilon _{\rVert }) \simeq 0$) regime, and the green area highlights the spectral range of hyperbolic response. 3. Results and analysis Since the period of multilayer ($D$) is much smaller than the wavelength of light, such metamaterial behaves as a uniform uniaxial media with effective parameters [20]. According to the effective medium theory (EMT) [21], the effective permittivity tensor of the stacked multilayer can be described by the mixing formulae [18] (1)$$\varepsilon_{\bot} = \left(f_1/\varepsilon_{s} + f_2/\varepsilon_{t}+ f_3/\varepsilon_{w}\right)^{-1},$$ (2)$$\varepsilon_{\rVert}= \varepsilon_{s}f_1 + \varepsilon_{t}f_2 + \varepsilon_{w}f_3,$$ where the subscripts $\varepsilon _{\bot }$ and $\varepsilon _{\rVert }$ represent components perpendicular and parallel to the multilayers, respectively. $f_m = d_m/D$ is the volume filling fraction of the $m$th layer, and $\varepsilon _{w}$($\varepsilon _{s}$ or $\varepsilon _{t}$) is the permittivity of metal (dielectric) constitution. As a result, with the above geometry parameters, we can get $Re(\varepsilon _{\bot })Re(\varepsilon _{\rVert })\,<\,0$ to achieve hyperbolic response, as can be seen in Fig. 1(b). Actually, $Re(\varepsilon _{\rVert })$ undergoes a sign switch around a certain wavelength, which is known as the epsilon-near-zero (ENZ) regime, while $Re(\varepsilon _{\bot })$ varies slowly within that range. The obtained absorption spectra of $N = 18$ under normal incidence is shown in Fig. 2(a). 
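As a rough illustration of the normal-incidence TMM calculation mentioned above, the following sketch computes $A(\lambda) = 1 - R(\lambda)$ for a layer stack on an opaque substrate using the standard characteristic-matrix formulation. It is only a schematic check: the constant complex index assumed for W is a placeholder rather than the tabulated data of Ref. [17], and the function name `absorptivity_normal` is ours.

```python
import numpy as np

def absorptivity_normal(layers, n_in, n_sub, wavelength):
    """A = 1 - R for a layer stack on an opaque substrate at normal incidence,
    via the characteristic-matrix (TMM) formulation. `layers` is a list of
    (complex_refractive_index, thickness) pairs ordered from the incidence
    side; thickness and wavelength must share the same units."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1, n_sub])          # Macleod-style B, C parameters
    r = (n_in * B - C) / (n_in * B + C)
    return 1 - abs(r) ** 2                   # opaque substrate: T ~ 0

# Illustrative stack: 18 periods of SiO2/TiO2/W on a W substrate (nm units).
# The complex index for W below is an assumed placeholder, not Palik data.
n_SiO2, n_TiO2, n_W = 1.46, 2.56, 3.5 + 2.8j
unit_cell = [(n_SiO2, 70), (n_TiO2, 15), (n_W, 3)]
stack = unit_cell * 18
print(absorptivity_normal(stack, 1.0, n_W, wavelength=1000))  # A at 1000 nm
```

Scanning `wavelength` over 300–4000 nm and plotting the result would reproduce a spectrum of the kind discussed next, once realistic dispersive indices are supplied.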
The absorptivity ($A$) of TM-polarized light is higher than 90% over an ultrabroad range from 300 to 2215 nm, which shows a superior absorption performance over previous works. The FDTD simulation agree well with the theoretical calculation by the TMM method. To further understand the ultrabroadband absorption behaviors, the impedance ($Z$) of the proposed metamaterial is analyzed based on the impedance matching method [22]. As depicted in Fig. 2(b), the real part of $Z$ is close to one and its imaginary part approaches zero, which satisfies the impedance matching conditions, thus explaining the ultrahigh absorption band of our solar absorber as shown in Fig. 2(a). It is worth noting that the designed absorber exhibits spectral-selective behavior with a high absorption above 90% in the range of 300-2215 nm with a sharp drop for wavelengths larger than the ENZ regime. Fig. 2. (a) Absorption spectra for the SiO$_{2}$/TiO$_{2}$/W multilayered structure with number of periods $N = 18$ in the spectral range of 0.3-4 $\mu$m. The solid line (dashed line) is the numerical (theoretical) result calculated by the FDTD (TMM) methods. (b) The impedance curve of the designed metamaterial nanostructure. The yellow area indicates the region of ENZ ($\varepsilon _{\rVert } \simeq 0$), and the hyperbolic wavelength regime is drawn in the green region. To understand the physical mechanism of absorption in our structure, we start by studying the isofrequency surface of extraordinary electromagnetic waves, which is given by [23] (3)$$\frac{k_x^2 + k_y^2}{\varepsilon_{\bot}} + \frac{k_z^2}{\varepsilon_{\rVert}} = \left( \frac{2\pi}{\lambda} \right)^2,$$ where $k_x$, $k_y$, and $k_z$ are, respectively, the wavevector components along x-, y-, and z-directions. In the bidimensional $\boldsymbol {k}$-space, the tangential component of wavevector ($k_x$) is conserved at the interface between air and the multilayer [24]. As shown in Fig. 3(a), the IFS at the wavelength of 1949 nm is ellipsoidal. The wavevectors of excited modes depend on the surface of the ellipsoid and its tangential components ($k_x$) can match to that of vacuum modes. Thus the modes from free space (black curve) can be efficiently coupled to the modes from metamaterial (blue curve) maximizing the incident energy absorption. In other words, since the multilayer supports radiative modes, light from free space can penetrate into the nanostructure and get absorbed, leading to high absorptivity. With the increasing wavelength, $Re(\varepsilon _{\rVert })$ turns into negative around the ENZ regime and the proposed metamaterial undergoes a PTT from an effective dielectric to an HMM. Correspondingly, the IFS turns from an ellipsoid into a hyperboloid as shown in Fig. 3(b). In this case, such as $\lambda =3075$ nm, the HMM only supports high-$\boldsymbol {k}$ modes with large tangential components of the wavevectors ($k_{rx}$), which cannot match to the low-$\boldsymbol {k}$ modes propagating in vacuum. Thus there is no coupling between the vacuum modes and the hyperbolic modes leading to a strong suppression of light absorption in the hyperbolic regime. As a result, only above the critical angle ($\theta _c$), the total internal reflection (TIR) occurs at the first TiO$_{2}$/W interface, producing an evanescent wave with little energy that can be coupled to the multilayered structure to achieve high reflection and low absorption in this regime [25]. 
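The effective-medium picture of Eqs. (1) and (2) and the sign test $Re(\varepsilon _{\bot })Re(\varepsilon _{\rVert })\lessgtr 0$ can also be evaluated directly. The sketch below uses our own helper names and placeholder tungsten permittivities (illustrative values, not the tabulated data of Ref. [17]), so it only demonstrates how the IFS classification flips once $Re(\varepsilon _{\rVert })$ changes sign.

```python
def effective_permittivities(eps_s, eps_t, eps_w, d1=70.0, d2=15.0, d3=3.0):
    """Effective uniaxial permittivities of one SiO2/TiO2/W unit cell,
    following Eqs. (1)-(2): eps_perp (out-of-plane) and eps_par (in-plane)."""
    D = d1 + d2 + d3
    f1, f2, f3 = d1 / D, d2 / D, d3 / D
    eps_perp = 1.0 / (f1 / eps_s + f2 / eps_t + f3 / eps_w)
    eps_par = f1 * eps_s + f2 * eps_t + f3 * eps_w
    return eps_perp, eps_par

def regime(eps_perp, eps_par):
    """Classify the isofrequency surface by the sign of Re(eps_perp)*Re(eps_par)."""
    product = eps_perp.real * eps_par.real
    return "elliptical (closed IFS)" if product > 0 else "hyperbolic (open IFS)"

eps_SiO2, eps_TiO2 = 1.46 ** 2, 2.56 ** 2
# Assumed tungsten permittivities at two representative wavelengths:
for label, eps_W in [("below the ENZ point", -20 + 30j), ("above the ENZ point", -200 + 100j)]:
    e_perp, e_par = effective_permittivities(eps_SiO2, eps_TiO2, eps_W)
    print(label, "->", regime(e_perp, e_par), f"Re(eps_par) = {e_par.real:+.2f}")
```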
To further verify the absorption characteristics, we study the normalized distributions of the electric field |$E$| at the above two wavelengths in Fig. 4(a). It is found that the electric field intensity at $\lambda =1949$ nm is attenuated with propagating in the SiO$_{2}$/TiO$_{2}$/W multilayered structure, which indicates that most of the incident power within the studied wavelength range can be absorbed. However, for a longer wavelength of $\lambda =3075$ nm, the multilayered system works as an HMM so that most of the incident power propagates downwards along the z-direction in the air layer without penetrating into the nanostructure, leading to a strong suppression of absorption in this wavelength range. Here, it should be noted that, our approach is based on intrinsic material properties, which is fundamentally different from structural resonances or interference effects, thus the unique electromagnetic responses in the nanostructure are also slightly different from before. Fig. 3. Schematic of the IFS in free space (black curves) and the multilayer (blue curves). The IFS of TM-polarized light in the SiO$_{2}$/TiO$_{2}$/W multilayered structure at the wavelengths of (a) 1949 nm and (b) 3075 nm. $\vec {k}$ stands for the direction of phase propagation, and $\vec {S}$ represents the direction of energy flow. $\theta$ is the angle of incident light and $k_0$ is free space wavenumber. In the isotropic medium (such as air), the circular IFS forces the wavevector ($k_i$) and the Poynting vector ($S_i$) being collinear. While for anisotropic metamaterials (such as HMMs), the Poynting vector ($S_t$ or $S_r$) is orthogonal to the IFS. Fig. 4. (a) Distributions of electric field for the proposed spectral-selective solar absorber at different incident wavelengths. (b) The absorption spectrum with different thicknesses of W layer $d_3$ in the multilayer system, when $d_1=70$ nm, $d_2=15$ nm, and $N=18$. PTT points are plotted as blue dots, which separate the ellipsoidal $(\varepsilon _{\rVert }\varepsilon _{\bot }>0)$ and hyperbolic $(\varepsilon _{\rVert }\varepsilon _{\bot }<0)$ regime. According to the aforementioned analyses, the PTT occurs around the ENZ regime, where $Re(\varepsilon _{\rVert }) \simeq 0$ and $Re(\varepsilon _{\bot }) > 0$. Correspondingly, the topology of IFS undergoes the transition from the closed (ellipsoid $\varepsilon _{\rVert }\varepsilon _{\bot }>0$) to the open (hyperboloid $\varepsilon _{\rVert }\varepsilon _{\bot }<\,0$) one through the transform point. In this case, the incident light is strongly absorbed in the ellipsoidal regime and remarkably suppressed in the hyperbolic regime, which contributes to the excellent spectral selectivity of absorption in our nanostructure. Therefore, we can utilize the PTT to tailor the bandwidth of the near-perfect absorption by adjusting the structural thickness in the multilayered system. Figure 4(b) is the calculated absorption efficiency as a function of metal (W) layer thickness $d_3$, when $d_1=70$ nm, $d_2=15$ nm, and $N=18$. It is found that the bandwidth of the absorption spectrum decreases gradually with an increasing of $d_3$. And meanwhile the PTT points are calculated by Eq. (2) with different $d_3$, which matches nicely with the end of the near-unity absorption broadband. Furthermore, to determine the tunability of the proposed solar absorber, we further investigate the influence of dielectric layers thicknesses on the absorption performance. As shown on Figs. 
5(a)–5(b), the end wavelength of the broadband absorption possesses a linear redshift with the increase of the SiO$_{2}$ thickness ($d_1$) and TiO$_{2}$ thickness ($d_2$). It is well explained from the effective permittivity of the HMM based on the effective medium theory. In other words, with the increment of dielectric layer thickness, the ENZ regime shifts toward the long wavelength due to the increase of $f_1$ (or $f_2$), which also leads to an ultra-broad absorbing band. Meanwhile, topological transition points of the IFS correspond well to the end of broadband, and these results provide more freedom to control the operating bandwidth of the absorption spectrum. Fig. 5. Absorption spectra as a function of dielectric (SiO$_{2}$ and TiO$_{2}$) layers thicknesses (a) $d_1$ and (b) $d_2$, when $d_3=3$ nm and $N=18$ . The blue dots stand for the transition point of PTT, and the dotted box represents the absorption peaks caused by the Bloch mode. Furthermore, as depicted in Fig. 5(a), it is found that, the Bloch mode can be generated at the shorter wavelengths, which leads to a dip in absorption spectrum [26]. To further verify this phenomena, the absorption spectra of the multilayer with $d_1=105$ nm, $d_2=15$ nm, $d_3=3$ nm, and $N=18$ under normal incidence at the shorter wavelengths is calculated by the FDTD method, as shown in Fig. 6(a). The obvious dip around 360 nm is observed in the absorption spectrum due to the generation of Bloch mode, which can be proved by the distribution of the electric field in the metamaterial structure. Judging from the electric field distribution, smaller optical power is trapped within nanostructure due to Bloch effect, indicating the weak interaction between light and matter, and resulting in a low light absorption as illustrated in Fig. 4(a). According to the photonic crystal bandgap critical condition [27], one can move the location of Bloch mode toward the shorter wavelength by reducing the thickness of the dielectric layer, but with decreasing of the absorption band within the studied wavelength range, which degrades the absorption performance of solar absorber for energy harvesting. Therefore, in this work, we propose a novel way to replace the conventional two-layer (dielectric/ metal) structure unit with a three-layer one, which not only avoids the effect of Bloch mode at the shorter wavelengths by changing the critical condition of the photonic bandgap, but also can achieve a superior absorption performance based on PTT in HMM. In addition, increasing the period number $N$ of the multilayered metamaterial can effectively improve the accuracy of PTT point as predicted by the EMT method and further enhance the stability of the high-efficiency light absorption in the nanostructure. As shown in Fig. 6(b), we numerically simulate the absorption of light in the multilayered structure with different $N$. It is found that the period number hardly affects the absorption performance and the location of PTT point when $N \ge 18$. With the decrement of $N$, the absorption performance of the nanostructure also degrades gradually due to the large nonlocal dispersion effect [24], especially the hyperbolic character alters when $N<10$, and in this case, other more complex shaped isofrequency contours are generated, leading to the inability of PTT based on the EMT to accurately predict the absorption bandwidth. Fig. 6. (a) Short wavelength absorption spectra of the metamaterial structure with $N=18$, $d_1=105$ nm, $d_2=15$ nm, and $d_3=3$ nm. 
The right-side image is the distribution of the electric field ($E$) corresponding to the wavelength of minimum absorption. The dip of the curve indicates the establishment of the Bloch mode. (b) Absorption spectrum of the multilayered nanostructure with different $N$ when $\theta =0$, $d_1=70$ nm, $d_2=15$ nm, and $d_3=3$ nm.

Due to the central symmetry of the nanostructure, the proposed spectral-selective absorber is polarization-insensitive under normal incidence. Moreover, the angular tolerance of a solar absorber is also crucial to maximize the solar energy absorption, owing to the sunlight coming from random directions. Figure 7 shows the calculated absorption spectrum at incident angles ($\theta$) from $0^\circ$ to $70^\circ$ for both TM and TE polarizations. As shown in Fig. 7(a), for the TM case, the proposed absorber can still maintain an excellent absorption performance even when $\theta$ increases to $70^\circ$, resembling that at normal incidence. For TE polarization, the broadband absorption keeps close to unity up to a $60^\circ$ incidence angle over wavelengths ranging from 300 to 2161 nm, and for larger angles, the absorptivities drop down to zero due to the increase of the Fresnel reflectivity. Although there is a slight divergence at relatively large angles (see Fig. 7(b)), the averaged absorptivity and spectral selectivity are still satisfactory. Therefore, all these results clearly reveal that the designed solar absorber has a robust absorption performance, which is omnidirectional over the entire solar spectrum.

Fig. 7. Angular absorption spectrum of the proposed solar absorber for (a) TM and (b) TE polarizations calculated by the FDTD method. The other parameters are consistent with those in Fig. 1.

For solar thermal systems, the total photothermal conversion efficiency ($\eta$) is an important parameter to quantitatively evaluate the performance of a solar absorber, which can be calculated by [28]
(4)$$\eta = \eta_{A} - \frac{\eta_{E}\sigma T_{0}^{4}}{CI_s},$$
(5)$$\eta_{A} = \frac{\int_{300\, nm}^{4000\, nm} \alpha_{\lambda}I_{AM1.5}(\lambda)\, d\lambda}{\int_{300\, nm}^{4000\, nm} I_{AM1.5}(\lambda)\, d\lambda},$$
(6)$$\eta_{E} = \frac{\int_{300\, nm}^{4000\, nm} \epsilon_{\lambda}I_{B}(\lambda, T_0)\, d\lambda}{\int_{300\, nm}^{4000\, nm} I_{B}(\lambda, T_0)\, d\lambda},$$
where $\sigma$, $T_0$, $C$ and $I_s$ are, respectively, the Stefan–Boltzmann constant, the operating temperature, the solar concentration and the solar flux intensity. $\alpha _{\lambda }$ and $\epsilon _{\lambda }$ are the spectral normal absorptivity and emissivity of the proposed absorber, respectively. Here, according to Kirchhoff's law, the emissivity $\epsilon _{\lambda }$ is equivalent to the absorptivity $\alpha _{\lambda }$ (i.e., $A(\lambda )$). $I_{AM1.5}(\lambda )$ stands for the spectral intensity of solar radiation (the Air Mass 1.5 spectrum) [29], and $I_B(\lambda ,T_0 )$ is the blackbody radiation at the temperature $T_0$. It is worth mentioning that, as shown in Fig. 8(a), compared with the standard spectrum of solar radiation, a nearly reproduced absorption spectrum is obtained by the proposed solar absorber, showing a near-perfect solar full-spectrum absorption. Moreover, the absorbed and missed solar energy for the proposed absorber is also shown in Fig. 8(b).
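A minimal numerical sketch of Eqs. (4)–(6) is given below. The spectra are crude placeholders — a step-like absorptivity and a 5800 K blackbody standing in for the tabulated AM1.5 data of Ref. [29] — so the printed value is illustrative only and is not the 91.8% quoted in the text; the helper names are ours.

```python
import numpy as np

H, C_LIGHT, K_B, SIGMA = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def trapz(y, x):
    """Plain trapezoidal rule (kept local to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def blackbody(lam_m, T):
    """Planck spectral emissive power, used as I_B(lambda, T) in Eq. (6)."""
    return (2 * np.pi * H * C_LIGHT**2 / lam_m**5) / np.expm1(H * C_LIGHT / (lam_m * K_B * T))

def efficiency(lam_nm, A, I_am15, T0=1000.0, conc=1000.0, Is=1000.0):
    """Eqs. (4)-(6): eta = eta_A - eta_E * sigma * T0^4 / (C * I_s), with
    eta_A and eta_E as spectrum-weighted averages of A(lambda) over 300-4000 nm
    (Kirchhoff's law: emissivity taken equal to absorptivity)."""
    eta_A = trapz(A * I_am15, lam_nm) / trapz(I_am15, lam_nm)
    I_B = blackbody(lam_nm * 1e-9, T0)
    eta_E = trapz(A * I_B, lam_nm) / trapz(I_B, lam_nm)
    return eta_A - eta_E * SIGMA * T0**4 / (conc * Is)

# Placeholder spectra: ~0.95 absorptivity below 2215 nm, ~0.05 above,
# under a 5800 K blackbody stand-in for the AM1.5 spectrum.
lam = np.linspace(300.0, 4000.0, 1000)
A = np.where(lam < 2215.0, 0.95, 0.05)
I_sun = blackbody(lam * 1e-9, 5800.0)
print(efficiency(lam, A, I_sun))
```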
It can be seen that there is still a small portion of solar energy missed by the absorber, but according to Eq. (5), the total solar absorptance ($\eta _{A}$) can reach 94.84%, which is a remarkable value compared with more complex structures. Note that, for a solar thermal system, the total solar-thermal conversion efficiency ($\eta$) may vary with the operating temperature ($T_0$), due to the change of the blackbody radiation (based on Eq. (6)). In this work, using point-focus concentrators with a large $C$ value of 1000, the total solar-thermal conversion efficiency with the proposed solar absorber is as high as 93.8% at 800 K, and can be further improved by optimizing the initial geometric parameters. Meanwhile, at the operating temperature of 1000 K, the total solar-thermal conversion efficiency is calculated to be 91.8%, which is excellent when compared with previous results, as shown in Table 1. In addition, with increasing operating temperature, the total conversion efficiency gradually decreases due to the increasing total thermal emittance, but the thermal stability of our nanostructure is good enough to satisfy the performance requirements of practical applications. From the above analyses, the proposed spectral-selective solar absorber possesses a high durability, which may find promising applications in high-temperature solar thermal systems, such as seawater desalination and thermophotovoltaic devices.

Fig. 8. (a) Absorption spectra of the SiO$_{2}$/TiO$_{2}$/W multilayered structure under the solar source of AM1.5. (b) Distributions of the absorbed and missed solar energy for the proposed spectral-selective absorber in the entire solar radiance spectrum.

Table 1. Comparison of total solar absorptance ($\eta _{A}$) and solar-thermal conversion efficiency ($\eta$) for recent spectral-selective solar absorbers at 1000 K and 1000 suns.

Absorber        | $\eta_{A}$ (%) | $\eta$ (%)
2016 [30]       | 86.8           | 84.78
2018 [31]       | 91.3           | 89.9
2019, this work | 94.8           | 91.8

4. Conclusion

In summary, a planar multi-layer metamaterial is proposed to work as a wideband spectral-selective solar absorber for efficient light harvesting. Our numerical simulations demonstrate that the light absorption can significantly exceed 90% over the wavelengths from 300 nm to 2215 nm, due to the topological change in the IFS. It is also found that the bandwidth and efficiency of the absorption spectrum can be flexibly tailored by adjusting the thicknesses of the metal (W) and dielectric (SiO$_{2}$ or TiO$_{2}$) layers, which agree well with theoretical calculations based on the PTT in the HMM. Moreover, the designed solar absorber is polarization-insensitive and its absorbing characteristics can be maintained very well over a wide incident angle of $60^\circ$ for both TM and TE polarizations. It is worth noting that, owing to the selective spectral response of the absorber, the total solar-thermal conversion efficiency can reach as high as 91.8% at an operating temperature of 1000 K, and as the operating temperature varies, the solar absorber retains high practicability, showing a remarkable photothermal performance. The attractive properties, together with the design method, indicate that such a solar absorber can readily serve as a potential candidate for many high-performance solar thermal applications, such as thermal emitters, photodetectors and energy harvesting.

Funding. National Natural Science Foundation of China (61775064); Fundamental Research Funds for the Central Universities (HUST: 2016YXMS024).
The author Xiaoyun Jiang (XYJIANG) expresses her deepest gratitude to her PhD advisor Tao Wang for providing guidance during this project. 1. G. Ni, G. Li, S. V. Boriskina, H. Li, W. Yang, T. Zhang, and G. Chen, "Steam generation under one sun enabled by a floating structure with thermal concentration," Nat. Energy 1(9), 16126 (2016). [CrossRef] 2. X. Hu, W. Xu, L. Zhou, Y. Tan, Y. Wang, S. Zhu, and J. Zhu, "Tailoring graphene oxide-based aerogels for efficient solar steam generation under one sun," Adv. Mater. 29(5), 1604031 (2017). [CrossRef] 3. H. Lin, B. C. Sturmberg, K.-T. Lin, Y. Yang, X. Zheng, T. K. Chong, C. M. de Sterke, and B. Jia, "A 90-nm-thick graphene metamaterial for strong and extremely broadband absorption of unpolarized light," Nat. Photonics 13(4), 270–276 (2019). [CrossRef] 4. I. E. Khodasevych, L. Wang, A. Mitchell, and G. Rosengarten, "Micro-and nanostructured surfaces for selective solar absorption," Adv. Opt. Mater. 3(7), 852–881 (2015). [CrossRef] 5. P. N. Dyachenko, S. Molesky, A. Y. Petrov, M. Störmer, T. Krekeler, S. Lang, M. Ritter, Z. Jacob, and M. Eich, "Controlling thermal emission with refractory epsilon-near-zero metamaterials via topological transitions," Nat. Commun. 7(1), 11809 (2016). [CrossRef] 6. M. Chirumamilla, A. Chirumamilla, Y. Yang, A. S. Roberts, P. K. Kristensen, K. Chaudhuri, A. Boltasseva, D. S. Sutherland, S. I. Bozhevolnyi, and K. Pedersen, "Large-area ultrabroadband absorber for solar thermophotovoltaics based on 3d titanium nitride nanopillars," Adv. Opt. Mater. 5(22), 1700552 (2017). [CrossRef] 7. N. I. Landy, S. Sajuyigbe, J. J. Mock, D. R. Smith, and W. J. Padilla, "Perfect metamaterial absorber," Phys. Rev. Lett. 100(20), 207402 (2008). [CrossRef] 8. X. Jiang, T. Wang, S. Xiao, X. Yan, and L. Cheng, "Tunable ultra-high-efficiency light absorption of monolayer graphene using critical coupling with guided resonance," Opt. Express 25(22), 27028–27036 (2017). [CrossRef] 9. S. Xiao, T. Liu, L. Cheng, C. Zhou, X. Jiang, Z. Li, and C. Xu, "Tunable anisotropic absorption in hyperbolic metamaterials based on black phosphorous/dielectric multilayer structures," J. Lightwave Technol. 37(13), 3290–3297 (2019). [CrossRef] 10. T. Liu, X. Jiang, C. Zhou, and S. Xiao, "Black phosphorus-based anisotropic absorption structure in the mid-infrared," Opt. Express 27(20), 27618–27627 (2019). [CrossRef] 11. S. Liu, H. Chen, and T. J. Cui, "A broadband terahertz absorber using multi-layer stacked bars," Appl. Phys. Lett. 106(15), 151601 (2015). [CrossRef] 12. A. K. Azad, W. J. Kort-Kamp, M. Sykora, N. R. Weisse-Bernstein, T. S. Luk, A. J. Taylor, D. A. Dalvit, and H.-T. Chen, "Metasurface broadband solar absorber," Sci. Rep. 6(1), 20347 (2016). [CrossRef] 13. J. Cong, Z. Zhou, B. Yun, L. Lv, H. Yao, Y. Fu, and N. Ren, "Broadband visible-light absorber via hybridization of propagating surface plasmon," Opt. Lett. 41(9), 1965–1968 (2016). [CrossRef] 14. Y. Cui, K. H. Fung, J. Xu, H. Ma, Y. Jin, S. He, and N. X. Fang, "Ultrabroadband light absorption by a sawtooth anisotropic metamaterial slab," Nano Lett. 12(3), 1443–1447 (2012). [CrossRef] 15. Q. Liang, T. Wang, Z. Lu, Q. Sun, Y. Fu, and W. Yu, "Metamaterial-based two dimensional plasmonic subwavelength structures offer the broadest waveband light harvesting," Adv. Opt. Mater. 1(1), 43–49 (2013). [CrossRef] 16. J. Zhou, A. F. Kaplan, L. Chen, and L. J. Guo, "Experiment and theory of the broadband absorption by a tapered hyperbolic metamaterial array," ACS Photonics 1(7), 618–624 (2014). [CrossRef] 17. E. 
D. Palik, Handbook of optical constants of solids, vol. 3 (Academic University, 1998). 18. A. Leviyev, B. Stein, A. Christofi, T. Galfsky, H. Krishnamoorthy, I. Kuskovsky, V. Menon, and A. Khanikaev, "Nonreciprocity and one-way topological transitions in hyperbolic metamaterials," APL Photonics 2(7), 076103 (2017). [CrossRef] 19. N.-h. Liu, S.-Y. Zhu, H. Chen, and X. Wu, "Superluminal pulse propagation through one-dimensional photonic crystals with a dispersive defect," Phys. Rev. E 65(4), 046607 (2002). [CrossRef] 20. C. Cortes, W. Newman, S. Molesky, and Z. Jacob, "Quantum nanophotonics using hyperbolic metamaterials," J. Opt. 14(6), 063001 (2012). [CrossRef] 21. V. Agranovich and V. Kravtsov, "Notes on crystal optics of superlattices," Solid State Commun. 55(1), 85–90 (1985). [CrossRef] 22. X. Jiang, T. Wang, S. Xiao, X. Yan, L. Cheng, and Q. Zhong, "Approaching perfect absorption of monolayer molybdenum disulfide at visible wavelengths using critical coupling," Nanotechnology 29(33), 335205 (2018). [CrossRef] 23. X. Jiang, T. Wang, L. Cheng, Q. Zhong, R. Yan, and X. Huang, "Tunable optical angular selectivity in hyperbolic metamaterial via photonic topological transitions," Opt. Express 27(13), 18970–18979 (2019). [CrossRef] 24. L. Ferrari, C. Wu, D. Lepage, X. Zhang, and Z. Liu, "Hyperbolic metamaterials and their applications," Prog. Quantum Electron. 40, 1–40 (2015). [CrossRef] 25. J. Zhou, X. Chen, and L. J. Guo, "Efficient thermal–light interconversions based on optical topological transition in the metal-dielectric multilayered metamaterials," Adv. Mater. 28(15), 3017–3023 (2016). [CrossRef] 26. Y. Kan, C. Zhao, X. Fang, and B. Wang, "Designing ultrabroadband absorbers based on bloch theorem and optical topological transition," Opt. Lett. 42(10), 1879–1882 (2017). [CrossRef] 27. J. Joannopoulos, S. Johnson, J. Winn, and R. Meade, Photonic Crystals: Molding the Flow of Light, vol. 2 (Princeton University, 2008). 28. A. Sakurai and T. Kawamata, "Electromagnetic resonances of solar-selective absorbers with nanoparticle arrays embedded in a dielectric layer," J. Quant. Spectrosc. Radiat. Transfer 184, 353–359 (2016). [CrossRef] 29. Air Mass 1.5 Spectra, "American Society for Testing and Materials(ASTM)," http://rredc.nrel.gov/solar/spectra/am1.5/. 30. S. Han, J.-H. Shin, P.-H. Jung, H. Lee, and B. J. Lee, "Broadband solar thermal absorber based on optical metamaterials for high-temperature applications," Adv. Opt. Mater. 4(8), 1265–1273 (2016). [CrossRef] 31. Y. Li, D. Li, D. Zhou, C. Chi, S. Yang, and B. Huang, "Efficient, scalable, and high-temperature selective solar absorbers based on hybrid-strategy plasmonic metamaterials," Sol. RRL 2(8), 1800057 (2018). [CrossRef] 32. M. Chen and Y. He, "Plasmonic nanostructures for broadband solar absorption based on the intrinsic absorption of metals," Sol. Energy Mater. Sol. Cells 188, 156–163 (2018). [CrossRef] 33. Y. Li, C. Lin, D. Zhou, Y. An, D. Li, C. Chi, H. Huang, S. Yang, C. Y. Tso, C. Y. Chao, and B. Huang, "Scalable all-ceramic nanofilms as highly efficient and thermally stable selective solar absorbers," Nano Energy 64, 103947 (2019). [CrossRef]
CommonCrawl
Probability of winning a tournament by winning all matches in tournament

Ques: Two players are competing in a tournament which consists of three matches. The probability of player1 winning the first match is 0.2, winning the second match is 0.5 and winning the third match is 0.6. The probability of player2 winning the first match is 0.8, winning the second match is 0.5 and winning the third match is 0.4. A player wins the tournament if he wins all the matches. Otherwise, the tournament is played again. The tournament is played again and again until a player wins all matches and hence wins the tournament. What is the probability that player1 wins the tournament?

My approach: Since the matches are independent of each other, the probability of player1 winning all the matches is 0.2*0.5*0.6. There are 7 other possible outcomes of tournaments:
0.2*0.5*(1-0.6) i.e. 0.2*0.5*0.4
0.2*(1-0.5)*0.6 i.e. 0.2*0.5*0.6
... and so on.
I am doubtful as to how to progress from here. Any help would be appreciated. – bnks452

Your terminology is very confusing: You are using "tournament" for two different things, a combination of three matches, and a series of tournaments in the first sense. In particular, your players can not win the tournament, and yet win the tournament … because the "tournaments" in both parts of the sentence are different things. – celtschk Jul 12 '15 at 7:12

The probability Player 1 wins all three matches is $a=0.06$. The probability for Player 2 is $b=0.16$. The probability neither wins all three matches is $t=0.78$. Player 1 wins the tournament if she wins in the first round (probability $a$), or if neither wins in the first round and Player 1 wins in the second round (probability $ta$), or if neither wins in the first two rounds but Player 1 wins in the third round (probability $t^2a$), and so on. So the probability Player 1 ultimately wins is $a+ta+t^2a+t^3a+\cdots$. The sum of this infinite geometric series is $\frac{a}{1-t}$. Similarly, the probability Player 2 ultimately wins is $\frac{b}{1-t}$. – André Nicolas

I believe the simplest way is to ignore altogether the rounds where neither player wins.
P(A wins in a round) = $0.2\cdot0.5\cdot0.6 = 0.06$
P(B wins in a round) = $0.8\cdot0.5\cdot0.4 = 0.16$
Odds in favour of A = 6/16 = 3/8, so P(A wins) = 3/11 and P(B wins) = 8/11. – true blue anil

Let $p_k$ be the probability that player $k$ wins a single tournament. Let $P$ be the probability that player 1 wins the tournament series (the probability that player 2 wins the tournament series is, of course, $1-P$). Now player 1 has two ways to win the tournament series: either he wins the first tournament, or the first tournament is a draw, but he wins a later tournament. Those two cases are mutually exclusive, therefore the probabilities add up. The probability of player 1 winning the first tournament is, as we know, $p_1$. The probability of the first tournament being a draw is $1-p_1-p_2$. But after a draw, the situation is exactly the same as at the start of the tournament series (past tournaments don't influence the winning rules of future tournaments), therefore the probability of the first player to win the tournament is exactly the same as it was before the tournament series started.
So we have $$P = p_1 + (1-p_1-p_2)P$$ This equation is easily solved for $P$, giving $$P = \frac{p_1}{p_1+p_2}$$ This is actually a quite intuitive result: the relative probability for each player to win is unchanged, only the total probability that one of the players wins is scaled up to $1$. Now all that's left is to calculate $p_1$ and $p_2$, the probabilities for each player to win a single tournament. But you already know how to do that. – celtschk

There's a much simpler way of doing this problem. The key observation in this problem is that there are essentially seven distinct states:
The tournament just started or restarted; i.e. both players have won 0 games in a row. Call this state $S0$.
Player 1 has won the first game. Call this state $L1$. Notice now that if he loses the second game, the state reverts to $S0$.
Player 1 has won the first two games. Call this state $L2$.
Player 1 has won the first three games. Call this $L3$. Note that this means he's won the tournament.
Player 2 won the first game. $R1$.
Player 2 won the first two games. $R2$.
Player 2 won the first three games. $R3$.
Now, each state can change to an adjacent state with some probability; e.g., $S0$ goes to $L1$ with probability $0.2$, and to $R1$ with probability $0.8$. Define a function $P(s)$, where $s$ is one of the states I've mentioned: $P(s)$ gives the probability that player 1 will win the tournament given that the tournament is in state $s$. What you're asking for now is the value of $P(S0)$. You already know the values $P(L3)=1$ and $P(R3)=0$. It's easy now to determine the other probabilities. Refer to this blog post for details on this method. – sayantankhan
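A quick numerical check of the closed-form answer $\frac{a}{a+b}=\frac{0.06}{0.22}=\frac{3}{11}\approx 0.273$ is sketched below in Python; the per-match probabilities are the ones stated in the question, and the simulation details are purely illustrative.

```python
import random

p1_matches = [0.2, 0.5, 0.6]   # probability that player 1 wins match 1, 2, 3

# Probability of each player sweeping a single three-match tournament
a = 0.2 * 0.5 * 0.6            # player 1 sweeps: 0.06
b = 0.8 * 0.5 * 0.4            # player 2 sweeps: 0.16
print("closed form:", a / (a + b))          # 3/11 ~= 0.2727

def simulate(n_trials=200_000, seed=1):
    """Replay tournaments until one player sweeps; return player 1's win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        while True:
            # each match is won by player 1 with the stated probability,
            # otherwise by player 2; the matches are independent
            results = [rng.random() < p for p in p1_matches]
            if all(results):          # player 1 wins all three matches
                wins += 1
                break
            if not any(results):      # player 2 wins all three matches
                break
            # otherwise nobody swept, so the tournament is replayed
    return wins / n_trials

print("simulation :", simulate())
```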
CommonCrawl
Photoshop 2022 (version 23) Torrent (Activation Code) [Win/Mac]
by plepalm on 30 June 2022 in Uncategorized

Photoshop 2022 (version 23) Crack License Code & Keygen Free

## **Magic Wand**

Magic Wand is a feature in Photoshop for selecting an area by color and size. It works by illuminating the darkest point in the selected image area, and then any area that reflects the image's darkest point is also selected. This is a quick way to select an area by color. It works by selecting the brightest point in the original image—in other words, a white point in the darkest area. The Magic Wand selects all areas that reflect this brightest point. To use the Magic Wand, press Shift+W and the image's background and foreground colors appear in the box. To select the image's color area, click in the box and drag a box around the area you want selected. Then double-click to deselect it. To deselect the area, click the icon in the upper-right corner of the box. You can also double-click anywhere outside the area that's selected to deselect it.

Photoshop 2022 (version 23) With License Key

I've been using Photoshop for over 7 years now and believe it will always be my primary tool of choice for editing images. I'm also a graphic designer and I have always preferred Photoshop to other graphics editors, so I decided to make this easy-to-follow guide on how to edit images in Photoshop with minimal to no experience. Our main aim here is to show you the very basics of what's possible with Photoshop and inspire you to experiment, create and use new tools to make your images and graphic designs better. If you need even more information on what Photoshop Elements can do, be sure to check out our 10 Photoshop Elements Tutorials.

How to Use Photoshop

In this article, we'll be focusing on using Photoshop Elements to edit images from start to finish. We'll be using the following steps and features to create a photo montage in Photoshop:
Start by creating a new document
Open the image you want to edit
Select the area of interest (the parts that you want to isolate)
Using the selection tools, cut out the area of interest from the background
Save the image
Clean up the edges of the photo
Montage the image into a panorama
We'll explain each of these steps in detail, but I want to make it clear that these steps apply to the regular Photoshop version as well.

Step 1: Create a New Document

This is arguably the most important step in editing. Every time you open an image in Photoshop, you will be creating a new document. The image will then get saved into its own file, similar to a PDF, and you will be working in a new document. To create a new document:
Open Photoshop Elements
Click File > New
This will open a File menu where you can choose the size of your new document (we will choose 'Large', as we want the picture to be as large as possible)
Hit OK
File > Save
If you want to edit an existing document, you have to reopen it. Select the file you want to open in Photoshop and hit Open from the File menu.

Step 2: Open the Image

After saving the new document, it's time to open it. On the canvas, click the Open button (the eye icon) and browse to the location of your image.

Step 3: Select the Area of Interest

The first step of any editing in Photoshop is to select the

Photoshop 2022 (version 23)

Sarah Tew/CNET

Apple issued a fix for the iOS 11.2 update that was rolled out last week to restore the Camera Roll feature on iPhones and iPads.
The update also offers some other fixes, such as improved Maps and Health features. The Camera Roll feature is a tool that lets users save images, videos, and other media from Safari, Photos, and iMessage. The utility is useful for archiving content, and it can also be used as a way to relive those memories. But Apple has previously warned that if users attempt to restore content that's already been backed up to iCloud, that can result in losing the content. Apple noticed this problem after rolling out the update. It introduced the fix on Monday, but the company didn't publicly communicate that the Camera Roll feature was affected. Instead, Apple's support forums went dark, as did its help desk. I decided to test it out on my own iPhone 6s Plus with iOS 11.2. There's no way of knowing when Apple first noticed this problem, but it's clear that the firm took action very quickly. This may have been because of complaints, because the company was notified of it, or for some other reason. If it was the latter, it seems that Apple didn't want to rush out a partial fix that didn't fix the problem. I first tried to restore a photo that I had saved to iCloud manually from an iPhone 7 Plus. This time, the problem didn't occur. I then tried to use the photo to restore the same file from my iPhone 6s Plus. That's when iOS 12.0.2 wiped the content clean. Read CNET's full iOS 11.2 review. Restoring content was a breeze. To begin, I enabled iCloud Photos and turned on iCloud backup for the iPhone. Then I began using the free Photo Backup app to back up my iPhone to iCloud. The following screen shows how I initiated the process. For now, Photo Backup is just a free app with some useful capabilities. But it should eventually grow into something that has the full functionality of iOS in the cloud. Not exactly the same Restoring the photo didn't restore the text around it, and it seemed like a hole was now in my iPhone's photo library, as if it was missing part of the content. That's something I tested by resaving the same photo, but this time I enabled Finder Sync. This What's New In Photoshop 2022 (version 23)? {1}{2} \int_0^{t/\epsilon} \left( \vert g(u,z) \vert ^2 + \vert \dot{g}(u,z) \vert^2 \right) du, \\ J_2(t)&=\int_0^{t/\epsilon} \vert \dot{f}(u,z) \vert ^2 du,\end{aligned}$$ with $E_1$, $E_2$, $J_1$, $J_2$ defined in Notation \[truncation\_space\_notations\]. Bold indicates quantities that will be sent to zero as $\epsilon \rightarrow 0$. Here $f$ is the generalised function for $u \in [0,t]$. We look for a saddle point for the functional $J(t)$ using the ansatz $z=z^{\ast}(t)$, such that $J(t) \sim J(t) – \mathcal{O}(\epsilon)$ in the limit $\epsilon \rightarrow 0$ (recall that $z^{\ast}$ is an element in $H^1((0,t))$). 
This yields the Euler-Lagrange equation $$\begin{aligned} \label{EL_bulk} \dot{z}^{\ast} = \frac{\partial g(t,z^{\ast})}{\partial x} -f(t,z^{\ast}), \end{aligned}$$ which expresses the variational derivative of $J(t)$ in the form $$\label{variational_derivative} \frac{\partial J(t)}{\partial t} -\frac{\partial J(t)}{\partial z} \dot{z} - \frac{\partial J(t)}{\partial z^{\ast}} \frac{\partial \dot{z}^{\ast}}{\partial z} -\frac{\partial J(t)}{\partial z^{\ast\ast}} \frac{\partial^2 \dot{z}^{\ast}}{\partial z^2}.$$ An integral form of (\[EL\_bulk\]) is found by setting $z^{\ast}(t)=

System Requirements For Photoshop 2022 (version 23):
Minimum: Windows 7 or later / Mac OS X 10.9 or later / Linux 4.4 or later / Android 4.1 or later / iOS 9.3 or later
Recommended: Windows 8.1 / Mac OS X 10.9 or later / Linux 4.4 or later / Android 4.1 or later / iOS 9.3 or later
Powered by: ZeniMax Online Studios / Trion Worlds / Microsoft / Epic Games / Sony Online Entertainment
This documentation and its contents are subject to the
CommonCrawl
Section 5.4: $(\infty ,2)$-Categories Subsection 5.4.1: Definitions (cite) 5.4.1 Definitions We begin by introducing some terminology. Definition 5.4.1.1. Let $X$ be a simplicial set and let $\sigma : \Delta ^2 \rightarrow X$ be a $2$-simplex of $X$. We will say that $\sigma $ is left-degenerate if it factors through the map $\sigma ^{0}: \Delta ^2 \rightarrow \Delta ^1$ given on vertices by $\sigma ^{0}(0) = 0 = \sigma ^{0}(1)$ and $\sigma ^{0}(2) = 1$ (Notation 1.1.1.9). We say that $\sigma $ is right-degenerate if it factors through the map $\sigma ^{1}: \Delta ^2 \rightarrow \Delta ^1$ given on vertices $\sigma ^{1}(0) = 0$ and $\sigma ^{1}(1) = 1 = \sigma ^{1}(2)$. Remark 5.4.1.2. Let $X$ be a simplicial set. Then: A $2$-simplex $\sigma $ of $X$ is degenerate (in the sense of Definition 1.1.3.2) if and only if it is either left-degenerate or right-degenerate. A $2$-simplex $\sigma $ of $X$ is constant (that is, factors through the projection map $\Delta ^2 \rightarrow \Delta ^0$) if and only if it is both left-degenerate and right-degenerate. A $2$-simplex $\sigma $ of $X$ is left-degenerate if and only if it is right-degenerate when viewed as a $2$-simplex of the opposite simplicial set $X^{\operatorname{op}}$. Definition 5.4.1.3. Let $\operatorname{\mathcal{C}}$ be a simplicial set. We will say that $\operatorname{\mathcal{C}}$ is an $(\infty ,2)$-category if it satisfies the following axioms: Every morphism of simplicial sets $\Lambda ^{2}_{1} \rightarrow \operatorname{\mathcal{C}}$ can be extended to a thin $2$-simplex of $\operatorname{\mathcal{C}}$. Every degenerate $2$-simplex of $\operatorname{\mathcal{C}}$ is thin. Let $n \geq 3$ and let $\sigma _0: \Lambda ^{n}_{0} \rightarrow \operatorname{\mathcal{C}}$ be a morphism of simplicial sets with the property that the $2$-simplex $\sigma _0|_{ \operatorname{N}_{\bullet }( \{ 0< 1 < n\} ) }$ is left-degenerate. Then $\sigma _0$ can be extended to an $n$-simplex of $\operatorname{\mathcal{C}}$. Let $n \geq 3$ and let $\sigma _0: \Lambda ^{n}_{n} \rightarrow \operatorname{\mathcal{C}}$ be a morphism of simplicial sets with the property that the $2$-simplex $\sigma _0|_{ \operatorname{N}_{\bullet }( \{ 0< n-1 < n\} ) }$ is right-degenerate. Then $\sigma _0$ can be extended to an $n$-simplex of $\operatorname{\mathcal{C}}$. Proposition 5.4.1.4. Let $\operatorname{\mathcal{C}}$ be an $\infty $-category. Then $\operatorname{\mathcal{C}}$ is an $(\infty ,2)$-category. Proof. Our assumption that $\operatorname{\mathcal{C}}$ is an $\infty $-category guarantees that every $2$-simplex of $\operatorname{\mathcal{C}}$ is thin (Example 2.3.2.4). Consequently, condition $(2)$ of Definition 5.4.1.3 is automatic, and condition $(1)$ follows immediately from the definition. Conditions $(3)$ and $(4)$ follow from Theorem 4.4.2.6 (since every degenerate edge of $\operatorname{\mathcal{C}}$ is an isomorphism). $\square$ Remark 5.4.1.5. Let $\operatorname{\mathcal{C}}$ be an $(\infty ,2)$-category. We will refer to vertices of $\operatorname{\mathcal{C}}$ as objects, and to the edges of $\operatorname{\mathcal{C}}$ as morphisms. If $f$ is an edge of $\operatorname{\mathcal{C}}$ satisfying $d_1(f) = X$ and $d_0(f) = Y$, then we say that $f$ is a morphism from $X$ to $Y$ and write $f: X \rightarrow Y$. Suppose we are given morphisms $f: X \rightarrow Y$, $g: Y \rightarrow Z$, and $h: X \rightarrow Z$ of $\operatorname{\mathcal{C}}$. 
We will say that a $2$-simplex $\sigma $ witnesses $h$ as a composition of $f$ and $g$ if it is thin and satisfies $d_0(\sigma ) = g$, $d_1(\sigma ) = h$, and $d_2(\sigma ) = f$, as indicated in the diagram \[ \xymatrix@R =50pt@C=50pt{ & Y \ar [dr]_{g} & \\ X \ar [ur]^{f} \ar [rr]^-{h} & & Z. } \] Note that: When $\operatorname{\mathcal{C}}$ is an $\infty $-category, this recovers the terminology of Definition 1.3.4.1 (since the $2$-simplex $\sigma $ is automatically thin). If $\operatorname{\mathcal{C}}$ is the Duskin nerve of a $2$-category $\operatorname{\mathcal{E}}$, the $2$-simplex $\sigma $ can be identified with a $2$-morphism $\gamma : g \circ f \Rightarrow h$ of $\operatorname{\mathcal{E}}$, which is invertible if and only if $\sigma $ is thin. In other words, $\sigma $ witnesses $h$ as a composition of $f$ and $g$ if and only if it encodes the datum of an isomorphism $g \circ f \xRightarrow {\sim } h$ in the category $\underline{\operatorname{Hom}}_{\operatorname{\mathcal{E}}}(X,Z)$. Axiom $(1)$ of Definition 5.4.1.3 asserts that the composition of $1$-morphisms in $\operatorname{\mathcal{C}}$ is defined (albeit not uniquely). More precisely, it asserts that for every pair of morphisms $f: X \rightarrow Y$ and $g: Y \rightarrow Z$, there exists a morphism $h: X \rightarrow Z$ and a $2$-simplex which witnesses $h$ as a composition of $f$ and $g$. Remark 5.4.1.6. Let $\operatorname{\mathcal{C}}$ be a simplicial set. Then $\operatorname{\mathcal{C}}$ is an $(\infty ,2)$-category if and only if the opposite simplicial set $\operatorname{\mathcal{C}}^{\operatorname{op}}$ is an $(\infty ,2)$-category. Proposition 5.4.1.7. Let $\operatorname{\mathcal{C}}$ be a $2$-category. Then the Duskin nerve $\operatorname{N}_{\bullet }^{\operatorname{D}}(\operatorname{\mathcal{C}})$ is an $(\infty ,2)$-category. Proof. Condition $(1)$ of Definition 5.4.1.3 follows immediately from Theorem 2.3.2.5, and condition $(2)$ from Corollary 2.3.2.7. We will verify $(4)$; the proof of $(3)$ is similar. Suppose we are given an integer $n \geq 3$ and a map $\sigma _0: \Lambda ^{n}_{n} \rightarrow \operatorname{N}_{\bullet }^{\operatorname{D}}(\operatorname{\mathcal{C}})$. for which the restriction $\sigma _0|_{ \operatorname{N}_{\bullet }( \{ 0 < n-1 < n\} ) }$ is right-degenerate. We wish to show that $\sigma _0$ can be extended to an $n$-simplex of $\operatorname{N}_{\bullet }^{\operatorname{D}}(\operatorname{\mathcal{C}})$. We now consider three cases: Suppose that $n = 3$. Then $\sigma _0$ can be identified with a collection of objects $\{ X_ i \} _{ 0 \leq i \leq 3}$, $1$-morphisms $\{ f_{ji}: X_ i \rightarrow X_ j \} _{0 \leq i < j \leq 3}$, and $2$-morphisms \[ \mu _{321}: f_{32} \circ f_{21} \Rightarrow f_{31} \quad \quad \mu _{320}: f_{32} \circ f_{20} \Rightarrow f_{30} \quad \quad \mu _{310}: f_{31} \circ f_{10} \Rightarrow f_{30} \] in the $2$-category $\operatorname{\mathcal{C}}$. The assumption that $\sigma _0|_{ \operatorname{N}_{\bullet }( \{ 0 < n-1 < n\} ) }$ is right-degenerate guarantees that $X_2 = X_3$, that $f_{20} = f_{30}$, that the $1$-morphism $f_{32}$ is the identity $\operatorname{id}_{X_2}$, and that $\mu _{320}$ is the left unit constraint $\lambda _{ f_{20} }$. 
To extend $\sigma _0$ to a $3$-simplex of $\operatorname{N}_{\bullet }^{\operatorname{D}}$, we must show that there exists a $2$-morphism $\mu _{210}: f_{21} \circ f_{10} \Rightarrow f_{20}$ for which the diagram \begin{equation} \begin{gathered}\label{equation:3-simplex-of-Dusk} \xymatrix@R =50pt@C=50pt{ f_{32} \circ (f_{21} \circ f_{10} ) \ar@ {=>}[rr]^-{\alpha }_-{\sim } \ar@ {=>}[d]_{ \operatorname{id}_{ f_{32}} \circ \mu _{210} } & & ( f_{32} \circ f_{21} ) \circ f_{10} \ar@ {=>}[d]^{ \mu _{321} \circ \operatorname{id}_{ f_{10} }} \\ f_{32} \circ f_{20} \ar@ {=>}[dr]_{ \mu _{320} } & & f_{31} \circ f_{10} \ar@ {=>}[dl]^{ \mu _{310} } \\ & f_{30} & } \end{gathered} \end{equation} is commutative, where $\alpha = \alpha _{f_{32}, f_{21}, f_{10} }$ is the associativity constraint for the composition of $1$-morphisms in $\operatorname{\mathcal{C}}$ (Proposition 2.3.1.9). This commutativity can be rewritten as an equation \[ \mu _{320}(\operatorname{id}_{ f_{32} } \circ \mu _{210}) = \mu _{310} (\mu _{321} \circ \operatorname{id}_{ f_{10}} ) \alpha . \] This equation has a unique solution, because $\mu _{320}$ is invertible and horizontal composition with $f_{32}$ induces an equivalence of categories $\underline{\operatorname{Hom}}_{\operatorname{\mathcal{C}}}( X_0, X_2) \rightarrow \underline{\operatorname{Hom}}_{\operatorname{\mathcal{C}}}( X_0, X_3 )$. Suppose that $n=4$. The restriction of $\sigma _0$ to the $2$-skeleton of $\Delta ^4$ can be identified with a collection of objects $\{ X_ i \} _{0 \leq i \leq 4}$, $1$-morphisms $\{ f_{ji}: X_ i \rightarrow X_ j \} _{0 \leq i < j \leq 4}$, and $2$-morphisms $\{ \mu _{kji}: f_{kj} \circ f_{ji} \Rightarrow f_{ki} \} _{0 \leq i < j < k \leq 4}$ in the $2$-category $\operatorname{\mathcal{C}}$. The assumption that $\sigma _0|_{ \operatorname{N}_{\bullet }(\{ 0 < n-1 < n\} ) }$ is right-degenerate guarantees that $X_3 = X_4$, that $f_{30} = f_{40}$, that the $1$-morphism $f_{43}$ is the identity $\operatorname{id}_{X_3}$, and that $\mu _{430}$ is the left unit constraint $\lambda _{ f_{30} }$. Consider the diagram \[ \xymatrix@C =0pt{ f_{43} (f_{31} f_{10} ) \ar@ {=>}[rrrr]^{\sim } \ar@ {=>}[ddddd]^{ \mu _{310} } & & & & (f_{43} f_{31}) f_{10} \ar@ {=>}[ddddd]^{\mu _{431} } \\ & f_{43}( (f_{32} f_{21} ) f_{10} ) \ar@ {=>}[ul]_{\mu _{321}} \ar@ {=>}[rr]^{\sim } & & (f_{43} (f_{32} f_{21})) f_{10} \ar@ {=>}[ur]^{\mu _{321}} \ar@ {=>}[d]^{\sim } & \\ & f_{43} ( f_{32} (f_{21} f_{10} ) ) \ar@ {=>}[u]^{\sim } \ar@ {=>}[dr]^{ \sim } \ar@ {=>}[d]^{\mu _{210}} & & (( f_{43} f_{32}) f_{21}) f_{10} \ar@ {=>}[d]^{\mu _{432} } & \\ & f_{43} ( f_{32} f_{20} ) \ar@ {=>}[d]^{\sim } \ar@ {=>}[ddl]_{ \mu _{320} } & (f_{43} f_{32} ) (f_{21} f_{10}) \ar@ {=>}[dl]_{\mu _{210} } \ar@ {=>}[dr]^{ \mu _{432} } \ar@ {=>}[ur]^{\sim } & ( f_{42} f_{21} ) f_{10} \ar@ {=>}[ddr]^{ \mu _{421} } \ar@ {=>}[d]_-{\sim } & \\ & (f_{43} f_{32} ) f_{20} \ar@ {=>}[r]_-{\mu _{432}} & f_{42} f_{20} \ar@ {=>}[d]^{\mu _{420}} & f_{42} (f_{21} f_{10} ) \ar@ {=>}[l]^-{\mu _{210}} & \\ f_{43} f_{30} \ar@ {=>}[rr]^{\mu _{430}}_-{\sim } & & f_{04} & & f_{41} f_{10}; \ar@ {=>}[ll]_{ \mu _{410} } } \] in the category $\underline{\operatorname{Hom}}_{\operatorname{\mathcal{C}}}(X_0, X_4)$, where the unlabeled $2$-morphisms are given by the associativity constraints. Note that the $4$-cycles in this diagram commute by functoriality, and the central $5$-cycle commutes by the pentagon identity of $\operatorname{\mathcal{C}}$. 
Our assumption that $\sigma _0$ is defined on the horn $\Lambda ^{4}_{4}$ guarantees that pentagonal cycles on the right and bottom of the diagram are commutative and that the outer cycle commutes. Since the $2$-morphism $\mu _{430}$ is invertible, a diagram chase shows that the pentagonal cycle on the left of the diagram also commutes. Since $f_{43}$ is an identity $1$-morphism, horizontal composition with $f_{43}$ is isomorphic to the identity (via the left unit constraint of Construction 2.2.1.11) and is therefore faithful. It follows that the diagram (5.34) is commutative, so that $\sigma _0$ extends (uniquely) to a $4$-simplex of $\operatorname{N}_{\bullet }^{\operatorname{D}}(\operatorname{\mathcal{C}})$. If $n \geq 5$, then the horn $\Lambda ^{n}_{n}$ contains the $3$-skeleton of $\Delta ^ n$. In this case, the morphism $\sigma _0: \Lambda ^ n_ n \rightarrow \operatorname{N}_{\bullet }^{\operatorname{D}}(\operatorname{\mathcal{C}})$ extends uniquely to an $n$-simplex of $\operatorname{N}_{\bullet }^{\operatorname{D}}(\operatorname{\mathcal{C}})$ by virtue of Corollary 2.3.1.10. $\square$
CommonCrawl
Feature selection and dimension reduction for single-cell RNA-Seq based on a multinomial model F. William Townes ORCID: orcid.org/0000-0003-0320-67871,2, Stephanie C. Hicks3, Martin J. Aryee1,4,5,6 & Rafael A. Irizarry1,7 A Correction to this article was published on 22 July 2020 Single-cell RNA-Seq (scRNA-Seq) profiles gene expression of individual cells. Recent scRNA-Seq datasets have incorporated unique molecular identifiers (UMIs). Using negative controls, we show UMI counts follow multinomial sampling with no zero inflation. Current normalization procedures such as log of counts per million and feature selection by highly variable genes produce false variability in dimension reduction. We propose simple multinomial methods, including generalized principal component analysis (GLM-PCA) for non-normal distributions, and feature selection using deviance. These methods outperform the current practice in a downstream clustering assessment using ground truth datasets. Single-cell RNA-Seq (scRNA-Seq) is a powerful tool for profiling gene expression patterns in individual cells, facilitating a variety of analyses such as identification of novel cell types [1, 2]. In a typical protocol, single cells are isolated in liquid droplets, and messenger RNA (mRNA) is captured from each cell, converted to cDNA by reverse transcriptase (RT), then amplified using polymerase chain reaction (PCR) [3–5]. Finally, fragments are sequenced, and expression of a gene in a cell is quantified by the number of sequencing reads that mapped to that gene [6]. A crucial difference between scRNA-Seq and traditional bulk RNA-Seq is the low quantity of mRNA isolated from individual cells, which requires a larger number of PCR cycles to produce enough material for sequencing (bulk RNA-Seq comingles thousands of cells per sample). For example, the popular 10x Genomics protocol uses 14 cycles [5]. Thus, many of the reads counted in scRNA-Seq are duplicates of a single mRNA molecule in the original cell [7]. Full-length protocols such as SMART-Seq2 [8] analyze these read counts directly, and several methods have been developed to facilitate this [9]. However, in many experiments, it is desirable to analyze larger numbers of cells than possible with full-length protocols, and isoform-level inference may be unnecessary. Under such conditions, it is advantagous to include unique molecular identifiers (UMIs) which enable computational removal of PCR duplicates [10, 11], producing UMI counts. Although a zero UMI count is equivalent to a zero read count, nonzero read counts are larger than their corresponding UMI counts. In general, all scRNA-Seq data contain large numbers of zero counts (often >90% of the data). Here, we focus on the analysis of scRNA-Seq data with UMI counts. Starting from raw counts, a scRNA-Seq data analysis typically includes normalization, feature selection, and dimension reduction steps. Normalization seeks to adjust for differences in experimental conditions between samples (individual cells), so that these do not confound true biological differences. For example, the efficiency of mRNA capture and RT is variable between samples (technical variation), causing different cells to have different total UMI counts, even if the number of molecules in the original cells is identical. Feature selection refers to excluding uninformative genes such as those which exhibit no meaningful biological variation across samples. 
Since scRNA-Seq experiments usually examine cells within a single tissue, only a small fraction of genes are expected to be informative since many genes are biologically variable only across different tissues. Dimension reduction aims to embed each cell's high-dimensional expression profile into a low-dimensional representation to facilitate visualization and clustering. While a plethora of methods [5, 12–15] have been developed for each of these steps, here, we describe what is considered to be the standard pipeline [15]. First, raw counts are normalized by scaling of sample-specific size factors, followed by log transformation, which attempts to reduce skewness. Next, feature selection involves identifying the top 500–2000 genes by computing either their coefficient of variation (highly variable genes [16, 17]) or average expression level (highly expressed genes) across all cells [15]. Alternatively, highly dropout genes may be retained [18]. Principal component analysis (PCA) [19] is the most popular dimension reduction method (see for example tutorials for Seurat [17] and Cell Ranger [5]). PCA compresses each cell's 2000-dimensional expression profile into, say, a 10-dimensional vector of principal component coordinates or latent factors. Prior to PCA, data are usually centered and scaled so that each gene has mean 0 and standard deviation 1 (z-score transformation). Finally, a clustering algorithm can be applied to group cells with similar representations in the low-dimensional PCA space. Despite the appealing simplicity of this standard pipeline, the characteristics of scRNA-Seq UMI counts present difficulties at each stage. Many normalization schemes derived from bulk RNA-Seq cannot compute size factors stably in the presence of large numbers of zeros [20]. A numerically stable and popular method is to set the size factor for each cell as the total counts divided by 106 (counts per million, CPM). Note that CPM does not alter zeros, which dominate scRNA-Seq data. Log transformation is not possible for exact zeros, so it is common practice to add a small pseudocount such as 1 to all normalized counts prior to taking the log. The choice of pseudocount is arbitrary and can introduce subtle biases in the transformed data [21]. For a statistical interpretation of the pseudocount, see the "Methods" section. Similarly, the use of highly variable genes for feature selection is somewhat arbitrary since the observed variability will depend on the pseudocount: pseudocounts close to zero arbitrarily increase the variance of genes with zero counts. Finally, PCA implicitly relies on Euclidean geometry, which may not be appropriate for highly sparse, discrete, and skewed data, even after normalizations and transformations [22]. Widely used methods for the analysis of scRNA-Seq lack statistically rigorous justification based on a plausible data generating a mechanism for UMI counts. Instead, it appears many of the techniques have been borrowed from the data analysis pipelines developed for read counts, especially those based on bulk RNA-Seq [23]. For example, models based on the lognormal distribution cannot account for exact zeros, motivating the development of zero-inflated lognormal models for scRNA-Seq read counts [24–27]. Alternatively, ZINB-WAVE uses a zero-inflated negative binomial model for dimension reduction of read counts [28]. 
However, as shown below, the sampling distribution of UMI counts is not zero inflated [29] and differs markedly from read counts, so application of read count models to UMI counts needs either theoretical or empirical justification. We present a unifying statistical foundation for scRNA-Seq with UMI counts based on the multinomial distribution. The multinomial model adequately describes negative control data, and there is no need to model zero inflation. We show the mechanism by which PCA on log-normalized UMI counts can lead to distorted low-dimensional factors and false discoveries. We identify the source of the frequently observed and undesirable fact that the fraction of zeros reported in each cell drives the first principal component in most experiments [30]. To remove these distortions, we propose the use of GLM-PCA, a generalization of PCA to exponential family likelihoods [31]. GLM-PCA operates on raw counts, avoiding the pitfalls of normalization. We also demonstrate that applying PCA to deviance or Pearson residuals provides a useful and fast approximation to GLM-PCA. We provide a closed-form deviance statistic as a feature selection method. We systematically compare the performance of all combinations of methods using ground truth datasets and assessment procedures from [15]. We conclude by suggesting best practices. We used 9 public UMI count datasets to benchmark our methods (Table 1). The first dataset was a highly controlled experiment specifically designed to understand the technical variability. No actual cells were used to generate this dataset. Instead, each droplet received the same ratio of 92 synthetic spike-in RNA molecules from External RNA Controls Consortium (ERCC). We refer to this dataset as the technical replicates negative control as there is no biological variability whatsoever, and in principle, each expression profile should be the same. Table 1 Single cell RNA-Seq datasets used The second and third datasets contained cells from homogeneous populations purified using fluorescence-activated cell sorting (FACS). We refer to these datasets as biological replicates negative controls. Because these cells were all the same type, we did not expect to observe any significant differences in unsupervised analysis. The 10 × Zheng monocytes data had low total UMI counts, while the SMARTer Tung data had high counts. The fourth and fifth datasets were created by [15]. The authors allocated FACS-purified peripheral blood mononuclear cells (PBMCs) from 10 × data [5] equally into four (Zheng 4eq dataset) and eight (Zheng 8eq dataset) clusters, respectively. In these positive control datasets, the cluster identity of all cells was assigned independently of gene expression (using FACS), so they served as the ground truth labels. The sixth and seventh datasets contained a wider variety of cell types. However, the cluster identities were determined computationally by the original authors' unsupervised analyses and could not serve as a ground truth. The 10 × Haber intestinal dataset had low total UMI counts, while the CEL-Seq2 Muraro pancreas dataset had high counts. The final Zheng dataset consisted of a larger number of unsorted PBMCs and was used to compare computational speed of different dimension reduction algorithms. We refer to it as the PBMC 68K dataset. 
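As a concrete sketch of the GLM-PCA idea described above (under its Poisson approximation to the multinomial), the following toy Python/NumPy implementation fits log-linear factors to raw UMI counts by plain gradient ascent, using log total UMIs as an offset. The optimizer, learning rate, and initialization are arbitrary illustrative assumptions; this is not the glmpca implementation released by the authors.

```python
import numpy as np

def poisson_glmpca(Y, L=2, n_iter=500, lr=1e-3, seed=0):
    """Toy Poisson GLM-PCA: y_ij ~ Poisson(mu_ij), log mu_ij = log n_i + r_j + u_i . v_j.

    Y is a (cells x genes) matrix of raw UMI counts; returns cell factors U
    (cells x L) and gene loadings V (genes x L). No normalization or log
    transform of the counts is performed.
    """
    rng = np.random.default_rng(seed)
    n_cells, n_genes = Y.shape
    offset = np.log(Y.sum(axis=1, keepdims=True))                     # log total UMIs per cell
    r = np.log(Y.mean(axis=0) + 1e-8) - np.log(Y.sum(axis=1).mean())  # rough gene intercepts
    U = 1e-3 * rng.standard_normal((n_cells, L))
    V = 1e-3 * rng.standard_normal((n_genes, L))
    for _ in range(n_iter):
        mu = np.exp(offset + r[None, :] + U @ V.T)   # fitted means
        R = Y - mu                                   # d(Poisson loglik)/d(linear predictor)
        U += lr * (R @ V)                            # gradient ascent updates
        V += lr * (R.T @ U)
        r += lr * R.mean(axis=0)                     # scaled gradient for the intercepts
    return U, V

# Smoke test on simulated counts (no real structure, just checks that it runs)
Y = np.random.default_rng(1).poisson(2.0, size=(300, 200))
U, V = poisson_glmpca(Y, L=2)
```

The returned U plays the same role as PCA cell scores and can be passed to a clustering algorithm, while V holds the gene loadings; the learning rate and iteration count would need tuning on real data.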
UMI count distribution differs from reads To illustrate the marked difference between UMI count distributions and read count distributions, we created histograms from individual genes and spike-ins of the negative control data. Here, the UMI counts are the computationally de-duplicated versions of the read counts; both measurements are from the same experiment, so no differences are due to technical or biological variation. The results suggest that while read counts appear zero-inflated and multimodal, UMI counts follow a discrete distribution with no zero inflation (Additional file 1: Figure S1). The apparent zero inflation in read counts is a result of PCR duplicates. Multinomial sampling distribution for UMI counts Consider a single cell i containing ti total mRNA transcripts. Let ni be the total number of UMIs for the same cell. When the cell is processed by a scRNA-Seq protocol, it is lysed, then some fraction of the transcripts are captured by beads within the droplets. A series of complex biochemical reactions occur, including attachment of barcodes and UMIs, and reverse transcription of the captured mRNA to a cDNA molecule. Finally, the cDNA is sequenced, and PCR duplicates are removed to generate the UMI counts [5]. In each of these stages, some fraction of the molecules from the previous stage are lost [5, 7, 32]. In particular, reverse transcriptase is an inefficient and error-prone enzyme [35]. Therefore, the number of UMI counts representing the cell is much less than the number of transcripts in the original cell (ni≪ti). Specifically, ni typically ranges from 1000−10,000 while ti is estimated to be approximately 200,000 for a typical mammalian cell [36]. Furthermore, which molecules are selected and successfully become UMIs is a random process. Let xij be the true number of mRNA transcripts of gene j in cell i, and yij be the UMI count for the same gene and cell. We define the relative abundance πij as the true number of mRNA transcripts represented by gene j in cell i divided by the total number of mRNA transcripts in cell i. Relative abundance is given by πij=xij/ti where total transcripts \(t_{i}=\sum _{j} x_{ij}\). Since ni≪ti, there is a "competition to be counted" [37]; genes with large relative abundance πij in the original cell are more likely to have nonzero UMI counts, but genes with small relative abundances may be observed with UMI counts of exact zeros. The UMI counts yij are a multinomial sample of the true biological counts xij, containing only relative information about expression patterns in the cell [37, 38]. The multinomial distribution can be approximated by independent Poisson distributions and overdispersed (Dirichlet) multinomials by independent negative binomial distributions. These approximations are useful for computational tractability. Details are provided in the "Methods" section. The multinomial model makes two predictions which we verified using negative control data. First, the fraction of zeros in a sample (cell or droplet) is inversely related to the total number of UMIs in that sample. Second, the probability of an endogenous gene or ERCC spike-in having zero counts is a decreasing function of its mean expression (equations provided in the "Methods" section). Both of these predictions were validated by the negative control data (Fig. 1). In particular, the empirical probability of a gene being zero across droplets was well calibrated to the theoretical prediction based on the multinomial model. 
This also demonstrates that UMI counts are not zero inflated, consistent with [29]. Multinomial model adequately characterizes sampling distributions of technical and biological replicates negative control data. a Fraction of zeros is plotted against the total number of UMI in each droplet for the technical replicates. b As a but for cells in the biological replicates (monocytes). c After down-sampling replicates to 10,000 UMIs per droplet to remove variability due to the differences in sequencing depth, the fraction of zeros is computed for each gene and plotted against the log of expression across all samples for the technical replicates data. The solid curve is theoretical probability of observing a zero as a function of the expected counts derived from the multinomial model (blue) and its Poisson approximation (green). d As c but for the biological replicates (monocytes) dataset and after down-sampling to 575 UMIs per cell. Here, we also add the theoretical probability derived from a negative binomial model (red) To further validate the multinomial model, we assessed goodness-of-fit of seven possible null distributions to both the Tung and Zheng monocytes negative control datasets (Additional file 1: Figure S2). When applied to UMI counts, the multinomial, Dirichlet-multinomial, and Poisson (as approximation to multinomial) distributions fit best. When applied to read counts, the zero-inflated lognormal was the best fitting distribution followed by the Dirichlet-multinomial. These results are consistent with [39], which also found that the relationship between average expression and zero probability follows the theoretical curve predicted by a Poisson model using negative control data processed with Indrop [4] and Dropseq [3] protocols. These are droplet protocols with typically low counts. It has been argued that the Poisson model is insufficient to describe the sampling distribution of genes with high counts and the negative binomial model is more appropriate [11]. The Tung dataset contained high counts, and we nevertheless found the Poisson gave a better fit than the negative binomial. However, the difference was not dramatic, so our results do not preclude the negative binomial as a reasonable sampling distribution for UMI counts. Taken together, these results suggest our data-generating mechanism is an accurate model of technical noise in real data. Normalization and log transformation distorts UMI data Standard scRNA-Seq analysis involves normalizing raw counts using size factors, applying a log transformation with a pseudocount, and then centering and scaling each gene before dimension reduction. The most popular normalization is counts per million (CPM). The CPM are defined as (yij/ni)×106 (i.e., the size factor is ni/106). This is equivalent to the maximum likelihood estimator (MLE) for relative abundance \(\hat {\pi }_{ij}\) multiplied by 106. The log-CPM are then \(\log _{2}(c+\hat {\pi }_{ij}10^{6}) = \log _{2}(\tilde {\pi }_{ij})+C\), where \(\tilde {\pi }_{ij}\) is a maximum a posteriori estimator (MAP) for πij (mathematical justification and interpretation of this approach provided in the "Methods" section). The additive constant C is irrelevant if data are centered for each gene after log transformation, as is common practice. Thus, normalization of raw counts is equivalent to using MLEs or MAP estimators of the relative abundances. 
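In code, the size-factor normalization and log transform described above amount to only a few lines; in the Python/NumPy sketch below, a small simulated count matrix stands in for real data.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.poisson(0.3, size=(5, 1000))      # toy UMI counts: 5 cells x 1000 genes
n = Y.sum(axis=1, keepdims=True)          # total UMIs per cell

pi_mle = Y / n                            # MLE of relative abundance pi_ij
cpm = pi_mle * 1e6                        # counts per million
log_cpm = np.log2(1 + cpm)                # log transform with pseudocount c = 1

# Up to an additive constant, log-CPM is the log of a pseudocount-shifted
# estimate of the relative abundances:
#   log2(1 + pi_mle * 1e6) = log2(1e-6 + pi_mle) + log2(1e6)
assert np.allclose(log_cpm, np.log2(1e-6 + pi_mle) + np.log2(1e6))

# Zeros stay at 0, while a count of 1 jumps to roughly log2(1e6 / n_i):
print(np.log2(1 + 1e6 / n.ravel()))
```

The size of that jump depends only on the total count of each cell, which is why the fraction of zeros and the sequencing depth come to dominate the transformed values.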
Log transformation of MLEs is not possible for UMI counts due to exact zeros, while log transformation of MAP estimators of πij systematically distorts the differences between zero and nonzero UMI counts, depending on the arbitrary pseudocount c (derivations provided in the "Methods" section). To illustrate this phenomenon, we examined the distribution of an illustrative gene before and after the log transform with varying normalizations using the biological replicates negative control data (Fig. 2). Consistent with our theoretical predictions, this artificially caused the distribution to appear zero inflated and exaggerated differences between cells based on whether the count was zero or nonzero. Example of how current approaches to normalization and transformation artificially distort differences between zero and nonzero counts. a UMI count distribution for gene ENSG00000114391 in the monocytes biological replicates negative control dataset. b Counts per million (CPM) distribution for the exact same count data. c Distribution of log2(1+CPM) values for the exact same count data Focusing on the entire negative control datasets, we applied PCA to log-CPM values. We observed a strong correlation (r=0.8 for technical and r=0.98 for monocytes biological replicates) between the first principal component (PC) and the fraction of zeros, consistent with [30]. Application of PCA to CPM values without log transform reduced this correlation to r=0.1 for technical and r=0.7 for monocytes biological replicates. Additionally, the first PC of log-CPM correlated with the log of total UMI, which is consistent with the multinomial model (Fig. 3). Note that in datasets with strong biological variability, the nuisance variation from zero fraction and total counts could appear in secondary PCs rather than the first PC, but it would still confound downstream analyses. Based on these results, the log transformation is not necessary and in fact detrimental for the analysis of UMI counts. The benefits of avoiding normalization by instead directly modeling raw counts have been demonstrated in the context of differential expression [40]. Where normalization is unavoidable, we propose the use of approximate multinomial deviance residuals (defined in the "Residuals and z-scores" section) instead of log-transformed CPM. Current approaches to normalization and transformation induce variability in the fraction of zeros across cells to become the largest source of variability which in turn biases clustering algorithms to produce false-positive results based on distorted latent factors. a First principal component (PC) from the technical replicates dataset plotted against fraction of zeros for each cell. A red to blue color scale represents total UMIs per cell. b As a but for the monocytes biological replicates data. c Using the technical replicates, we applied t-distributed stochastic neighbor embedding (tSNE) with perplexity 30 to the top 50 PCs computed from log-CPM. The first 2 tSNE dimensions are shown with a blue to red color scale representing the fraction of zeros. d As c but for the biological replicates data. Here, we do not expect to find differences, yet we see distorted latent factors being driven by the total UMIs. PCA was applied to 5000 random genes Zero inflation is an artifact of log normalization To see how normalization and log transformation introduce the appearance of zero inflation, consider the following example. 
Let yij be the observed UMI counts following a multinomial distribution with size ni for each cell and relative abundance πj for each gene, constant across cells. Focusing on a single gene j, yij follows a binomial distribution with parameters ni and pj. Assume πj=10−4 and the ni range from 1000−3000, which is consistent with the biological replicates negative control data (Fig. 1 and Additional file 1: Figure S1). Under this assumption, we expect to see about 74–90% zeros, 22–30% ones, and less than 4% values above one. However, notice that after normalization to CPM and log transformation, all the zeros remain log2(1+0)=0, yet the ones turn into values ranging from log2(1+1/3000×106)= log2(334)≈8.4 to log2(1001)≈10. The few values that are 2 will have values ranging from log2(668)≈9.4 to log2(2001)≈11. The large, artificial gap between zero and nonzero values makes the log-normalized data appear zero-inflated (Fig. 2). The variability in CPM values across cells is almost completely driven by the variability in ni. Indeed, it shows up as the primary source of variation in PCA plots (Fig. 3). Generalized PCA for dimension reduction of sparse counts While PCA is a popular dimension reduction method, it is implicitly based on Euclidean distance, which corresponds to maximizing a Gaussian likelihood. Since UMI counts are not normally distributed, even when normalized and log transformed, this distance metric is inappropriate [41], causing PCA to produce distorted latent factors (Fig. 3). We propose the use of PCA for generalized linear models (GLMs) [31] or GLM-PCA as a more appropriate alternative. The GLM-PCA framework allows for a wide variety of likelihoods suitable for data types such as counts and binary values. While the multinomial likelihood is ideal for modeling technical variability in scRNA-Seq UMI counts (Fig. 1), in many cases, there may be excess biological variability present as well. For example, if we wish to capture variability due to clusters of different cell types in a dimension reduction, we may wish to exclude biological variability due to cell cycle. Biological variability not accounted for by the sampling distribution may be accomodated by using a Dirichlet-multinomial likelihood, which is overdispersed relative to the multinomial. In practice, both the multinomial and Dirichlet-multinomial are computationally intractable and may be approximated by the Poisson and negative binomial likelihoods, respectively (detailed derivations provided in the "Methods" section). We implemented both negative binomial and Poisson GLM-PCA, but we focused primarily on the latter in our assessments for simplicity of exposition. Intuitively, using Poisson instead of negative binomial implies, we assume the biological variability is captured by the factor model and the unwanted biological variability is small relative to the sampling variability. Our implementation also allows the user to adjust for gene-specific or cell-specific covariates (such as batch labels) as part of the overall model. We ran Poisson GLM-PCA on the technical and biological (monocytes) replicates negative control datasets and found it removed the spurious correlation between the first dimension and the total UMIs and fraction of zeros (Fig. 4). To examine GLM-PCA as a visualization tool, we ran Poisson and negative binomial GLM-PCA along with competing methods on the 2 ground truth datasets (Additional file 1: Figure S3). For the Zheng 4eq dataset, we directly reduced to 2 dimensions. 
For the Zheng 8eq dataset, we reduced to 15 dimensions then applied UMAP [42]. While all methods effectively separated T cells from other PBMCs, GLM-PCA methods also separated memory and naive cytotoxic cells from the other subtypes of T cells. This separation was not visible with PCA on log-CPM. Computational speed is discussed in the "Computational efficiency of multinomial models" section. GLM-PCA dimension reduction is not affected by unwanted fraction of zeros variability and avoids false-positive results. a First GLM-PCA dimension (analogous to the first principal component) plotted against the fraction of zeros for the technical replicates with colors representing the total UMIs. b As a but using monocytes biological replicates. c Using the technical replicates, we applied t-distributed stochastic neighbor embedding (tSNE) with perplexity 30 to the top 50 GLM-PCA dimensions. The first 2 tSNE dimensions are shown with a blue to red color scale representing the fraction of zeros. d As c but for the biological replicates data. GLM-PCA using the Poisson approximation to the multinomial was applied to the same 5000 random genes as in Fig. 3 Deviance residuals provide fast approximation to GLM-PCA One disadvantage of GLM-PCA is it depends on an iterative algorithm to obtain estimates for the latent factors and is at least ten times slower than PCA. We therefore propose a fast approximation to GLM-PCA. When using PCA a common first step is to center and scale the data for each gene as z-scores. This is equivalent to the following procedure. First, specify a null model of constant gene expression across cells, assuming a normal distribution. Next, find the MLEs of its parameters for each gene (the mean and variance). Finally, compute the residuals of the model as the z-scores (derivation provided in the "Methods" section). The fact that scRNA-Seq data are skewed, discrete, and possessing many zeros suggests the normality assumption may be inappropriate. Further, using z-scores does not account for variability in total UMIs across cells. Instead, we propose to replace the normal null model with a multinomial null model as a better match to the data-generating mechanism. The analogs to z-scores under this model are called deviance and Pearson residuals. Mathematical formulae are presented in the "Methods" section. The use of multinomial residuals enables a fast transformation similar to z-scores that avoids difficulties of normalization and log transformation by directly modeling counts. Additionally, this framework allows straightforward adjustment for covariates such as cell cycle signatures or batch labels. In an illustrative simulation (details in the "Residuals and z-scores" section), residual approximations to GLM-PCA lost accuracy in the presence of strong batch effects, but still outperformed the traditional PCA (Additional file 1: Figure S4). Systematic comparisons on ground truth data are provided in the "Multinomial models improve unsupervised clustering" section. Computational efficiency of multinomial models We measured time to convergence for reduction to two latent dimensions of GLM-PCA, ZINB-WAVE, PCA on log-CPM, PCA on deviance residuals, and PCA on Pearson residuals. Using the top 600 informative genes, we subsampled the PBMC 68K dataset to 680, 6800, and 68,000 cells. All methods scaled approximately linearly with increasing the numbers of cells, but GLM-PCA was 23–63 times faster than ZINB-WAVE across sample sizes (Additional file 1: Figure S5). 
Specifically, GLM-PCA processed 68,000 cells in less than 7 min. The deviance and Pearson residuals methods exhibited speeds comparable to PCA: 9–26 times faster than GLM-PCA. We also timed dimension reduction of the 8eq dataset (3994 cells) from 1500 informative genes to 10 latent dimensions. PCA (with either log-CPM, deviance, or Pearson residuals) took 7 s, GLM-PCA took 4.7 min, and ZINB-WAVE took 86.6 min. Feature selection using deviance Feature selection, or identification of informative genes, may be accomplished by ranking genes using the deviance, which quantifies how well each gene fits a null model of constant expression across cells. Unlike the competing highly variable or highly expressed genes methods, which are sensitive to normalization, ranking genes by deviance operates on raw UMI counts. An approximate multinomial deviance statistic can be computed in closed form (formula provided in the "Methods" section). We compared gene ranks for all three feature selection methods (deviance, highly expressed, and highly variable genes) on the 8eq dataset (Table 1). We found a strong concordance between highly deviant genes and highly expressed genes (Spearman's rank correlation r=0.9987), while highly variable genes correlated weakly with both high expression (r=0.3835) and deviance (r=0.3738). Choosing informative genes by high expression alone would be ineffective if a gene had high but constant expression across cells. To ensure the deviance criterion did not identify such genes, we created a simulation with three types of genes: lowly expressed, high but constantly expressed, and high and variably expressed. Deviance preferentially selected high and variably expressed genes while filtering by highly expressed genes identified the constantly expressed genes before the variably expressed (Additional file 1: Figure S6, Table S1). Furthermore, an examination of the top 1000 genes by each criteria on the Muraro dataset showed that deviance did not identify the same set of genes as highly expressed genes (Additional file 1: Figure S7, Table S2). Empirically, deviance seems to select genes that are both highly expressed and highly variable, which provides a rigorous justification for a common practice. Multinomial models improve unsupervised clustering Dimension reduction with GLM-PCA or its fast multinomial residuals approximation improved clustering performance over competing methods (Fig. 5a, Additional file 1: Figure S8a). Feature selection by multinomial deviance was superior to highly variable genes (Fig. 5b). Dimension reduction with GLM-PCA and feature selection using deviance improves Seurat clustering performance. Each column represents a different ground truth dataset from [15]. a Comparison of dimension reduction methods based on the top 1500 informative genes identified by approximate multinomial deviance. The Poisson approximation to the multinomial was used for GLM-PCA. Dev. resid. PCA, PCA on approximate multinomial deviance residuals. b Comparison of feature selection methods. The top 1500 genes identified by deviance and highly variable genes were passed to 2 different dimension reduction methods: GLM-PCA and PCA on log-transformed CPM. Only the results with the number of clusters within 25% of the true number are presented Using the two ground truth datasets described under the "Datasets" section, we systematically compared the clustering performance of all combinations of previously described methods for normalization, feature selection, and dimension reduction. 
In addition, we compared against ZINB-WAVE since it also avoids requiring the user to pre-process and normalize the UMI count data (e.g., log transformation of CPM) and accounts for varying total UMIs across cells [28]. After obtaining latent factors, we used Seurat's Louvain implementation and k-means to infer clusters, and compared these to the known cell identities using adjusted Rand index (ARI, [43]). This quantified accuracy. We assessed cluster separation using the silhouette coefficient. We varied the number of latent dimensions and number of clusters to assess robustness. Where possible, we used the same combinations of hyperparameters as [15] to facilitate comparisons to their extensive benchmarking (details are provided in the "Methods" section). We compared the Seurat clustering performance of GLM-PCA (with Poisson approximation to multinomial) to running PCA on deviance residuals, which adhere more closely to the normal distribution than log-CPM. We found both of these approximate multinomial methods gave similar results on the 4eq dataset and outperformed PCA on log-CPM z-scores. However, GLM-PCA outperformed the residuals method on the 8eq dataset. Also, performance on ZINB-WAVE factors degraded when the number of latent dimensions increased from 10 to 30, whereas GLM-PCA and its fast approximation with deviance residuals were robust to this change (Fig. 5a). GLM-PCA and its residual approximations produced better cluster separation than PCA or ZINB-WAVE, even in scenarios where all methods had similar accuracy (Additional file 1: Figure S8a). The performance of Pearson residuals was similar to that of deviance residuals (Additional file 1: Figure S9, S10). Focusing on feature selection methods, deviance had higher accuracy than highly variable genes across both datasets and across dimension reduction methods (Fig. 5b). Filtering by highly expressed genes led to similar clustering performance as deviance (Additional file 1: Figure S9), because both criteria identified strongly overlapping gene lists for these data. The combination of feature selection with deviance and dimension reduction with GLM-PCA also improved clustering performance when k-means was used in place of Seurat (Additional file 1: Figure S11). A complete table of results is publicly available (see the "Availability of data and materials" section). Finally, we examined the clustering performance of competing dimension reduction methods on two public datasets with more complex subtypes (Table 1). The 10 × Haber dataset [33] was annotated with 12 types of enteroendocrine cells from the intestine. The CEL-Seq2 Muraro dataset [34] was annotated with 9 types of pancreatic cells. Since these cluster labels were computationally derived, they did not constitute a ground truth comparison. Nevertheless, GLM-PCA had the closest concordance with the original authors' annotation in both datasets (Additional file 1: Tables S3, S4). We have outlined a statistical framework for analysis of scRNA-Seq data with UMI counts based on a multinomial model, providing effective and simple to compute methods for feature selection and dimension reduction. We found that UMI count distributions differ dramatically from read counts, are well-described by a multinomial distribution, and are not zero inflated. Log transformation of normalized UMI counts is detrimental, because it artificially exaggerates the differences between zeros and all other values. 
For feature selection, or identification of informative genes, deviance is a more effective criterion than highly variable genes. Dimension reduction via GLM-PCA, or its fast approximation using residuals from a multinomial model, leads to better clustering performance than PCA on z-scores of log-CPM. Although our methods were inspired by scRNA-Seq UMI counts, they may be useful for a wider array of data sources. Any high-dimensional, sparse dataset where samples contain only relative information in the form of counts may conceivably be modeled by the multinomial distribution. Under such scenarios, our methods are likely to be more effective than applying log transformations and standard PCA. A possible example is microbiome data. We have not addressed major topics in the scRNA-Seq literature such as pseudotime inference [44], differential expression [45], and spatial analysis [46]. However, the statistical ideas outlined here can also be used to improve methods in these more specialized types of analyses. Our results have focused on (generalized) linear models for simplicity of exposition. Recently, several promising nonlinear dimension reductions for scRNA-Seq have been proposed. The variational autoencoder (VAE, a type of neural network) method scVI [47] utilizes a negative binomial likelihood in the decoder, while the encoder relies on log-normalized input data for numerical stability. The Gaussian process method tGPLVM [48] models log-transformed counts. In both cases, we suggest replacing log-transformed values with deviance residuals to improve performance. Nonlinear dimension reduction methods may also depend on feature selection to reduce memory consumption and speed computation; here, our deviance method may be utilized as an alternative to high variability for screening informative genes.
Multinomial model for scRNA-Seq
Let yij be the observed UMI counts for cell or droplet i and gene or spike-in j. Let \(n_{i}=\sum _{j} y_{ij}\) be the total UMIs in the sample, and πij be the unknown true relative abundance of gene j in cell i. The random vector \(\vec {y}_{i} = (y_{i1},\ldots,y_{iJ})^{\top }\) with constraint \(\sum _{j} y_{ij}=n_{i}\) follows a multinomial distribution with density function: $$f(\vec{y}_{i}) = \binom{n_{i}}{y_{i1},\ldots,y_{iJ}} \prod_{j} \pi_{ij}^{y_{ij}} $$ Focusing on a single gene j at a time, the marginal distribution of yij is binomial with parameters ni and πij. The marginal mean is E[yij]=niπij=μij, the marginal variance is \(\text {var}[y_{ij}] = n_{i} \pi _{ij}(1-\pi _{ij}) = \mu _{ij}-\frac {1}{n_{i}}\mu _{ij}^{2}\), and the marginal probability of a zero count is \((1-\pi _{ij})^{n_{i}} = \left (1-\frac {\mu _{ij}}{n_{i}}\right)^{n_{i}}\). The correlation between two genes j,k is: $$\text{cor}[y_{ij},y_{ik}] = \frac{-\sqrt{\pi_{ij}\pi_{ik}}}{\sqrt{(1-\pi_{ij})(1-\pi_{ik})}} $$ The correlation is induced by the sum-to-ni constraint. As an extreme example, if there are only two genes (J=2), increasing the count of the first gene automatically reduces the count of the second gene, since they must add up to ni under multinomial sampling. This means that when J=2, there is a perfect anti-correlation between the gene counts, which has nothing to do with biology. More generally, when either J or ni is small, gene counts will be negatively correlated independently of biological gene-gene correlations, and it is not possible to analyze the data on a gene-by-gene basis (for example, by ranking and filtering genes for feature selection).
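These marginal formulas are easy to check numerically. The following toy simulation is our own illustration (the number of genes J, the abundances, and the deliberately small total count per cell are arbitrary choices); it verifies the marginal mean, variance, zero probability, and the negative correlation induced by the sum constraint:

import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.3, 0.15, 0.05])      # relative abundances for J = 4 genes
n = 20                                      # total counts per cell, kept small on purpose
Y = rng.multinomial(n, pi, size=200_000)    # simulated cells

mu = n * pi
print(Y.mean(axis=0), mu)                   # marginal means
print(Y.var(axis=0), mu - mu**2 / n)        # marginal variances
print((Y == 0).mean(axis=0), (1 - pi)**n)   # zero probabilities

# correlation between genes 0 and 1: empirical vs. the formula above (about -0.65 here)
emp = np.corrcoef(Y[:, 0], Y[:, 1])[0, 1]
theo = -np.sqrt(pi[0] * pi[1]) / np.sqrt((1 - pi[0]) * (1 - pi[1]))
print(emp, theo)

With realistic scRNA-Seq values of J and n the induced correlation becomes negligible, which is what motivates the approximations discussed next.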
Rather than analyzing genes one at a time, comparisons are only possible between pairwise ratios of gene expression values [49]. Yet, this type of analysis is difficult to interpret and computationally expensive for large numbers of genes (i.e., in high dimensions). Fortunately, under certain assumptions, more tractable approximations may be substituted for the true multinomial distribution. First, note that if correlation is ignored, the multinomial may be approximated by J independent binomial distributions. Intuitively, this approximation will be reasonable if all πij are very small, which is likely to be satisfied for scRNA-Seq if the number of genes J is large and no single gene constitutes the majority of mRNAs in the cell. If ni is large and πij is small, each binomial distribution can be further approximated by a Poisson with mean niπij. Alternatively, the multinomial can be constructed by drawing J independent Poisson random variables and conditioning on their sum. If J and ni are large, the difference between the conditional (multinomial) distribution and the independent Poissons becomes negligible. Since in practice ni is large, the Poisson approximation to the multinomial may be reasonable [50–53]. The multinomial model does not account for biological variability. As a result, an overdispersed version of the multinomial model may be necessary. This can be accommodated with the Dirichlet-multinomial distribution. Let \(\vec {y}_{i}\) be distributed as a multinomial conditional on the relative abundance parameter vector \(\vec {\pi }_{i}=(\pi _{i1},\ldots,\pi _{iJ})^{\top }\). If \(\vec {\pi }_{i}\) is itself a random variable with a symmetric Dirichlet distribution having shape parameter α, the marginal distribution of \(\vec {y}_{i}\) is Dirichlet-multinomial. This distribution can itself be approximated by independent negative binomials. First, note that a symmetric Dirichlet random vector can be constructed by drawing J independent gamma variates with shape parameter α and dividing by their sum. Suppose (as above) we approximate the conditional multinomial distribution of \(\vec {y}_{i}\) such that yij follows an approximate Poisson distribution with mean niπij. Let λij be a collection of non-negative random variables such that \(\pi _{ij}=\frac {\lambda _{ij}}{\sum _{j} \lambda _{ij}}\). We require that \(\vec {\pi }_{i}\) follows a symmetric Dirichlet, which is accomplished by having λij follow independent gamma distributions with shape α and mean ni/J. This implies \(\sum _{j} \lambda _{ij}\) follows a gamma with shape Jα and mean ni. As J→∞, this distribution converges to a point mass at ni, so for large J (satisfied by scRNA-Seq), \(\sum _{j} \lambda _{ij}\approx n_{i}\). This implies that yij approximately follows a conditional Poisson distribution with mean λij, where λij is itself a gamma random variable with mean ni/J and shape α. If we then integrate out λij, we obtain the marginal distribution of yij as negative binomial with shape α and mean ni/J. Hence, a negative binomial model for count data may be regarded as an approximation to an overdispersed Dirichlet-multinomial model. Parameter estimation with multinomial models (and their binomial or Poisson approximations) is straightforward. First, suppose we observe replicate samples \(\vec {y}_{i}\), i=1,…,I from the same underlying population of molecules, where the relative abundance of gene j is πj. This is a null model because it assumes each gene has a constant expected expression level and there is no biological variation across samples.
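Before moving on to estimation, the negative binomial approximation derived above can be checked with a quick simulation. Following the gamma-Poisson construction in the text, a single gene's rate λ is drawn from a gamma with shape α and mean n/J (using the large-J approximation that the gamma variates sum to approximately n, rather than normalizing J gammas explicitly); the parameter values below are arbitrary choices of ours:

import numpy as np

rng = np.random.default_rng(0)
J, alpha, n = 2000, 0.5, 10_000           # number of genes, Dirichlet shape, total count
m = n / J                                  # implied mean expression of the gene

lam = rng.gamma(shape=alpha, scale=m / alpha, size=500_000)   # gamma with mean m, shape alpha
y = rng.poisson(lam)                                          # Poisson given lambda

# a negative binomial with shape alpha and mean m has variance m + m**2 / alpha
print(y.mean(), m)                                  # both about 5
print(y.var(), m + m**2 / alpha)                    # both about 55
print((y == 0).mean(), (alpha / (alpha + m))**alpha)  # negative binomial zero probability

The empirical moments and zero fraction of the gamma-Poisson draws match the negative binomial values, which is the sense in which the negative binomial approximates the overdispersed Dirichlet-multinomial.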
Regardless of whether one assumes a multinomial, binomial, or Poisson model, the maximum likelihood estimator (MLE) of πj is \(\hat {\pi }_{j} = \frac {\sum _{i} y_{ij}}{\sum _{i} n_{i}}\) where ni is the total count of sample i. In the more realistic case that relative abundances πij of genes vary across samples, the MLE is \(\hat {\pi }_{ij}=\frac {y_{ij}}{n_{i}}\). An alternative to the MLE is the maximum a posteriori (MAP) estimator. Suppose a symmetric Dirichlet prior with concentration parameter αi is combined with the multinomial likelihood for cell i. The MAP estimator for πij is given by: $$\tilde{\pi}_{ij}=\frac{\alpha_{i}+y_{ij}}{J\alpha_{i}+n_{i}} = w_{i}\frac{1}{J}+(1-w_{i})\hat{\pi}_{ij} $$ where wi=Jαi/(Jαi+ni), showing that the MAP is a weighted average of the prior mean that all genes are equally expressed (1/J) and the MLE (\(\hat {\pi }_{ij}\)). Compared to the MLE, the MAP biases the estimate toward the prior where all genes have the same expression. Larger values of αi introduce more bias, while αi→0 leads to the MLE. If αi>0, the smallest possible value of \(\tilde {\pi }_{ij}\) is αi/(Jαi+ni) rather than zero for the MLE. When there are many zeros in the data, MAP can stabilize relative abundance estimates at the cost of introducing bias. Mathematics of distortion from log-normalizing UMIs Suppose the true counts in cell i are given by xij for genes j=1,…,J. Some of these may be zero, if a gene is not turned on in the cell. Knowing xij is equivalent to knowing the total number of transcripts \(t_{i}=\sum _{j} x_{ij}\) and the relative proportions of each gene πij, since xij=tiπij. The total number of UMI counts \(n_{i}=\sum _{j} y_{ij}\) does not estimate ti. However, under multinomial sampling, the UMI relative abundances \(\hat {\pi }_{ij}=\frac {y_{ij}}{n_{i}}\) are MLEs for the true proportions πij. Note that it is possible that \(\hat {\pi }_{ij}=0\) even though πij>0. Because \(\sum _{j} \hat {\pi }_{ij}=1\) regardless of ni, the use of multinomial MLEs is equivalent to the widespread practice of normalizing each cell by the total counts. Furthermore, the use of size factors si=ni/m leads to \(\hat {\pi }_{ij} \times m\) (if m=106, this is CPM). Traditional bulk RNA-Seq experiments measured gene expression in read counts of many cells per sample rather than UMI counts of single cells. Gene counts from bulk RNA-Seq could thus range over several orders of magnitude. To facilitate comparison of these large numbers, many bulk RNA-Seq methods have relied on a logarithm transformation. This enables interpretation of differences in normalized counts as fold changes on a relative scale. Also, for count data, the variance of each gene is a function of its mean, and log transformation can help to prevent highly expressed outlier genes from overwhelming downstream analyses. Prior to the use of UMIs, scRNA-Seq experiments also produced read counts with wide ranging values, and a log transform was again employed. However, with single cell data, more than 90% of the genes might be observed as exact zeros, and log(0)=−∞ which is not useful for data analysis. UMI data also contain large numbers of zeros, but do not contain very large counts since PCR duplicates have been removed. Nevertheless, log transformation has been commonly used with UMI data as well. The current standard is to transform the UMI counts as \(\log _{2}(c+\hat {\pi }_{ij} \times m)\) where c is a pseudocount to avoid taking the log of zero, and typically c=1. 
As before, m is some constant such as $10^6$ for CPM (see also [54] for an alternative). Finally, the data are centered and scaled so that the mean of each gene across cells is 0 and the standard deviation is 1. This standardization of the data causes any subsequent computation of distances or dimension reduction to be invariant to constant additive or multiplicative scaling. For example, under Manhattan distance, d(x+c,y+c)=|x+c−(y+c)|=|x−y|=d(x,y). In particular, using size factors such as CPM instead of relative abundances leads to a rescaling of the pseudocount, and use of any pseudocount is equivalent to replacing the MLE with the MAP estimator. Let k=c/m and αi=kni. Then, the weight term in the MAP formula becomes wi=Jk/(1+Jk)=w, which is constant across all cells i. Furthermore, Jk=w/(1−w), showing that: $$\begin{aligned} \log_{2}(c+\hat{\pi}_{ij} \times m) &= \log_{2}(k+\hat{\pi}_{ij}) + \log_{2}(m)\\ &= \log_{2}\left(\frac{w}{1-w}\frac{1}{J}+\hat{\pi}_{ij}\right)+\log_{2}(m)\\ &= \log_{2}\left(w\frac{1}{J}+(1-w)\hat{\pi}_{ij}\right)-\log_{2}(1-w)+\log_{2}(m)\\ &= \log_{2}(\tilde{\pi}_{ij})+C \end{aligned} $$ where C is a global constant that does not vary across cells or genes. For illustration, if c=1 and m=$10^6$, this is equivalent to assuming a prior where all genes are equally expressed and, for cell i, a weight of $w=J/(10^6+J)$ is given to the prior relative to the MLE. Since the number of genes J is on the order of $10^4$, we have w≈0.01. The prior sample size for cell i is $J\alpha_i=10^{-6}Jn_i\approx 0.01\times n_i$, where ni is the data sample size. The standard transformation is therefore equivalent to using a weak prior to obtain a MAP estimate of the relative abundances, then log transforming before dimension reduction. In most scRNA-Seq datasets, the total number of UMIs ni for some cells may be significantly less than the constant m. For these cells, the size factors si=ni/m are less than 1. Therefore, after normalization (dividing by the size factor), the counts are scaled up to match the target size of m. Due to the discreteness of counts, this introduces a bias after log transformation if the pseudocount is small (or equivalently, if m is large). For example, let c=1 and m=$10^6$ (CPM). If $n_i=10^4$ for a particular cell, we have si=0.01. A raw count of yij=1 for this cell is normalized to 1/0.01=100 and transformed to log2(1+100)=6.7. For this cell, on the log scale, there cannot be any values between 0 and 6.7, because fractional UMI counts cannot be observed and log2(1+0)=0. Small pseudocounts and small size factors combined with the log transform arbitrarily exaggerate the difference between a zero count and a small nonzero count. As previously shown, this scenario is equivalent to using MAP estimation of πij with a weak prior. To combat this distortion, one may attempt to strengthen the prior to regularize \(\tilde {\pi }_{ij}\) estimation at the cost of additional bias, as advocated by [21]. An extreme case occurs when c=1 and m=1. Here, the prior sample size is Jni, so almost all the weight is on the prior. The transform is then \(\log _{2}(1+\hat {\pi }_{ij})\). But this function is approximately linear on the domain \(0\leq \hat {\pi }_{ij}\leq 1\). After centering and scaling, a linear transformation is vacuous. To summarize, log transformation with a weak prior (small size factor, such as CPM) introduces a strong artificial distortion between zeros and nonzeros, while log transformation with a strong prior (large size factor) is roughly equivalent to not log transforming the data.
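The arithmetic of this distortion is easy to reproduce. The few lines below are a toy check of our own, using the c=1, m=10^6, n_i=10^4 example from the text; they print the log-CPM values for the first few possible counts of a shallow cell:

import numpy as np

c, m, n_i = 1.0, 1e6, 1e4      # pseudocount, CPM scale, total UMIs of a shallow cell
s_i = n_i / m                   # size factor = 0.01

for y in range(4):
    print(y, np.log2(c + y / s_i))
# y = 0 maps to 0.0 and y = 1 maps to about 6.66, while the steps 1 -> 2 -> 3 each move
# the value by less than one unit: the zero/nonzero gap is exaggerated, as argued above.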
Generalized PCA
PCA minimizes the mean squared error (MSE) between the data and a low-rank representation, or embedding. Let yij be the raw counts and zij be the normalized and transformed version of yij, such as centered and scaled log-CPM (z-scores). The PCA objective function is: $$\min_{u,v} \sum_{i,j}(z_{ij}-\vec{u}_{i}'\vec{v}_{j})^{2} $$ where \(\vec {u}_{i},\vec {v}_{j}\in \mathbb {R}^{L}\) for i=1,…,I, j=1,…,J. The \(\vec {u}_{i}\) are called factors or principal components, and the \(\vec {v}_{j}\) are called loadings. The number of latent dimensions L controls the complexity of the model. Minimization of the MSE is equivalent to minimizing the Euclidean distance metric between the embedding and the data. It is also equivalent to maximizing the likelihood of a Gaussian model: $$z_{ij}\sim\mathcal{N}\left(\vec{u}_{i}'\vec{v}_{j},\sigma^{2}\right) $$ If we replace the Gaussian model with a Poisson, which approximates the multinomial, we can directly model the UMI counts as: $$y_{ij}\sim \text{Poi}\left(n_{i}\exp\{\vec{u}_{i}'\vec{v}_{j}\}\right) $$ or alternatively, in the case of overdispersion, we may approximate the Dirichlet-multinomial using a negative binomial likelihood: $$y_{ij}\sim NB\left(n_{i}\exp\{\vec{u}_{i}'\vec{v}_{j}\};~\phi_{j}\right) $$ We define the linear predictor as \(\eta _{ij} = \log n_{i} + \vec {u}_{i}'\vec {v}_{j}\). It is clear that the mean \(\mu _{ij}=e^{\eta _{ij}}\) appears in both the Poisson and negative binomial model statements, showing that the latent factors interact with the data only through the mean. We may then estimate \(\vec {u}_{i}\) and \(\vec {v}_{j}\) (and ϕj) by maximizing the likelihood (in practice, adding a small L2 penalty to large parameter values improves numerical stability). A link function must be used since \(\vec {u}_{i}\) and \(\vec {v}_{j}\) are real valued whereas the mean of a Poisson or negative binomial must be positive. The total UMIs ni term is used as an offset since no normalization has taken place; alternative size factors si such as those from scran [20] could be used in place of ni. If the first element of each \(\vec {u}_{i}\) is constrained to equal 1, this induces a gene-specific intercept term in the first position of each \(\vec {v}_{j}\), which is analogous to centering. Otherwise, the model is very similar to that of PCA; it is simply optimizing a different objective function. Unfortunately, MLEs for \(\vec {u}_{i}\) and \(\vec {v}_{j}\) cannot be expressed in closed form, so an iterative Fisher scoring procedure is necessary. We refer to this model as GLM-PCA [55]. Just as PCA minimizes MSE, GLM-PCA minimizes a generalization of MSE called the deviance [56]. While generalized PCA was originally proposed by [31] (see also [57] and [58]), our implementation is novel in that it allows for intercept terms, offsets, overdispersion, and non-canonical link functions. We also use a blockwise update for optimization, which we found to be more numerically stable than that of [31]; we iterate over latent dimensions l rather than rows or columns. This technique is inspired by non-negative matrix factorization algorithms such as hierarchical alternating least squares and rank-one residue iteration; see [59] for a review. As an illustration, consider GLM-PCA with the Poisson approximation to a multinomial likelihood.
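As a purely numerical complement to that illustration, the short Python sketch below fits the Poisson GLM-PCA model above to simulated counts by plain gradient ascent on the log-likelihood. It is a deliberately crude stand-in (a single latent dimension, a hand-picked step size, and a simple simulation of our own devising), not the blockwise Fisher scoring update used in the authors' implementation:

import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes, L = 100, 200, 1
groups = np.repeat([0, 1], n_cells // 2)
lam = np.full((n_cells, n_genes), 2.0)
lam[groups == 1, :20] *= 10.0                         # 20 genes mark the second group
Y = rng.poisson(lam).astype(float)

offset = np.log(Y.sum(axis=1, keepdims=True))         # log total UMIs per cell
a = np.log(Y.sum(axis=0) / Y.sum())                   # gene intercepts, started at the null MLE
U = 1e-3 * rng.normal(size=(n_cells, L))              # latent factors (cells)
V = 1e-3 * rng.normal(size=(n_genes, L))              # loadings (genes)

lr = 1e-4
for _ in range(2000):
    mu = np.exp(offset + a + U @ V.T)                 # Poisson mean under the current fit
    R = Y - mu                                        # gradient of the log-likelihood w.r.t. eta
    grad_a, grad_U, grad_V = R.sum(axis=0), R @ V, R.T @ U
    a += lr * grad_a
    U += lr * grad_U
    V += lr * grad_V

# in this toy setup the single factor should track the simulated group structure
print(abs(np.corrcoef(U[:, 0], groups)[0, 1]))

The authors' glmpca R package (available from CRAN, as noted in the availability section) implements the full model with overdispersion and non-canonical links; the sketch above is only meant to make the shape of the optimization concrete.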
For this model, the objective function to be minimized is simply the overall deviance: $$\begin{array}{*{20}l} D &= \sum_{i,j} \left[y_{ij}\log\left(\frac{y_{ij}}{\mu_{ij}}\right)-(y_{ij}-\mu_{ij})\right]\\ \log\mu_{ij} &= \eta_{ij} = \log s_{i} + \vec{u}_{i}'\vec{v}_{j} = \log s_{i} + v_{j1} + \sum_{l=2}^{L} u_{il}v_{jl} \end{array} $$ where si is a fixed size factor such as the total number of UMIs (ni). The optimization proceeds by taking derivatives with respect to the unknown parameters: vj1 is a gene-specific intercept term, and the remaining uil and vjl are the latent factors. The GLM-PCA method is most concordant with the data-generating mechanism since all aspects of the pipeline are integrated into a coherent model rather than being dealt with through sequential normalizations and transformations. The interpretation of the \(\vec {u}_{i}\) and \(\vec {v}_{j}\) vectors is the same as in PCA. For example, suppose we set the number of latent dimensions to 2 (i.e., L=3 to account for the intercept). We can plot ui2 on the horizontal axis and ui3 on the vertical axis for each cell i to visualize the relationships between cells such as gradients or clusters. In this way, the \(\vec {u}_{i}\) and \(\vec {v}_{j}\) capture biological variability such as differentially expressed genes.
Residuals and z-scores
Just as mean squared error can be computed by taking the sum of squared residuals under a Gaussian likelihood, the deviance is equal to the sum of squared deviance residuals [56]. Since deviance residuals are not well-defined for the multinomial distribution, we adopt the binomial approximation. The deviance residual for gene j in cell i is given by: $$r^{(d)}_{ij}=\text{sign}(y_{ij}-\hat{\mu}_{ij})\sqrt{2y_{ij}\log\frac{y_{ij}}{\hat{\mu}_{ij}} + 2(n_{i}-y_{ij})\log\frac{n_{i}-y_{ij}}{n_{i}-\hat{\mu}_{ij}}} $$ where under the null model of constant gene expression across cells, \(\hat {\mu }_{ij}=n_{i}\hat {\pi }_{j}\). The deviance residuals are the result of regressing away this null model. An alternative to deviance residuals is the Pearson residual, which is simply the difference between observed and expected values scaled by an estimate of the standard deviation. For the binomial, this is: $$r^{(p)}_{ij}=\frac{y_{ij}-\hat{\mu}_{ij}}{\sqrt{\hat{\mu}_{ij}-\frac{1}{n_{i}}\hat{\mu}_{ij}^{2}}} $$ According to the theory of generalized linear models (GLM), both types of residuals follow approximately a normal distribution with mean zero if the null model is correct [56]. Deviance residuals tend to be more symmetric than Pearson residuals. In practice, the residuals may not have mean exactly equal to zero, and may be standardized by dividing by their gene-specific standard deviation, just as in the Gaussian case. Recently, Pearson residuals based on a negative binomial null model have also been independently proposed as the sctransform method [60]. The z-score is simply the Pearson residual where we replace the multinomial likelihood with a Gaussian (normal) likelihood and use normalized values instead of raw UMI counts. Let qij be the normalized (possibly log-transformed) expression of gene j in cell i without centering and scaling. The null model is that the expression of the gene is constant across all cells: $$q_{ij}\sim\mathcal{N}\left(\mu_{j},~\sigma^{2}_{j}\right) $$ The MLEs are \(\hat {\mu }_{j} = \frac {1}{I}\sum _{i} q_{ij}\), \(\hat {\sigma }^{2}_{j} = \frac {1}{I}\sum _{i} (q_{ij}-\hat {\mu }_{j})^{2}\), and the z-scores equal the Pearson residuals \(z_{ij}=(q_{ij}-\hat {\mu }_{j})/\hat {\sigma }_{j}\).
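The residual formulas above translate directly into array operations. The sketch below uses a toy count matrix of our own (it is not the authors' code); it computes the binomial Pearson and deviance residuals under the constant-expression null and then applies ordinary PCA to the deviance residuals, which is the proposed fast approximation to GLM-PCA:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_cells, n_genes = 200, 500
lam = rng.gamma(shape=1.0, scale=2.0, size=n_genes)
Y = rng.poisson(np.tile(lam, (n_cells, 1))).astype(float)   # toy UMI counts, cells x genes
Y = Y[:, Y.sum(axis=0) > 0]                                  # drop genes never observed

n_i = Y.sum(axis=1, keepdims=True)              # total UMIs per cell
pi_hat = Y.sum(axis=0) / Y.sum()                # null model: constant expression
mu_hat = n_i * pi_hat

# Pearson residuals under the binomial approximation
pearson = (Y - mu_hat) / np.sqrt(mu_hat - mu_hat**2 / n_i)

# deviance residuals under the binomial approximation (0*log(0) treated as 0)
with np.errstate(divide="ignore", invalid="ignore"):
    t1 = np.where(Y == 0, 0.0, Y * np.log(Y / mu_hat))
    t2 = np.where(Y == n_i, 0.0, (n_i - Y) * np.log((n_i - Y) / (n_i - mu_hat)))
dev_resid = np.sign(Y - mu_hat) * np.sqrt(np.maximum(2.0 * (t1 + t2), 0.0))

factors = PCA(n_components=2).fit_transform(dev_resid)   # fast approximation to GLM-PCA
print(pearson.mean(), pearson.std())                     # roughly 0 and 1 under the null
print(factors.shape)

In this toy there is no real biological structure, so the factors are just noise; the point is only the mechanics of the computation, which carries over unchanged to a real UMI matrix.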
We compared the accuracy of the residuals approximations by simulating 150 cells in 3 clusters of 50 cells each with 5000 genes, of which 500 were differentially expressed across clusters (informative genes). We also created 2 batches, batch 1 with total counts of 1000 and batch 2 with total counts of 2000. Each cluster had an equal number of cells in the 2 batches. We then ran GLM-PCA on the raw counts, PCA on log2(1+CPM), PCA on deviance residuals, and PCA on Pearson residuals with L=2 dimensions. Genes with constant expression across cells are not informative. Such genes may be described by the multinomial null model where πij=πj. Goodness of fit to a multinomial distribution can be quantified using deviance, which is twice the difference in log-likelihoods comparing a saturated model to a fitted model. The multinomial deviance is a joint deviance across all genes,and for this reason is not helpful for screening informative genes. Instead, one may use the binomial deviance as an approximation: $$D_{j} = 2\sum_{i}\left[y_{ij}\log\frac{y_{ij}}{n_{i}\hat{\pi}_{j}} + (n_{i}-y_{ij})\log\frac{(n_{i}-y_{ij})}{n_{i}(1-\hat{\pi}_{j})}\right] $$ A large deviance value indicates the model in question provides a poor fit. Those genes with biological variation across cells will be poorly fit by the null model and will have the largest deviances. By ranking genes according to their deviances, one may thus obtain highly deviant genes as an alternative to highly variable or highly expressed genes. Systematic comparison of methods We considered combinations of the following methods and parameter settings, following [15]. Italics indicate methods proposed in this manuscript. Feature selection: highly expressed genes, highly variable genes, and highly deviant genes. We did not compare against highly dropout genes because [15] found this method to have poor downstream clustering performance for UMI counts and it is not as widely used in the literature. The numbers of genes are 60, 300, 1500. Normalization, transformation, and dimension reduction: PCA on log-CPM z-scores, ZINB-WAVE [28], PCA on deviance residuals, PCA on Pearson residuals, and GLM-PCA. The numbers of latent dimensions are 10 and 30. Clustering algorithms are k-means [61] and Seurat [17]. The number of clusters is all values from 2 to 10, inclusive. Seurat resolutions are 0.05, 0.1, 0.2, 0.5, 0.8, 1, 1.2, 1.5, and 2. All methods and assessments described in this manuscript are publicly available at https://github.com/willtownes/scrna2019 [62]. GLM-PCA is available as an R package from CRAN (https://cran.r-project.org/web/packages/glmpca/index.html). The source code is licensed under LGPL-3. All datasets used in the study were obtained from public sources (Table 1). The three Zheng datasets (ERCCs, monocytes, and 68K PBMCs) [5] were downloaded from https://support.10xgenomics.com/single-cell-gene-expression/datasets. The Duo datasets were obtained through the bioconductor package DuoClustering2018 [15]. The remaining three datasets had GEO accession numbers GSE77288 (Tung) [32], GSE92332 (Haber) [33], and GSE85241 (Muraro) [34]. An amendment to this paper has been published and can be accessed via the original article. Kalisky T, Oriel S, Bar-Lev TH, Ben-Haim N, Trink A, Wineberg Y, Kanter I, Gilad S, Pyne S. A brief review of single-cell transcriptomic technologies. Brief Funct Genom. 2018; 17(1):64–76. https://doi.org/10.1093/bfgp/elx019. Svensson V, Vento-Tormo R, Teichmann SA. Exponential scaling of single-cell RNA-seq in the past decade. 
Nat Protoc. 2018; 13(4):599–604. https://doi.org/10.1038/nprot.2017.149. Macosko EZ, Basu A, Satija R, Nemesh J, Shekhar K, Goldman M, Tirosh I, Bialas A. R, Kamitaki N, Martersteck EM, Trombetta JJ, Weitz DA, Sanes JR, Shalek AK, Regev A, McCarroll SA. Highly parallel genome-wide expression profiling of individual cells Using nanoliter droplets. Cell. 2015; 161(5):1202–14. https://doi.org/10.1016/j.cell.2015.05.002. Klein AM, Mazutis L, Akartuna I, Tallapragada N, Veres A, Li V, Peshkin L, Weitz DA, Kirschner MW. Droplet aarcoding for single-cell transcriptomics applied to embryonic stem cells. Cell. 2015; 161(5):1187–201. https://doi.org/10.1016/j.cell.2015.04.044. Zheng GXY, Terry JM, Belgrader P, Ryvkin P, Bent ZW, Wilson R, Ziraldo SB, Wheeler TD, McDermott GP, Zhu J, Gregory MT, Shuga J, Montesclaros L, Underwood JG, Masquelier DA, Nishimura SY, Schnall-Levin M, Wyatt PW, Hindson CM, Bharadwaj R, Wong A, Ness KD, Beppu LW, Deeg HJ, McFarland C, Loeb KR, Valente WJ, Ericson NG, Stevens EA, Radich JP, Mikkelsen TS, Hindson BJ, Bielas JH. Massively parallel digital transcriptional profiling of single cells. Nat Commun. 2017; 8:14049. https://doi.org/10.1038/ncomms14049. Dal Molin A, Di Camillo B. How to design a single-cell RNA-sequencing experiment: pitfalls, challenges and perspectives. Brief Bioinform. 2018. https://doi.org/10.1093/bib/bby007. Qiu X, Hill A, Packer J, Lin D, Ma Y-A, Trapnell C. Single-cell mRNA quantification and differential analysis with Census. Nat Methods. 2017; 14(3):309–15. https://doi.org/10.1038/nmeth.4150. Picelli S, Björklund ÅK, Faridani OR, Sagasser S, Winberg G, Sandberg R. Smart-seq2 for sensitive full-length transcriptome profiling in single cells. Nat Methods. 2013; 10(11):1096–8. https://doi.org/10.1038/nmeth.2639. Kolodziejczyk AA, Kim JK, Svensson V, Marioni JC, Teichmann SA. The technology and biology of single-cell RNA sequencing. Mol Cell. 2015; 58(4):610–20. https://doi.org/10.1016/j.molcel.2015.04.005. Islam S, Zeisel A, Joost S, La Manno G, Zajac P, Kasper M, Lönnerberg P, Linnarsson S. Quantitative single-cell RNA-seq with unique molecular identifiers. Nat Methods. 2014; 11(2):163–6. https://doi.org/10.1038/nmeth.2772. Grün D, Kester L, van Oudenaarden A. Validation of noise models for single-cell transcriptomics. Nat Methods. 2014; 11(6):637–40. https://doi.org/10.1038/nmeth.2930. Lun ATL, McCarthy DJ, Marioni JC. A step-by-step workflow for low-level analysis of single-cell RNA-seq data with Bioconductor. F1000Research. 2016; 5:2122. https://doi.org/10.12688/f1000research.9501.2. McCarthy DJ, Campbell KR, Lun ATL, Wills QF. Scater: pre-processing, quality control, normalization and visualization of single-cell RNA-seq data in R. Bioinformatics. 2017; 33(8):1179–86. https://doi.org/10.1093/bioinformatics/btw777. Andrews TS, Hemberg M. Identifying cell populations with scRNASeq. Mol Asp Med. 2017. https://doi.org/10.1016/j.mam.2017.07.002. Duò A, Robinson MD, Soneson C. A systematic performance evaluation of clustering methods for single-cell RNA-seq data. F1000Research. 2018; 7:1141. https://doi.org/10.12688/f1000research.15666.1. Brennecke P, Anders S, Kim JK, Kołodziejczyk AA, Zhang X, Proserpio V, Baying B, Benes V, Teichmann SA, Marioni JC, Heisler MG. Accounting for technical noise in single-cell RNA-seq experiments. Nat Methods. 2013; 10(11):1093–5. https://doi.org/10.1038/nmeth.2645. Butler A, Hoffman P, Smibert P, Papalexi E, Satija R. 
Integrating single-cell transcriptomic data across different conditions, technologies, and species. Nat Biotechnol. 2018. https://doi.org/10.1038/nbt.4096. Andrews TS, Hemberg M. M3Drop: Dropout-based feature selection for scRNASeq. Bioinformatics. 2019; 35(16):2865–7. https://doi.org/10.1093/bioinformatics/bty1044. Hotelling H. Analysis of a complex of statistical variables into principal components. J Educ Psychol. 1933; 24(6):417–41. https://doi.org/10.1037/h0071325. Lun AT, Bach K, Marioni JC. Pooling across cells to normalize single-cell RNA sequencing data with many zero counts. Genome Biol. 2016; 17:75. https://doi.org/10.1186/s13059-016-0947-7. Lun A. Overcoming systematic errors caused by log-transformation of normalized single-cell RNA sequencing data. bioRxiv. 2018:404962. https://doi.org/10.1101/404962. Warton DI. Why you cannot transform your way out of trouble for small counts. Biometrics. 2018; 74(1):362–8. https://doi.org/10.1111/biom.12728. Vallejos CA, Risso D, Scialdone A, Dudoit S, Marioni JC. Normalizing single-cell RNA sequencing data: challenges and opportunities. Nat Methods. 2017; 14(6):565–71. https://doi.org/10.1038/nmeth.4292. Finak G, McDavid A, Yajima M, Deng J, Gersuk V, Shalek AK, Slichter CK, Miller HW, McElrath MJ, Prlic M, Linsley PS, Gottardo R. MAST: a flexible statistical framework for assessing transcriptional changes and characterizing heterogeneity in single-cell RNA sequencing data. Genome Biol. 2015; 16:278. https://doi.org/10.1186/s13059-015-0844-5. Pierson E, Yau C. ZIFA: Dimensionality reduction for zero-inflated single-cell gene expression analysis. Genome Biol. 2015; 16:241. https://doi.org/10.1186/s13059-015-0805-z. Liu S, Trapnell C. Single-cell transcriptome sequencing: recent advances and remaining challenges. F1000Research. 2016; 5:182. https://doi.org/10.12688/f1000research.7223.1. Lin P, Troup M, Ho JWK. CIDR: ultrafast and accurate clustering through imputation for single-cell RNA-seq data. Genome Biol. 2017; 18:59. https://doi.org/10.1186/s13059-017-1188-0. Risso D, Perraudeau F, Gribkova S, Dudoit S, Vert J-P. A general and flexible method for signal extraction from single-cell RNA-seq data. Nat Commun. 2018; 9(1):1–17. https://doi.org/10.1038/s41467-017-02554-5. Svensson V. Droplet scRNA-seq is not zero-inflated. bioRxiv. 2019:582064. https://doi.org/10.1101/582064. Hicks SC, Townes FW, Teng M, Irizarry RA. Missing data and technical variability in single-cell RNA-sequencing experiments. Biostatistics. 2018; 19(4):562–78. https://doi.org/10.1093/biostatistics/kxx053. Collins M, Dasgupta S, Schapire RE. A generalization of principal components analysis to the exponential family In: Dietterich TG, Becker S, Ghahramani Z, editors. Advances in Neural Information Processing Systems 14. Cambridge: MIT Press: 2002. p. 617–24. Tung P-Y, Blischak JD, Hsiao CJ, Knowles DA, Burnett JE, Pritchard JK, Gilad Y. Batch effects and the effective design of single-cell gene expression studies. Sci Rep. 2017; 7:39921. https://doi.org/10.1038/srep39921. Haber AL, Biton M, Rogel N, Herbst RH, Shekhar K, Smillie C, Burgin G, Delorey TM, Howitt MR, Katz Y, Tirosh I, Beyaz S, Dionne D, Zhang M, Raychowdhury R, Garrett WS, Rozenblatt-Rosen O, Shi HN, Yilmaz O, Xavier RJ, Regev A. A single-cell survey of the small intestinal epithelium. Nature. 2017; 551(7680):333–9. https://doi.org/10.1038/nature24489. Muraro MJ, Dharmadhikari G, Grün D, Groen N, Dielen T, Jansen E, van Gurp L, Engelse MA, Carlotti F, de Koning EJP, van Oudenaarden A. 
A single-cell transcriptome atlas of the human pancreas. Cell Syst. 2016; 3(4):385–3943. https://doi.org/10.1016/j.cels.2016.09.002. Ellefson JW, Gollihar J, Shroff R, Shivram H, Iyer VR, Ellington AD. Synthetic evolutionary origin of a proofreading reverse transcriptase. Science. 2016; 352(6293):1590–3. https://doi.org/10.1126/science.aaf5409. Shapiro E, Biezuner T, Linnarsson S. Single-cell sequencing-based technologies will revolutionize whole-organism science. Nat Rev Genet. 2013; 14(9):618–30. https://doi.org/10.1038/nrg3542. Silverman JD, Roche K, Mukherjee S, David LA. Naught all zeros in sequence count data are the same. bioRxiv. 2018:477794. https://doi.org/10.1101/477794. Pachter L. Models for transcript quantification from RNA-Seq. arXiv:1104.3889 [q-bio, stat]. 2011. http://arxiv.org/abs/1104.3889. Wagner F, Yan Y, Yanai I. K-nearest neighbor smoothing for high-throughput single-cell RNA-Seq data. bioRxiv. 2018:217737. https://doi.org/10.1101/217737. Van den Berge K, Perraudeau F, Soneson C, Love MI, Risso D, Vert J-P, Robinson MD, Dudoit S, Clement L. Observation weights unlock bulk RNA-seq tools for zero inflation and single-cell applications. Genome Biol. 2018; 19:24. https://doi.org/10.1186/s13059-018-1406-4. Witten DM. Classification and clustering of sequencing data using a Poisson model. Ann Appl Stat. 2011; 5(4):2493–518. https://doi.org/10.1214/11-AOAS493. McInnes L, Healy J, Melville J. UMAP: uniform manifold approximation and projection for dimension reduction. arXiv:1802.03426 [cs, stat]. 2018. http://arxiv.org/abs/1802.03426. Hubert L, Arabie P. Comparing partitions. J Classif. 1985; 2(1):193–218. https://doi.org/10.1007/BF01908075. Trapnell C, Cacchiarelli D, Grimsby J, Pokharel P, Li S, Morse M, Lennon NJ, Livak KJ, Mikkelsen TS, Rinn JL. The dynamics and regulators of cell fate decisions are revealed by pseudotemporal ordering of single cells. Nat Biotechnol. 2014; 32(4):381–6. https://doi.org/10.1038/nbt.2859. Soneson C, Robinson MD. Bias, robustness and scalability in single-cell differential expression analysis. Nat Methods. 2018; 15(4):255–61. https://doi.org/10.1038/nmeth.4612. Svensson V, Teichmann SA, Stegle O. SpatialDE: identification of spatially variable genes. Nat Methods. 2018. https://doi.org/10.1038/nmeth.4636. Lopez R, Regier J, Cole MB, Jordan MI, Yosef N. Deep generative modeling for single-cell transcriptomics. Nat Methods. 2018; 15(12):1053–8. https://doi.org/10.1038/s41592-018-0229-2. Verma A, Engelhardt B. A robust nonlinear low-dimensional manifold for single cell RNA-seq data. bioRxiv. 2018:443044. https://doi.org/10.1101/443044. Egozcue JJ, Pawlowsky-Glahn V, Mateu-Figueras G, Barceló-Vidal C. Isometric logratio transformations for compositional data analysis. Math Geol. 2003; 35(3):279–300. https://doi.org/10.1023/A:1023818214614. McDonald DR. On the poisson approximation to the multinomial distribution. Can J Stat / La Rev Can Stat. 1980; 8(1):115–8. https://doi.org/10.2307/3314676. Baker SG. The Multinomial-Poisson transformation. J R Stat Soc Ser D (Stat). 1994; 43(4):495–504. https://doi.org/10.2307/2348134. Gopalan P, Hofman JM, Blei DM. Scalable recommendation with Poisson factorization. arXiv:1311.1704 [cs, stat]. 2013. http://arxiv.org/abs/1311.1704. Taddy M. Distributed multinomial regression. Ann Appl Stat. 2015; 9(3):1394–414. https://doi.org/10.1214/15-AOAS831. Biswas S. The latent logarithm. arXiv:1605.06064 [stat]. 2016. http://arxiv.org/abs/1605.06064. Townes FW. Generalized principal component analysis. 
arXiv:1907.02647 [cs, stat]. 2019. http://arxiv.org/abs/1907.02647. Agresti A. Foundations of linear and generalized linear models. Hoboken: Wiley; 2015. Landgraf AJ. Generalized principal component analysis: dimensionality reduction through the projection of natural parameters. 2015. PhD thesis, The Ohio State University. Li G, Gaynanova I. A general framework for association analysis of heterogeneous data. Ann Appl Stat. 2018; 12(3):1700–26. https://doi.org/10.1214/17-AOAS1127. Kim J, He Y, Park H. Algorithms for nonnegative matrix and tensor factorizations: a unified view based on block coordinate descent framework. J Glob Optim. 2014; 58(2):285–319. https://doi.org/10.1007/s10898-013-0035-4. Hafemeister C, Satija R. Normalization and variance stabilization of single-cell RNA-seq data using regularized negative binomial regression. bioRxiv. 2019:576827. https://doi.org/10.1101/576827. Hartigan JA, Wong MA. J R Stat Soc Ser C (Appl Stat). 1979; 28(1):100–8. https://doi.org/10.2307/2346830. Townes W, Pita-Juarez Y. Willtownes/Scrna2019: Genome Biology Publication. Zenodo. 2019. https://doi.org/10.5281/zenodo.3475535. The authors thank Keegan Korthauer, Jeff Miller, Linglin Huang, Alejandro Reyes, Yered Pita-Juarez, Mike Love, Ziyi Li, and Kelly Street for the valuable suggestions. The review history is available as Additional file 2. FWT was supported by NIH grant T32CA009337, SCH was supported by NIH grant R00HG009007, MJA was supported by an MGH Pathology Department startup fund, and RAI was supported by Chan-Zuckerberg Initiative grant CZI 2018-183142 and NIH grants R01HG005220, R01GM083084, and P41HG004059. Department of Biostatistics, Harvard University, Cambridge, MA, USA F. William Townes, Martin J. Aryee & Rafael A. Irizarry Present Address: Department of Computer Science, Princeton University, Princeton, NJ, USA F. William Townes Department of Biostatistics, Johns Hopkins University, Baltimore, MD, USA Stephanie C. Hicks Molecular Pathology Unit, Massachusetts General Hospital, Charlestown, MA, USA Martin J. Aryee Center for Cancer Research, Massachusetts General Hospital, Charlestown, MA, USA Department of Pathology, Harvard Medical School, Boston, MA, USA Department of Data Sciences, Dana-Farber Cancer Institute, Boston, MA, USA Rafael A. Irizarry SCH, MJA, and RAI identified the problem. FWT proposed, derived, and implemented the GLM-PCA model, its fast approximation using residuals, and feature selection using deviance. SCH, MJA, and RAI provided guidance on refining the methods and evaluation strategies. FWT and RAI wrote the draft manuscript, and revisions were suggested by SCH and MJA. All authors approved the final manuscript. Correspondence to Rafael A. Irizarry. Additional file 1 Contains supplementary figures S1–S11 and tables S1–S4. Review history. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Townes, F.W., Hicks, S.C., Aryee, M.J. et al. Feature selection and dimension reduction for single-cell RNA-Seq based on a multinomial model. 
Genome Biol 20, 295 (2019). https://doi.org/10.1186/s13059-019-1861-6 Dimension reduction Variable genes GLM-PCA
Set theory notes
In naive set theory, a set is a collection of objects (called members or elements) that is regarded as being a single object. Set theory is a basis of modern mathematics, and notions of set theory are used in all formal descriptions; as such, it is expected to provide a firm foundation for the rest of mathematics. It is natural for us to classify items into groups, or sets, and consider how those sets overlap with each other. Examples of sets include a set of all the planets in the solar system and a set of all the lowercase letters of the alphabet.
The elements are enclosed within braces and separated by commas. If $S = \lbrace1, 1.2, 1.7, 2\rbrace$, then $1 \in S$ but $1.5 \notin S$. Sets can also be written in set-builder notation. Example − $S = \lbrace x \:| \:x \in N,\ 7 \lt x \lt 9 \rbrace$ = $\lbrace 8 \rbrace$. For example, { a,b,c,d,e} is a set of five elements, thus it is a finite set. If a set has an infinite number of elements, its cardinality is $\infty$. The cardinality of the empty set or null set is zero, and the cardinality of a power set of a set S of cardinality n is $2^n$.
Subsets. The set {1,2} is a subset of the set {1,2,3}, and the set {1,2,3} is a subset of the set {1,2,3}; such a relation between sets is denoted by A ⊆ B. Note: every set is a subset of itself. When the subset is missing some elements that are in the set it is being compared to, it is a proper subset; when the subset is the set itself, it is an improper subset. Therefore, {1,2} ⊂ {1,2,3} and {1,2,3} ⊆ {1,2,3}. Example − Let $X = \lbrace 1, 2, 3, 4, 5, 6 \rbrace$ and $Y = \lbrace 1, 2 \rbrace$. Here $Y \subset X$, since all elements of $Y$ are contained in $X$ and $X$ has at least one element more than $Y$. Example 2 − Let $X = \lbrace 1, 2, 3 \rbrace$ and $Y = \lbrace 1, 2, 3 \rbrace$. Here set Y is a subset (not a proper subset) of set X, as all the elements of set Y are in set X. (In the accompanying Venn diagram, set A is shown as a subset of set B.)
Equal and equivalent sets. Equal sets are those that have the exact same members: {1, 2, 3} = {3, 2, 1}. Example − If $A = \lbrace 1, 2, 6 \rbrace$ and $B = \lbrace 16, 17, 22 \rbrace$, they are equivalent, as the cardinality of A is equal to the cardinality of B, i.e. $|A| = |B| = 3$; here, there exists an injective function 'f' from one set to the other (Definition: injection).
Venn diagrams. A Venn diagram, invented in 1880 by John Venn, is a schematic diagram that shows all possible logical relations between different mathematical sets.
Union and intersection. The union of sets A and B (denoted by $A \cup B$) is the set of elements which are in A, in B, or in both A and B. The intersection of sets A and B (denoted by $A \cap B$) is the set of elements which are in both A and B. Hence, $A \cap B = \lbrace x \:|\: x \in A\ AND\ x \in B \rbrace$. Example − If $A = \lbrace 11, 12, 13 \rbrace$ and $B = \lbrace 13, 14, 15 \rbrace$, then $A \cap B = \lbrace 13 \rbrace$. Example − Let $A = \lbrace 1, 2, 6 \rbrace$ and $B = \lbrace 6, 12, 42 \rbrace$; then $A \cap B = \lbrace 6 \rbrace$.
Counting formulas. n(AᴜB) is the number of elements present in either of the sets A or B, and n(A∩B) is the number of elements present in both the sets A and B. For three sets, n(AᴜBᴜC) = n(A) + n(B) + n(C) – n(A∩B) – n(B∩C) – n(C∩A) + n(A∩B∩C).
Ways of splitting {1, 2, 3} into non-empty subsets: $\lbrace 1 \rbrace , \lbrace 2, 3 \rbrace$; $\lbrace 1, 2 \rbrace , \lbrace 3 \rbrace$; $\lbrace 1, 3 \rbrace , \lbrace 2 \rbrace$; $\lbrace 1 \rbrace , \lbrace 2 \rbrace , \lbrace 3 \rbrace$.
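The set operations and counting rules above are easy to experiment with in Python, whose built-in set type implements them directly. In the short example below, A and B are taken from the examples in these notes, while the set C and the code itself are our own illustration:

A = {1, 2, 6}
B = {6, 12, 42}
C = {11, 12, 13}

print(A | B)            # union: elements 1, 2, 6, 12, 42 (display order may vary)
print(A & B)            # intersection: {6}
print({1, 2} <= {1, 2, 3}, {1, 2} < {1, 2, 3})   # subset and proper subset: True True

# inclusion-exclusion for three sets:
# n(AuBuC) = n(A) + n(B) + n(C) - n(A&B) - n(B&C) - n(C&A) + n(A&B&C)
lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(B & C) - len(C & A)
       + len(A & B & C))
print(lhs, rhs)         # both 7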
Division of Like Algebraic Terms
A mathematical operation of dividing an algebraic term by its like term is called the division of like algebraic terms. The division of any two like algebraic terms is expressed by displaying a division sign between them. The quotient of like algebraic terms is calculated by finding only the quotient of the numerical coefficients, because the identical literal coefficients of both like terms cancel in the division. Hence, the literal coefficient can be ignored while dividing like terms in algebra. $4xy^2$ and $-6xy^2$ are two like algebraic terms. Divide the term $4xy^2$ by $-6xy^2$ to find their quotient. Express the division of the like algebraic terms in mathematical form. $4xy^2 \div (-6xy^2)$ $\implies$ $\dfrac{4xy^2}{-6xy^2}$ Factorize each algebraic term as its numerical and literal coefficients. $\implies \dfrac{4xy^2}{-6xy^2} \,=\, \dfrac{4 \times xy^2}{-6 \times xy^2}$ $\implies \dfrac{4xy^2}{-6xy^2} \,=\, \dfrac{4}{-6} \times \dfrac{xy^2}{xy^2}$ Find the quotient of the like algebraic terms by cancelling the literal coefficients, and then finding the quotient of the numerical coefficients. $\require{cancel} \implies \dfrac{4xy^2}{-6xy^2} \,=\, \dfrac{4}{-6} \times \dfrac{\cancel{xy^2}}{\cancel{xy^2}}$ $\implies \dfrac{4xy^2}{-6xy^2} \,=\, \dfrac{4}{-6} \times 1$ $\implies \dfrac{4xy^2}{-6xy^2} \,=\, -\dfrac{4}{6}$ $\implies \require{cancel} \dfrac{4xy^2}{-6xy^2} \,=\, -\dfrac{\cancel{4}}{\cancel{6}}$ $\therefore \,\,\,\,\,\, \dfrac{4xy^2}{-6xy^2} \,=\, -\dfrac{2}{3}$ This is how the division of any two like algebraic terms is calculated. Remember, the quotient of any two like terms is simply a number (the ratio of their numerical coefficients) with no literal part. Observe the following examples to understand how to divide an algebraic term by its like term.
$(1) \,\,\,\,\,\,$ $\dfrac{a}{7a}$ $\,=\,$ $\require{cancel} \dfrac{1 \times \cancel{a}}{7 \times \cancel{a}}$ $\,=\,$ $\dfrac{1}{7}$ $(2) \,\,\,\,\,\,$ $\dfrac{3b^2}{6b^2}$ $\,=\,$ $\require{cancel} \dfrac{\cancel{3} \times \cancel{b^2}}{\cancel{6} \times \cancel{b^2}}$ $\,=\,$ $\dfrac{1}{2}$ $(3) \,\,\,\,\,\,$ $\dfrac{25cd^3}{cd^3}$ $\,=\,$ $\require{cancel} \dfrac{25 \times \cancel{cd^3}}{1 \times \cancel{cd^3}}$ $\,=\, 25$ $(4) \,\,\,\,\,\,$ $\dfrac{0.1e^2f^2}{0.01e^2f^2}$ $\,=\,$ $\require{cancel} \dfrac{\cancel{0.1} \times \cancel{e^2f^2}}{\cancel{0.01} \times \cancel{e^2f^2}}$ $\,=\, 10$ $(5) \,\,\,\,\,\,$ $\dfrac{56gh^2i^3j^4}{7gh^2i^3j^4}$ $\,=\,$ $\dfrac{\cancel{56} \times \cancel{gh^2i^3j^4}}{\cancel{7} \times \cancel{gh^2i^3j^4}}$ $\,=\, 8$
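The same cancellations can be checked with a computer algebra system. The snippet below uses SymPy (our own illustration, not part of the original notes); SymPy cancels the identical literal parts automatically and returns the ratio of the numerical coefficients:

from sympy import symbols

x, y, c, d = symbols('x y c d')

print((4*x*y**2) / (-6*x*y**2))     # -2/3, matching the worked example
print((25*c*d**3) / (c*d**3))       # 25, matching example (3)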
8.3 Crests and troughs 8.4 Amplitude (ESACN) Fill in the table below by measuring the distance between the equilibrium and each crest and trough in the wave above. Use your ruler to measure the distances. Crest/Trough Measurement (cm) What can you say about your results? Are the distances between the equilibrium position and each crest equal? Are the distances between the equilibrium position and each trough equal? Is the distance between the equilibrium position and crest equal to the distance between equilibrium and trough? As we have seen in the activity on amplitude, the distance between the crest and the equilibrium position is equal to the distance between the trough and the equilibrium position. This distance is known as the amplitude of the wave, and is the characteristic height of the wave, above or below the equilibrium position. Normally the symbol A is used to represent the amplitude of a wave. The SI unit of amplitude is the metre (m). The amplitude of a wave is the maximum disturbance or displacement of the medium from the equilibrium (rest) position. Quantity: Amplitude (A) Unit name: metre Unit symbol: m A tsunami is a series of sea waves caused by an underwater earthquake, landslide, or volcanic eruption. When the ocean is deep, tsunamis may be less than \(\text{30}\) \(\text{cm}\) high on the ocean's surface and can travel at speeds up to \(\text{700}\) \(\text{km·hr$^{-1}$}\). In shallow water near the coast, it slows down. The top of the wave moves faster than the bottom, causing the sea to rise dramatically, as much as \(\text{30}\) \(\text{m}\). The wavelength can be as long as \(\text{100}\) \(\text{km}\) and the period as long as a hour. In 2004, the Indian Ocean tsunami was caused by an earthquake that is thought to have had the energy of \(\text{23 000}\) atomic bombs. Within hours of the earthquake, killer waves radiating from away from the earthquake crashed into the coastline of 11 countries, killing \(\text{150 000}\) people. The final death toll was \(\text{283 000}\). Worked example 1: Amplitude of sea waves If the crest of a wave measures \(\text{2}\) \(\text{m}\) above the still water mark in the harbour, what is the amplitude of the wave? Analyse the information provided We have been told that the harbour has a still water mark. This is a line created when there are no disturbances in the water, which means that it is the equilibrium position of the water. Determine the amplitude The definition of the amplitude is the height of a crest above the equilibrium position. The still water mark is the height of the water at equilibrium and the crest is \(\text{2}\) \(\text{m}\) above this, so the amplitude is \(\text{2}\) \(\text{m}\). Fill in the table below by measuring the distance between crests and troughs in the wave above. Distance(cm) Are the distances between crests equal? Are the distances between troughs equal? Is the distance between crests equal to the distance between troughs? As we have seen in the activity on wavelength, the distance between two adjacent crests is the same no matter which two adjacent crests you choose. There is a fixed distance between the crests. Similarly, we have seen that there is a fixed distance between the troughs, no matter which two troughs you look at. More importantly, the distance between two adjacent crests is the same as the distance between two adjacent troughs. This distance is called the wavelength of the wave. The symbol for the wavelength is λ (the Greek letter lambda) and wavelength is measured in metres (m). 
Worked example 2: Wavelength The total distance between 4 consecutive crests of a transverse wave is \(\text{6}\) \(\text{m}\). What is the wavelength of the wave? Draw a rough sketch of the situation Determine how to approach the problem From the sketch we see that 4 consecutive crests is equivalent to 3 wavelengths. Solve the problem Therefore, the wavelength of the wave is: \begin{align*} 3\lambda & = \text{6}\text{ m} \\ \lambda & = \frac{\text{6}\text{ m}}{3} \\ & = \text{2}\text{ m} \end{align*} Quote the final answer The wavelength is \(\text{2}\) \(\text{m}\).
Three Variables, Three Constraints, Two Inequalities (Only One to Prove) - by Leo Giugiuc

Problem: $a+b+c=0$ and $a^2+b^2+c^2\ge 2.$ Prove that $abc\ge 0.$

Solution 1

At least one of $a,b,c$ is non-positive. Let it be $c:$ $-1\le c\le 0.$ Then, on squaring, $0\le c^2\le 1.$ From $a+b+c=0,$ $(a+b)^2=c^2.$ Further, from $a^2+b^2+c^2\ge 2,$

$\begin{align} 2+2ab &\le a^2+b^2+2ab+c^2=(a+b)^2+c^2\\ &=2c^2\le 2. \end{align}$

Thus $2+2ab\le 2,$ i.e., $ab\le 0.$ Now, since also $c\le 0,$ $abc\ge 0.$

Solution 2

From $(a+b+c)^2=0$ and $2\le a^2+b^2+c^2=-2(ab+bc+ca),$ it follows that $ab+bc+ca\le -1,$ which is equivalent to $ab+c(a+b)\le -1,$ i.e., $ab-c^2\le -1$ and, subsequently, $ab\le c^2-1.$ WLOG, $-1\le c\le 0,$ implying $c^2-1\le 0.$ Thus, $ab\le 0$ and, since $c\le 0,$ $abc\ge 0.$

Solution 3

Let $ab+bc+ca=-q.$ Then $q\ge 1.$ But $(-1-a)(-1-b)(-1-c)\le 0$ implies $q-1\le abc$ and, since $0\le q-1,$ $abc\ge 0.$ Equality is attained for $(a,b,c)=(-1,0,1)$ and permutations.

Solution 4

$a,b,c$ can't all be negative or all positive. It therefore suffices to show that it is impossible for exactly one of them to be negative. Let $c\lt 0,$ making $a+b\lt 1,$ so that $a^2+b^2+c^2\ge 2$ implies $a^2+b^2+ab-1\ge 0.$ The quadratic in $a$ can't have a positive discriminant, so $\displaystyle b\ge\frac{1}{\sqrt{3}},$ and the same for $a:$ $\displaystyle a\ge\frac{1}{\sqrt{3}}.$ It follows that $\displaystyle a+b\ge\frac{2}{\sqrt{3}}\gt 1,$ in contradiction with $a+b\lt 1.$

The problem, with a solution (Solution 3), was kindly posted by Leo Giugiuc at the CutTheKnotMath facebook page. Solution 1 is by Marian Cucoaneş; Solution 2 is by Nguyễn Ngọc Tú; Solution 4 is by Amitabh Bachchan.
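The solutions above repeatedly use two consequences of $a+b+c=0,$ namely $(a+b)^2=c^2$ and $a^2+b^2+c^2=-2(ab+bc+ca),$ together with the equality case $(a,b,c)=(-1,0,1).$ Here is a minimal symbolic check of those facts; it assumes SymPy, which is not part of the original page.

```python
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
on_constraint = {c: -(a + b)}   # enforce a + b + c = 0

# Identity used in Solution 1: (a + b)^2 = c^2 whenever a + b + c = 0.
assert sp.simplify(((a + b)**2 - c**2).subs(on_constraint)) == 0

# Identity used in Solution 2: a^2 + b^2 + c^2 = -2(ab + bc + ca) whenever a + b + c = 0.
assert sp.simplify((a**2 + b**2 + c**2 + 2*(a*b + b*c + c*a)).subs(on_constraint)) == 0

# Equality case from Solution 3: (a, b, c) = (-1, 0, 1).
vals = (-1, 0, 1)
print(sum(vals),                    # 0 -> a + b + c = 0
      sum(v**2 for v in vals),      # 2 -> a^2 + b^2 + c^2 >= 2 holds with equality
      vals[0] * vals[1] * vals[2])  # 0 -> abc >= 0 holds with equality
```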
A Cyclic But Not Symmetric Inequality in Four Variables $\left(\displaystyle 5(a+b+c+d)+\frac{26}{abc+bcd+cda+dab}\ge 26.5\right)$ An Inequality with Constraint $\left((x+1)(y+1)(z+1)\ge 4xyz\right)$ An Inequality with Constraints II $\left(\displaystyle abc+\frac{2}{ab+bc+ca}\ge\frac{5}{a^2+b^2+c^2}\right)$ An Inequality with Constraint III $\left(\displaystyle \frac{x^3}{y^2}+\frac{y^3}{z^2}+\frac{z^3}{x^2}\ge 3\right)$ An Inequality with Constraint IV $\left(\displaystyle\sum_{k=1}^{n}\sqrt{x_k}\ge (n-1)\sum_{k=1}^{n}\frac{1}{\sqrt{x_k}}\right)$ An Inequality with Constraint VII $\left(|(2x+3y-5z)-3(x+y-5z)|=|-x+10z|\le\sqrt{101}\right)$ An Inequality with Constraint VIII $\left(\sqrt{24a+1}+\sqrt{24b+1}+\sqrt{24c+1}\ge 15\right)$ An Inequality with Constraint IX $\left(x^2+y^2\ge x+y\right)$ An Inequality with Constraint X $\left((x+y+p+q)-(x+y)(p+q)\ge 1\right)$ Problem 11804 from the AMM $\left(10|x^3 + y^3 + z^3 - 1| \le 9|x^5 + y^5 + z^5 - 1|\right)$ Sladjan Stankovik's Inequality With Constraint $\left(abc+bcd+cda+dab-abcd\le\displaystyle \frac{27}{16}\right)$ An Inequality with Constraint XII $\left(abcd\ge ab+bc+cd+da+ac+bd-5\right)$ An Inequality with Constraint XIV $\left(\small{64(a^2+ab+b^2)(b^2+bc+c^2)(c^2+ca+a^2) \le 3(a+b+c)^6}\right)$ An Inequality with Constraint XVII $\left(a^3+b^3+c^3\ge 0\right)$ An Inequality with Constraint in Four Variables II $\left(a^3+b^3+c^3+d^3 + 6abcd \ge 10\right)$ An Inequality with Constraint in Four Variables III $\left(\displaystyle\small{abcd+\frac{15}{2(ab+ac+ad+bc+bd+cd)}\ge\frac{9}{a^2+b^2+c^2+d^2}}\right)$ An Inequality with Constraint in Four Variables V $\left(\displaystyle 5\sum \frac{abc}{\sqrt[3]{(1+a^3)(1+b^3)(1+c^3)}}\leq 4\right)$ An Inequality with Constraint in Four Variables VI $\left(\displaystyle \sum_{cycl}a^2+6\cdot\frac{\displaystyle \sum_{cycl}abc}{\displaystyle \sum_{cycl}a}\ge\frac{5}{3}\sum_{sym}ab\right)$ A Cyclic Inequality in Three Variables with Constraint $\left(\displaystyle a\sqrt{bc}+b\sqrt{ca}+c\sqrt{ab}+2abc=1\right)$ Dorin Marghidanu's Cyclic Inequality with Constraint $\left(\displaystyle 2a^2-2\sqrt{2}(b+c)a+3b^2+4c^2-2\sqrt{bc}\gt 0\right)$ Dan Sitaru's Cyclic Inequality In Three Variables with Constraints $\left(\displaystyle \frac{1}{\sqrt{a+b^2}}+ \frac{1}{\sqrt{b+c^2}}+ \frac{1}{\sqrt{c+a^2}}\ge\frac{1}{\sqrt{a+b+c}}\right)$ Dan Sitaru's Cyclic Inequality In Three Variables with Constraints II $\left(\displaystyle \sum_{cycl}\frac{\displaystyle \frac{x}{y}+1+\frac{y}{x}}{\displaystyle \frac{1}{x}+\frac{1}{y}}\le 9\right)$ Dan Sitaru's Cyclic Inequality In Three Variables with Constraints III $\left(\displaystyle 12+\sum_{cycl}\left(\sqrt{\frac{x^3}{y}}+\sqrt{\frac{x^3}{y}}\right)\ge 8(x+y+z)\right)$ Inequality with Constraint from Dan Sitaru's Math Phenomenon $\left(\displaystyle b+2a+20\ge 2\sum_{cycl}\frac{a^2+ab+b^2}{a+b}\ge b+2c+20\right)$ Another Problem from the 2016 Danubius Contest $\left(\displaystyle \frac{1}{a^2+2}+\frac{1}{b^2+2}+\frac{1}{c^2+2}\le 1\right)$ Gireaux's Theorem (If a continuous function of several variables is defined on a hyperbrick and is convex in each of the variables, it attains its maximum at one of the corners) An Inequality with a Parameter and a Constraint $\left(\displaystyle a^4+b^4+c^4+\lambda abc\le\frac{\lambda +1}{27}\right)$ Unsolved Problem from Crux Solved $\left(a_1a_2a_3a_4a_5a_6\le\displaystyle \frac{5}{2}\right)$ An Inequality With Six Variables and Constraints Find the range of $\left(a^2+b^2+c^2+d^2+e^2+f^2\right)$ Cubes Constrained 
$\left(3(a^4+b^4)+2a^4b^4\le 8\right)$ Dorin Marghidanu's Inequality with Constraint $\left(\displaystyle \frac{1}{a_1+1}+\frac{2}{2a_2+1}+\frac{3}{3a_3+1}\ge 4\right)$ Dan Sitaru's Integral Inequality with Powers of a Function $\left(\displaystyle\left(\int_0^1f^5(x)dx\right)\left(\int_0^1f^7(x)dx\right)\left(\int_0^1f^9(x)dx\right)\ge 2\right)$ Michael Rozenberg's Inequality in Three Variables with Constraints $\left(\displaystyle 4\sum_{cycl}ab(a^2+b^2)\ge\sum_{cycl}a^4+5\sum_{cycl}a^2b^2+2abc\sum_{cycl}a\right)$ Dan Sitaru's Cyclic Inequality In Three Variables with Constraints IV $\left(\displaystyle \frac{(4x^2y^2+1)(36y^2z^2+1)(9x^2z^2+1)}{2304x^2y^2z^2}\geq \frac{1}{(x+2y+3z)^2}\right)$ Refinement on Dan Sitaru's Cyclic Inequality In Three Variables $\left(\displaystyle \frac{(4x^2y^2+1)(36y^2z^2+1)(9x^2z^2+1)}{2304x^2y^2z^2}\geq \frac{1}{3\sqrt{3}}\right)$ An Inequality with Arbitrary Roots $\left(\displaystyle \sum_{cycl}\left(\sqrt[n]{a+\sqrt[n]{a}}+\sqrt[n]{a-\sqrt[n]{a}}\right)\lt 18\right)$ Leo Giugiuc's Inequality with Constraint $\left(\displaystyle 2\left(\frac{1}{a+1}+\frac{1}{b+1}+\frac{1}{c+1}\right)\le ab+bc+ca\right)$ Problem From the 2016 IMO Shortlist $\left(\displaystyle \sqrt[3]{(a^2+1)(b^2+1)(c^2+1)}\le\left(\frac{a+b+c}{3}\right)^2+1\right)$ Dan Sitaru's Cyclic Inequality with a Constraint and Cube Roots $\left(\displaystyle \sum_{cycl}\sqrt[3]{\frac{abc}{(a+1)(b+1)(c+1)}}\le\frac{4}{5}\right)$ Dan Sitaru's Cyclic Inequality with a Constraint and Cube Roots II $\left(\displaystyle \sqrt[3]{a}+\sqrt[3]{b}+\sqrt[3]{c}+\sqrt[3]{d}\le\sqrt[3]{abcd}\right)$ A Simplified Version of Leo Giugiuc's Inequality from the AMM $\left(\displaystyle a^3+b^3+c^3\ge 3\right)$ Kunihiko Chikaya's Inequality $\displaystyle \small{\left(\frac{(a^{10}-b^{10})(b^{10}-c^{10})(c^{10}-a^{10})}{(a^{9}+b^{9})(b^{9}+c^{9})(c^{9}+a^{9})}\ge\frac{125}{3}[(a-b)^3+(b-c)^3+(c-a)^3]\right)}$ A Cyclic Inequality on [-1,1] $\left(xy+yz+zx\ge 1\right)$ An Inequality with Two Triples of Variables $\left(\displaystyle\sum_{cycl}ux\ge\sqrt{\left(\sum_{cycl}xy\right)\left(2\sum_{cycl}uv-\sum_{cycl}u^2\right)}\right)$ 6th European Mathematical Cup (2017), Junior Problem 4 $\left(x^3 - (y^2 + yz + z^2)x + y^2z + yz^2 \le 3\sqrt{3}\right)$ Dorin Marghidanu's Example $\left(\displaystyle\frac{\displaystyle\frac{1}{b_1}+\frac{2}{b_2}+\frac{3}{b_3}}{1+2+3}\ge\frac{1+2+3}{b_1+2b_2+3b_3}\right)$ A Trigonometric Inequality with Ordered Triple of Variables $\left((x+y)\sin x+(x-z)\sin y\lt (y+z)\sin x\right)$ Three Variables, Three Constraints, Two Inequalities (Only One to Prove) - by Leo Giugiuc $\bigg(a+b+c=0$ and $a^2+b^2+c^2\ge 2$ Prove that $abc\ge 0\bigg)$ Hung Nguyen Viet's Inequality with a Constraint $\left(1+2(xy+yz+zx)^2\ge (x^3+y^3+z^3+6xyz)^2\right)$ A Cyclic Inequality by Seyran Ibrahimov $\left(\displaystyle \sum_{cycl}\frac{x}{y^4+y^2z^2+z^4}\le\frac{1}{(xyz)^2}\right)$ Dan Sitaru's Cyclic Inequality In Three Variables with Constraints V $\left(\displaystyle \frac{1}{\sqrt{ab(a+b)}}+\frac{1}{\sqrt{bc(b+c)}}+\frac{1}{\sqrt{ca(c+a)}}\le 3+\frac{a+b+c}{abc}\right)$ Cyclic Inequality In Three Variables From Kvant $\left(\displaystyle \frac{a}{bc+1}+\frac{b}{ca+1}+\frac{c}{ab+1}\le 2\right)$ Cyclic Inequality In Three Variables From Vietnam by Rearrangement $\left(\displaystyle \frac{x^3+y^3}{y^2+z^2}+\frac{y^3+z^3}{z^2+x^2}+\frac{z^3+x^3}{x^2+y^2}\le 3\right)$ A Few Variants of a Popular Inequality And a Generalization $\left(\displaystyle \frac{1}{(a+b)^2+4}+\frac{1}{(b+c)^2+4}+\frac{1}{(c+a)^2+4}\le 
\frac{3}{8}\right)$ Two Constraints, One Inequality by Qing Song $\left(|a|+|b|+|c|\ge 6\right)$ A Moscow Olympiad Question with Two Inequalities $\left(\displaystyle b^2\gt 4ac\right)$ A Problem from the Short List of the 2018 JBMO $\left(ab^3+bc^3+cd^3+da^3\ge a^2b^2+b^2c^2+c^2d^2+d^2a^2\right)$ An Inequality from a Mongolian Exam $\left(\displaystyle 2\sum_{i=1}^{2n-1}(x_i-A)^2\ge \sum_{i=1}^{2n-1}(x_i-x_n)^2\right)$
Octahedron

A polyhedron with 8 faces. For the album by The Mars Volta, see Octahedron (album).

Regular octahedron
- Type: Platonic solid
- Elements: F = 8, E = 12, V = 6 (χ = 2)
- Faces by sides: 8{3}
- Conway notation: O
- Schläfli symbols: {3,4}; r{3,3} or $\begin{Bmatrix}3\\3\end{Bmatrix}$
- Face configuration: V4.4.4
- Wythoff symbol: 4 | 2 3
- Symmetry: Oh, BC3, [4,3], (*432)
- Rotation group: O, [4,3]+, (432)
- References: U05, C17, W2
- Properties: regular, convex, deltahedron
- Dihedral angle: 109.47122° = arccos(−1/3)

In geometry, an octahedron (plural: octahedra) is a polyhedron with eight faces, twelve edges, and six vertices. The term is most commonly used to refer to the regular octahedron, a Platonic solid composed of eight equilateral triangles, four of which meet at each vertex.

A regular octahedron is the dual polyhedron of a cube and is a rectified tetrahedron. It is a square bipyramid in any of three orthogonal orientations, and a triangular antiprism in any of four orientations. An octahedron is the three-dimensional case of the more general concept of a cross polytope. A regular octahedron is a 3-ball in the Manhattan (ℓ1) metric.

Regular octahedron

Dimensions

If the edge length of a regular octahedron is $a$, the radius of a circumscribed sphere (one that touches the octahedron at all vertices) is

$r_u = \frac{a}{2}\sqrt{2} \approx 0.707\,a,$

the radius of an inscribed sphere (tangent to each of the octahedron's faces) is

$r_i = \frac{a}{6}\sqrt{6} \approx 0.408\,a,$

while the midradius, which touches the middle of each edge, is

$r_m = \frac{a}{2} = 0.5\,a.$

Orthogonal projections

The octahedron has four special orthogonal projections: centered on an edge, on a vertex, on a face, and normal to a face. The second and third correspond to the B2 and A2 Coxeter planes.

Spherical tiling

The octahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection; this projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.

Cartesian coordinates

An octahedron with edge length √2 can be placed with its center at the origin and its vertices on the coordinate axes; the Cartesian coordinates of the vertices are then

(±1, 0, 0); (0, ±1, 0); (0, 0, ±1).

In an x–y–z Cartesian coordinate system, the octahedron with center coordinates (a, b, c) and radius r is the set of all points (x, y, z) such that

$\left|x-a\right| + \left|y-b\right| + \left|z-c\right| = r.$
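A minimal numeric check (plain Python; not part of the source article) of the quantities above, using the standard vertices (±1, 0, 0), (0, ±1, 0), (0, 0, ±1): the edge length is √2, the circumradius, midradius and inradius formulas agree with direct distance computations, and every vertex satisfies |x| + |y| + |z| = 1, matching the Manhattan-metric description.

```python
from itertools import combinations
from math import sqrt, isclose

verts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def dist(p, q):
    return sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

a = min(dist(p, q) for p, q in combinations(verts, 2))   # edge length = sqrt(2)

r_u = (a / 2) * sqrt(2)    # circumradius formula quoted above
r_m = a / 2                # midradius formula quoted above
r_i = (a / 6) * sqrt(6)    # inradius formula quoted above

assert isclose(r_u, dist((0, 0, 0), (1, 0, 0)))         # centre to a vertex
assert isclose(r_m, dist((0, 0, 0), (0.5, 0.5, 0)))     # centre to an edge midpoint
assert isclose(r_i, 1 / sqrt(3))                        # centre to the face plane x + y + z = 1
assert all(sum(abs(x) for x in v) == 1 for v in verts)  # vertices of the unit L1 ball
print(a, r_u, r_m, r_i)
```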
Area and volume

The surface area $A$ and the volume $V$ of a regular octahedron of edge length $a$ are:

$A = 2\sqrt{3}\,a^2 \approx 3.464\,a^2$

$V = \tfrac{1}{3}\sqrt{2}\,a^3 \approx 0.471\,a^3$

Thus the volume is four times that of a regular tetrahedron with the same edge length, while the surface area is twice as large (because we have 8 rather than 4 triangles).

If an octahedron has been stretched so that it obeys the equation

$\left|\frac{x}{x_m}\right| + \left|\frac{y}{y_m}\right| + \left|\frac{z}{z_m}\right| = 1,$

the formulas for the surface area and volume expand to become

$A = 4\,x_m y_m z_m \sqrt{\frac{1}{x_m^2} + \frac{1}{y_m^2} + \frac{1}{z_m^2}}, \qquad V = \frac{4}{3}\,x_m y_m z_m.$

Additionally, the inertia tensor of the stretched octahedron is

$I = \begin{bmatrix} \frac{1}{10}m(y_m^2+z_m^2) & 0 & 0 \\ 0 & \frac{1}{10}m(x_m^2+z_m^2) & 0 \\ 0 & 0 & \frac{1}{10}m(x_m^2+y_m^2) \end{bmatrix}.$

These reduce to the equations for the regular octahedron when

$x_m = y_m = z_m = a\,\frac{\sqrt{2}}{2}.$

Geometric relations

The interior of the compound of two dual tetrahedra is an octahedron, and this compound, called the stella octangula, is its first and only stellation; the octahedron thus represents the central intersection of the two tetrahedra. Correspondingly, a regular octahedron is the result of cutting off from a regular tetrahedron four regular tetrahedra of half the linear size (i.e. rectifying the tetrahedron). The vertices of the octahedron lie at the midpoints of the edges of the tetrahedron, and in this sense it relates to the tetrahedron in the same way that the cuboctahedron and icosidodecahedron relate to the other Platonic solids.

One can also divide the edges of an octahedron in the ratio of the golden mean to define the vertices of an icosahedron; this is done by first placing vectors along the octahedron's edges such that each face is bounded by a cycle, then similarly partitioning each edge into the golden mean along the direction of its vector. There are five octahedra that define any given icosahedron in this fashion, and together they define a regular compound.

Octahedra and tetrahedra can be alternated to form a vertex-, edge-, and face-uniform tessellation of space, called the octet truss by Buckminster Fuller; this is the only such tiling save the regular tessellation of cubes, and is one of the 28 convex uniform honeycombs. Another is a tessellation of octahedra and cuboctahedra.

The octahedron is unique among the Platonic solids in having an even number of faces meeting at each vertex. Consequently, it is the only member of that group to possess mirror planes that do not pass through any of the faces.

Using the standard nomenclature for Johnson solids, an octahedron would be called a square bipyramid. Truncation of two opposite vertices results in a square bifrustum.
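A short numeric sketch (plain Python; not part of the source article) confirming the area and volume statements given earlier in this section: the stretched-octahedron formulas reduce to the regular ones when x_m = y_m = z_m = a·sqrt(2)/2, and the volume is four times that of a regular tetrahedron with the same edge length.

```python
from math import sqrt, isclose

a = 1.7                               # any edge length

A_reg = 2 * sqrt(3) * a**2            # regular octahedron surface area
V_reg = sqrt(2) / 3 * a**3            # regular octahedron volume

m = a * sqrt(2) / 2                   # x_m = y_m = z_m for the regular case
A_str = 4 * m**3 * sqrt(3 / m**2)     # stretched-octahedron area with x_m = y_m = z_m = m
V_str = 4 / 3 * m**3                  # stretched-octahedron volume with x_m = y_m = z_m = m

V_tet = a**3 / (6 * sqrt(2))          # regular tetrahedron volume, same edge length

assert isclose(A_reg, A_str) and isclose(V_reg, V_str)
assert isclose(V_reg, 4 * V_tet)      # "the volume is four times that of a regular tetrahedron"
print(A_reg, V_reg)
```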
The octahedron is 4-connected, meaning that it takes the removal of four vertices to disconnect the remaining vertices. It is one of only four 4-connected simplicial well-covered polyhedra, meaning that all of the maximal independent sets of its vertices have the same size; the other three polyhedra with this property are the pentagonal dipyramid, the snub disphenoid, and an irregular polyhedron with 12 vertices and 20 triangular faces.[1]

Uniform colorings and symmetry

There are 3 uniform colorings of the octahedron, named by the triangular face colors going around each vertex: 1212, 1112, 1111.

The octahedron's symmetry group is Oh, of order 48, the three-dimensional hyperoctahedral group. This group's subgroups include D3d (order 12), the symmetry group of a triangular antiprism; D4h (order 16), the symmetry group of a square bipyramid; and Td (order 24), the symmetry group of a rectified tetrahedron. These symmetries can be emphasized by different colorings of the faces, corresponding to the following constructions:

- Regular octahedron: Schläfli symbol {3,4}; symmetry Oh, [4,3], (*432)
- Rectified tetrahedron (tetratetrahedron): r{3,3}; symmetry Td, [3,3], (*332)
- Triangular antiprism: s{2,6} or sr{2,3}; symmetry D3d, [2+,6], (2*3) or D3, [2,3]+, (322)
- Square bipyramid: ft{2,4} = { } + {4}; symmetry D4h, [2,4], (*422)
- Rhombic fusil: ftr{2,2} = { } + { } + { }; symmetry D2h, [2,2], (*222)

Nets

The regular octahedron has eleven arrangements of nets.

Dual

The octahedron is the dual polyhedron of the cube.

Faceting

The uniform tetrahemihexahedron is a tetrahedral-symmetry faceting of the regular octahedron, sharing its edge and vertex arrangement; it has four of the triangular faces and 3 central squares.

Irregular octahedra

The following polyhedra are combinatorially equivalent to the regular octahedron: they all have six vertices, eight triangular faces, and twelve edges that correspond one-for-one with the features of a regular octahedron.

- Triangular antiprisms: Two faces are equilateral, lie on parallel planes, and have a common axis of symmetry. The other six triangles are isosceles.
- Tetragonal bipyramids, in which at least one of the equatorial quadrilaterals lies on a plane. The regular octahedron is a special case in which all three quadrilaterals are planar squares.
- Schönhardt polyhedron, a non-convex polyhedron that cannot be partitioned into tetrahedra without introducing new vertices.
- Bricard octahedron, a non-convex self-crossing flexible polyhedron.

Other convex octahedra

More generally, an octahedron can be any polyhedron with eight faces. The regular octahedron has 6 vertices and 12 edges, the minimum for an octahedron; irregular octahedra may have as many as 12 vertices and 18 edges.[2] There are 257 topologically distinct convex octahedra, excluding mirror images; more specifically, there are 2, 11, 42, 74, 76, 38, 14 for octahedra with 6 to 12 vertices respectively.[3][4] (Two polyhedra are "topologically distinct" if they have intrinsically different arrangements of faces and vertices, such that it is impossible to distort one into the other simply by changing the lengths of edges or the angles between edges or faces.)

Some better known irregular octahedra include the following:

- Hexagonal prism: Two faces are parallel regular hexagons; six squares link corresponding pairs of hexagon edges.
- Heptagonal pyramid: One face is a heptagon (usually regular), and the remaining seven faces are triangles (usually isosceles). It is not possible for all triangular faces to be equilateral.
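A small plain-Python sketch (not part of the source article) checking the combinatorial claims made at the start of this passage: 6 vertices, 12 edges, Euler's formula V - E + F = 2 with F = 8, and 4-connectivity in the sense that removing any 3 vertices leaves the remaining vertices connected.

```python
from itertools import combinations

verts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
# In the octahedron graph, every vertex is adjacent to all others except its antipode.
adj = {v: [u for u in verts if u != v and u != tuple(-x for x in v)] for v in verts}

E = sum(len(nbrs) for nbrs in adj.values()) // 2
assert (len(verts), E) == (6, 12)
assert len(verts) - E + 8 == 2          # Euler's formula with F = 8 faces

def still_connected(removed):
    remaining = [v for v in verts if v not in removed]
    seen, stack = {remaining[0]}, [remaining[0]]
    while stack:
        for u in adj[stack.pop()]:
            if u not in removed and u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == len(remaining)

# No set of 3 vertices disconnects the rest, consistent with 4-connectivity.
assert all(still_connected(set(cut)) for cut in combinations(verts, 3))
print("6 vertices, 12 edges, 8 faces; removing any 3 vertices keeps the graph connected")
```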
- Truncated tetrahedron: The four faces from the tetrahedron are truncated to become regular hexagons, and there are four more equilateral triangle faces where each tetrahedron vertex was truncated.
- Tetragonal trapezohedron: The eight faces are congruent kites.

Octahedra in the physical world

Octahedra in nature

Natural crystals of diamond, alum or fluorite are commonly octahedral, as is the space-filling tetrahedral-octahedral honeycomb. The plates of kamacite alloy in octahedrite meteorites are arranged paralleling the eight faces of an octahedron, producing the Widmanstätten patterns seen in nickel-iron crystals. Many metal ions coordinate six ligands in an octahedral or distorted octahedral configuration.

Octahedra in art and culture

Two identically formed Rubik's Snakes can approximate an octahedron. Especially in roleplaying games, this solid is known as a "d8", one of the more common polyhedral dice. In the film Tron (1982), the character Bit took this shape as the "Yes" state. If each edge of an octahedron is replaced by a one-ohm resistor, the resistance between opposite vertices is 1/2 ohm, and that between adjacent vertices 5/12 ohm[5] (a numerical check of these values appears further below). Six musical notes can be arranged on the vertices of an octahedron in such a way that each edge represents a consonant dyad and each face represents a consonant triad; see hexany.

Tetrahedral truss

A framework of repeating tetrahedra and octahedra was invented by Buckminster Fuller in the 1950s. It is known as a space frame and is commonly regarded as the strongest structure for resisting cantilever stresses.

Related polyhedra

A regular octahedron can be augmented into a tetrahedron by adding 4 tetrahedra on alternated faces. Adding tetrahedra to all 8 faces creates the stellated octahedron.

The octahedron is one of a family of uniform polyhedra related to the cube. [Table: uniform octahedral polyhedra with symmetry [4,3], (*432), and their duals.]

It is also one of the simplest examples of a hypersimplex, a polytope formed by certain intersections of a hypercube with a hyperplane.

The octahedron is topologically related as a part of a sequence of regular polyhedra with Schläfli symbols {3,n}, continuing into the hyperbolic plane. [Table: *n32 symmetry mutation of regular tilings {3,n}.]

Tetratetrahedron

The regular octahedron can also be considered a rectified tetrahedron, and can be called a tetratetrahedron. This can be shown by a 2-color face model; with this coloring, the octahedron has tetrahedral symmetry. [Table: truncation sequence between a tetrahedron and its dual, the family of uniform tetrahedral polyhedra.]

The above shapes may also be realized as slices orthogonal to the long diagonal of a tesseract. If this diagonal is oriented vertically with a height of 1, then the first five slices above occur at heights r, 3/8, 1/2, 5/8, and s, where r is any number in the range 0 < r ≤ 1/4, and s is any number in the range 3/4 ≤ s < 1.
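Returning to the resistor-network fact quoted in the art-and-culture paragraph above, here is a minimal sketch (it assumes NumPy, which is not part of the source article) that computes effective resistances from the pseudoinverse of the octahedron graph's Laplacian, R_ij = L+_ii + L+_jj - 2*L+_ij, reproducing 1/2 ohm between opposite vertices and 5/12 ohm between adjacent ones.

```python
import numpy as np

verts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
n = len(verts)

# Adjacency matrix: a 1-ohm resistor on every edge (vertices adjacent unless antipodal).
A = np.zeros((n, n))
for i, v in enumerate(verts):
    for j, u in enumerate(verts):
        if u != v and u != tuple(-x for x in v):
            A[i, j] = 1.0

L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
Lp = np.linalg.pinv(L)              # Moore-Penrose pseudoinverse

def resistance(i, j):
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

print(resistance(0, 1))   # opposite vertices: 0.5 ohm
print(resistance(0, 2))   # adjacent vertices: 5/12 ohm (about 0.4167)
```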
The octahedron as a tetratetrahedron exists in a sequence of symmetries of quasiregular polyhedra and tilings with vertex configurations (3.n)2, progressing from tilings of the sphere to the Euclidean plane and into the hyperbolic plane. With orbifold notation symmetry of *n32, all of these tilings are Wythoff constructions within a fundamental domain of symmetry, with generator points at the right-angle corner of the domain.[6][7] [Table: *n32 orbifold symmetries of quasiregular tilings (3.n)2.]

Trigonal antiprism

As a trigonal antiprism, the octahedron is related to the hexagonal dihedral symmetry family. [Tables: uniform hexagonal dihedral spherical polyhedra; the family of uniform antiprisms.]

Square bipyramid

[Table: the family of bipyramids.]

See also

Octahedral number, Centered octahedral number, Spinning octahedron, Stella octangula, Triakis octahedron, Hexakis octahedron, Truncated octahedron, Octahedral molecular geometry, Octahedral symmetry, Octahedral graph.

References

^ Finbow, Arthur S.; Hartnell, Bert L.; Nowakowski, Richard J.; Plummer, Michael D. (2010). "On well-covered triangulations. III". Discrete Applied Mathematics 158 (8): 894–912. doi:10.1016/j.dam.2009.08.002. MR 2602814.
^ Counting polyhedra.
^ "Archived copy". Archived from the original on 17 November 2014. Retrieved 14 August 2016.
^ Klein, Douglas J. (2002). "Resistance-Distance Sum Rules" (PDF). Croatica Chemica Acta 75 (2): 633–649. Retrieved 30 September 2006.
^ Coxeter, Regular Polytopes, Third edition (1973), Dover edition, ISBN 0-486-61480-8. (Chapter V: The Kaleidoscope, Section 5.7: Wythoff's construction.)
^ "Two Dimensional symmetry Mutations" by Daniel Huson.
There are some generalizations of the concept of a uniform polyhedron. If the connectedness assumption is dropped we get uniform compounds, which can be split as a union of polyhedra, such as the compound of 5 cubes. If we drop the condition that the realization of the polyhedron is non-degenerate we get the so-called degenerate uniform polyhedra; these require a more general definition of polyhedra. Grunbaum gave a rather complicated definition of a polyhedron, while McMullen & Schulte gave a simpler and more general definition of a polyhedron: in their terminology, a polyhedron is a 2-dimensional abstract polytope with a non-degenerate 3-dimensional realization. Here an abstract polytope is a poset of its "faces" satisfying various condition, a realization is a function from its vertices to some space, the realization is called non-degenerate if any two distinct faces of the abstract polytope have distinct realizations. Some of the ways they can be degenerate are as follows: Hidden faces; some polyhedra have faces that are hidden, in the sense that no points of their interior can be seen from the outside. These are not counted as uniform polyhedra. Degenerate compounds; some polyhedra have multiple edges and their faces are the faces of two or more polyhedra, though these are not compounds in the previous sense since the polyhedra share edges. Double covers. There are some non-orientable polyhedra that have double covers satisfying the definition of a uniform polyhedron. There double covers have doubled faces and vertices, they are not counted as uniform polyhedra. Double faces. There are several polyhedra with doubled faces produced by Wythoff's construction. Most authors do not remove them as part of the construction. Double edges. Skilling's figure has the property that it has double edges but its faces cannot be written as a union of two uniform polyhedra; the Platonic solids date back to the classical Greeks and were studied by the Pythagoreans, Theaetetus, Timaeus of Locri and Euclid. The Etruscans discovered the regular dodecahedron before 500 BC; the cuboctahedron was known by Plato. Archimedes discovered all of the 13 Archimedean solids, his original book on the subject was lost, but Pappus of Alexandria mentioned Archimedes listed 13 polyhedra. Piero della Francesca rediscovered the five truncation of the Platonic solids: truncated tetrahedron, truncated octahedron, truncated cube, truncated dodecahedron, truncated icosahedron. Luca Pacioli republished Francesca's work in De divina proportione in 1509, adding the rhombicuboctahedron, calling it a icosihexahedron for its 26 faces, drawn by Leonardo da Vinci. Johannes Kepler was the first to publish the complete list of Archimedean solids, in 1619, as well as identified the infinite families of uniform prisms and antiprisms. Kepler discovered two of the regular Kepler–Poinsot polyhedra and Louis Poinsot discovered the other two. The set of four were named by Arthur Cayley. Of the remaining 53, Edmund Hess discovered two, Albert Badoureau discovered 36 more, Pitsch independently discovered 18, of which 3 had not been discovered. Together these gave 41 polyhedra; the geometer H. S. M. Coxeter did not publish. M. S. Longuet-Higgins and H. C. Longuet-Higgins independently discovered eleven of these. Lesavre and Mercier rediscovered five of them in 1947. Coxeter, Longuet-Higgins & Miller published the list of uniform polyhedra. 
Sopov (19 In geometry, a tetrahedron known as a triangular pyramid, is a polyhedron composed of four triangular faces, six straight edges, four vertex corners. The tetrahedron is the simplest of all the ordinary convex polyhedra and the only one that has fewer than 5 faces; the tetrahedron is the three-dimensional case of the more general concept of a Euclidean simplex, may thus be called a 3-simplex. The tetrahedron is one kind of pyramid, a polyhedron with a flat polygon base and triangular faces connecting the base to a common point. In the case of a tetrahedron the base is a triangle, so a tetrahedron is known as a "triangular pyramid". Like all convex polyhedra, a tetrahedron can be folded from a single sheet of paper, it has two such nets. For any tetrahedron there exists a sphere on which all four vertices lie, another sphere tangent to the tetrahedron's faces. A regular tetrahedron is one, it is one of the five regular Platonic solids. In a regular tetrahedron, all faces are the same size and shape and all edges are the same length. Regular tetrahedra alone do not tessellate, but if alternated with regular octahedra in the ratio of two tetrahedra to one octahedron, they form the alternated cubic honeycomb, a tessellation. The regular tetrahedron is self-dual; the compound figure comprising two such dual tetrahedra form a stellated octahedron or stella octangula. The following Cartesian coordinates define the four vertices of a tetrahedron with edge length 2, centered at the origin, two level edges: and Expressed symmetrically as 4 points on the unit sphere, centroid at the origin, with lower face level, the vertices are: v1 = v2 = v3 = v4 = with the edge length of sqrt. Still another set of coordinates are based on an alternated cube or demicube with edge length 2; this form has Coxeter diagram and Schläfli symbol h. The tetrahedron in this case has edge length 2√2. Inverting these coordinates generates the dual tetrahedron, the pair together form the stellated octahedron, whose vertices are those of the original cube. Tetrahedron:, Dual tetrahedron:, For a regular tetrahedron of edge length a: With respect to the base plane the slope of a face is twice that of an edge, corresponding to the fact that the horizontal distance covered from the base to the apex along an edge is twice that along the median of a face. In other words, if C is the centroid of the base, the distance from C to a vertex of the base is twice that from C to the midpoint of an edge of the base. This follows from the fact that the medians of a triangle intersect at its centroid, this point divides each of them in two segments, one of, twice as long as the other. For a regular tetrahedron with side length a, radius R of its circumscribing sphere, distances di from an arbitrary point in 3-space to its four vertices, we have d 1 4 + d 2 4 + d 3 4 + d 4 4 4 + 16 R 4 9 = 2. Point groups in three dimensions In geometry, a point group in three dimensions is an isometry group in three dimensions that leaves the origin fixed, or correspondingly, an isometry group of a sphere. It is a subgroup of the orthogonal group O, the group of all isometries that leave the origin fixed, or correspondingly, the group of orthogonal matrices. O itself is a subgroup of the Euclidean group E of all isometries. Symmetry groups of objects are isometry groups. Accordingly, analysis of isometry groups is analysis of possible symmetries. All isometries of a bounded 3D object have one or more common fixed points. 
We choose the origin as one of them; the symmetry group of an object is sometimes called full symmetry group, as opposed to its rotation group or proper symmetry group, the intersection of its full symmetry group and the rotation group SO of the 3D space itself. The rotation group of an object is equal to its full symmetry group if and only if the object is chiral; the point groups in three dimensions are used in chemistry to describe the symmetries of a molecule and of molecular orbitals forming covalent bonds, in this context they are called molecular point groups. Finite Coxeter groups are a special set of point groups generated purely by a set of reflectional mirrors passing through the same point. A rank n Coxeter group is represented by a Coxeter -- Dynkin diagram. Coxeter notation offers a bracketed notation equivalent to the Coxeter diagram, with markup symbols for rotational and other subsymmetry point groups. SO is a subgroup of E +, which consists of i.e. isometries preserving orientation. O is the direct product of SO and the group generated by inversion: O = SO × Thus there is a 1-to-1 correspondence between all direct isometries and all indirect isometries, through inversion. There is a 1-to-1 correspondence between all groups of direct isometries H in O and all groups K of isometries in O that contain inversion: K = H × H = K ∩ SOFor instance, if H is C2 K is C2h, or if H is C3 K is S6. If a group of direct isometries H has a subgroup L of index 2 apart from the corresponding group containing inversion there is a corresponding group that contains indirect isometries but no inversion: M = L ∪ where isometry is identified with A. An example would be C4 for H and S4 for M. Thus M is obtained from H by inverting the isometries in H ∖ L; this group M is as abstract group isomorphic with H. Conversely, for all isometry groups that contain indirect isometries but no inversion we can obtain a rotation group by inverting the indirect isometries; this is clarifying when see below. In 2D the cyclic group of k-fold rotations Ck is for every positive integer k a normal subgroup of O and SO. Accordingly, in 3D, for every axis the cyclic group of k-fold rotations about that axis is a normal subgroup of the group of all rotations about that axis. Since any subgroup of index two is normal, the group of rotations is normal both in the group obtained by adding reflections in planes through the axis and in the group obtained by adding a reflection plane perpendicular to the axis; the isometries of R3 that leave the origin fixed, forming the group O, can be categorized as follows: SO: identity rotation about an axis through the origin by an angle not equal to 180° rotation about an axis through the origin by an angle of 180° the same with inversion, i.e. respectively: inversion rotation about an axis by an angle not equal to 180°, combined with reflection in the plane through the origin perpendicular to the axis reflection in a plane through the originThe 4th and 5th in particular, in a wider sense the 6th are called improper rotations. See the similar overview including translations. When comparing the symmetry type of two objects, the origin is chosen for each separately, i.e. they need not have the same center. Moreover, two objects are considered to be of the same symmetry type if their symmetry groups are conjugate subgroups of O. 
For example, two 3D objects have the same symmetry type: if both have mirror symmetry, but with respect to a different mirror plane if both have 3-fold rotational symmetry, but with respect to a different axis. In the case of multiple mirror planes and/or axes of rotation, two symmetry groups are of the same symmetry type if and only if there is a rotation mapping the whole structure of the first symmetry group to that of the second; the conjugacy definition would allow a mirror image of the structure, but this is not needed, the structure itself is achiral. For example, if a symmetry group contains a 3-fold axis of rotation, it contains rotations in two opposite directions. There are many infinite isometry groups. We may create non-cyclical abelian groups by adding more rotations around the same axis. There are non-abelian groups generated by rotations around different axes; these are free groups. They will be infinite. All the infinite groups mentioned so far are not closed as topological subgroups of O. We now discuss Conway polyhedron notation In geometry, Conway polyhedron notation, invented by John Horton Conway and promoted by George W. Hart, is used to describe polyhedra based on a seed polyhedron modified by various prefix operations. Conway and Hart extended the idea of using operators, like truncation as defined by Kepler, to build related polyhedra of the same symmetry. For example, tC represents a truncated cube, taC, parsed as t, is a truncated cuboctahedron; the simplest operator dual swaps vertex and face elements. Applied in a series, these operators allow many higher order polyhedra to be generated. Conway defined the operators abdegjkmost, while Hart added r and p. Conway's basic operations are sufficient to generate the Archimedean and Catalan solids from the Platonic solids; some basic operations can be made as composites of others. Implementations named further operators, sometimes referred to as "extended" operators. In general, it is difficult to predict the resulting appearance of the composite of two or more operations from a given seed polyhedron. For instance, ambo applied twice is the expand operation: aa = e, while a truncation after ambo produces bevel: ta = b. Many basic questions about Conway operators remain open, for instance, how many operators of a given "size" exist. In Conway's notation, operations on polyhedra are applied from right to left. For example, a cuboctahedron is an ambo cube, i.e. a = a C, a truncated cuboctahedron is t = t = t a C. Repeated application of an operator can be denoted with an exponent: j2. In general, Conway operators are not commutative; the resulting polyhedron has a fixed topology, while exact geometry is not specified: it can be thought of as one of many embeddings of a polyhedral graph on the sphere. The polyhedron is put into canonical form. Individual operators can be visualized in terms of "chambers", as below; each white chamber is a rotated version of the others. For achiral operators, the red chambers are a reflection of the white chambers. Achiral and chiral operators are called local symmetry-preserving operations and local operations that preserve orientation-preserving symmetries although the exact definition is a little more restrictive. The relationship between the number of vertices and faces of the seed and the polyhedron created by the operations listed in this article can be expressed as a matrix M x. 
When x is the operator, v, e, f are the vertices and faces of the seed, v ′, e ′, f ′ are the vertices and faces of the result M x =; the matrix for the composition of two operators is just the product of the matrixes for the two operators. Distinct operators may have the same matrix, for p and l; the edge count of the result is an integer multiple d of that of the seed: this is called the inflation rate, or the edge factor. The simplest operators, the identity operator S and the dual operator d, have simple matrix forms: M S = = I 3, M d = Two dual operators cancel out; when applied to other operators, the dual operator corresponds to horizontal and vertical reflections of the matrix. Operators can be grouped into groups of four by identifying the operators x, xd, dx, dxd. In this article, only the matrix for x is given. Hart introduced the reflection operator r. This is not a LOPSP, since it does not preserve orientation. R has no effect on achiral seeds, rr returns the original seed. An overline can be used to indicate the other chiral form of an operator. R does not affect the matrix. An operation is irreducible if it cannot be expressed as a composition of operators aside from d and r; the majority of Conway's original operators are irreducible: the exceptions are e, b, o, m. Some open questions about Conway A sphere is a round geometrical object in three-dimensional space, the surface of a round ball. Like a circle in a two-dimensional space, a sphere is defined mathematically as the set of points that are all at the same distance r from a given point, but in a three-dimensional space; this distance r is the radius of the ball, made up from all points with a distance less than r from the given point, the center of the mathematical ball. These are referred to as the radius and center of the sphere, respectively; the longest straight line segment through the ball, connecting two points of the sphere, passes through the center and its length is thus twice the radius. While outside mathematics the terms "sphere" and "ball" are sometimes used interchangeably, in mathematics the above distinction is made between a sphere, a two-dimensional closed surface, embedded in a three-dimensional Euclidean space, a ball, a three-dimensional shape that includes the sphere and everything inside the sphere, or, more just the points inside, but not on the sphere. The distinction between ball and sphere has not always been maintained and older mathematical references talk about a sphere as a solid. This is analogous to the situation in the plane, where the terms "circle" and "disk" can be confounded. In analytic geometry, a sphere with center and radius r is the locus of all points such that 2 + 2 + 2 = r 2. Let a, b, c, d, e be real numbers with a ≠ 0 and put x 0 = − b a, y 0 = − c a, z 0 = − d a, ρ = b 2 + c 2 + d 2 − a e a 2; the equation f = a + 2 + e = 0 has no real points as solutions if ρ < 0 and is called the equation of an imaginary sphere. If ρ = 0 the only solution of f = 0 is the point P 0 = and the equation is said to be the equation of a point sphere. In the case ρ > 0, f = 0 is an equation of a sphere whose center is P 0 and whose radius is ρ. If a in the above equation is zero f = 0 is the equation of a plane. 
Thus, a plane may be thought of as a sphere of infinite radius; the points on the sphere with radius r > 0 and center can be parameterized via x = x 0 + r sin ⁡ θ cos ⁡ φ y = y 0 + r sin ⁡ θ sin ⁡ φ z = z 0 + r cos ⁡ θ The parameter θ { In geometry, the stereographic projection is a particular mapping that projects a sphere onto a plane. The projection is defined except at one point: the projection point. Where it is defined, the mapping is bijective, it is conformal. It is neither isometric nor area-preserving: that is, it preserves neither distances nor the areas of figures. Intuitively the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises; because the sphere and the plane appear in many areas of mathematics and its applications, so does the stereographic projection. In practice, the projection is carried out by computer or by hand using a special kind of graph paper called a stereographic net, shortened to stereonet, or Wulff net; the stereographic projection was known to Hipparchus and earlier to the Egyptians. It was known as the planisphere projection. Planisphaerium by Ptolemy is the oldest surviving document. One of its most important uses was the representation of celestial charts. The term planisphere is still used to refer to such charts. In the 16th and 17th century, the equatorial aspect of the stereographic projection was used for maps of the Eastern and Western Hemispheres, it is believed that the map created in 1507 by Gualterius Lud was in stereographic projection, as were the maps of Jean Roze, Rumold Mercator, many others. In star charts this equatorial aspect had been utilised by the ancient astronomers like Ptolemy. François d'Aguilon gave the stereographic projection its current name in his 1613 work Opticorum libri sex philosophis juxta ac mathematicis utiles. In 1695, Edmond Halley, motivated by his interest in star charts, published the first mathematical proof that this map is conformal, he used the established tools of calculus, invented by his friend Isaac Newton. The unit sphere in three-dimensional space R3 is the set of points such that x2 + y2 + z2 = 1. Let N = be the "north pole", let M be the rest of the sphere; the plane z = 0 runs through the center of the sphere. For any point P on M, there is a unique line through N and P, this line intersects the plane z = 0 in one point P′. Define the stereographic projection of P to be this point P′ in the plane. In Cartesian coordinates on the sphere and on the plane, the projection and its inverse are given by the formulas =, =. In spherical coordinates on the sphere and polar coordinates on the plane, the projection and its inverse are = =, =. {\displaystyle {\begin&=\left=\left(\c In geometry, the Wythoff symbol represents a Wythoff construction of a uniform polyhedron or plane tiling, from a Schwarz triangle. It was first used by Coxeter, Longuet-Higgins and Miller in their enumeration of the uniform polyhedra. A Wythoff symbol consists of a vertical bar, it represents one uniform polyhedron or tiling, although the same tiling/polyhedron can have different Wythoff symbols from different symmetry generators. For example, the regular cube can be represented by 3 | 4 2 with Oh symmetry, 2 4 | 2 as a square prism with 2 colors and D4h symmetry, as well as 2 2 2 | with 3 colors and D 2 h symmetry. With a slight extension, Wythoff's symbol can be applied to all uniform polyhedra. 
However, the construction methods do not lead to all uniform tilings in Euclidean or hyperbolic space. In three dimensions, Wythoff's construction begins by choosing a generator point on the triangle. If the distance of this point from each of the sides is non-zero, the point must be chosen to be an equal distance from each edge. A perpendicular line is dropped between the generator point and every face that it does not lie on. The three numbers in Wythoff's symbol, p, q and r, represent the corners of the Schwarz triangle used in the construction, which are π / p, π / q and π / r radians respectively; the triangle is represented with the same numbers, written. The vertical bar in the symbol specifies a categorical position of the generator point within the fundamental triangle according to the following: p | q r indicates that the generator lies on the corner p, p q | r indicates that the generator lies on the edge between p and q, p q r | indicates that the generator lies in the interior of the triangle. In this notation the mirrors are labeled by the reflection-order of the opposite vertex; the p, q, r values are listed before the bar. The one impossible symbol | p q r implies the generator point is on all mirrors, only possible if the triangle is degenerate, reduced to a point; this unused symbol is therefore arbitrarily reassigned to represent the case where all mirrors are active, but odd-numbered reflected images are ignored. The resulting figure has rotational symmetry only. The generator point can either be off each mirror, activated or not; this distinction creates 8 possible forms, neglecting one where the generator point is on all the mirrors. The Wythoff symbol is functionally similar to the more general Coxeter-Dynkin diagram, in which each node represents a mirror and the arcs between them – marked with numbers – the angles between the mirrors. A node is circled. There are seven generator points with each set of p, q, r: There are three special cases: p q | – This is a mixture of p q r | and p q s |, containing only the faces shared by both. | p q r – Snub forms are given by this otherwise unused symbol. | p q r s – A unique snub form for U75 that isn't Wythoff-constructible. There are 4 symmetry classes of reflection on the sphere, three in the Euclidean plane. A few of the infinitely many such patterns in the hyperbolic plane are listed. Point groups: dihedral symmetry, p = 2, 3, 4 … tetrahedral symmetry octahedral symmetry icosahedral symmetry Euclidean groups: *442 symmetry: 45°-45°-90° triangle *632 symmetry: 30°-60°-90° triangle *333 symmetry: 60°-60°-60° triangleHyperbolic groups: *732 symmetry *832 symmetry *433 symmetry *443 symmetry *444 symmetry *542 symmetry *642 symmetry... The above symmetry groups only include the integer solutions on the sphere. The list of Schwarz triangles includes rational numbers, determine the full set of solutions of nonconvex uniform polyhedra. In the tilings above, each triangle is a fundamental domain, colored by and odd reflections. Selected tilings created by the Wythoff con From this Article Platonic solid [videos] In three-dimensional space, a Platonic solid is a regular, convex polyhedron. It is constructed by congruent regular polygonal faces with the same number of faces meeting at each vertex. Five solids meet these criteria: — Geometers … Kepler's Platonic solid model of the Solar System from Mysterium Cosmographicum (1596) A set of polyhedral dice. 
Parallel Circuit Definition | Parallel Circuit Examples
Ahmad Faizan, Basic Electrical

Parallel Circuit Definition
Resistors are said to be connected in parallel when the same voltage appears across every component. With different resistance values, different currents flow through each resistor. The total current taken from the supply is the sum of all the individual resistor currents. The equivalent resistance of a parallel resistor circuit is most easily calculated by using the reciprocal of each individual resistor value.

Parallel Circuit Characteristics
Two resistors connected in parallel may be used as a current divider. In a parallel circuit, as in a series circuit, the total power supplied is the sum of the powers dissipated in the individual components. Open-circuit and short-circuit conditions in a parallel circuit have an effect on the supply current.

Parallel Connected Resistors
Resistors are connected in parallel when the circuit has two terminals that are common to every resistor. Figure 1 shows two resistors (R1 and R2) connected in parallel, with the same voltage applied from a power supply. Thus, it can be stated:

Fig.1: Circuit Diagram for Parallel Connected Resistors

Resistors are connected in parallel when the same voltage is applied across each resistor.

Current Levels
The parallel-resistor circuit diagram in figure 1 shows that different currents flow in each parallel component. As illustrated, the current through each resistor is
\[I_1=\frac{E}{R_1}\quad \text{and}\quad I_2=\frac{E}{R_2}\]
Now, look at the current directions in figure 1 with respect to junction A. I1 flowing through R1 is flowing away from junction A, and I2 flowing through R2 is also flowing away from A. The supply current I is flowing towards A. Also, I, I1, and I2 are the only currents entering or leaving junction A. Consequently,
\[I=I_1+I_2\]
The same reasoning at junction B, where I1 and I2 are entering and I is leaving B, also gives
\[I=I_1+I_2\]
In the case where there are n resistors in parallel, the supply current is
\[I=I_1+I_2+I_3+\cdots +I_n\quad \cdots \quad (1)\]

Kirchhoff's Current Law
The rule about currents entering and leaving a junction is defined in Kirchhoff's current law:

KCL Definition
The algebraic sum of the currents entering a point in an electric circuit must equal the algebraic sum of the currents leaving that point.

Parallel Circuit Example 1
The parallel resistors shown in figure 1 have values of R1 = 12 Ω and R2 = 15 Ω. The supply voltage is E = 9 V. Calculate the current that flows through each resistor and the total current drawn from the battery.
\[I_1=\frac{E}{R_1}=\frac{9V}{12\Omega }=0.75A\]
\[I_2=\frac{E}{R_2}=\frac{9V}{15\Omega }=0.6A\]
\[I=I_1+I_2=1.35A\]
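A quick numeric check of Example 1 (a sketch; the variable names are ours):

```python
# Branch currents for two resistors in parallel: I = E/R in each branch, and by
# Kirchhoff's current law the supply current is the sum of the branch currents.
E = 9.0                 # supply voltage, V
R1, R2 = 12.0, 15.0     # resistances, ohms

I1 = E / R1             # 0.75 A
I2 = E / R2             # 0.60 A
I = I1 + I2
print(f"I1 = {I1:.2f} A, I2 = {I2:.2f} A, I = {I:.2f} A")   # I = 1.35 A
```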
Parallel Equivalent Resistance
Consider the case of four resistors in parallel, as shown in figure 2.

Fig.2: Four-Resistor Parallel Circuit

From equation (1), the battery current is
\[I=I_1+I_2+I_3+I_4\]
which can be rewritten as
\[I=\frac{E}{R_1}+\frac{E}{R_2}+\frac{E}{R_3}+\frac{E}{R_4}\]
or
\[I=E\left( \frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}+\frac{1}{R_4} \right)\]
For n resistors in parallel, this becomes
\[I=E\left( \frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}+\cdots +\frac{1}{R_n} \right)\quad \cdots \quad (2)\]
If all the resistors in parallel could be replaced by just one resistance R that would draw the same current from the battery, the equation for the current would be written
\[I=\frac{E}{R}=E\left( \frac{1}{R} \right)\]
Comparing this with equation (2) gives
\[\frac{1}{R}=\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}+\cdots +\frac{1}{R_n}\quad \cdots \quad (3)\]
Thus it is seen that the reciprocal of the equivalent resistance of several resistors in parallel is equal to the sum of the reciprocals of the individual resistances. Equation (3) can be rearranged to give
\[R=\frac{1}{\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}+\cdots +\frac{1}{R_n}}\quad \cdots \quad (4)\]
The equivalent circuit of the parallel resistors and the battery can now be drawn as shown in figure 3.

Fig.3: Equivalent Circuit

Parallel Circuit Example 2
Determine the equivalent resistance of the four parallel resistors in figure 2, and use it to calculate the total current drawn from the battery.
From equation (4),
\[R=\frac{1}{\frac{1}{2k\Omega }+\frac{1}{6k\Omega }+\frac{1}{3.2k\Omega }+\frac{1}{4.8k\Omega }}\cong 842\Omega\]
and
\[I=\frac{E}{R}=\frac{24V}{842\Omega }=28.5mA\]
It should be noted that when two equal resistors are connected in parallel, the equivalent resistance is half the resistance of one resistor. More generally, the equivalent resistance of n equal resistors connected in parallel is
\[R=\frac{R_n}{n}\]
where Rn is the resistance of one of the equal resistors.

Analysis Procedure for Parallel Circuits
Method 1
Step 1: Calculate the current through each resistor:
\[I_1=\frac{E}{R_1},\ I_2=\frac{E}{R_2},\ etc.\]
Step 2: Calculate the total supply current:
\[I=I_1+I_2+\cdots\]
Method 2
Step 1: Use equation (4) to determine the equivalent resistance (R) of all the resistors in parallel.
Step 2: Calculate the total battery current:
\[I=\frac{E}{R}\]
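A short numeric sketch of the second procedure, equation (4) applied to the figure-2 example (the helper function is ours):

```python
def parallel_resistance(*resistors):
    """Equivalent resistance of parallel resistors, equation (4):
    the reciprocal of the sum of the reciprocals."""
    return 1.0 / sum(1.0 / r for r in resistors)

E = 24.0                                          # supply voltage, V
R = parallel_resistance(2e3, 6e3, 3.2e3, 4.8e3)   # resistances in ohms
I = E / R
print(f"R = {R:.0f} ohm, I = {I * 1e3:.1f} mA")   # roughly 842 ohm and 28.5 mA
```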
Current Divider
Refer to the two-resistor parallel circuit illustrated in figure 4. Such a parallel combination of two resistors is sometimes termed a current divider, because the supply current is divided between the two parallel branches of the circuit.

Fig.4: Two resistors are connected in parallel to function as a current divider

For this circuit,
\[I=I_1+I_2=\frac{E}{R_1}+\frac{E}{R_2}\]
or
\[I=E\left( \frac{1}{R_1}+\frac{1}{R_2} \right)\]
Using R1R2 as the common denominator for 1/R1 and 1/R2, the equation becomes
\[I=E\left( \frac{R_1+R_2}{R_1 R_2} \right)\quad \cdots \quad (6)\]
Therefore, for two resistors in parallel, the equivalent resistance is
\[R=\frac{R_1 R_2}{R_1+R_2}\quad \cdots \quad (7)\]
From equation (6),
\[E=I\left( \frac{R_1 R_2}{R_1+R_2} \right)\]
and substituting this expression for E into
\[I_1=\frac{E}{R_1}\]
gives
\[I_1=\frac{I}{R_1}\left( \frac{R_1 R_2}{R_1+R_2} \right)\]
or
\[I_1=I\left( \frac{R_2}{R_1+R_2} \right)\quad \cdots \quad (8)\]
and similarly,
\[I_2=I\left( \frac{R_1}{R_1+R_2} \right)\quad \cdots \quad (9)\]
Note that the expression for I1 has R2 on its top line, and that for I2 has R1 on its top line. Equations (8) and (9) can be used to determine how a known supply current divides into two individual resistor currents.

Parallel Circuit Example 3
Calculate the equivalent resistance and the branch currents for the circuit in figure 4 when
\[R_1=12\Omega ,\ R_2=15\Omega ,\ \text{and } E=9V\]
From equation (7):
\[R=\frac{R_1 R_2}{R_1+R_2}=\frac{12\times 15}{12+15}\cong 6.67\Omega\]
\[I=\frac{E}{R}=\frac{9}{6.67}=1.35A\]
\[I_1=I\left( \frac{R_2}{R_1+R_2} \right)=1.35\times \left( \frac{15}{12+15} \right)=0.75A\]
\[I_2=I\left( \frac{R_1}{R_1+R_2} \right)=1.35\times \left( \frac{12}{12+15} \right)=0.6A\]
It is important to note that equations (8) and (9) refer only to circuits with two parallel branches. They are not applicable to circuits with more than two parallel branches. However, similar equations can be derived for the current division in a multi-branch parallel circuit. Consider the circuit in figure 5, which has four resistors connected in parallel. The total current splits into four components, as illustrated.

Fig.5: Four Resistors connected in Parallel

From equation (4), the parallel resistance for the whole circuit is
\[R=\frac{1}{\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}+\frac{1}{R_4}}\]
The voltage drop across the parallel combination (and across each individual resistor) is
\[E_R=IR\]
and
\[I_1=\frac{E_R}{R_1},\ I_2=\frac{E_R}{R_2},\ etc.\]
Therefore,
\[I_1=\frac{IR}{R_1},\ I_2=\frac{IR}{R_2},\ etc.\]
For any multi-branch parallel resistor circuit, the current in branch n is
\[I_n=I\frac{R}{R_n}\quad \cdots \quad (10)\]
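A numeric sketch of the divider equations (8)–(10); the function names are ours:

```python
def current_divider(I, R1, R2):
    """Equations (8) and (9): split a known supply current I between two parallel
    resistors; each branch current has the opposite resistance on its top line."""
    return I * R2 / (R1 + R2), I * R1 / (R1 + R2)

def branch_current(I, R_equiv, R_n):
    """Equation (10): current in branch n of a multi-branch parallel circuit,
    where R_equiv is the equivalent resistance of the whole combination."""
    return I * R_equiv / R_n

# Check against Example 3: I = 1.35 A, R1 = 12 ohm, R2 = 15 ohm
I1, I2 = current_divider(1.35, 12.0, 15.0)
print(f"I1 = {I1:.2f} A, I2 = {I2:.2f} A")   # 0.75 A and 0.60 A
```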
Parallel Circuit Example 4
Use the current divider equation to determine the branch currents in the circuit of figure 5. The component values are R1 = 2 kΩ, R2 = 6 kΩ, R3 = 3.2 kΩ, and R4 = 4.8 kΩ. The supply current is 28.5 mA.
\[R=\frac{1}{\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}+\frac{1}{R_4}}=\frac{1}{\frac{1}{2k\Omega }+\frac{1}{6k\Omega }+\frac{1}{3.2k\Omega }+\frac{1}{4.8k\Omega }}\cong 842\Omega\]
From equation (10),
\[I_1=I\frac{R}{R_1}=28.5mA\times \frac{842\Omega }{2k\Omega }\simeq 12mA\]
\[I_2=I\frac{R}{R_2}=28.5mA\times \frac{842\Omega }{6k\Omega }\simeq 4mA\]
\[I_3=I\frac{R}{R_3}=28.5mA\times \frac{842\Omega }{3.2k\Omega }\simeq 7.5mA\]
\[I_4=I\frac{R}{R_4}=28.5mA\times \frac{842\Omega }{4.8k\Omega }\simeq 5mA\]
The four branch currents sum to 28.5 mA, which is equal to the source current, as expected.

Power in Parallel Circuits
Whether a resistor is connected in series or in parallel, the power dissipated in the resistor is the product of the voltage across it and the current through it. For the circuit in figure 6,
\[P_1=EI_1=\frac{E^2}{R_1}=I_1^2 R_1\]
The power dissipated in R2 is calculated in a similar way. The total power output from the battery is, of course,
\[P=EI=E(I_1+I_2)=EI_1+EI_2\]
or
\[P=P_1+P_2\]
For any parallel (or series) combination of n resistors, the equation becomes
\[P=P_1+P_2+P_3+\cdots +P_n\]

Fig.6: Power Dissipation in Parallel Resistor Circuit

Parallel Circuit Example 5
For the circuit described in figure 4 (example 3 above), calculate the power dissipated in R1 and R2 and the total power supplied from the battery.
\[P_1=\frac{E^2}{R_1}=\frac{9^2}{12}=6.75W\]
\[P_2=\frac{E^2}{R_2}=\frac{9^2}{15}=5.4W\]
\[P=P_1+P_2=6.75+5.4=12.15W\]

Open Circuit and Short Circuit in a Parallel Circuit
When one of the components in a parallel resistance circuit is open-circuited, as illustrated in figure 7, no current flows through that branch of the circuit. The other branch currents are not affected by such an open circuit because each of the other resistors still has the full supply voltage applied to its terminals.

Fig.7: Open-Circuited Resistor

When I1 goes to zero, the total current drawn from the battery is reduced from
\[I=I_1+I_2+I_3\]
to
\[I=I_2+I_3\]
Figure 8 shows a short-circuit across resistor R3. This has the same effect whether it is across R1, R2, or R3, or across the voltage source terminals. In this case, the current that flows through each resistor is effectively zero. However, the battery now has a short-circuit across its terminals. Consequently, the battery short-circuit current flows:
\[I_{sc}=\frac{E}{r_i}\]
where ri is the battery internal resistance. In this situation, an abnormally large current flows, and the battery could be seriously damaged.

Fig.8: Short-Circuit across a Resistor
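A small numeric illustration of the open-circuit case (the resistor values are assumed for the example, not taken from the figures):

```python
def total_parallel_current(E, resistors):
    """Total supply current for a set of parallel branches: the sum of E/R per branch."""
    return sum(E / r for r in resistors)

E = 9.0
branches = [12.0, 15.0, 20.0]   # three parallel resistors, ohms (assumed values)

print(total_parallel_current(E, branches))       # all branches carrying current
print(total_parallel_current(E, branches[1:]))   # first branch open-circuited: only its
                                                 # current is lost; the rest are unchanged
```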
Integral Representations for Continuous Linear Functionals in Operator-Initiated Topologies
Walter Roth
Positivity, Volume 6 (2), pp. 115–127, Springer Journals. Copyright © 2002 by Kluwer Academic Publishers.

On a given cone (resp. vector space) $\mathcal{Q}$ we consider an initial topology and order induced by a family of linear operators into a second cone $\mathcal{P}$ which carries a locally convex topology. We prove that monotone linear functionals on $\mathcal{Q}$ which are continuous with respect to this initial topology may be represented as certain integrals of continuous linear functionals on $\mathcal{P}$. Based on the Riesz representation theorem from measure theory, we derive an integral version of the Jordan decomposition for linear functionals on ordered vector spaces.
Selection on metabolic pathway function in the presence of mutation-selection-drift balance leads to rate-limiting steps that are not evolutionarily stable
Alena Orlenko, Ashley I. Teufel, Peter B. Chi & David A. Liberles
Biology Direct, volume 11, Article number: 31 (2016)

While the control of metabolic pathways is commonly assumed in the biochemistry community to be critical to cellular function, it is unclear whether metabolic pathways generally have evolutionarily stable rate limiting (flux controlling) steps. A set of evolutionary simulations using a kinetic model of a metabolic pathway was performed under different conditions to evaluate the evolutionary stability of rate limiting steps. Simulations used combinations of selection for steady state flux, selection against the cost of molecular biosynthesis, and selection against the accumulation of high concentrations of a deleterious intermediate. Two mutational regimes were used, one with mutations that on average were neutral to molecular phenotype and a second with a preponderance of activity-destroying mutations. The evolutionary stability of rate limiting steps was low in all simulations with non-neutral mutational processes. Clustering of parameter co-evolution showed divergent inter-molecular evolutionary patterns under different evolutionary regimes. This study provides a null model for pathway evolution when compensatory processes dominate, with potential applications to predicting pathway functional change. This result also suggests a possible mechanism by which studies in statistical genetics that aim to associate a genotype with a phenotype, assuming independent action of variants, may be mis-specified through a mis-characterization of the link between individual gene function and pathway function. A better understanding of the genotype-phenotype map has potential applications in differentiating between compensatory changes and directional selection on pathways as well as detecting SNPs and fixed differences that might have phenotypic effects. This article was reviewed by Arne Elofsson, David Ardell, and Shamil Sunyaev.

A long standing goal in molecular evolution and comparative genomics is to understand how genes and their functions evolve. Molecular evolutionary and statistical genetics analyses have commonly treated protein function independently of the functions of other proteins and without consideration of genotype-phenotype maps. However, mutation works at the level of the gene, while selection works at the level of the organism in a population in an ecosystem. One critical component of the interplay between molecular biology and organismal biology is the metabolic pathway that combines the actions of multiple proteins (enzymes) in the generation of energy and molecular building blocks. Systems of differential equations based upon Michaelis-Menten kinetics have become a common modeling tool for describing the function of metabolic pathways [1]. But how do pathways evolve and how do their constituent members co-evolve? Within a given pathway, various enzymes catalyze reactions at different efficiencies and rates. Rate-limiting steps are the bottlenecks in biochemical pathways and can serve as important points of regulation. Kacser and Burns [2] established the concept of flux control enzymes in a pathway, with a given set of enzyme rate constants.
The distribution of rate-limiting steps varies across different biochemical networks, although current biochemical thought from network control theory is that selection has a strong role in maintaining efficient pathway function and regulation through the most controllable points [3–15]. Specific examples in glycolysis are given in [16]. A correlate of this is an expectation of evolutionary stability of rate limiting steps when there is negative (stabilizing) selection on pathway function (for example, steady state flux that is not selected to change) [5]. This issue, however, has not been seriously addressed in the biochemical literature. Related biochemical expectations suggest that the observed distribution of rate-limiting steps is driven by pathway architecture [4, 12, 14]. An examination of the BioModel Database [17] showed glycolysis as the only pathway with data from multiple species and it shows no evidence for conserved rate limiting steps controlling steady state flux [16]. Further, it is unclear from a population genetic perspective that stabilizing selection on steady state flux should give rise to evolutionarily stable rate limiting steps in the presence of mutation-selection balance (see [18, 19] for other studies on the role of mutation-selection balance on molecular systems). The effect of new mutations on fitness has been characterized [20] and has large fractions of both strongly deleterious (lethal) mutations and slightly deleterious mutations. The frequency and magnitude of slightly deleterious mutations depends upon the mutational space surrounding the protein sequence and is linked to its stability and activity [21, 22]. At the two ends of the spectrum, the globally most active sequence will have only degenerative changes possible, while the globally least active sequence will have only activating changes possible. In between, the proportion of changes that increase or decrease activity will depend upon the current activity. With such a mutational process acting on molecular phenotypes (rather than fitnesses directly), it is expected that enzymes with excess activity will accumulate changes that reduce their activity until they affect pathway flux and are acted upon by selection. At the level of an individual enzyme within a pathway, these dynamics have been described [23]. In this study we use simulations on a simplified pathway (Fig. 1) to examine the nature of mutation-selection-drift balance and enzyme co-evolution, towards an understanding of the evolutionary stability of rate limiting steps. In addition to selection on flux, two additional biological considerations that have been suggested in the literature are included, selection against the cost of mRNA and protein synthesis to prevent wasteful expression [24] and selection against the accumulation of high concentrations of intermediate compounds that may be toxic to a cell [13]. We have also tested the role of effective population size and of biophysical assumptions on the number of mutational degrees of freedom on the evolutionary dynamics. The analysis suggests that rate limiting steps may not be stable over long evolutionary periods. The simplified pathway that was simulated is shown. This pathway contains features of both glycolysis [35] and the methylglyoxal pathway [36]. 
A constant concentration of compound A is converted to compound F and the steady state flux is measured Mutation-selection-drift balance and rate limiting steps Pathway evolution was simulated according to several sets of mutational processes and selective regimes to evaluate the evolutionary dynamics with simultaneous mutational and selective pressures acting on pathway function. After 20,000 generations, all experiments except the mutation-only negative control (where selection was absent) showed that the fitness equilibrium had been reached (Additional file 1: Figures S20-S21). However, when the mutational process was designed to mimic biological mechanisms and adaptive mutation was limiting, there was still co-evolutionary directional movement in some parameters without fitness effects, most notably KM (Additional file 1: Figures S5-S19). When simulating the evolution of metabolic pathways in a forward population genetic regime and consistent with previous findings [9, 14], a neutral (towards molecular phenotype) mutational process led to evolutionarily stable rate limiting steps, particularly when specific intermediates were selected as deleterious at high concentration (Fig. 2a, panel A) [13]. However, when mutational pressure was applied according to an expected distribution acting upon molecular phenotypes, mutation-selection balance emerged and no step was stable for a longer evolutionary period than other steps (Fig. 2a, panel B). All steps spent only brief evolutionary periods as rate limiting. a The evolutionary stability of rate limiting steps is shown through the average number of generations each step was found to remain rate limiting, once it emerged as rate limiting. Error bars delineate 95 % bootstrap confidence intervals for each set of experimental conditions, and p-values are for the null hypothesis that the average is constant across each reaction within the experiment. b The variability among replicates within the experiment investigating selection on flux only is shown. Error bars delineate 95 % bootstrap confidence intervals When an intermediate occurs at high concentration generating concentration-dependent toxicity (methylglyoxal as an example) or becomes subject to cross-reactivity with other pathways and enzymes that bind at lower KM to such substrates, this can create a selective pressure against a high concentration of the intermediate. When a selective pressure was applied against a particular intermediate (intermediate B), an increase in the evolutionary period that the producing enzyme was rate limiting was observed (Fig. 2a, panel D). However, these experiments still did not display rate-limiting steps with long evolutionary stability and mutation-selection balance dominated the evolutionary dynamics. The increase (of about 5 generations on average) in the half-life of the first reaction as a rate-limiting step may be due to selection against overly active enzyme A when compensatory changes to enzyme B are limiting. Enzyme B did not show significant differences in the time spent rate limiting when compared with the other 3 enzymes. The overall proportion of time that each reaction spent as rate-limiting is shown in Table 1. In this instance, there was an increase in the period of time that the reaction leading to the production of the deleterious intermediate was rate-limiting, suggesting that sampling of genomes would observe that this step is flux controlling most frequently, even though it is not evolutionarily stable as flux controlling. 
Table 1 The overall proportion of generations that each reaction spent as rate limiting is shown, pooled across each replication, for each experiment It is important to note that within each experiment, there was variability among the replicates. That is, within each replicate, a particular reaction may have had higher numbers of consecutive generations in which it was rate limiting, and this was not necessarily constant among all replicates. In particular, for the experiment shown in Fig. 2a, panel B, the variability among replicates is shown in Fig. 2b. Another important mechanism that has been discussed is that of selection against expression cost [24]. When the expression cost is included in the fitness function (Fig. 2a, panel C), there is no difference between the enzymes in the period found to be rate limiting. However, there are trade-offs in the individual parameters induced by this selective pressure (Fig. 3). While selection against flux alone shows no position-specific effects in enzyme concentration, the relationship between enzyme length (expression cost) and flux shows a negative slope (Fig. 3a). The expected patterns are also observed with kcat (Fig. 3b; positive slope) and with KM (Fig. 3c; more negative slope). That the KM values do not fully compensate for the expression, especially in the smallest enzymes may be indicative of a combination of weak selection and a waiting time for beneficial mutations that are governed by a more complex landscape with the added expression cost term. In evaluating the influence of selection against the cost of protein expression on the evolutionary dynamics or parameters, the relationship of enzyme length to a enzyme concentration, b kcat, and c, and KM is shown when selection acts on expression cost in addition to flux and when it acts only on flux. Each point represents the average value from each replicate. In each panel, p-values correspond to the question of whether the slopes are different from each other, as assessed by mixed-effects interaction models Allele segregation within population In order to assess the consistency of the observations made in this simulation with population genetic expectations given the mutational profile, the amount of segregating variation in the population was characterized. As estimated from Kimura and Crow [25] as described in the Methods section, assuming neutrality, the expected number of alleles for each parameter in the population is 1.6 alleles per parameter segregating at any time. The observed values that were calculated from the selection on flux only experiment are greater than the expectation when neutral, but of the same order of magnitude. Over 2000 generations, the mean for all of the parameters was found to be 2.75 with a standard deviation of 0.33. The minimum and the maximum of the range were found to be 1.85 and 3.77 correspondingly. For the reaction parameters involved only in the forward direction, the mean for 2000 generations was found to be 2.48 (standard deviation 0.37), with the range minimum of 1.53 and maximum of 3.8. The difference in the parameters reflects the action of selection, particularly with regard to the reverse parameters. Patterns of co-evolution When mutation-selection-drift balance occurs under negative (stabilizing) selection, individual parameter values are still changing. After controlling for systematic directional change in KM, patterns of parameter co-evolution can be examined as a characteristic of the co-evolutionary fitness landscape. 
As expected, without selection, mutational pressure alone results in no significant clusters (as assessed via bootstrapping) (Fig. 4a). When selection acted on flux alone (Fig. 4b) and when it acted on both flux and against a high concentration of a deleterious intermediate (Fig. 4d), the parameters of an enzyme formed significant clusters, largely independent for each enzyme. When selection acted on flux and against total expression cost (Fig. 4c), clusters surprisingly corresponded to positions within a pathway rather than to enzyme length. This may be due to the complex fitness landscape in this simulation that was limited by adaptive mutation, although the exact cause of this particular pattern is not immediately clear. Selection for the first step to be rate-limiting with a neutral mutational process (Fig. 4e) also showed fewer clusters with kcat values and KM values clustering by pathway position, consistent with prior observations [9]. Clustering of co-evolving parameters across simulations. Colors represent parameters which are found to cluster together (at the 5 % level), while black parameters were not found to have significant associations with clusters. Panels show results from simulations with a mutation only, b selection on flux only, c selection on flux and against total expression cost, d selection on flux and against a high concentration of a deleterious intermediate, and e non-biological neutral mutation and selection on flux and for the first reaction to be rate limiting Is mutation-selection-drift balance just drift in a small population in disguise? It might be conceived that the results shown in Fig. 2 are an artifact of a small effective population size and simply reflect drift, arguing that any semblance of mutation-selection balance as a limit to pathway control will disappear with a larger effective population size. To test this, a new simulation scheme was developed without explicit individuals, but maintaining explicit generations. This approximation to the population genetic process (with added caveats as described in Methods) enabled us to evaluate the average length of time a step remained rate limiting under an identical to above small (102) population size and under a much larger population size (106) with selection just on metabolic flux. As seen in Fig. 5a, a very similar pattern of the distribution of rate limiting steps is obtained with the large and small population sizes, suggesting that mutation-selection balance operates in both small and large population sizes on metabolic pathways, consistent with our expectations. The scaling of the number of generations that differs between Figs. 2 and 5 is a product of the different mutational and fixation processes in the different experiments and the presence of segregating variation from a high mutation rate in the first set of experiments. In that case, shifts were due to population dynamics rather than the fixation of new mutations. a The evolutionary stability of rate-limiting steps for experiments with small (102) and large (106) population sizes was evaluated, with altered simulation assumptions from Fig. 2. Error bars delineate 95 % bootstrap confidence intervals found across 30 replicates for both sets of population size. b The evolutionary stability of rate-limiting steps, subject to the constraint dictated by Haldane's relationship, is shown. 
Error bars delineate 95 % bootstrap confidence intervals found across 30 replicates Haldane's relationship and mutational processes Haldane's relationship describes the relationship between the various kinetic parameters in establishing an equilibrium that is consistent with thermodynamic observations of energy differences between reactants and products. In the simulations shown thus far, the mutational parameters are independently free to vary. Haldane's relationship constrains the values of the parameters as acted upon by mutation by the thermodynamics of the reaction, although the joint effects of mutations are not currently modelable (see [16] for some discussion of this in the context of glycolysis as well as [26]). In the simplest case, Haldane's relationship reduces 4 parameters to 3° of freedom, although more degrees of freedom are added with additional products and substrates, regulation, cofactors, and more complex equilibria. It is well known that modulation of KM is used biologically to regulate the direction of the reaction [26]. However, the precise scheme that was simulated under had one too many degrees of freedom, so the effect of this was tested. In the experiment constrained by Haldane's relationship, the evolutionary stability of the rate-limiting steps has a similar average generation length of each reaction as rate-limiting, which is approximately 200 generations in both cases (Fig. 5a, b). Simulating under Haldane's relationship generates a noticeably more flat distribution across reactions steps as well as less variance. Low evolutionary stability of rate-limiting steps caused by mutation-selection balance appears to be an expectation for control in metabolic pathways when pathway flux is under negative (stabilizing selection), even when there is a preference for a particular reaction to be flux controlling (as in the reaction leading to a deleterious intermediate). It is clear that in nature, not all pathways are under selection for a constant flux, but may be temporally regulated. More complex regulatory schemes are expected to result in a more complex landscape and longer times to reach a fitness equilibrium, but mutation-selection-drift balance should still play an important role. Relatedly, processes that constantly shift the fitness equilibrium, such as shifting selection (see for example [27]) or shifting population sizes might be expected to show interesting evolutionary dynamics. Although the fitness equilibrium is never reached, this would provide a very different mechanism of generating flux controlling steps than control theory suggests. In this study, only a linear pathway was examined. There is no reason to expect mutation-selection balance to not apply to branched pathways or cycles, although the dynamics of equilibration may in fact be different. This remains an interesting topic for future study. A further layer of complexity is that this work has proceeded with a fixed network whereas network structure evolves in natural systems. Duplication [28] and the existence and evolution of promiscuous functions [29, 30] are known to give rise to specific processes of network growth [31]. The dynamics of this type of differential equation system evolution have been studied in a community ecology setting [32] and the co-evolutionary landscapes that emerge may be different from those with a static structure. 
With an understanding of the co-evolution of parameters under negative selection, it will be interesting to observe if this pattern changes when positive directional selection is applied to a pathway flux. This would give a probabilistic basis to examining patterns of co-evolution in a pathway to differentiate between compensatory processes and directional selection. These models could also potentially be used to differentiate between negative and positive directional selection in an Approximate Bayesian Computing framework, where constraint on pathway flux gives rise to lineage-specific patterns of enzyme evolution that can be compared to data from gene family analysis. Lastly, one debate that has consistently arisen in the molecular evolution community is that of the relative importance of changes in gene expression and changes in coding sequence evolution [33]. Mechanistic frameworks like this with roots in either a Boltzmann Distribution or Michaelis-Menten Kinetics, when coupled to a protein level mutational model (see for example [34]), have the potential to describe the mutational opportunity to affect phenotypes through changes in either protein concentrations or protein coding sequence function parameters (like KM or kcat although predictions on enzymatic reactions are more complex than binding). Deviations from this mutational opportunity (for example, from additional levels of constraint) would be informative about the molecular nature of both compensatory and adaptive evolution. Relatedly, the field of statistical genetics has commonly made an assumption that the action of a variant is constant against all genetic backgrounds. In the simulations here, the effect of a variant that reduces enzyme activity will have a flux and fitness effect in some parameter (genetic) backgrounds and not in others. Statistically, this averaging would result in low power to detect causal variants. An understanding of the dynamics associated with processes like mutation-selection balance could be used to generally improve models used for understanding the genotype-phenotype map in various biological systems, including in human genetic disease. Many studies in comparative genomics study each gene in isolation and thereby miss the equilibrium that mutation, selection, and drift generate, including inter-molecular compensatory changes. Under several population genetic and selective regimes, the dynamics of enzyme co-evolution with ultimate negative selection on pathway flux were characterized, resulting in a general lack of evolutionarily stable rate-limiting steps. From this, expected patterns of enzyme co-evolution with negative selection were generated using a clustering approach. This ultimately provides a null model for pathway evolution under stabilizing selection. Simulated evolution of metabolic pathways To evaluate the role of mutation-selection-drift balance in biochemical pathway evolution, a population of cells with a key metabolic pathway was evolved under different selective schemes. The simplified kinetic model designed to capture features of glycolysis [35] and methylglyoxal metabolism [36] is shown in Fig. 1. The glycolysis-like aspects of the pathway include the feedback loop (as an approximation to glycolysis regulation) and the synthesis of final metabolite F as analogous to pyruvate in a linear pathway. 
The methylglyoxal-like pathway elements include the toxic intermediate (B) as analogous to methylglyoxal (a highly toxic intermediate) and again, the synthesis of the final metabolite (F) is analogous to pyruvate. This model is expressed in terms of a system of ordinary differential equations where reactions are described by reversible Michaelis-Menten kinetics. Each enzyme has parameters for enzyme concentration [Enzyme] (mmol/l), the catalytic constant (kcat) (mmol/l/s), the Michaelis constant for the substrate (KM) (mmol/l), the reversible catalytic constant (kcatr) (mmol/l/s), and the Michaelis constant for the product (KMr) (mmol/l). The kinetic model has a single inhibitory reaction that is described in the system by the inhibition constant KI (mmol/l). The COPASI [37] modeling environment is used to solve this system of equations. The steady state solution is used, with a constantly replenishing concentration of A and mass action to utilize F, as described in Additional file 1: Table S1. In order to model the evolutionary process, a forward time simulation with discrete generations is employed. In general, the simulation represents each individual in the population as an instance of the described model, subjecting this model individual to mutations which may elicit fitness effects, and then sampling individuals based on fitness to populate the next generation using weighted sampling with replacement. The pathway architecture remains unchanged during the course of the simulations. These simulations proceed by establishing an initial population of 100 homogeneous individuals with parameter values given in Additional file 1: Table S1. Because a set of differential equations must be solved for each individual in each generation, the population size was limited by computational capacity. It is not expected that the results obtained in this study are driven by the size of the population. Each forward simulation was repeated 5 times. Mutations were introduced with a probability of \(3\times 10^{-3}\) per parameter per individual per generation. KM and kcat were treated as evolving independently, although there is a mechanistically unpredictable degree of dependence (and link to protein stability) in their evolution in nature from current understanding, as described below. The mutational effect on the catalytic rate constant and enzyme concentration (both indicated by p below) is drawn from a normal distribution with variable mean \( \mu_{n_1} \), where
$$ \mu_{n_1}=-0.01\,e^{c\cdot p_{n_1-1}} $$
The mutational effects on the binding constants (K) are described by a normal distribution (with unit variance) and a variable mean \( \mu_{n_2} \),
$$ \mu_{n_2}=\frac{1}{-0.01\,e^{c\cdot K_{n_2-1}}} $$
The index value c is used to scale the mutational effects, with the following values for each constant:
$$ c=\begin{cases} 2.5\times 10^{-2}, & \text{enzyme concentration} \\ 2.5\times 10^{-2}, & \text{inhibition constant} \\ 1.0\times 10^{-2}, & \text{catalytic constant} \\ 3.\bar{3}\times 10^{-4}, & \text{reversible catalytic constant} \\ 1, & \text{product constant} \\ 3.\bar{3}\times 10^{-2}, & \text{reversible product constant} \end{cases} $$
This mutational scheme allows for scaling across orders of magnitude in kinetic parameters and generates a distribution of mutational effects with a bias towards slightly degrading change that is dependent upon the activity and expression level of the protein.
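A minimal sketch of this mutational scheme (our own code, not the authors'). The grouping of parameter classes into "p-type" (concentrations and catalytic constants) versus "K-type" (binding constants), the unit standard deviation for the first class, and the treatment of the draw as the proposed change are all our reading of the description above:

```python
import math
import random

# Scaling constants c from the text; the p/K grouping is our interpretation.
PARAMETER_CLASSES = {
    "enzyme_concentration":          (2.5e-2, "p"),
    "inhibition_constant":           (2.5e-2, "K"),
    "catalytic_constant":            (1.0e-2, "p"),
    "reversible_catalytic_constant": (1.0 / 3000, "p"),   # 3.3...e-4
    "product_constant":              (1.0, "K"),
    "reversible_product_constant":   (1.0 / 30, "K"),     # 3.3...e-2
}

def mutational_effect(param_class, current_value, sd=1.0):
    """Draw a mutational effect whose mean depends on the current parameter value,
    so highly active or highly expressed states are biased toward degrading change."""
    c, kind = PARAMETER_CLASSES[param_class]
    if kind == "p":
        mu = -0.01 * math.exp(c * current_value)            # concentrations, kcat-type
    else:
        mu = 1.0 / (-0.01 * math.exp(c * current_value))    # binding constants (K)
    return random.gauss(mu, sd)

# Example: propose an effect for a catalytic constant currently at 200 (arbitrary units)
print(mutational_effect("catalytic_constant", 200.0))
```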
The mutational scheme is consistent with current thought in molecular evolution, where the range and distribution of mutational effects are influenced by the current state [22]. Most of the mutations are slightly deleterious or neutral, while advantageous mutations are rare, although slightly less so as the activity of the molecule decreases. Intuitively, as a sequence decreases in fitness contribution, the number of sequences with higher potential fitness contribution increases and as it increases in fitness contribution, the number of sequences with a higher potential fitness contribution decreases, as expected by Fisher's geometric model. Five different selection schemes were employed to examine the influence of various factors on pathway evolution. The first scheme involved selection on steady state flux alone, where the fitness of an individual is described below:
$$ F_1=\frac{1}{1+e^{-0.07\left( \mathrm{flux}-650\right)}} $$
Values in this logistic function control the asymptotic fitness and the gradient of the flux to fitness relationship. As enzymes reach limits of adaptation because of the ability to utilize products, so do pathways, where the end products are also subjected to the rules of binding and catalysis [23, 38, 39]. The asymptote of 650 and slope of 0.07 are arbitrary, but are chosen to reflect the ultimate utilizable flux. Changing them would be expected to alter the distribution of fitness effects (fraction of deleterious changes at equilibrium), but not the overall evolutionary dynamics of the system. A second (negative control) scheme was implemented to examine mutational opportunity and mutational pressure. In this experiment individuals were sampled at random from the population and only the mutational process acted. Another control was used to examine the evolutionary stability of rate limiting steps, by implementing a scheme with selection on the first reaction rate to become rate limiting by preventing the buildup of the intermediate after the reaction, and using a neutral mutational distribution (with respect to molecular phenotype) that eliminated mutational pressure. We used the multiplicative fitness function,
$$ F_m=F_1F_2 $$
$$ F_2=\frac{1}{e^{s\cdot \left[ B\right]}} $$
Here, [B] is the concentration of the deleterious metabolite and s (\(9.4\times 10^{-4}\)) is a scalar chosen to control the flux and the intersection point of the two curves. As indicated, the mean of the mutational distribution is set at 0, and the distribution is parameter-independent. A fourth experiment was implemented to examine the role of preventing the buildup of the deleterious intermediate on pathway evolution, resulting in the same multiplicative fitness function above. This experiment used the biological (parameter dependent) mutational distribution as previously outlined.
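A small sketch of these fitness terms (our code and variable names; the constants 650, 0.07, and 9.4 × 10⁻⁴ are taken from the text):

```python
import math

def flux_fitness(flux, c1=650.0, c2=0.07):
    """F1: logistic mapping from steady-state flux to fitness (selection on flux)."""
    return 1.0 / (1.0 + math.exp(-c2 * (flux - c1)))

def intermediate_penalty(B, s=9.4e-4):
    """F2: penalty against a high concentration of the deleterious intermediate B."""
    return 1.0 / math.exp(s * B)

def multiplicative_fitness(flux, B):
    """Fm = F1 * F2, used when selection acts on flux and against the intermediate."""
    return flux_fitness(flux) * intermediate_penalty(B)

print(multiplicative_fitness(flux=700.0, B=100.0))
```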
Finally, the cost of protein production was also considered using another multiplicative fitness function, where s is a normalizing constant ($1.0 \times 10^{-6}$), $\text{cost}_{AA}$ (30.3) and $\text{cost}_{nuc}$ (49.2) reflect the per unit costs of synthesis [24], and enzyme lengths are given in Additional file 1: Table S2,
$$ F_p = F_1 F_3, $$
$$ F_3 = \frac{1}{1 + s \cdot \left( \text{cost}_{protein} + \text{cost}_{mRNA} \right)} $$
$$ \text{cost}_{protein} = \text{cost}_{AA} \left\{ \left[ \text{Enzyme}_A \right] \text{length}_A + \left[ \text{Enzyme}_B \right] \text{length}_B + \left[ \text{Enzyme}_C \right] \text{length}_C + \left[ \text{Enzyme}_D \right] \text{length}_D + \left[ \text{Enzyme}_E \right] \text{length}_E \right\} $$
$$ \text{cost}_{mRNA} = \text{cost}_{nuc} \left\{ \frac{3 \cdot \text{length}_A \cdot \left[ \text{Enzyme}_A \right]}{1000} + \frac{3 \cdot \text{length}_B \cdot \left[ \text{Enzyme}_B \right]}{1000} + \frac{3 \cdot \text{length}_C \cdot \left[ \text{Enzyme}_C \right]}{1000} + \frac{3 \cdot \text{length}_D \cdot \left[ \text{Enzyme}_D \right]}{1000} + \frac{3 \cdot \text{length}_E \cdot \left[ \text{Enzyme}_E \right]}{1000} \right\} $$
Each of these simulations was run for 22,000 generations, and the point of mutation-selection balance was reached by generation 20,000 under each of these selective schemes (the scheme with no selection did not reach equilibrium because there is no mutation-selection balance without selection). The point of mutation-selection balance was determined by the stability of the fitness of the median individual across generational time, as assessed by observation of approximately equal rates of positive and negative changes (Additional file 1: Figures S20–S21). The point of balance was confirmed for the experiment with selection on flux alone by replicate experiments approaching from lower fitnesses the same point that was reached from higher fitnesses (Additional file 1: Figure S1).

Identification of rate limiting steps

The sensitivity of each of the reactions across the last 2000 generations was determined by reducing each reaction rate of the median individual by 10 % while fixing the rest of the reaction rates. The difference in flux between the perturbed and unperturbed systems was used as a measure of sensitivity, and the most sensitive step was determined as the reaction for which this value was the largest.

Examination of evolution and coevolution

Examination of parameter evolution and co-evolution was based upon the values in the median individual at each generation for the 2000 generations after equilibrium was reached. Since the reversible and inhibitory reaction constants have minimal impact on the system, they were removed from the analysis. Five replicates of the same experiment were analyzed together, and the rate of change of each parameter was calculated for every generation. In order to control for directional change within enzyme concentrations, catalytic constants, and binding constants, the average amount of change was calculated for each group and removed from each parameter within the group. 10,000 replicates were bootstrapped from this dataset by random re-sampling within each replicate, and complete linkage clustering was performed using absolute correlations as a measure of relatedness between the rates of change (Additional file 1: Figures S22–S26) [40]. The largest clusters significant at the 0.05 level are used to identify co-evolving parameters.
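The rate-limiting-step identification described above reduces each reaction rate in turn and asks which perturbation changes the steady-state flux the most. A minimal sketch follows; the function `steady_state_flux` is a hypothetical stand-in for the COPASI steady-state solve and is not part of the published code.

```python
import numpy as np

def most_sensitive_reaction(reaction_rates, steady_state_flux):
    """Return the index of the rate-limiting step for one (median) individual.

    `reaction_rates` is a vector of reaction rates and `steady_state_flux` is
    a callable mapping such a vector to the pathway's steady-state flux
    (a stand-in here for solving the ODE system).
    """
    baseline = steady_state_flux(reaction_rates)
    flux_drops = []
    for i in range(len(reaction_rates)):
        perturbed = np.array(reaction_rates, dtype=float)
        perturbed[i] *= 0.9                       # reduce this reaction by 10 %
        flux_drops.append(baseline - steady_state_flux(perturbed))
    # the reaction whose perturbation changes the flux the most is rate limiting
    return int(np.argmax(flux_drops)), flux_drops
```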
Simulations with variable population sizes

In order to evaluate the effects of population size on the evolutionary stability of rate-limiting steps, experiments with two different population sizes ($10^2$ and $10^6$) were performed, where the small population size was a control for the results of the previous set of experiments. For this purpose, several simplifications to the procedure were made for computational tractability. In each generation, a single mutation was proposed, with mutational effects and fitness as above for the experiment with selection on pathway flux. The Kimura fixation probability was used to evaluate the fixation of proposed mutations, eliminating an explicit population and any probability of multiple segregating changes. We have
$$ \psi = \frac{1 - e^{-2cN_e s p}}{1 - e^{-2cN_e s}} $$
representing the fixation probability, where Ne is the population size, c is the ploidy (haploid, c = 1), s is the selective coefficient ($s = f'/f_0 - 1$, where $f'$ is the fitness after mutation and $f_0$ the fitness before), and p is the initial frequency of the allele in a population. The initial frequency p was set to ½ rather than 1/N for computational efficiency, giving the property that a neutral mutation has a 50 % chance of fixation, which scales the selective coefficient. The effect of population size therefore entered through the probability of rising from a frequency of 0.5 to fixation, while the introduction of new mutations was independent of population size. The population scheme was run for 200,000 generations per experimental replicate, and the rate-limiting step length was calculated as previously described. Both population sizes were run for 30 replicates.

Simulations with more thermodynamic realism

To evaluate the effects of biophysical constraints on the reaction landscape, simulations where mutations were constrained by Haldane's relationship were performed for the $10^6$ population size. Although more degrees of freedom are possible with regulation, multiple substrates and products, and the involvement of cofactors, the simplest expression shows that the four parameters are non-independent:
$$ K_{eq} = \frac{k_{cat} \cdot K_{Mr}}{k_{catr} \cdot K_M} $$
Here, Keq is the equilibrium constant driven by the thermodynamics of the reaction. For this experiment, kinetic parameter initial values were set according to Haldane's relationship (Additional file 1: Table S3). To maintain the ratio, the mutational scheme was modified from that used for the other experiments described above. Mutations for KMr and KM are drawn from a normal distribution with a mean at −1 %. The mutational effect for kcat is also drawn from a normal distribution with a mean at −1 % and has a modifier that is dependent on the original ratio of kcat and kcatr and the ratio of the mutated KM and KMr. kcatr is then calculated from Haldane's relationship using the mutated kcat, KM, and KMr. This experiment was replicated 30 times. Rate-limiting step lengths were evaluated as previously described.
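The sketch below illustrates a Haldane-constrained mutation of the kind just described: KM, KMr, and kcat receive multiplicative effects centred on −1 %, and kcatr is recomputed so that Keq is preserved. The standard deviation and the omission of the extra modifier on kcat are simplifying assumptions, so this is an illustration of the constraint rather than the simulation code itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate_with_haldane(kcat, kcatr, KM, KMr, sigma=0.05):
    """Mutate (kcat, KM, KMr) and recompute kcatr so that Keq is unchanged.

    Keq = (kcat * KMr) / (kcatr * KM) is fixed by the thermodynamics of the
    reaction; sigma and the multiplicative application of effects are assumptions.
    """
    Keq = (kcat * KMr) / (kcatr * KM)
    KM_new = KM * (1.0 + rng.normal(-0.01, sigma))
    KMr_new = KMr * (1.0 + rng.normal(-0.01, sigma))
    kcat_new = kcat * (1.0 + rng.normal(-0.01, sigma))
    kcatr_new = (kcat_new * KMr_new) / (Keq * KM_new)   # closes the constraint
    return kcat_new, kcatr_new, KM_new, KMr_new

# Example: one proposed mutation for a reaction with Keq = 10
print(mutate_with_haldane(kcat=200.0, kcatr=20.0, KM=1.0, KMr=1.0))
```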
Characterizing allele segregation with explicit populations

In order to estimate the observed allele segregation within each population, the number of alleles for each parameter was calculated. Parameter numbers were calculated within each population for 2000 generations, every 10 generations (a total of 200 data points for each parameter), as the mean of the total number of alleles per generation (for all parameters and for forward reaction-only parameters). The mean and standard deviation over these 2000 generations were retrieved for each parameter, as well as the minimum and maximum of the dataset. These values were compared with the expected allele segregation number as calculated for a population under a selectively neutral regime, as previously described by Kimura and Crow [25],
$$ n = 2N_e\mu + 1 $$
Here, n is the number of alleles for a particular parameter, μ is the mutation rate, and Ne is the effective population size.

Statistical tests and bootstrap confidence intervals

For the simulation experiments exploring the evolutionary stability of the rate limiting step, we performed permutation tests under the null hypothesis of no stability. Stability was measured by the number of consecutive generations that a reaction remained rate limiting, once it became the rate limiting step, and under the null hypothesis, each reaction should have the same average number of consecutive generations. For each of the replicates, the correspondence between each reaction and its average time spent as rate limiting was permuted, and the average absolute deviation of each reaction from the overall mean was calculated. In this manner, a null distribution was generated through 100,000 permutation replicates, and an empirical p-value was found by comparing the average absolute deviation in the actual data to this null distribution.

Confidence intervals for each average number of consecutive generations that a reaction was rate limiting were constructed by first bootstrapping the replicates, and then bootstrapping the consecutive runs within each selected replicate. In this manner, a Monte Carlo sampling distribution for each average was generated, and 95 % confidence intervals were generated by taking the 2.5th and 97.5th percentiles from each bootstrap sampling distribution. Error bars in the corresponding figures reflect these confidence intervals.

To test whether selection has an effect on expression cost, we examined enzyme concentration, kcat, and KM against the length of the enzyme. Using the 2000 generations at equilibrium across the five replicates and comparing the experiment with selection on flux alone to the experiment with selection on flux and against protein expression cost, we ran a linear mixed-effects model with random effects for the replicates, and an interaction term for enzyme length and experiment. The null hypothesis is that the interaction term is equal to 0, or in other words, that the different selection regimes have the same effect on expression cost. Due to the computational burden within the mixed-effects model when attempting to account for the correlation structure induced by the Markovian nature of the simulations, only the unique values of each corresponding outcome variable were selected to be fit in the model, and standard linear mixed-effects models were run with a random effect for each replicate. While this loss of information is suboptimal, it should result in conservative inference. Assumptions of homoskedasticity and linearity of the relationship between enzyme length and parameter observations appear to be satisfied. The small effective sample size may be a concern, but should also lead to conservative inference. Statistical significance of the interaction term was assessed via likelihood ratio tests comparing the interaction model to the null model, which did not contain the interaction term.

Reviewers' comments

We thank the reviewers for their reviews of our manuscript.
Reviewer 2 included minor comments that have not been included for publication, but have improved the readability of the manuscript. Reviewers' report 1: Arne Elofsson, Stockholm University, Sweden Reviewer summary The authors describe a simulations of enzymatic reactions and identifies rate limiting steps. Reviewer recommendations to author One problem with this paper is that I am not convinced that the main reason to do this study is correct. The authors claim: "While commonly assumed in the biochemistry community that the control of metabolic pathways is thought to be critical to cellular function.". However, they do not provide a single reference to that this really is commonly assumed, or if this is really true. Certainly it varies from pathway to pathway. It is certainly very important to have a very close control over glucose levels in the blood (see the problem for people with diabetes), while lactase efficiency can vary by many orders of magnitude without a large impact on fitness. I would interpret the results in a different way than the authors do. I do not agree with the claim "The evolutionary stability of rate limiting steps was low in all simulations with non-neutral mutational processes. " Instead I think: If you select for positive control or against deleterious intermediates the first step is almost always rate limiting step if not it can be any step. Author Response: The main objective of the study is the ultimate differentiation between compensatory processes and directional selection in pathways after the characterization of the expected evolutionary dynamics of pathways with negative selection on pathway function. This is currently not well understood. There is indeed an implicit assumption of pathway stasis, from biochemistry textbooks that describe "glycolysis" and other pathways, to model organism studies that seek to transfer functions from one organism to another to GWAS and QTL studies that assume that pathway regulation and sensitivity are constant. The selective regime we have applied in this study is fairly simple. It is definitely the case that the nature of the evolutionary dynamics will be altered with more complex regulatory schemes. However, these do not provide selective pressures for extra activity and evolutionarily stable control. With selection against the deleterious intermediate, there is a preponderance of cases where this step re-emerges as rate limiting more frequently than others. However, it is not stable in an evolutionary sense as rate limiting, given its short half-life in remaining flux controlling when it emerges as such. Further, the first step is only the rate limiting step when that is the step leading to the deleterious intermediate; it is not generally the first step that is rate limiting when there is a deleterious intermediate. With lactase, it is unclear to us that the excess activity in the enzyme is stable across distantly related species and we would predict otherwise. Reviewers' report 2: David Ardell, University of California- Merced, USA I am not an expert on Metabolic Control Analysis (MCA) or Biochemical Systems Theory. My assessment of the present work is that it is a highly original synthesis of ideas and models that appears to have been well-implemented, that the conclusions drawn are well-founded from the results obtained, and that the results challenge apparently accepted wisdom regarding the evolutionary stability of rate-limiting steps in non-branching metabolic pathways. 
The review of prior literature and contextualization of results in the introduction and discussion are sufficient, yet further efforts here might benefit some readers, to dispel confusions and to better connect the results to prior work. For example, in his textbook "A First Course in Systems Biology," E.O. Voit writes "MCA was originally conceived to replace the formerly widespread notion that every pathway has one rate-limiting step, which is a slow reaction that by itself is credited with determining the magnitude of flux through the pathway… In linear pathway sections without branches, the rate-limiting step was traditionally thought to be positioned at the first reaction, where it was possibly inhibited through feed-back exerted by the end product. In MCA, this concept of a rate-limiting step was supplanted with the concept of shared control, which posits that every step in a pathway contributes, to some degree, to the control of the steady-state flux." In light of this, perhaps the author's results might profitably be considered as extending MCA's notion of distributed control in metabolic pathways to distributed selection pressures on their component enzymes coevolving on a rugged landscape. A valuable contribution made by this work, in my opinion, is the example it provides of another relatively simple biological system that under the simplest evolutionary scenarios — namely stabilizing selection on its output — explores a large neutral network of solutions (in this case, of kinetic parameters). In summary, I believe the present work is an important contribution to its field. There are many points where the presentation and narrative could be improved to increase impact, especially for those unfamiliar with some among the many different disciplines and subjects touched on by this work. Author Response: We thank the reviewer for his summary. We have tried to improve the readability of the manuscript and to better introduce disparate concepts. Reviewers' report 3: Shamil Sunyaev, Harvard Medical School, USA This manuscript challenges the idea of evolutionary stability of rate limiting steps in linear pathways. Extensive computer simulations demonstrate that rate limiting steps exist for only short evolutionary times. This is an interesting result. Reviewer recommendations to authors I find the section "Allele Segregation within Population" confusing. I suggest that the authors would clarify this section. Also, how stable are the results with respect to effect sizes and directions of incoming mutations? Author Response: We thank the reviewer for his summary. We have added a new introduction to the allele segregation section to improve clarity. The trajectories of effect sizes and directions of incoming mutations that are sampled for each parameter are shown in the Supplementary Figures. They derive from the mutational process described in the Methods section. [E], [Enzyme], the concentration of the enzyme; kcat, catalytic constant; KI, inhibition constant; KM, Michaelis constant; mRNA, messenger ribonucleic acid; Ne, effective population size; μ, mutation rate. Ingalls B. Mathematical modeling in systems biology: An Introduction. Cambridge: The MIT Press; 2013. Kacser H, Burns JA. The control of flux. Symp Soc Exp Biol. 1973;27:65–104. Campbell S, Khosravi-Far R, Rossman K, Clark G, Der C. Increasing complexity of Ras signaling. Oncogene. 1998;17:1395–413. Rausher M, Miller R, Tiffin P. Patterns of evolutionary rate variation among genes of the anthocyanin biosynthetic pathway. 
Mol Biol Evol. 1999;16:266–74. Berg JM, Tymoczko JL, Stryer L. Biochemistry. 5th ed. New York: W H Freeman; 2002. Section 16.2. Olsen K, Womak A, Garret A, Suddith J, Purugganan M. Contrasting evolutionary forces in the Arabidopsis thaliana floral developmental pathway. Genetics. 2002;160:1641–50. Riley R, Jin W, Gibson G. Contrasting selection pressures on components of the Ras-mediated signal transduction pathway in Drosophila. Mol Ecol. 2003;12:1315–23. Cork J, Purugganan M. The evolution of molecular genetic pathways and networks. Bioessays. 2004;26:479–84. Wright K, Rausher M. The evolution of control and distribution of adaptive mutations in a metabolic pathway. Genetics. 2009;184:483–502. Alvarez-Ponce D, Aguade M, Rozas J. Comparative genomics of the vertebrate insulin/TOR signal transduction pathway: a network-level analysis of selective pressures. Genome Biol Evol. 2010;3:87–101. O'Connell M. Selection and the cell cycle: positive Darwinian selection in a well-known DNA damage response pathway. J Mol Evol. 2010;71:444–57. Alvarez-Ponce D. The relationship between the hierarchical position of proteins in the human signal transduction network and their rate of evolution. BMC Evol Biol. 2012;12:192. Olson-Manning C, Lee C, Rausher M, Mitchell-Olds T. Evolution of flux control in the glucosinolate pathway in Arabidopsis thaliana. Mol Biol Evol. 2012;30:14–23. Rausher M. The evolution of genes in branched metabolic pathways. Evolution. 2012;67:34–48. Hermansen RA, Mannakee BK, Knecht W, Liberles DA, Gutenkunst RN. Characterizing selective pressures on the pathway for de novo biosynthesis of pyrimidines in yeast. BMC Evol Biol. 2015;15:232. doi:10.1186/s12862-015-0515-x. Orlenko A, Hermansen RA, Liberles DA. Flux control in glycolysis varies across the tree of life. J Mol Evol. 2016;82:146–61. Li C, Donizelli M, Rodriguez N, Dharuri H, Endler L, Chelliah V, Li L, He E, Henry A, Stefan MI, Snoep JL, Hucka M, Le Novère N, Laibe C. BioModels database: an enhanced, curated and annotated resource for published quantitative kinetic models. BMC Syst Biol. 2010;4:92. Taverna D, Goldstein R. Why are proteins marginally stable? Proteins. 2001;46:105–9. Lynch M, Hagner K. Evolutionary meandering of intermolecular interactions along the drift barrier. Proc Natl Acad Sci U S A. 2014;112:30–8. Tamuri AU, dos Reis M, Goldstein RA. Estimating the distribution of selection coefficients from phylogenetic data using sitewise mutation-selection models. Genetics. 2012;190:1101–15. Wylie C, Shakhnovich E. A biophysical protein folding model accounts for most mutational fitness effects in viruses. Proc Natl Acad Sci U S A. 2011;108:9916–21. Dasmeh P, Serohijos A, Kepp K, Shakhnovich E. The Influence of Selection for Protein Stability on dN/dS Estimations. Genome Biol Evol. 2014;6:2956–67. Hartl DL, Dykhuizen DE, Dean AM. Limits of adaptation: the evolution of selective neutrality. Genetics. 1985;111:655–74. Wagner A. Energy constraints on the evolution of gene expression. Mol Biol Evol. 2005;22:1365–74. Kimura M, Crow JF. The number of alleles that can be maintained in a finite population. Genetics. 1964;49:725–38. Uhr ML. The influence of an enzyme on the direction of a reaction. Biochem Educ. 1979;7:15–6. Soyer O, Pfeiffer T. Evolution under fluctuating environments explains observed robustness in metabolic networks. PLoS Comput Biol. 2010;6:e1000907. Soyer O, Creevey C. Duplicate retention in signalling proteins and constraints from network dynamics. J Evol Biol. 2010;23:2410–21. 
Kim J, Kershner J, Novikov Y, Shoemaker R, Copley S. Three serendipitous pathways in E. coli can bypass a block in pyridoxal-5'-phosphate synthesis. Mol Syst Biol. 2010;6:436. Kim J, Copley S. Inhibitory cross-talk upon introduction of a new metabolic pathway into an existing metabolic network. Proc Natl Acad Sci U S A. 2012;109:2856–64. Light S, Kraulis P. Network analysis of metabolic enzymes evolution in Escherichia coli. BMC Bioinformatics. 2004;5:15. Shoresh N, Hegreness M, Kishony R. Evolution exacerbates the paradox of the plankton. Proc Natl Acad Sci U S A. 2008;105:12365–9. Hoekstra H, Coyne J. The locus of evolution: evo devo and the genetics of adaptation. Evolution. 2007;61:995–1016. Grahnen JA, Nandakumar P, Kubelka J, Liberles DA. Biophysical and structural considerations for protein sequence evolution. BMC Evol Biol. 2011;11:361. Teusink B, Passarge J, Reijenga CA, Esgalhado E, van der Weijden CC, Schepper M, Walsh MC, Bakker BM, van Dam K, Westerhoff HV, Snoep JL. Can yeast glycolysis be understood in terms of in vitro kinetics of the constituent enzymes? Testing biochemistry. Eur J Biochem. 2000;267:5313–29. Weber J, Kayser A, Rinas U. Metabolic flux analysis of Escherichia coli in glucose-limited continuous culture. II. Dynamic response to famine and feast, activation of the methylglyoxal pathway and oscillatory behavior. Microbiology. 2005;151:707–16. Hoops S, Sahle S, Gauges R, Lee C, Pahle J, Simus N, Singhal M, Xu L, Mendes P, Kummer U. COPASI--a COmplex PAthway SImulator. Bioinformatics. 2006;22:3067–74. Bershtein S, Mu W, Serohijos AWR, Zhou J, Shakhnovich EI. Protein quality control acts on folding intermediates to shape the effects of mutations on organismal fitness. Mol Cell. 2013;49:133–44. Jiang L, Mishra P, Hietpas RT, Zeldovich KB, Bolon DNA. Latent effects of Hsp90 mutants revealed at reduced expression levels. PLoS Genet. 2013;9:e1003600. Suzuki R, Shimodaira H. Pvclust: an R package for assessing the uncertainty in hierarchical clustering. Bioinformatics. 2006;22:1540–1542.29. We thank Claudia Weber for a careful reading of this manuscript. This work was funded by NSF grant DBI-0743374. The research was conceived and supervised by DAL. Simulation studies were run by AO. AIT and PBC performed statistical analysis on the data generated from simulations. All authors were involved in writing the manuscript. All authors read and approved the manuscript. Center for Computational Genetics and Genomics and Department of Biology, Temple University, Bio-Life Building, 1900 N. 12th Street, Philadelphia, PA, 19122-1801, USA Alena Orlenko, Ashley I. Teufel, Peter B. Chi & David A. Liberles Department of Molecular Biology, University of Wyoming, Laramie, WY, 82071, USA Alena Orlenko, Ashley I. Teufel & David A. Liberles Department of Mathematics and Computer Science, Ursinus College, Collegeville, PA, 19426, USA Peter B. Chi Alena Orlenko Ashley I. Teufel David A. Liberles Correspondence to David A. Liberles. Additional file 1: Table S1. The initial values given to parameters in the system at the start of each evolutionary simulation where equilibrium is approached from above are shown. Table S2. The lengths of each enzyme, given in the number of amino acids, are shown. Table S3. The initial values given to kcat, kcatr, KM and KMr parameters in the system at the start of the evolutionary simulation when constrained with Haldane's relationship are shown. Keq and ΔG0 for each reaction are also shown. Figure S1. 
The fitness value of the median individual demonstrating that the same point of mutation-selection balance is reached when simulations begin at a lower fitness. Figures S2–S4. The evolution of parameter values for the experiment which started from a lower fitness are shown. Figures S5–S19. The averaged median of parameters after the point of mutation-selection balance is shown. Figure S20. The rate of change in averaged median fitness across each of the simulations is shown for A) mutation only, B) selection on flux alone, C) selection on flux and against total expression cost, D) selection on flux and against a high concentration of a deleterious intermediate, and E) non-biological neutral mutation, selection on flux, and for the first reaction to be rate limiting. Blue denotes a positive rate of change and red denotes a negative rate of change. Figure S21. Average median fitness across each of the simulations is shown for A) mutation only, B) selection on flux alone, C) selection on flux and against total expression cost, D) selection on flux and against a high concentration of a deleterious intermediate, and E) non-biological neutral mutation, selection on flux, and for the first reaction to be rate limiting. Figures S22–S26. Complete linkage clustering of parameter values for each selective scheme are shown, resulting in the data in Fig. 4. (PDF 1185 kb) Orlenko, A., Teufel, A.I., Chi, P.B. et al. Selection on metabolic pathway function in the presence of mutation-selection-drift balance leads to rate-limiting steps that are not evolutionarily stable. Biol Direct 11, 31 (2016). https://doi.org/10.1186/s13062-016-0133-6 Metabolic pathway evolution
CommonCrawl
Knight's Tours Using a Neural Network

There was a paper in an issue of Neurocomputing that got me intrigued: it spoke of a neural network solution to the knight's tour problem. I decided to write a quick C++ implementation to see for myself, and the results, although limited, were thoroughly fascinating.

The neural network is designed such that each legal knight's move on the chessboard is represented by a neuron. Therefore, the network basically takes the shape of the knight's graph over an \(n \times n\) chess board. (A knight's graph is simply the set of all knight moves on the board.) Each neuron can be either "active" or "inactive" (output of 1 or 0). If a neuron is active, it is considered part of the solution to the knight's tour. Once the network is started, each active neuron is configured so that it reaches a "stable" state if and only if it has exactly two neighboring neurons that are also active (otherwise, the state of the neuron changes). When the entire network is stable, a solution is obtained. The complete transition rules are as follows:

$$U_{t+1} (N_{i,j}) = U_t(N_{i,j}) + 2 - \sum_{N \in G(N_{i,j})} V_t(N)$$

$$V_{t+1} (N_{i,j}) = \left\{ \begin{array}{ll} 1 & \mbox{if}\,\, U_{t+1}(N_{i,j}) > 3\\ 0 & \mbox{if}\,\, U_{t+1}(N_{i,j}) < 0\\ V_t(N_{i,j}) & \mbox{otherwise}, \end{array} \right.$$

where \(t\) represents time (incrementing in discrete intervals), \(U(N_{i,j})\) is the state of the neuron connecting square \(i\) to square \(j\), \(V(N_{i,j})\) is the output of the neuron from \(i\) to \(j\), and \(G(N_{i,j})\) is the set of "neighbors" of the neuron (all neurons that share a vertex with \(N_{i,j}\)). Initially (at \(t = 0\)), the state of each neuron is set to 0, and the output of each neuron is set randomly to either 0 or 1. The neurons are then updated sequentially by counting squares on the chess board in row-major order and enumerating the neurons that represent knight moves out of each square.

Essentially, the network is configured to generate subgraphs of degree 2 within the knight's graph. The set of degree-2 subgraphs naturally includes Hamiltonian circuits (re-entrant Knight's Tours). However, there are many other solutions that would satisfy the network that are not knight's tours. For example, the network could discover two or more small independent circuits within the knight's graph. In addition, there are certain cases that will cause the network to diverge (never become stable).

Knight's Tour on an \(8 \times 8\) board:

Not a Knight's Tour, but still a solution (can you spot the four independent circuits?):

In fact, the probability of obtaining a knight's tour on an \(n \times n\) board virtually vanishes as n grows larger. Takefuji, at the time of his publication, only obtained solutions for n < 20. Parberry was able to obtain a single knight's tour out of 40,000 trials for n = 26. I obtained one knight's tour out of about 200,000 trials for n = 28 (three days' worth of calculation on my Pentium IV). Parberry wisely asserts that attempting to find a knight's tour for n > 30 using this method would be futile.
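To restate the update rule in code, here is a minimal sketch (in Python rather than the original C++). The sequential sweep order and the simple "nothing changed during a full sweep" stability test are simplifications of the behavior described above, and divergence is handled only by capping the number of sweeps.

```python
import numpy as np
from itertools import combinations

KNIGHT_MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_edges(n):
    """Every legal knight move on an n x n board, i.e. the edges of the knight's graph."""
    edges = set()
    for x in range(n):
        for y in range(n):
            for dx, dy in KNIGHT_MOVES:
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:
                    edges.add(frozenset({(x, y), (u, v)}))
    return sorted(tuple(sorted(e)) for e in edges)

def neighbours(edges):
    """For each neuron, the indices of neurons sharing a board square with it."""
    nb = [[] for _ in edges]
    for i, j in combinations(range(len(edges)), 2):
        if set(edges[i]) & set(edges[j]):
            nb[i].append(j)
            nb[j].append(i)
    return nb

def run_network(n=8, max_sweeps=1000, seed=0):
    rng = np.random.default_rng(seed)
    edges = knight_edges(n)
    nb = neighbours(edges)
    U = np.zeros(len(edges), dtype=int)          # states start at 0
    V = rng.integers(0, 2, size=len(edges))      # outputs start at random 0/1
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(edges)):              # sequential, in-place update
            delta = 2 - sum(V[j] for j in nb[i])
            U[i] += delta
            new_v = 1 if U[i] > 3 else 0 if U[i] < 0 else V[i]
            if delta != 0 or new_v != V[i]:
                changed = True
            V[i] = new_v
        if not changed:                          # stable: active neurons form degree-2 subgraphs
            return [edges[i] for i in range(len(edges)) if V[i] == 1]
    return None                                  # treated as a divergent run

solution = run_network(8)
```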
My implementation of this algorithm takes the shape of an application for Windows (although it's perfectly runnable under Linux using wine). Several key features of the program include support for arbitrary rectangular chess boards, as well as statistical records of trials performed. Download the application. (Or browse the source code on GitHub)

As seen in the screenshot, the program allows you to change the width and height of the chess board. You can then adjust the number of trials to perform. Click the Start button to begin the calculation. The program will then display its progress in the statistics window on the right, showing the number of knight's tours found, number of non-knight's tours found, and number of divergent patterns. By default, the program draws the chess board only when a knight's tour has been found. However, if you enable the Display All check box, the program will draw all solutions it finds.

Notable Finds

Symmetric 10 x 3 knight's tour
24 x 24 knight's tour (2 hours cpu time)
28 x 28 knight's tour (50 hours cpu time)

Undoubtedly, knight's tours for n > 28 can easily be found using simpler combinatorial algorithms, which seems to make this neural network solution for the knight's tour problem less than practical. However, one cannot deny the inherent elegance in this kind of solution, which is what made it so interesting to investigate.

I. Parberry. Scalability of a neural network for the knight's tour problem. Neurocomputing, 12:19-34, 1996.
Y. Takefuji and K. C. Lee. Neural network computing for knight's tour problems. Neurocomputing, 4(5):249-254, 1992.
CommonCrawl
${{\mathit H}^{0}}$ DECAY WIDTH

The total decay width for a light Higgs boson with a mass in the observed range is not expected to be directly observable at the LHC. For the case of the Standard Model the prediction for the total width is about 4 MeV, which is three orders of magnitude smaller than the experimental mass resolution. There is no indication from the results observed so far that the natural width is broadened by new physics effects to such an extent that it could be directly observable. Furthermore, as all LHC Higgs channels rely on the identification of Higgs decay products, the total Higgs width cannot be measured indirectly without additional assumptions. The different dependence of on-peak and off-peak contributions on the total width in Higgs decays to ${{\mathit Z}}{{\mathit Z}^{*}}$ and interference effects between signal and background in Higgs decays to ${{\mathit \gamma}}{{\mathit \gamma}}$ can provide additional information in this context. Constraints on the total width from the combination of on-peak and off-peak contributions in Higgs decays to ${{\mathit Z}}{{\mathit Z}^{*}}$ rely on the assumption of equal on- and off-shell effective couplings. Without an experimental determination of the total width or further theoretical assumptions, only ratios of couplings can be determined at the LHC rather than absolute values of couplings.

VALUE (MeV)
$3.2$ ${}^{+2.8}_{-2.2}$ 1 SIRUNYAN CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8, 13 TeV, ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $/$ ${{\mathit Z}}$ ${{\mathit Z}}$ $\rightarrow$ 4 ${{\mathit \ell}}$
$<14.4$ 95 2 AABOUD ATLS ${{\mathit p}}{{\mathit p}}$ , 13 TeV, ${{\mathit Z}}$ ${{\mathit Z}}$ $\rightarrow$ 4 ${{\mathit \ell}}$ , 2 ${{\mathit \ell}}$2 ${{\mathit \nu}}$
$<1100$ 95 3 2017 AV CMS ${{\mathit p}}{{\mathit p}}$ , 13 TeV, ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$
$<26$ 95 4 KHACHATRYAN 2016 BA CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV, ${{\mathit W}}{{\mathit W}^{(*)}}$
CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV, ${{\mathit Z}}{{\mathit Z}^{(*)}}$ , ${{\mathit W}}{{\mathit W}^{(*)}}$
2015 BE ATLS ${{\mathit p}}{{\mathit p}}$ , 8 TeV, ${{\mathit Z}}{{\mathit Z}^{(*)}}$ , ${{\mathit W}}{{\mathit W}^{(*)}}$
CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV
$ > 3.5 \times 10^{-9}$ 95 8 CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV, flight distance
CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV, ${{\mathit Z}}$ ${{\mathit Z}^{(*)}}$ $\rightarrow$ 4 ${{\mathit \ell}}$
$<5000$ 95 10 ATLS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}$
ATLS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV, ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$
CHATRCHYAN CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV, ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$
$<22$ 95 12 CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV, ${{\mathit Z}}{{\mathit Z}^{(*)}}$
CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}$

1 SIRUNYAN 2019BL measure the width and anomalous ${{\mathit H}}{{\mathit V}}{{\mathit V}}$ couplings from on-shell and off-shell production in the 4 ${{\mathit \ell}}$ final state. Data of 80.2 fb${}^{-1}$ at 13 TeV, 19.7 fb${}^{-1}$ at 8 TeV, and 5.1 fb${}^{-1}$ at 7 TeV are used. The total width for the SM-like couplings is also measured to be in the range [0.08, 9.16] MeV with 95$\%$ CL, assuming SM-like couplings on- and off-shell (see their Table VIII).
Constraints on the total width for anomalous ${{\mathit H}}{{\mathit V}}{{\mathit V}}$ interaction cases are found in their Table IX. See their Table X for the Higgs boson signal strength in the off-shell region. 2 AABOUD 2018BP use 36.1 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. An observed upper limit on the off-shell Higgs signal strength of 3.8 is obtained at 95$\%$ CL using off-shell Higgs boson production in the ${{\mathit Z}}$ ${{\mathit Z}}$ $\rightarrow$ 4 ${{\mathit \ell}}$ and ${{\mathit Z}}$ ${{\mathit Z}}$ $\rightarrow$ 2 ${{\mathit \ell}}$2 ${{\mathit \nu}}$ decay channels (${{\mathit \ell}}$ = ${{\mathit e}}$ , ${{\mathit \mu}}$ ). Combining with the on-shell signal strength measurements, the quoted upper limit on the Higgs boson total width is obtained, assuming the ratios of the relevant Higgs-boson couplings to the SM predictions are constant with energy from on-shell production to the high-mass range. 3 SIRUNYAN 2017AV obtain an upper limit on the width from the distribution in ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$ (${{\mathit \ell}}$ = ${{\mathit e}}$ , ${{\mathit \mu}}$ ) decays. Data of 35.9 fb${}^{-1}$ ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV is used. The expected limit is 1.60 GeV. 4 KHACHATRYAN 2016BA derive constraints on the total width from comparing ${{\mathit W}}{{\mathit W}^{(*)}}$ production via on-shell and off-shell ${{\mathit H}^{0}}$ using 4.9 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 19.4 fb${}^{-1}$ at 8 TeV. 5 KHACHATRYAN 2016BA combine the ${{\mathit W}}{{\mathit W}^{(*)}}$ result with ${{\mathit Z}}{{\mathit Z}^{(*)}}$ results of KHACHATRYAN 2015BA and KHACHATRYAN 2014D. 6 AAD 2015BE derive constraints on the total width from comparing ${{\mathit Z}}{{\mathit Z}^{(*)}}$ and ${{\mathit W}}{{\mathit W}^{(*)}}$ production via on-shell and off-shell ${{\mathit H}^{0}}$ using 20.3 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The K factor for the background processes is assumed to be equal to that for the signal. 7 KHACHATRYAN 2015AM combine ${{\mathit \gamma}}{{\mathit \gamma}}$ and ${{\mathit Z}}$ ${{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$ results. The expected limit is 2.3 GeV. 8 KHACHATRYAN 2015BA derive a lower limit on the total width from an upper limit on the decay flight distance $\tau $ $<$ $1.9 \times 10^{-13}$ s. 5.1 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 19.7 fb${}^{-1}$ at 8 TeV are used. 9 KHACHATRYAN 2015BA derive constraints on the total width from comparing ${{\mathit Z}}{{\mathit Z}^{(*)}}$ production via on-shell and off-shell ${{\mathit H}^{0}}$ with an unconstrained anomalous coupling. 4${{\mathit \ell}}$ final states in 5.1 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 19.7 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV are used. 10 AAD 2014W use 4.5 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 20.3 fb${}^{-1}$ at 8 TeV. The expected limit is 6.2 GeV. 11 CHATRCHYAN 2014AA use 5.1 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 19.7 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The expected limit is 2.8 GeV. 
12 KHACHATRYAN 2014D derive constraints on the total width from comparing ${{\mathit Z}}{{\mathit Z}^{(*)}}$ production via on-shell and off-shell ${{\mathit H}^{0}}$ . 4${{\mathit \ell}}$ and ${{\mathit \ell}}{{\mathit \ell}}{{\mathit \nu}}{{\mathit \nu}}$ final states in 5.1 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 19.7 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV are used. 13 KHACHATRYAN 2014P use 5.1 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 19.7 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The expected limit is 3.1 GeV. SIRUNYAN 2019BL PR D99 112003 Measurements of the Higgs boson width and anomalous $HVV$ couplings from on-shell and off-shell production in the four-lepton final state AABOUD 2018BP PL B786 223 Constraints on off-shell Higgs boson production and the Higgs boson total width in $ZZ\to4\ell$ and $ZZ\to2\ell2\nu$ final states with the ATLAS detector SIRUNYAN 2017AV JHEP 1711 047 Measurements of Properties of the Higgs Boson Decaying into the Four-Lepton Final State in ${{\mathit p}}{{\mathit p}}$ Collisions at $\sqrt {s }$ = 13 TeV KHACHATRYAN 2016BA JHEP 1609 051 Search for Higgs Boson Off-shell Production in Proton-Proton Collisions at 7 and 8 TeV and Derivation of Constraints on its Total Decay Width AAD 2015BE EPJ C75 335 Constraints on the Off-Shell Higgs Boson Signal Strength in the High-Mass ${{\mathit Z}}{{\mathit Z}}$ and ${{\mathit W}}{{\mathit W}}$ Final States with the ATLAS Detector KHACHATRYAN 2015AM EPJ C75 212 Precise Determination of the Mass of the Higgs Boson and Tests of Compatibility of its Couplings with the Standard Model Predictions using Proton Collisions at 7 and 8 TeV PR D92 072010 Limits on the Higgs Boson Lifetime and Width from its Decay to Four Charged Leptons AAD 2014W PR D90 052004 Measurement of the Higgs Boson Mass from the ${{\mathit H}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}$ and ${{\mathit H}}$ $\rightarrow$ ${{\mathit Z}}{{\mathit Z}^{*}}$ $\rightarrow$ 4 ${{\mathit \ell}}$ Channels with the ATLAS Detector using 25 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ Collision Data CHATRCHYAN 2014AA PR D89 092007 Measurement of the Properties of a Higgs Boson in the Four-Lepton Final State KHACHATRYAN 2014D PL B736 64 Constraints on the Higgs Boson Width from off-Shell Production and Decay to ${{\mathit Z}}$ -Boson Pairs KHACHATRYAN 2014P EPJ C74 3076 Observation of the Diphoton Decay of the Higgs Boson and Measurement of its Properties
CommonCrawl
A scalable open-source MATLAB toolbox for reconstruction and analysis of multispectral optoacoustic tomography data

Devin O'Kelly1,2, James Campbell III1, Jeni L. Gerberich1, Paniz Karbasi2, Venkat Malladi3, Andrew Jamieson3, Liqiang Wang2 & Ralph P. Mason1

Cancer imaging

Multispectral photoacoustic tomography enables the resolution of spectral components of a tissue or sample at high spatiotemporal resolution. With the availability of commercial instruments, the acquisition of data using this modality has become consistent and standardized. However, the analysis of such data is often hampered by opaque processing algorithms, which are challenging to verify and validate from a user perspective. Furthermore, such tools are inflexible, often locking users into a restricted set of processing motifs, which may not be able to accommodate the demands of diverse experiments. To address these needs, we have developed a Reconstruction, Analysis, and Filtering Toolbox to support the analysis of photoacoustic imaging data. The toolbox includes several algorithms to improve the overall quantification of photoacoustic imaging, including non-negative constraints and multispectral filters. We demonstrate various use cases, including dynamic imaging challenges and quantification of drug effect, and describe the ability of the toolbox to be parallelized on a high performance computing cluster.

Multispectral optoacoustic tomography (MSOT) is a relatively new imaging modality, which combines optical contrast and ultrasonic resolution in order to provide a highly parametric view of an imaged sample. Due to the high spatiotemporal resolution, MSOT captures vast quantities of information, creating potential challenges: How does one effectively and efficiently analyze such data? Moreover, how does one ensure that the analysis is done in a manner, which is transparent and verifiable, and thus trustworthy, while allowing the flexibility to adjust the manner of processing to accommodate the differential needs of a wide variety of experiments and data acquisition conditions? To address these issues, we have created an MSOT Reconstruction, Analysis, and Filtering Toolbox (RAFT), enabling users of various technical skill levels to benefit from the rich information present in MSOT data, and to share and compare analyses between sites in a verifiable manner.
The package is available as a MATLAB toolbox and a standalone command-line application, with plans to deploy to Docker containers for site- and platform-independence, potential deployment to cloud computing resources, as well as to preserve archival code for execution. The toolbox provides a basis for users to add their own reconstruction and unmixing algorithms, but may also be run in a 'black-box' mode, driven only through external configuration files, thus providing greater repeatability. Scalability is addressed by parallelization within MATLAB, allowing computationally intensive reconstructions to be distributed across several computational workers. A Nextflow wrapper around the MATLAB systems allows multiple pipelines to be run on a single computational cluster simultaneously. MSOT may be thought of as optically-encoded ultrasound; an object under brief, intense illumination absorbs some of the illuminating energy, converting some of that energy into heat1,2. This heat induces transient thermoelastic expansion, creating a pressure wave, which travels outward from the point of absorption. If this absorption and thermal conversion process occurs in a short enough period of time, the pressure wave is temporally confined, creating a compact wavefront, which can be detected using ultrasonic transducers. The reconstruction of the original pressure image then provides a measure of energy deposition by the original illumination, which may be related across multiple illumination wavelengths to yield an overall spectral image. Knowledge of the endmembers present allows the spectral image to be unmixed into its corresponding endmember images. The promise of MSOT, given recent advancements in laser tuning controls, data acquisition bandwidth, and ultrasound transducer design, is that it can provide optical contrast with ultrasound resolution. Indeed, the modality is highly scalable across various temporal and spatial regimes: It may be used for super-resolution imaging at the scale of tens of nanometers3,4,5, is becoming popular for preclinical and small animal investigations6,7,8,9,10,11 and has been deployed for clinical usage on human subjects12,13,14,15,16,17,18. The rate of imaging is fundamentally limited by two parameters: The signal to noise ratio achievable by a given photoacoustic imaging technology, and the exposure limits defined by guiding agencies. In practice, the field assumes a maximum energy deposition of 20 mJ/cm2 at skin surface, and therefore many low-energy laser shots may be substituted for few high-energy laser shots19,20. Beyond the nature of the acquisition devices themselves, there are numerous methods by which photoacoustic images may be reconstructed; these range from direct inversion algorithms analogous to the 'delay-and-sum' approaches of ultrasound21, to analytical inversions valid under specialized geometries22,23, to model-based approaches analogous to those used in CT and MRI22,24,25,26,27,28,29. Still further methods use the time-reversal symmetry of the governing equations and numerical simulation to determine the original photoacoustic energy distribution30,31. These all operate under the governing photoacoustic equations. 
Under pulsed laser light and the assumption of ideal point transducers, the photoacoustic equations can be written as an optical component (1) and an acoustic component (2)32: $$p_{0} \left( {\vec{r},\vec{\lambda }} \right) = p\left( {t = 0, \vec{r},\vec{\lambda }} \right) = H\left( {\vec{r},\vec{\lambda }} \right) = \Gamma \left( {\vec{r}} \right)\phi \left( {\vec{r},\vec{\lambda }} \right)\mu_{a} \left( {\vec{r},\vec{\lambda }} \right) = \Gamma \left( {\vec{r}} \right)\phi \left( {\vec{r},\vec{\lambda }} \right)\mathop \sum \limits_{i = 1}^{{N_{c} }} C_{i} \left( {\vec{r}} \right)\varepsilon_{i} \left( {\vec{\lambda }} \right)$$ $$p_{d} \left( {t,\overrightarrow {{r_{d} }} } \right) = \frac{\partial }{\partial t}\left[ {\frac{t}{4\pi }\iint\limits_{{\left| {\vec{r}_{d} - \vec{r}} \right| = \nu_{s} t}} {p_{0} \left( {\vec{r}} \right)d\Omega }} \right]$$ Here, \(p\) denotes pressure values, assumed to instantaneously take on the values \(p_{0} \left( {\vec{r}} \right)\) at each location \(\vec{r}\) throughout the imaging region at time \(t = 0\). \(H\left( {\vec{r}} \right)\) denotes the heating function, which is dependent on the Gruneisen parameter \({\Gamma }\) describing the efficiency of conversion from light to heat at each point, the light fluence \(\phi \left( {\vec{r},\vec{\lambda }} \right)\) defined throughout the imaging region for each illumination wavelength \(\lambda\), and the absorption coefficient \(\mu_{a} \left( {\vec{r},\vec{\lambda }} \right),\) which describes the conversion of light fluence to absorbed energy at each point and each wavelength. In turn, the absorption coefficient is described by the absorption spectra \(\varepsilon_{i} \left( {\vec{\lambda }} \right)\) of all endmembers present at each point, weighted by their concentration at that point \(C_{i} \left( {\vec{r}} \right)\) (Eq. 1). Once light has been absorbed and converted into acoustic waves, these waves spread outward from the point of origin according to the wave equation with wave-fronts traveling at the speed of sound \(\nu_{s}\) in the medium, eventually reaching the detection transducers located at \(\vec{r}_{d}\), each of which has a solid angle \(d{\Omega }\) describing the contribution of each imaged point to the overall measured signal at time \(t\) (Eq. 2). This integration over the 2D detection surface is denoted by the double integral in Eq. 2. Once data have been acquired using transducer geometry \(\overrightarrow {{r_{d} }}\) and sampled at \(\vec{t}\), the time-dependent photoacoustic data must be reconstructed into the corresponding image. Thorough overviews of the state of the field have been compiled by various groups1,33,34,35. Reconstruction may be accomplished through either direct or inverse methods; the former encompasses such approaches as backprojection algorithms, while the latter encompasses any approach using a model of image formation mapping from \(H\left( {\vec{r}} \right)\) to \(p_{d} \left( {\vec{t},\overrightarrow {{r_{d} }} } \right)\) and minimizing a cost function. For any given model or reconstruction approach, a variety of parameters and constraints may be applied to the process to enforce certain conditions such as non-negativity, or to regularize the reconstruction process and emphasize certain aspects of the data. 
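As a small numerical illustration of the optical part of the model (Eq. 1), the sketch below builds the wavelength-dependent initial pressure from endmember concentration maps and their spectra. It is written in Python/NumPy with synthetic inputs and is not part of the RAFT, which is implemented in MATLAB; in particular, the fluence is taken as uniform here, whereas in practice it varies with depth and wavelength.

```python
import numpy as np

def initial_pressure(gruneisen, fluence, concentrations, spectra):
    """p0(r, lambda) following Eq. (1).

    gruneisen:      (ny, nx)        Gruneisen parameter map
    fluence:        (nwl, ny, nx)   light fluence per wavelength
    concentrations: (nc, ny, nx)    endmember concentration maps
    spectra:        (nc, nwl)       endmember absorption spectra
    """
    mu_a = np.einsum("cyx,cw->wyx", concentrations, spectra)   # absorption coefficient
    return gruneisen[None, :, :] * fluence * mu_a              # heating function H = p0

# Toy example: two endmembers, three wavelengths, a 64 x 64 grid, uniform fluence
ny = nx = 64
rng = np.random.default_rng(0)
p0 = initial_pressure(np.full((ny, nx), 0.2),
                      np.ones((3, ny, nx)),
                      rng.random((2, ny, nx)),
                      rng.random((2, 3)))
print(p0.shape)   # (3, 64, 64)
```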
A general representation of the reconstruction process in discrete form is given by the optimization problem $$\vec{I} = \mathop {\text{argmin }}\limits_{{\vec{I}}} \left\| {M\vec{I} - \vec{d}} \right\| + \lambda \left\| {R\left( {\vec{I},\vec{d}} \right)} \right\|$$ where \(\vec{I}\) represents the reconstructed image, \(\vec{d}\) the acquired data, \(M\) the forward model mapping \(\vec{I}\) to \(\vec{d}\), \(R\) a regularization function which is a function of \(\vec{I}\) and/or \(\vec{d}\), and \(\lambda\) a regularization parameter which adjusts the influence of the regularization term in the value of the objective function. Reconstruction may be performed using any norm, though the \(L_{2}\) Euclidean norm is most commonly used. A plethora of pre- and post-processing approaches may be added to the analysis chain; reconstruction may be performed on the individual images, or jointly among several multispectral images simultaneously. Unmixing may be performed prior to, or after, reconstruction, and both reconstruction and unmixing may themselves be combined into a single operation to yield unmixed images directly from multispectral ultrasound data. It is important that analyses be transparent, verifiable, and validatable. Many approaches may give 'suitable' results in a qualitative sense, but applying further quantitative analysis to these less than quantitative intermediate results can lead to spurious analyses and interpretation. It is thus key that assumptions be explicit for any analysis, and that the analytical provenance for a given analyzed datum be recorded. As a salient example, bandpass filtering of the recorded ultrasound signal and deconvolution of the transducer impulse response are common preprocessing steps36, but many deconvolution methods require an assumption of white noise, which is violated if bandpass filtering is performed prior to deconvolution. It is also important that a given analysis be repeatable and consistent across datasets; indeed, if one has a dataset that benefitted from a particular analytical chain, then one would like to easily apply the same processing parameters to other datasets under similar acquisition conditions. However, given the diversity of possible acquisition settings, an analysis must be sufficiently flexible to accommodate the details of each particular dataset, such as the order of wavelengths sampled. Numerous tools are available to address individual components of this process; K-wave is an excellent toolbox developed over the past several years by Treeby and Cox, largely intended to simulate pressure fields in ultrasound and photoacoustic imaging contexts37. Toast++ and NIRFAST are toolboxes to simulate light transport efficiently38,39, while MCML addresses light transport from a Monte-Carlo perspective40. FIELD-II simulates acoustic sensitivity fields41,42. Given the existence of such tools, it is key that they can be effectively leveraged into the new RAFT framework. Lastly, for any amount of computational power or configurability, it is critical that the use of such tools be straightforward and consistent, and that any changes made to a codebase do not interfere with established analyses or reduce the overall quality of results. It is therefore key that there be means by which the framework is tested and updated automatically, or at least to a level of convenience enabling update and maintenance. 
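Before turning to the toolbox itself, the generic problem in Eq. (3) can be illustrated in a few lines using the L2 norm and the simplest regularizer, R(I) = I (Tikhonov regularization). This is a Python/SciPy sketch of the general formulation only; the RAFT's own solvers, models, and regularization options are implemented in MATLAB and are not reproduced here, and the random matrix below stands in for a real photoacoustic forward model.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def reconstruct(M, d, lam=1e-2):
    """Solve argmin_I ||M I - d||_2^2 + lam * ||I||_2^2 (Eq. 3 with R(I) = I).

    M can be a dense array, a sparse matrix, or a SciPy LinearOperator wrapping
    a matrix-free forward model; lsqr's `damp` argument supplies sqrt(lam).
    """
    return lsqr(M, d, damp=np.sqrt(lam))[0]

# Toy example: a random "forward model" mapping a 32 x 32 image to 2000 samples
rng = np.random.default_rng(0)
M = rng.standard_normal((2000, 32 * 32))
truth = rng.random(32 * 32)
d = M @ truth + 0.01 * rng.standard_normal(2000)       # noisy synthetic data
image = reconstruct(M, d).reshape(32, 32)
```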
We address several of these problems and believe that this new tool will provide a foundation for the future development of photoacoustic imaging, and that further development of the RAFT will continue to expand its capabilities.

MSOT-RAFT structure

RAFT operates in a data-driven manner, directed by a configuration file which describes the parameters and methods used to perform a given analysis, and which acts as a record of how a given dataset is processed. Defaults are provided to enable immediate usage, but users may modify these defaults according to their preferences. For settings which are explicitly dependent on metadata, such as the number of samples acquired or the location of transducers, a metadata population step loads such information into a standard form, thereby generalizing the pipeline's application beyond a specific manufacturer's technology. Multiple extensions of the processing pipeline to different photoacoustic data are possible through modification with different loaders.

Data frames are loaded using a memory-mapped data interface, allowing large datasets to be handled without loading the entire dataset into memory at one time. This enables the processing to be performed even on workstations with minimal available memory. The action is handled by a loader, which is instantiated using the metadata information associated with the MSOT dataset, giving it a well-defined mapping between a single scalar index and the corresponding data frame. Each raw data frame is associated with acquisition metadata, such as the temperature of the water bath or laser energy, and is output to the next processing step.

Processing proceeds via the transformation of data frames, which are arrays of data associated with a particular, well-defined coordinate system. An example is the data frame acquired from a single laser pulse and the resulting acoustic acquisition. Data frames may also have additional, contextual data, such as the time of acquisition or excitation wavelength. Figure 1 illustrates a commonly-used processing topology, with an overall effect of transforming a series of single-wavelength photoacoustic data frames into a series of multi-component image frames.

Figure 1: Example pipeline structure. A stream of individual single-wavelength photoacoustic data frames is transformed through a cascade of processing actions to yield a series of spectrally unmixed images at each point in time. The pipeline takes advantage of the known acquisition geometry of the system to perform reconstruction, and user-defined spectral endmembers to perform unmixing. Figure created using a combination of MATLAB and Microsoft PowerPoint.

Coordinates explicitly describe the structure of the data contained within data frames. Coordinates may be vectors of scalars or vectors of vectors. An example of the former is the use of two scalar coordinates to describe the X and Y position of a given pixel vertex (Fig. 2a, x and y coordinates), while an example of the latter is an index of transducers, each of which has an associated X,Y position (Fig. 2c, \(\vec{\user2{s}}\) coordinate). Transformations between these spaces are effected by the forward model \(M\) (Fig. 2b) and the inverse solution operator given in Eq. 3 (Fig. 2d).

Figure 2: Image-data mappings. An image (a) consisting of some explicit values, is organized according to the coordinate system \({\mathcal{I}}\). (b) Through the transformation effected by the forward model operator \(M\), one determines (c) the photoacoustic data in coordinate system \({\mathcal{D}}\). In practice, one acquires the data and (d) seeks to reconstruct the corresponding image through the action of some reconstruction operator \(R\). Iterative schemes are often favored for the reconstruction process, and so \(R\) is implicitly dependent on the operator \(M\). Figure created using a combination of MATLAB and Microsoft PowerPoint.
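To make the loader abstraction described above concrete, here is a small Python/NumPy sketch of a memory-mapped frame loader. The binary layout (float32 frames stored contiguously as [n_frames, n_transducers, n_samples]) and the JSON metadata fields are assumptions for illustration; they are not the vendor format or the MATLAB classes used by the RAFT.

```python
import json
import numpy as np

class FrameLoader:
    """Map a scalar frame index to one raw acquisition without reading the whole file."""

    def __init__(self, binary_path, metadata_path):
        with open(metadata_path) as fh:
            self.meta = json.load(fh)          # e.g. wavelengths, laser energies, timestamps
        shape = (self.meta["n_frames"],
                 self.meta["n_transducers"],
                 self.meta["n_samples"])
        # memory-mapped view: frames are paged in only when indexed
        self._data = np.memmap(binary_path, dtype=np.float32, mode="r", shape=shape)

    def __len__(self):
        return self.meta["n_frames"]

    def __getitem__(self, idx):
        frame = np.array(self._data[idx])      # copy just this frame into memory
        context = {
            "frame_index": idx,
            "wavelength_nm": self.meta["wavelengths"][idx % len(self.meta["wavelengths"])],
        }
        return frame, context
```

A downstream processing step can then iterate over such a loader, receiving one (frame, context) pair at a time rather than the full dataset.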
In practice, one acquires the data and (d) seeks to reconstruct the corresponding image through the action of some reconstruction operator \(R\). Iterative schemes are often favored for the reconstruction process, and so \(R\) is implicitly dependent on the operator \(M\). Figure created using a combination of MATLAB and Microsoft PowerPoint. Some processing methods require that the procedure maintain memory of its state, e.g., in the use of recursive filters or online processing. We therefore implemented the filters using the MATLAB System Object interface, which provides a convenient abstraction to describe such operator mappings, where the transformation may have some time-dependent internal structure. Other processing methods benefit from the assumption that each data frame within a dataset may be treated independently, such as in the case of reconstruction, and the use of an object-oriented design ensures that the processing of such datasets can be effectively parallelized. The most up-to-date version of the RAFT is available at https://doi.org/10.5281/zenodo.4658279, which may have been updated since the time of writing. The RAFT is designed in a highly polymorphic manner, allowing different methods to be applied to each step, and for the overall pipeline topology to change. As an example, one can perform multispectral state estimation on the raw data prior to reconstruction, in contrast to the topology shown in Fig. 1, where the multispectral state estimation occurs after reconstruction. For succinctness, we will only be illustrating the pipeline topology shown in Fig. 1. Following its loading into memory and calibration for laser energy variations, each raw data frame is preconditioned by subtracting the mean of each transducer's sampled time course, deconvolving the transducer impulse response using Wiener deconvolution43, bandpass filtering the signal, correcting for wavelength-dependent water absorption with the assumption of Beer's law attenuation44, and interpolating the data frame into the coordinate system expected by the reconstruction solver. Reconstruction proceeds by assuming the independence of each data frame, and is thus parallelized across an entire dataset through the use of a number of distributed workers, each performing the same processing on a distinct region of the overall dataset. During initialization, the reconstruction system creates a model operator, which is used during operation to reconstruct each frame into the target image coordinate system. Each frame is then written to disk until processing completes. Spectral unmixing is accomplished by providing a stream of single-wavelength data frames to a multispectral state filter, which estimates the true multispectral image at each point in time45. This multispectral image estimate is then provided to another solver system which inverts the mixing model, derived from the known wavelength space and the assumed endmembers present. To readily accommodate future methods, we established a consistent input configuration for processing steps. Reconstruction systems are constructed by pairing an inverse solver to a forward model: The forward model represents the mapping from some input space (here, an image) to some output space (here, a frame of photoacoustic data), while the inverse solver takes the model as an argument and attempts to reduce some objective function subject to some input arguments. 
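As a concrete illustration of the loading and preconditioning steps described above, the sketch below memory-maps a stand-in raw file and applies mean subtraction, a simple frequency-domain Wiener deconvolution, and zero-phase bandpass filtering; the binary layout, sampling rate, impulse response, and noise-to-signal ratio are assumptions made for the example, not the vendor file format or the RAFT defaults.

```matlab
% Hedged sketch combining memory-mapped frame access with a preconditioning chain.
nSamples = 2030; nTransducers = 256; nFrames = 20; fs = 4e7;   % assumed metadata (40 MS/s)

% Write a stand-in raw file so the example is self-contained.
raw = int16(randi([-2^14, 2^14], nSamples, nTransducers, nFrames));
fid = fopen('example_scan.bin', 'w'); fwrite(fid, raw, 'int16'); fclose(fid);

% Memory-mapped loader: a scalar index k maps to one data frame without reading the whole file.
m = memmapfile('example_scan.bin', ...
    'Format', {'int16', [nSamples nTransducers], 'frame'}, 'Repeat', nFrames);
k = 7;
frame = double(m.Data(k).frame);                 % samples x transducers

% Preconditioning: mean subtraction, Wiener deconvolution of an assumed impulse
% response, and zero-phase bandpass filtering (50 kHz - 7 MHz, as stated later in the text).
frame = frame - mean(frame, 1);
t = (0:nSamples-1)'/fs;
h = exp(-((t - 1e-6)/5e-8).^2);                  % assumed transducer impulse response
H = fft(h);  nsr = 1e-2;                         % assumed noise-to-signal power ratio
frame = real(ifft((conj(H) ./ (abs(H).^2 + nsr)) .* fft(frame)));
[b, a] = butter(4, [50e3 7e6]/(fs/2), 'bandpass');
frame = filtfilt(b, a, frame);
```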
Models have a standardized initialization signature, requiring an input coordinate system, an output coordinate system, and a set of model-specific parameters, which, in the case of MSOT, necessarily includes the speed of sound of the medium.
Pipeline parallelization
MSOT data processing is computationally intensive, and the associated large problem sizes incur substantial processing time. In biological research, where cohorts of animals may be assessed multiple times over the course of a study, this processing time can result in prohibitively long experimental iterations, hindering effective development of methods. If there is no co-dependence among individual acquisitions, it is possible to parallelize computationally intensive steps, and particularly desirable to do so when such steps require a long time to process46,47. We therefore implemented the toolbox using MATLAB's object-oriented functionalities to enable one processing system to be copied among an arbitrary number of parallel workers, and to demonstrate the benefits of parallelization in the case of reconstruction. When many datasets are to be processed using the same configuration, it is desirable to parallelize the processing at the scale of datasets. To this end, we implemented a Nextflow wrapper around the toolbox to illustrate a possible scenario of deploying the toolbox on a computational cluster. With a cluster scheduler, an arbitrary number of datasets can be processed simultaneously, enabling substantial horizontal scalability.
Comparison of processing methods
There is a practically infinite number of combinations and permutations of different reconstruction methods, preconditioning steps, solver settings, cost functions, and analyses, resulting in a highly complex optimization problem. By providing means to generate arbitrary amounts of test data in a parametric fashion, and by describing processing pipelines using well-defined recipe files, we enable optimization of the complex parametric landscape describing all possible processing pipelines. We illustrate the use of the pipeline to assess the effects of different processing approaches in providing an end analysis. Results were processed using MATLAB 2017b (MathWorks, Natick, MA), though we include continuous integration testing intended to provide consistency between versions. All figures were created using a combination of MATLAB and Microsoft PowerPoint. All animal work was conducted under animal protocol (APN #2018-102344-C) approved by the UT Southwestern Institutional Animal Care and Use Committee. All animal work was conducted in accordance with the UT Southwestern Institutional Animal Care and Use Committee guidelines as well as all superseding federal guidelines and in conformance with ARRIVE guidelines.
We verified the ability of the pipeline to perform reconstructions through the use of several testing schemes. First, we implemented analytical data generation, both single-wavelength and multispectral, in order to test the numerical properties of models and reconstructions. Data were generated consisting of random paraboloid absorbers across a field of view, along with the corresponding photoacoustic data, for a variety of pixel resolutions. For each model, 50 random images were generated with random numbers of sources, random sizes, and random locations at each of the chosen resolutions, and the correspondence of each model's forward data to the analytical data was quantified.
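The following is a hedged sketch of one way to generate a random paraboloid-absorber test image of the kind described above; it does not enforce the non-overlap constraint used for the analytical comparison, and the parameter ranges are purely illustrative.

```matlab
% Hedged sketch: random truncated-paraboloid absorbers on a normalized field of view.
N = 200;                                    % pixels per side
[x, y] = meshgrid(linspace(-1, 1, N));      % normalized field of view
img = zeros(N);
nSrc = randi([3 8]);                        % random number of sources
for s = 1:nSrc
    x0 = 1.6*rand - 0.8;  y0 = 1.6*rand - 0.8;    % random center inside the FOV
    R  = 0.05 + 0.15*rand;                        % random radius
    A  = 0.5 + rand;                              % random peak absorption
    prof = A*(1 - ((x - x0).^2 + (y - y0).^2)/R^2);
    img = img + max(prof, 0);                     % truncated paraboloid profile
end
imagesc(img); axis image; colorbar;               % inspect the phantom
```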
The assumption of point detectors allows one to find the analytical signal expected at a given sampling location, and the assumption of linearity allows one to calculate the total signal from several non-overlapping sources. This provides validation of imaging models, by comparing the known image of sources against the model output. We use this process to demonstrate the potential use of the RAFT as a process optimization tool, informing selections of methods and parameters under different use-cases. To test the performance of the different models and solvers at different imaging resolutions, we generated numeric data of paraboloid absorbers. The span of each paraboloid was constrained to lie in the field of view of the reconstructed image. Given a known set of paraboloid parameters, it is possible to construct the corresponding projection of the paraboloids onto the image space and the data space. Given the known source image, we compared the output of each model to the ground truth data. To compare solvers, given the output data, we compared the reconstructed image against the known ground truth image, creating an error image (Eq. 4). We quantified the mean bias (Eq. 5), average L1 (Eq. 6) and L2 norms (Eq. 7), scaled by the number of pixels in an image frame or reconstruction: $$e\left( {x,y} \right) = I_{known} \left( {x,y} \right) - I_{recon} \left( {x,y} \right)$$ $$bias = \frac{{\mathop \sum \nolimits_{{y \in {\mathcal{Y}}}} \mathop \sum \nolimits_{{x \in {\mathcal{X}}}} e\left( {x,y} \right)}}{{N_{x} N_{y} }}$$ $$\left| {\left| {\vec{e}} \right|} \right|_{{L_{1} }} = \frac{{\mathop \sum \nolimits_{{y \in {\mathcal{Y}}}} \mathop \sum \nolimits_{{x \in {\mathcal{X}}}} \left| {e\left( {x,y} \right)} \right|}}{{N_{x} N_{y} }}$$ $$\left| {\left| {\vec{e}} \right|} \right|_{{L_{2} }} = \frac{{\sqrt {\mathop \sum \nolimits_{{y \in {\mathcal{Y}}}} \mathop \sum \nolimits_{{x \in {\mathcal{X}}}} e\left( {x,y} \right)^{2} } }}{{N_{x} N_{y} }}$$ The bias of the reconstruction signifies the tendency of the reconstruction process to over- or under-estimate the pixel values of the reconstructed image—bias values close to 0 indicate that the reconstructed values are on average equal to the true values. The L1 and L2 norms reflect the error in the reconstruction—lower values of each indicate that the solution converges to a more precise estimate of the true image values. We additionally quantified the Structural Similarity Index (SSIM)48, a measure of image similarity, as a scale-invariant figure of merit: $$SSIM_{{x\hat{x}}} = \frac{{\left( {2\mu_{x} \mu_{{\hat{x}}} + C_{1} } \right)\left( {2\sigma_{{x\hat{x}}} + C_{2} } \right)}}{{\left( {\mu_{x}^{2} + \mu_{{\hat{x}}}^{2} + C_{1} } \right)\left( {\sigma_{x}^{2} + \sigma_{{\hat{x}}}^{2} + C_{2} } \right)}}$$ The average norms and the SSIM were calculated for the error images for each generated phantom image, and this process was repeated N = 50 times at each resolution tested (\(N_{x} = N_{y} \in \left[ {30, 50, 100, 150, 200, 250, 300, 350, 400} \right]\)). 
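A minimal sketch of computing the error metrics of Eqs. (4)-(7) and SSIM for a reconstruction against its ground truth (the arrays are stand-ins; ssim requires the Image Processing Toolbox):

```matlab
% Error metrics of Eqs. (4)-(7) plus the Structural Similarity Index.
I_known = rand(200);                       % stand-in ground truth image
I_recon = I_known + 0.05*randn(200);       % stand-in reconstruction
[Ny, Nx] = size(I_known);

e     = I_known - I_recon;                 % error image, Eq. (4)
bias  = sum(e(:)) / (Nx*Ny);               % mean bias, Eq. (5)
L1    = sum(abs(e(:))) / (Nx*Ny);          % scaled L1 norm, Eq. (6)
L2    = sqrt(sum(e(:).^2)) / (Nx*Ny);      % scaled L2 norm, Eq. (7)
ssval = ssim(I_recon, I_known);            % scale-invariant figure of merit
```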
For the solver comparisons, we additionally calculated the number of negative pixels in each image as well as the relative residual at convergence for each method, with the relative residual given by: $$r_{rel} = \frac{{\left\| {M\vec{I} - \vec{d}} \right\|}}{{\left\| {\vec{d}} \right\|}}$$
To demonstrate application to biological data, we analyzed two preclinical imaging scenarios: The first (Dataset A) was a dynamic observation of a gas challenge followed by administration of a vascular disrupting agent (VDA). A male NOD-SCID mouse (Envigo) with a human PC3 prostate tumor xenograft implanted subcutaneously in the right aspect of the back was subjected to a gas breathing challenge, while continuously anesthetized with 2% isoflurane. Prior to imaging, the animal was shaved and depilated around the imaging region to avoid optical or acoustic interference from fur. Gas flow was maintained at 2 L/min throughout the imaging session and the mouse was allowed to equilibrate in the imaging chamber for at least 10 min before measurements commenced. Initial air breathing was switched to oxygen at 8 min, back to air at 16 min, and again to oxygen at 24 min. At 34 min, the animal was given an intraperitoneal injection of 120 mg/kg combretastatin A-4 phosphate (CA4P) in situ49,50,51, and observed for a further 60 min. Imaging proceeded by sampling the wavelengths [715, 730, 760, 800, 830, 850] nm, with each wavelength oversampled 6 times, but not averaged. Overall, 52,091 frames of data were acquired. The second scenario (Dataset B) examined a single gas challenge, wherein a female nude mouse (Envigo) was implanted with a human MDA-MB-231 breast tumor xenograft in the right dorsal aspect of the lower mammary fat pad. The gas challenge consisted of 11 min breathing air, 8 min breathing oxygen, followed by 10 min breathing air. Imaging proceeded by sampling the wavelengths [715, 730, 760, 800, 830, 850] nm, with each wavelength oversampled 2 times but not averaged. Overall, 11,184 frames were acquired.
We tested the scalability of the pipeline by performing reconstruction of two experimentally-acquired datasets of 52,091 frames (Dataset A) and 989 frames (Dataset C, acquired during a tissue-mimicking phantom experiment), using two distinct indirect methods and one direct method. Reconstruction was performed using either the Universal Backprojection (BP) algorithm21, chosen for its intrinsic universality, or the closely related direct interpolated model matrix inversion28 (dIMMI) or curve-driven model matrix inversion26 (CDMMI) models. The two models differ in how they discretize the photoacoustic imaging equations. The dIMMI model uses a piecewise-linear discretization with a defined number of points sampled along the wavefront, and uses bilinear interpolation to assign weights to nearby pixels. CDMMI, in contrast, assumes a spherical wavefront and exactly calculates the arc-length of the wavefront within each pixel. The solvers used for reconstruction were either MATLAB's built-in LSQR function with default parameters, or the non-negative accelerated projected conjugate gradient (nnAPCG) method52. The total processing time for reconstructing all frames of Dataset A was determined using \(N = \left[ {4,8,12,16,20,24} \right]\) distributed workers, while \(N = \left[ {1,2, \ldots ,24} \right]\) distributed workers were applied to Dataset C.
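A hedged sketch of this kind of scaling measurement is given below, timing frame-parallel LSQR reconstructions for several pool sizes; the Parallel Computing Toolbox is required, the model, data, and worker counts are stand-ins, and real scaling also depends on I/O and cluster overhead.

```matlab
% Hedged sketch of measuring reconstruction time against worker count.
nPix = 64*64; nSamp = 128*64; nFrames = 96;
M = sprandn(nSamp, nPix, 1e-3);                    % shared stand-in forward model
frames = randn(nSamp, nFrames);                    % stand-in preconditioned data frames
workerCounts = [1 2 4 8];
elapsed = zeros(size(workerCounts));

for w = 1:numel(workerCounts)
    delete(gcp('nocreate'));                       % reset any existing pool
    parpool(workerCounts(w));
    tic
    parfor k = 1:nFrames
        r = lsqr(M, frames(:, k), 1e-6, 50);       %#ok<NASGU> frames are independent
    end
    elapsed(w) = toc;
end
speedup = elapsed(1) ./ elapsed;                   % relative speed-up vs. a single worker
```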
We used the RAFT to execute two distinct processing pathways on each biological dataset, using the package's default parameter settings except where otherwise specified: The first, so-called "unconstrained" method, used the dIMMI model and an LSQR solution of the imaging equations, and a sliding-window multispectral state estimation filter to produce a complete multispectral image at each point in time. Unmixing was then performed using the pseudoinverse of the mixing equations:
$$I\left( {\vec{r},\vec{\lambda }} \right) = \begin{pmatrix} \vec{I}_{\lambda_{1}}\left( \vec{r} \right) \\ \vdots \\ \vec{I}_{\lambda_{N_{\lambda}}}\left( \vec{r} \right) \end{pmatrix} = \begin{pmatrix} I_{\lambda_{1},r_{1}} & \cdots & I_{\lambda_{1},r_{N_{r}}} \\ \vdots & \ddots & \vdots \\ I_{\lambda_{N_{\lambda}},r_{1}} & \cdots & I_{\lambda_{N_{\lambda}},r_{N_{r}}} \end{pmatrix} \approx \mu_{a}\left( {\vec{r},\vec{\lambda }} \right) = \sum_{i = 1}^{N_{c}} C_{i}\left( \vec{r} \right)\varepsilon_{i}\left( \vec{\lambda } \right) = \mathrm{E}\vec{C} = \begin{pmatrix} \varepsilon_{\lambda_{1},C_{1}} & \cdots & \varepsilon_{\lambda_{1},C_{N_{c}}} \\ \vdots & \ddots & \vdots \\ \varepsilon_{\lambda_{N_{\lambda}},C_{1}} & \cdots & \varepsilon_{\lambda_{N_{\lambda}},C_{N_{c}}} \end{pmatrix}\begin{pmatrix} c_{C_{1},r_{1}} & \cdots & c_{C_{1},r_{N_{r}}} \\ \vdots & \ddots & \vdots \\ c_{C_{N_{c}},r_{1}} & \cdots & c_{C_{N_{c}},r_{N_{r}}} \end{pmatrix}$$
$$\hat{C}\left( {\vec{r},\vec{c}} \right) \approx \mathrm{E}^{+}\vec{I}\left( {\vec{r},\vec{\lambda }} \right)$$
Here, \(\vec{I}_{\lambda_{j}}\left( \vec{r} \right)\) represents the spectral image at each wavelength \(\lambda_{j}\) and each pixel location \(\vec{r}\). Variations in fluence are assumed to be negligible, and so the multispectral image is assumed to be approximately equivalent to the actual absorption image \(\mu_{a}\). The mixing matrix \(\mathrm{E}\) is constructed by taking the molar absorption coefficients of each endmember \(C_{i}\) at each wavelength \(\lambda_{j}\). The unmixed concentration image is thus derived by multiplying \(I(\vec{r},\vec{\lambda })\) on the left by the pseudoinverse \(\mathrm{E}^{+}\), itself calculated by using MATLAB's pinv function. In this work, the endmembers were assumed to be only oxyhemoglobin and deoxyhemoglobin, with values derived from the literature53. The second, so-called "constrained" approach, used the CDMMI method and an nnAPCG solution of the imaging equations to provide a non-negativity constraint, and an \(\alpha \beta\) Kalata filter45 for the multispectral state estimation at each point in time to provide a kinematic constraint on the time-evolution of the signal. Unmixing was then performed using the nnAPCG solver applied to the mixing equations. The second imaging scenario was additionally processed using a second constrained approach, using an \(\alpha\) Kalata filter. The \(\alpha\) filter is very effective at reducing noise over time, but is unable to reliably track rapid dynamic changes in the underlying signal, causing a lagged response. The \(\alpha \beta\) filter, by contrast, has lower noise-suppression properties but is much more able to follow signal dynamics.
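To illustrate the α/αβ trade-off discussed here, the following is a generic alpha-beta tracking-filter sketch applied to one pixel's noisy time course; the Kalata filter referenced in the text derives its gains from a tracking index, whereas the gains, frame interval, and signal below are fixed, made-up values for illustration only.

```matlab
% Generic alpha-beta filter sketch on a single pixel's time course.
dt = 0.1;                                            % assumed frame interval, s
tvec = (0:dt:60)';
truth = 0.5 + 0.2*(tvec > 20) - 0.2*(tvec > 40);     % step changes, e.g. a gas challenge
z = truth + 0.05*randn(size(tvec));                  % noisy per-frame estimates

alpha = 0.3; beta = 0.05;                            % illustrative gains
x = z(1); v = 0;                                     % state: value and rate
xs = zeros(size(z));
for k = 1:numel(z)
    xp = x + dt*v;                                   % predict
    r  = z(k) - xp;                                  % innovation
    x  = xp + alpha*r;                               % update value
    v  = v + (beta/dt)*r;                            % update rate (beta = 0 recovers the alpha filter)
    xs(k) = x;
end
plot(tvec, z, '.', tvec, xs, '-', tvec, truth, '--');
legend('noisy input', '\alpha\beta filtered', 'truth');
```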
This is in essence due to the fact that the \(\alpha\) filter assumes the underlying signal is static. All approaches were preconditioned by first subtracting the mean from each transducer's pressure data, Wiener deconvolution of the impulse response function, bandpass filtering between 50 kHz and 7 MHz, and correction by the water attenuation coefficient for a given frame's acquisition wavelength, assuming a general path length of 3 cm. All images were reconstructed at 200 × 200 resolution and a 2.5 cm field of view unless otherwise specified. The effects of the combination of these steps can be seen in Supplementary Fig. 1. We assessed the variation in quantitation by several approaches. For Dataset A, we quantified the indicator function \(H^{ - } \left( x \right)\) of negative pixels throughout, i.e., the number of times each pixel in the reconstructed images attained a negative value throughout the imaging session. The occurrence of negative pixels creates interpretation difficulties, and it is desirable to have few or no negative pixels in a dataset. The unmixed values of hemoglobin ([Hb]MSOT), oxyhemoglobin ([HbO2]MSOT), total hemoglobin ([HbTot]MSOT), and oxygen saturation (SO2MSOT) were compared between the two methods using a randomly-sampled binned Bland–Altman analysis with 100 × 100 bins, randomly sampling 10% of all pixels throughout the image time course. In Dataset A, the significance of the difference in each unmixed parameter before and after drug was quantified, using a two-population T-test and N = 100 frames between [19900:20000] frames and [51900:52000], corresponding to 10 s immediately before administration of the drug and 10 s immediately before the end of the imaging session. The T-score itself was used as the figure of merit. We additionally quantified the relative effect of the CA4P response by using the initial gas challenge as a measure of scale, relating the Cohen's D of the CA4P response to the Cohen's D of the transition from oxygen-air at 16 min. This was done based on the presumption that the initial gas challenge provides indications of the patency of vasculature, which should be related to the response to CA4P. Details of this analysis are presented in Supplementary Information. The MDA-MB-231 gas challenge was assumed to have a rectangular input pO2 waveform. We quantified the centered correlation of each pixel in each channel for each method against this known input function, and calculated the mean and variance of the correlation within tumor, spine, and background ROIs, as well as an ROI covering the whole animal cross-section. Statistical significance of differences between each method's correlation coefficient was quantified using Fisher's Z transformation on the raw correlation coefficients and a two-population Z-test. Significance was set at \(\alpha = 0.01\), while strong significance was set at \(\alpha = 0.0001\). We additionally tested the ability of each of the processing approaches (Unconstrained, Constrained + \(\alpha \beta\), Constrained + \(\alpha\)) to provide results which were amenable to further analysis, in this case fitting a 7-parameter exponential model (Supplementary Methods) to the [HbO2]MSOT time course in each pixel for each method. Computation was performed using the UT Southwestern BioHPC computational resource; timing was calculated using computational nodes with between 256 and 384 GB of RAM and 24 parallel workers. 
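A hedged sketch of the per-pixel quantitation just described is given below, comparing unconstrained (pseudoinverse) and non-negative unmixing for a single pixel's time course and then applying the negative-value indicator count and a two-population t-test; the endmember spectra and frame ranges are made-up stand-ins, and ttest2 requires the Statistics and Machine Learning Toolbox.

```matlab
% Hedged sketch: unconstrained vs. non-negative unmixing, negative-pixel counting, t-test.
E = [1.36 0.31; 1.08 0.39; 0.67 0.59; 0.45 0.82; 0.43 0.97; 0.41 1.06];  % assumed [Hb HbO2] spectra
C_true = [0.3; 1.1];                                   % stand-in concentrations
nFrames = 200;
I_px = E*C_true + 0.15*randn(size(E,1), nFrames);      % noisy multispectral pixel over time

C_u = pinv(E)*I_px;                                    % unconstrained unmixing (may go negative)
C_c = zeros(size(C_u));
for k = 1:nFrames
    C_c(:,k) = lsqnonneg(E, I_px(:,k));                % non-negativity-constrained unmixing
end
negCount = sum(C_u < 0, 2);                            % indicator count of negative values per channel

so2_u = C_u(2,:) ./ sum(C_u, 1);                       % SO2 estimates from each approach
so2_c = C_c(2,:) ./ sum(C_c, 1);
[~, p, ~, stats] = ttest2(so2_c(151:200), so2_c(1:50));% "post" vs. "pre" frames
tScore = stats.tstat;
```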
When comparing CDMMI to dIMMI in terms of accurate prediction of model output data, both methods performed with statistical equivalence at high resolutions (Fig. 3). At lower resolutions, there was a notable increase in the modelling error of the CDMMI method. This may be attributed to the problem of aliasing; the dIMMI method allocates additional sampling points along the integral curve during model calculation, acting as an anti-aliasing filter, which reduces the incidence of high-frequency artifacts exacerbated by the differentiation step. However, the CDMMI model seems to indicate a trend towards statistically improved performance at higher resolutions.
Comparison of dIMMI and CDMMI models. The dIMMI model has statistically superior performance in accurately modelling the photoacoustic imaging process for low image resolutions (a, b) under both the L1 and L2 norms. At high resolutions, CDMMI trended towards superior performance over dIMMI, though it does not consistently achieve statistical separation. (c) Both dIMMI and CDMMI have improved SSIM scores as a function of pixel resolution; CDMMI achieves slightly better performance at high resolutions. Figure created using a combination of MATLAB and Microsoft PowerPoint.
Reconstruction performance contrasts with the modelling results (Figs. 4 and 5); when images were reconstructed using the LSQR method (Fig. 4) and considering the L1 and L2 norms (Fig. 4d, e), the CDMMI method had statistically superior performance over dIMMI (N < 250), though at higher resolutions the difference was negligible. In contrast, when considering the relative residual (Fig. 4b), dIMMI provided superior reconstruction performance. Similar results were seen for the nnAPCG method (Fig. 5), though we note the minimum relative residual, L1 norm, and L2 norm were achieved at a lower resolution (N = 150). This discrepancy between the resolutions of minimum relative residual or norm potentially indicates a need for additional iterations at higher resolutions for the nnAPCG method. We note, however, that the presence of a minimum indicates that the RAFT itself may be optimized through the use of randomly-generated data to provide optimal reconstruction settings.
Comparison of dIMMI- and CDMMI-based reconstructions using the LSQR reconstruction algorithm. (a) Both models have a tendency towards increasing absolute error as a function of image size, though (b) the relative residual for CDMMI is higher than dIMMI for low image resolutions, converging at higher resolutions. (c) Both models incur similar numbers of negative pixels in their reconstructions. These trends are reversed, however, when considering the reconstruction L1 norm (d) and L2 norm (e) for each model, with CDMMI outperforming dIMMI when reconstructing at low resolutions; again, both methods perform similarly at high resolutions. (f) depicts the ground truth, reconstructed, difference, and SSIM images for each of the methods. Both methods appear to perform similarly well, and the introduction of reconstruction artifacts as seen in the SSIM images affects both to comparable degrees. Figure created using a combination of MATLAB and Microsoft PowerPoint.
Comparison of dIMMI- and CDMMI-based reconstructions using the nnAPCG reconstruction algorithm. (a) Both models tend towards increasing absolute error as a function of image size, though (b) the relative residual for CDMMI is higher than dIMMI for low image resolutions, converging at higher resolutions.
(c) Neither model incurs negative pixels due to the non-negative constraint. When considering the reconstruction L1 norm (d) and L2 norm (e) for each model, CDMMI results in a lower norm at low resolutions; again, both methods perform similarly at high resolutions. (f) depicts the ground truth, reconstructed, difference, and SSIM images for each of the methods. Both models underestimate the true image intensity for high-intensity objects when reconstructed using nnAPCG, causing mismatches towards the center of each object. Figure created using a combination of MATLAB and Microsoft PowerPoint. The choice of distinct reconstruction methods has a salient impact on the quality of the reconstruction, as well as the sensitivity of the method to solver parameters. The solution process in MSOT does not necessarily converge after infinite iterations due to ill conditioning of singular values, and so must be halted at an early stage, or a truncated singular value decomposition (T-SVD) be used as a regularization process. This explains the higher variation in the various error metrics shown in Figs. 4 and 5—LSQR internally uses a more numerically stable algorithm and converges more efficiently than the fundamentally nonlinear nnAPCG method. Together with convergence, a similar question applies to analytical quality; although the absolute error of a given reconstruction may be low in a numerical or simulated environment, artifacts may appear in real data due to nonlinearities and modelling inaccuracies, which are not fully captured by the forward models. LSQR, despite its favorable numerical properties, provides no guarantees of converging to a sensible value, and due to the non-local nature of the photoacoustic imaging equations, errors or nonphysical values in one pixel will affect the reliability of values in other pixels. Due to this combination of factors, we recommend the use of the nnAPCG method when possible, as it provides an inherently constrained reconstruction suitable for use in further analyses, particularly regarding spectral unmixing. Indeed, the use of a non-negative analysis constrains the possible SO2MSOT values to the range of [0,1] instead of [\(- \infty ,\infty\)], which may result from various combinations of positive and negative values of oxy- and deoxy-hemoglobin. Figure 6 illustrates differences in reconstructed performance for three individual pixels in distinct regions of a tumor in a mouse, each showing specific cases within a single study where the constrained approach provides benefits. Though the shapes of the time courses of each parameter within each pixel are consistent between methods, it is clear that the constrained method produces less noisy data. This is due in large part to the use of the \(\alpha \beta\) multispectral Kalata filter, which reduces inter-frame noise45. The unconstrained approach (red) also demonstrates the risks of using inappropriate analyses when processing data. The [Hb]MSOT values in the tumor pixel were negative for a large portion of the experiment (Fig. 6, Pixel 1), which resulted in [SO2]MSOT values exceeding 1. Such pixels can corrupt both spatial and temporal averages, but are not readily compensated; at the same time, the responses of these pixels have resolvable structure, indicating that there is biologically useful data present. The use of the constrained analysis thus provides more complete and more reliable analyses. Comparison of different processing approaches implemented using the MSOT-RAFT for analysis of Dataset A. 
(a) Mean cross-sectional [Hbtot]MSOT image. S: Spine, T: Tumor. (b) Expanded view of tumor periphery, with pixels of interest highlighted; relative size of highlighted squares is larger than a single pixel to improve visibility. The unconstrained approach (red) is substantially noisier and regularly incurs non-physical negative values in the low-signal outer tumor (Pixel 1, [Hb]MSOT), which result in spurious values exceeding 1 for downstream SO2MSOT calculation (Pixel 1, SO2MSOT). Poor SNR as seen in Pixel 2 is managed with an \(\alpha \beta\) filter (black), making the transitions between gases much more conspicuous. The \(\alpha \beta\) filter preserves the dynamical structure of each pixel as well, as seen in the transitions of Pixel 3. Figure created using a combination of MATLAB and Microsoft PowerPoint. As applied to Dataset A, the effects of processing choice are salient: Fig. 7 shows the total number of negative-valued pixels throughout the study. The constrained analysis yields no negative values, while the unconstrained analysis results in negative values in various locations through various channels. Even within the tumor bulk, there are substantial regions where the values of [HbO2]MSOT, [Hb]MSOT, and SO2MSOT attain negative values. The presence of these values would compromise biological inference due to the non-physicality of negative values. Overall differences between the methods for each channel are shown in the Bland–Altman plots (Fig. 8). There is generally good agreement between each, though the constrained method tends to provide lower estimates than the unconstrained method. The presence of an overall diagonal structure reflects the non-negativity of the constrained analysis. Total occurrence of negative pixels in each channel for the constrained and unconstrained analyses on Dataset A. Animal outline shown, S: Spine, T: Tumor. The constrained analysis results in no negative pixels at any point throughout the imaging time course, reflecting the consistent preservation of the non-negative constraint. The unconstrained analysis, by contrast, develops a large number of negative pixels, including numerous regions within the animal. Figure created using a combination of MATLAB and Microsoft PowerPoint. Bland–Altman plots between the constrained and unconstrained analyses on Dataset A. (a) [HbO2]MSOT, (b) [Hb]MSOT, (c) [Hbtot]MSOT, (d) SO2MSOT. There is generally good agreement throughout all channels, though there appears to be a consistently lower estimate of all parameters using the constrained approach. The appearance of diagonal structure in (a–c) is due to the non-negative constraint, while the band of finite width in (d) is the result of the constrained method's SO2 estimates being confined to [0,1]. Figure created using a combination of MATLAB and Microsoft PowerPoint. When examining changes in response to treatment, the constrained approach leads to greater significance when comparing each of the parameter values before and 60 min after administration of CA4P (Fig. 9), allowing the resolution of significant changes even in low-signal areas. Similar results were seen when considering the relative effect of the drug administration normalized by the oxygen-air transition response (Fig. 10). There is a large region of anomalous response in the [Hb]MSOT and [HbO2]MSOT channels, signified by the magenta arrows in Fig. 10, for both the constrained and unconstrained methods. 
These variations are attributable to variations in [Hbtot]MSOT , which may themselves be due to temperature-dependent signal changes in the animal54,55 or systemic blood pressure effects due to CA4P administration56. As a result, SO2MSOT provides a more reliable metric of variation due to its intrinsic calibration against [Hbtot]MSOT. Similarly, SO2MSOT is less sensitive to variations in light fluence, likely due to variations in light spectrum as a function of wavelength being weaker throughout the bulk of the animal than the variations in light intensity. The unconstrained approach results in a greater number of small-scale anomalous responses (Fig. 10, white arrows) due to the non-physical values of various parameters in those pixels. In contrast, the relative effect of the constrained SO2MSOT is much more consistent, showing vessel-like structures in the response within the tumor bulk, with a dramatically reduced occurrence of anomalous responses. T-scores of difference in each parameter before and after CA4P administration in Dataset A. Across all channels, the constrained method results in T scores with substantially increased magnitude, corresponding to greater significance. The large positive region in the upper-left of the [Hb]MSOT, [HbO2]MSOT, and [Hbtot]MSOT is likely due to physiological effects unrelated to the local vascular effects of CA4P or the choice of processing. This effect is normalized in the ratiometric calculation of SO2MSOT. Figure created using a combination of MATLAB and Microsoft PowerPoint. Relative Cohen's d (Supplementary Information) of drug administration calibrated against oxygen-air transition during initial gas challenge in Dataset A. In both constrained and unconstrained [Hb]MSOT and [HbO2]MSOT images, there is a large region of anomalous relative effect (magenta arrows), likely due to changes in measured [Hbtot]MSOT over the course of the experiment. This large region is absent in the SO2MSOT analyses due to the ratiometric calculation. More localized artifacts such as those highlighted in the unconstrained analysis (white arrows) are due to modelling or reconstruction inaccuracies, and so are not compensated by calculating SO2MSOT. Figure created using a combination of MATLAB and Microsoft PowerPoint. Increases in correlation against the known pO2 waveform were seen for both the \(\alpha\) and \(\alpha \beta\) constrained methods when compared to the unconstrained approach (Figs. 11, 12). Though the distributions of correlation were similar (Fig. 11), the constrained methods improved the positive correlation with [HbO2]MSOT and SO2MSOT and the negative correlation with [Hb]MSOT, while having a relatively mild effect on [HbTot]MSOT (Fig. 12). Despite the visual similarities in correlation values between \(\alpha\) and \(\alpha \beta\) filtered timeseries (Fig. 11), the \(\alpha \beta\) filter has the advantage of quickly following dynamic changes, while the \(\alpha\) filter naturally suffers from a lag time after such changes. Nevertheless, the \(\alpha\) filter suppresses noise effectively, leading to the improved correlations shown in Fig. 12. The improvements in correlation were particularly significant in the HbO2 and SO2 channels, possibly reflecting the suitability of these channels for measuring response to gas challenge. The constrained methods were also able to achieve better model fits, as seen in Fig. 13. 
The unconstrained approach provided results which were generally consistent with the constrained approaches, but resulted in many noncausal switching time values in the tumor region. Correlation images against known inhaled O2 time course for Dataset B. U: Unconstrained analysis. C-\(\alpha \beta\): Constrained analysis using \(\alpha \beta\) Kalata filter. C-\(\alpha\): Constrained analysis using \(\alpha\) Kalata filter. The use of the constrained approach results in improved Pearson's correlation (\(\rho\)) with the gas challenge pO2 time course across all channels. Figure created using a combination of MATLAB and Microsoft PowerPoint. Plots of average correlation for distinct ROIs of Dataset B. A general improvement in correlation is seen using either of the constrained approaches, though the superior noise-rejection properties of the \(\alpha\) filter enable it to achieve better overall correlation with the gas challenge pO2 time course. Figure created using a combination of MATLAB and Microsoft PowerPoint. Selected parameters from fitting Dataset B using 7-parameter monoexponential model (Supplementary Information) using three different processing approaches. U: Unconstrained analysis. C-\(\alpha \beta\): Constrained analysis using \(\alpha \beta\) Kalata filter. C-\(\alpha\): Constrained analysis using \(\alpha\) Kalata filter. Although performance of the model fit is similar across much of the imaged area, the unconstrained analysis assigns inaccurate values of switching times to a large region on the interior of the tumor (white arrows). Figure created using a combination of MATLAB and Microsoft PowerPoint. The process of reconstruction benefits from parallelization, enabling faster processing. As shown in Fig. 14, even a few additional logical cores, as is available on most modern processors, provided dramatically improved performance. The model-based approaches generally benefited more from this parallelization; due to the larger proportion of the processing time per-frame taken up by the solution process itself, increased parallelization provides greater benefit to the model-based reconstructions. Diminishing returns with increasing numbers of processors are attributable to network and hard-disk limitations. We note that the scaling performance is comparable for each method, indicating that the relative overhead of parallelization for each method is comparable. Reconstruction using different algorithms for varying numbers of parallel workers. Both Dataset A (left) and Dataset C (right) benefit from parallelization, though both show diminishing returns. When reconstruction time is normalized to the time required for a single worker (left) or 4 workers (right) to execute, it becomes clear that both model-based approaches benefit greatly from parallelization, while backprojection (BP) more rapidly saturates due to the greater proportion of its reconstruction which is non-parallelizable. For large numbers of workers, it becomes evident that system overhead becomes restrictive, as there is a decrease in relative speedup. Note separate axes for dIMMI and CDMMI (left) versus BP (right). Figure created using a combination of MATLAB and Microsoft PowerPoint. The MSOT-RAFT provides a common platform for reconstructing optoacoustic tomography data in a variety of scenarios. The package is open-source and may be scaled to accommodate the computing resources available. 
Configuration of the overall pipeline during preprocessing enables the straightforward usage of default processing approaches, while simultaneously providing the flexibility for more sophisticated modification of the processing path. We note that this publication is a static record, and recommend examining the Zenodo repository for any updates. The MSOT-RAFT is accessible through a variety of interfaces, whether as a MATLAB toolbox or as a compiled executable library usable through other methods, with imminent deployment of Docker images capable of running on Singularity-enabled high performance computing environments. This allows it to be flexibly used in a broad array of contexts, whether running on an individual machine or as a component of a much larger distributed processing pipeline. There are numerous additional factors to consider; light fluence \(\phi \left( {\vec{r},\vec{\lambda }} \right)\) is inhomogeneous throughout the imaging region, owing to the removal of photons via absorption by superficial layers. Fluence is also inhomogeneous through wavelength, due to the generally inhomogeneous distribution of endmembers within superficial layers. These additional considerations necessitate additional processing steps, which could be conveniently added through the modular structure of the framework. The end goal of imaging is the ability to quantitatively resolve the spatial distribution of a variety of parameters, and the reconstruction procedures presented may be augmented for truly quantitative measurements. The reconstructed photoacoustic image is an inversion of a linear map representing the time-distance relationship under the assumption of zero acoustic attenuation and homogenous speed of sound. As others have noted, improved image quality can be attained with spatially-variant time-distance relationships57,58. The determination of the local speed of sound could be performed through Bayesian-type methods58, geometric simplification, or adjunct imaging such as transmission-reflection ultrasound59. Though the MSOT-RAFT is presently developed for the input of photoacoustic imaging data acquired using the iThera MSOT imaging systems, we note that this is a question of input format; additional manufacturers and even experimental systems may generate data, which could be successfully reconstructed after suitable input formatting. We anticipate that the MSOT-RAFT will provide an effective starting point for future developments. In particular, the modular structure of the pipeline, and the end-to-end comparisons of reconstruction quality, enable external optimization of the entire system, so as to achieve optimal performance. Since the RAFT is parameter-driven, it could be tuned using a hyperparameter optimizer such as Spearmint in order to create more effective reconstruction pipelines60,61,62. The resulting optimized pipeline could then be recorded and shared, enabling more rapid dissemination of successful processing motifs. The RAFT is broadly broken down into modular steps, and they do not all need to be performed using the toolbox itself. Indeed, one could use the RAFT for a large portion of the analysis, and inject additional processing to the analytical chain. This provides for extensive future developments, for example adding fluence correction prior to spectral unmixing. We plan to extend the filter interface to allow for the calling of external subroutines, such as efficient GPU kernels, compiled C and C++ functions, and various Python scripts. 
We additionally hope to add support for standard imaging formats such as DICOM or OME-XML, to enable management of data using PACS systems and inclusion in other studies and databases ranging from the experimental to the clinical. We have described the implementation and demonstrated the performance of MSOT-RAFT, an open-source toolbox for processing and reconstructing photoacoustic imaging data, and have demonstrated its use for processing and analyzing photoacoustic imaging data. The most up to date publicly available version of the RAFT, along with examples of usage and test data, can be found at https://doi.org/10.5281/zenodo.4658279 and is available under the MIT license. All data analyzed in this study are available from the corresponding author (RPM) on reasonable request. Beard, P. Biomedical photoacoustic imaging. Interface focus 1, 602–631. https://doi.org/10.1098/rsfs.2011.0028 (2011). Wang, L. V. Photoacoustic Imaging and Spectroscopy. (CRC Press, 2009). Shi, J., Tang, Y. & Yao, J. Advances in super-resolution photoacoustic imaging. Quant. Imaging Med. Surg. 8, 724. https://doi.org/10.21037/qims.2018.09.14 (2018). Vilov, S. et al. Super-resolution photoacoustic and ultrasound imaging with sparse arrays. Sci. Rep. 10, 1–8. https://doi.org/10.1038/s41598-020-61083-2 (2020). Zhang, P., Li, L., Lin, L., Shi, J. & Wang, L. V. In vivo superresolution photoacoustic computed tomography by localization of single dyed droplets. Light: Sci. Appl. 8, 1–9, doi:https://doi.org/10.1038/s41377-019-0147-9 (2019). Hupple, C. W. et al. A light-fluence-independent method for the quantitative analysis of dynamic contrast-enhanced multispectral optoacoustic tomography (DCE MSOT). Photoacoustics 10, 54–64. https://doi.org/10.1016/j.pacs.2018.04.003 (2018). Balasundaram, G. et al. Noninvasive anatomical and functional imaging of orthotopic glioblastoma development and therapy using multispectral optoacoustic tomography. Transl. Oncol. 11, 1251–1258. https://doi.org/10.1016/j.tranon.2018.07.001 (2018). Rich, L. J., Miller, A., Singh, A. K. & Seshadri, M. Photoacoustic imaging as an early biomarker of radio therapeutic efficacy in head and neck cancer. Theranostics 8, 2064. https://doi.org/10.7150/thno.21708 (2018). Hudson, S. V. et al. Targeted noninvasive imaging of EGFR-expressing orthotopic pancreatic cancer using multispectral optoacoustic tomography. Can. Res. 74, 6271–6279. https://doi.org/10.1158/0008-5472.Can-14-1656 (2014). Mallidi, S., Larson, T., Aaron, J., Sokolov, K. & Emelianov, S. Molecular specific optoacoustic imaging with plasmonic nanoparticles. Opt Express 15, 6583–6588. https://doi.org/10.1364/oe.15.006583 (2007). Mallidi, S., Luke, G. P. & Emelianov, S. Photoacoustic imaging in cancer detection, diagnosis, and treatment guidance. Trends Biotechnol 29, 213–221. https://doi.org/10.1016/j.tibtech.2011.01.006 (2011). Buehler, A., Kacprowicz, M., Taruttis, A. & Ntziachristos, V. Real-time handheld multispectral optoacoustic imaging. Opt Lett 38, 1404–1406. https://doi.org/10.1364/OL.38.001404 (2013). Dean-Ben, X. L., Ozbek, A. & Razansky, D. Volumetric real-time tracking of peripheral human vasculature with GPU-accelerated three-dimensional optoacoustic tomography. IEEE Trans Med Imaging 32, 2050–2055. https://doi.org/10.1109/tmi.2013.2272079 (2013). Dean-Ben, X. L. & Razansky, D. Portable spherical array probe for volumetric real-time optoacoustic imaging at centimeter-scale depths. Opt Express 21, 28062–28071.
https://doi.org/10.1364/OE.21.028062 (2013). McNally, L. R. et al. Current and emerging clinical applications of multispectral optoacoustic tomography (MSOT) in oncology. Clin. Cancer Res. 22, 3432. https://doi.org/10.1158/1078-0432.CCR-16-0573 (2016). Diot, G. et al. Multispectral Optoacoustic Tomography (MSOT) of Human Breast Cancer. Clin. Cancer Res. 23, 6912–6922. https://doi.org/10.1158/1078-0432.ccr-16-3200 (2017). Becker, A. et al. Multispectral optoacoustic tomography of the human breast: characterisation of healthy tissue and malignant lesions using a hybrid ultrasound-optoacoustic approach. Eur. Radiol. 28, 602–609. https://doi.org/10.1007/s00330-017-5002-x (2018). Dogan, B. E. et al. Optoacoustic imaging and gray-scale US features of breast cancers: correlation with molecular subtypes. Radiology 292, 564–572. https://doi.org/10.1148/radiol.2019182071 (2019). ANSI Standard. Z136.1–2000, for Safe Use of Lasers. Published by the Laser (2000). Thomas, R. J. et al. A procedure for multiple-pulse maximum permissible exposure determination under the Z136.1–2000 American National Standard for Safe Use of Lasers. J. Laser Appl. 13, 134–140, doi: https://doi.org/10.2351/1.1386796 (2001). Xu, M. & Wang, L. V. Universal back-projection algorithm for photoacoustic computed tomography. Phys Rev E Stat Nonlin Soft Matter Phys 71, 016706. https://doi.org/10.1103/PhysRevE.71.016706 (2005). Lutzweiler, C., Dean-Ben, X. L. & Razansky, D. Expediting model-based optoacoustic reconstructions with tomographic symmetries. Med. Phys. 41, 013302. https://doi.org/10.1118/1.4846055 (2014). Wang, K. & Anastasio, M. A. A simple Fourier transform-based reconstruction formula for photoacoustic computed tomography with a circular or spherical measurement geometry. Phys Med Biol 57, N493-499. https://doi.org/10.1088/0031-9155/57/23/N493 (2012). Dean-Ben, X. L., Buehler, A., Ntziachristos, V. & Razansky, D. Accurate model-based reconstruction algorithm for three-dimensional optoacoustic tomography. IEEE Trans Med Imaging 31, 1922–1928. https://doi.org/10.1109/TMI.2012.2208471 (2012). Ding, L., Dean-Ben, X. L. & Razansky, D. Real-time model-based inversion in cross-sectional optoacoustic tomography. IEEE Trans. Med. Imaging 35, 1883–1891. https://doi.org/10.1109/Tmi.2016.2536779 (2016). Liu, H. et al. Curve-driven-based acoustic inversion for photoacoustic tomography. IEEE Trans. Med. Imaging 35, 2546–2557. https://doi.org/10.1109/TMI.2016.2584120 (2016). Oraevsky, A. A., Deán-Ben, X. L., Wang, L. V., Razansky, D. & Ntziachristos, V. Statistical weighting of model-based optoacoustic reconstruction for minimizing artefacts caused by strong acoustic mismatch. Proc. SPIE 7899, 789930, doi:https://doi.org/10.1117/12.874623 (2011). Rosenthal, A., Razansky, D. & Ntziachristos, V. Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography. IEEE Trans Med Imaging 29, 1275–1285. https://doi.org/10.1109/TMI.2010.2044584 (2010). Zhang, J., Anastasio, M. A., La Riviere, P. J. & Wang, L. V. Effects of different imaging models on least-squares image reconstruction accuracy in photoacoustic tomography. IEEE Trans Med Imaging 28, 1781–1790. https://doi.org/10.1109/TMI.2009.2024082 (2009). Cox, B. T. & Treeby, B. E. Artifact trapping during time reversal photoacoustic imaging for acoustically heterogeneous media. IEEE Trans Med Imaging 29, 387–396. https://doi.org/10.1109/TMI.2009.2032358 (2010). Huang, C., Wang, K., Nie, L., Wang, L. V. & Anastasio, M. A.
Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media. IEEE Trans. Med. Imaging 32, 1097–1110. https://doi.org/10.1109/TMI.2013.2254496 (2013). Xia, J., Yao, J. & Wang, L. V. Photoacoustic tomography: principles and advances. Electromagn. Waves (Cambridge, Mass.) 147, 1, doi:https://doi.org/10.2528/pier14032303 (2014). Cox, B. T., Arridge, S. R. & Beard, P. C. Estimating chromophore distributions from multiwavelength photoacoustic images. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 26, 443–455, doi:https://doi.org/10.1364/josaa.26.000443 (2009). Xu, M. & Wang, L. V. Photoacoustic imaging in biomedicine. Rev. Sci. Instrum. 77, 041101. https://doi.org/10.1063/1.2195024 (2006). Lutzweiler, C. & Razansky, D. Optoacoustic imaging and tomography: reconstruction approaches and outstanding challenges in image performance and quantification. Sensors (Basel) 13, 7345–7384. https://doi.org/10.3390/s130607345 (2013). Caballero, M. A. A., Rosenthal, A., Buehler, A., Razansky, D. & Ntziachristos, V. Optoacoustic determination of spatio-temporal responses of ultrasound sensors. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 60, 1234–1244. https://doi.org/10.1109/TUFFC.2013.2687 (2013). Treeby, B. E. & Cox, B. T. k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields. J. Biomed. Opt. 15, 021314. https://doi.org/10.1117/1.3360308 (2010). Dehghani, H. et al. Near infrared optical tomography using NIRFAST: Algorithm for numerical model and image reconstruction. Commun. Numer. Methods Eng. 25, 711–732 (2009). Schweiger, M. & Arridge, S. R. The Toast++ software suite for forward and inverse modeling in optical tomography. J. Biomed. Opt. 19, 040801. https://doi.org/10.1117/1.JBO.19.4.040801 (2014). Wang, L., Jacques, S. L. & Zheng, L. MCML—Monte Carlo modeling of light transport in multi-layered tissues. Comput. Methods Programs Biomed. 47, 131–146 (1995). Jensen, J. A. & Svendsen, N. B. Calculation of pressure fields from arbitrarily shaped, apodized, and excited ultrasound transducers. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 39, 262–267. https://doi.org/10.1109/58.139123 (1992). Jensen, J. A. In 10th Nordicbaltic Conference on Biomedical Imaging, vol. 4, Supplement 1, Part 1: 351--353. (Citeseer). Lu, T. & Mao, H. In 2009 Symposium on Photonics and Optoelectronics. 1–4 (IEEE). Bigio, I. J. & Fantini, S. Quantitative Biomedical Optics: Theory, Methods, and Applications. (Cambridge University Press, 2016). O'Kelly, D., Guo, Y. & Mason, R. P. Evaluating online filtering algorithms to enhance dynamic multispectral optoacoustic tomography. Photoacoustics 19, 100184. https://doi.org/10.1016/j.pacs.2020.100184 (2020). Hill, M. D. & Marty, M. R. Amdahl's Law in the Multicore Era. Computer 41, 33–38. https://doi.org/10.1109/MC.2008.209 (2008). Amdahl, G. M. In Proceedings of the April 18–20, 1967, Spring Joint Computer Conference. 483–485. https://doi.org/10.1145/1465482.1465560. Zhou, W., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612. https://doi.org/10.1109/TIP.2003.819861 (2004). Nielsen, T. et al. Combretastatin A-4 phosphate affects tumor vessel volume and size distribution as assessed using MRI-based vessel size imaging. Clin Cancer Res. 18, 6469–6477. https://doi.org/10.1158/1078-0432.ccr-12-2014 (2012). Dey, S. et al. 
The vascular disrupting agent combretastatin A-4 phosphate causes prolonged elevation of proteins involved in heme flux and function in resistant tumor cells. Oncotarget 9, 4090. https://doi.org/10.18632/oncotarget.23734 (2018). Tomaszewski, M. R. et al. Oxygen-enhanced and dynamic contrast-enhanced optoacoustic tomography provide surrogate biomarkers of tumor vascular function, hypoxia, and necrosis. Can. Res. 78, 5980–5991. https://doi.org/10.1158/0008-5472.CAN-18-1033 (2018). Ding, L., Luis Dean-Ben, X., Lutzweiler, C., Razansky, D. & Ntziachristos, V. Efficient non-negative constrained model-based inversion in optoacoustic tomography. Phys Med Biol 60, 6733–6750, doi:https://doi.org/10.1088/0031-9155/60/17/6733 (2015). Cheong, W.-F., Prahl, S. A. & Welch, A. J. A review of the optical properties of biological tissues. IEEE J. Quantum Electron. 26, 2166–2185 (1990). Petrova, E., Liopo, A., Oraevsky, A. A. & Ermilov, S. A. Temperature-dependent optoacoustic response and transient through zero Grüneisen parameter in optically contrasted media. Photoacoustics 7, 36–46. https://doi.org/10.1016/j.pacs.2017.06.002 (2017). Shah, J. et al. Photoacoustic imaging and temperature measurement for photothermal cancer therapy. J. Biomed. Opt. 13, 034024. https://doi.org/10.1117/1.2940362 (2008). Busk, M., Bohn, A. B., Skals, M., Wang, T. & Horsman, M. R. Combretastatin-induced hypertension and the consequences for its combination with other therapies. Vascul. Pharmacol. 54, 13–17. https://doi.org/10.1016/j.vph.2010.10.002 (2011). Luís Deán-Ben, X., Ntziachristos, V. & Razansky, D. Effects of Small Variations of Speed of Sound in Optoacoustic Tomographic Imaging. Vol. 41 (2014). Lutzweiler, C., Meier, R. & Razansky, D. Optoacoustic image segmentation based on signal domain analysis. Photoacoustics 3, 151–158. https://doi.org/10.1016/j.pacs.2015.11.002 (2015). Merčep, E., Herraiz, J. L., Deán-Ben, X. L. & Razansky, D. Transmission–reflection optoacoustic ultrasound (TROPUS) computed tomography of small animals. Light: Sci. Appl. 8, 18, doi:https://doi.org/10.1038/s41377-019-0130-5 (2019). Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A. & Talwalkar, A. Hyperband: a novel bandit-based approach to hyperparameter optimization. J. Mach. Learn. Res. 18, 6765–6816 (2017). Hazan, E., Klivans, A. & Yuan, Y. Hyperparameter optimization: a spectral approach. arXiv preprint arXiv:1706.00764 (2017). Bergstra, J. S., Bardenet, R., Bengio, Y. & Kégl, B. In Advances in Neural Information Processing Systems. 2546–2554. We would like to acknowledge Stefan Morscher, Neal Burton, Jacob Tippetts, and Clinton Hupple of iThera Medical GmbH, for extensive assistance in organizing and validating the pipeline. We thank David Trudgian and Daniel Moser for their assistance in integrating the pipeline into the high performance computing environment. The research was supported in part by National Institutes of Health (NIH) Grant 1R01CA244579-01A1, Cancer Prevention and Research Institute of Texas (CPRIT) IIRA Grants RP140285 and RP140399 and the assistance of the Southwestern Small Animal Imaging Resource through the NIH Cancer Center Support Grant 1P30 CA142543. DOK was the recipient of a fellowship administered by the Lyda Hill Department of Bioinformatics. The iThera MSOT was purchased under NIH Grant 1 S10 OD018094-01A1. Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9058, USA Devin O'Kelly, James Campbell III, Jeni L. Gerberich & Ralph P. 
Mason
BioHPC, University of Texas Southwestern Medical Center, Dallas, TX, USA
Devin O'Kelly, Paniz Karbasi & Liqiang Wang
Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
Venkat Malladi & Andrew Jamieson
Devin O'Kelly, James Campbell III, Jeni L. Gerberich, Paniz Karbasi, Venkat Malladi, Andrew Jamieson, Liqiang Wang & Ralph P. Mason
DO conceived of, designed, and implemented the RAFT, including the MATLAB toolbox and Nextflow wrapper. JC III and JG contributed to the implantation, care, and imaging of the animals included in this study. PK provided valuable guidance relating to the design of the RAFT. VM provided substantial technical assistance in implementing the Nextflow wrapper and continuous integration scheme. AJ provided assistance and code relating to the continuous integration component. LW provided technical oversight and key financial support. RPM provided financial support and supervision of the development process.
Correspondence to Ralph P. Mason.
Supplementary Information.
O'Kelly, D., Campbell, J., Gerberich, J.L. et al. A scalable open-source MATLAB toolbox for reconstruction and analysis of multispectral optoacoustic tomography data. Sci Rep 11, 19872 (2021). https://doi.org/10.1038/s41598-021-97726-1
A Method of Coupling Expected Patch Log Likelihood and Guided Filtering for Image De-noising
Shunfeng Wang*, Jiacen Xie*, Yuhui Zheng**, Jin Wang*** and Tao Jiang*
Corresponding Author: Shunfeng Wang* ([email protected])
Shunfeng Wang*, College of Math and Statistics, Nanjing University of Information Science and Technology, Nanjing, China, [email protected]
Jiacen Xie*, College of Math and Statistics, Nanjing University of Information Science and Technology, Nanjing, China, [email protected]
Yuhui Zheng**, Jiangsu Engineering Center of Network Monitoring, College of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China, [email protected]
Jin Wang***, School of Computer Communication Engineering, Changsha University of Science and Technology, Changsha 410004, China, [email protected]
Tao Jiang*, College of Math and Statistics, Nanjing University of Information Science and Technology, Nanjing, China, [email protected]
Received: January 3 2018; Revision received: March 20 2018; Accepted: April 5 2018
Abstract: With the advent of the information society, image restoration technology has aroused considerable interest. Guided image filtering is more effective in suppressing noise in homogeneous regions, but its edge-preserving property is poor. As such, the critical part of guided filtering lies in the selection of the guided image. The result of the Expected Patch Log Likelihood (EPLL) method maintains a good structure, but it is easy to produce the ladder effect in homogeneous areas. According to the complementarity of EPLL with guided filtering, we propose a method of coupling EPLL and guided filtering for image de-noising. The EPLL model is adopted to construct the guided image for the guided filtering, which can provide better structural information for the guided filtering. Meanwhile, with the secondary smoothing of guided image filtering in image homogenization areas, we can improve the noise suppression effect in those areas while reducing the ladder effect brought about by the EPLL. The experimental results show that it not only retains the excellent performance of EPLL, but also produces better visual effects and a higher peak signal-to-noise ratio by adopting the proposed method.
Keywords: Edge Preserving, Expected Patch Log Likelihood, Image De-noising, Guided Filtering
The image is the most direct form by which humans perceive the world, and it has been widely applied in various fields. However, images are unavoidably polluted by noise in the process of acquisition and transmission. Therefore, image de-noising plays an indispensable role in our daily life. In order to remove the noise from images effectively, numerous image de-noising approaches have been presented, such as the traditional Bayes method [1-4] and various regularization methods. Among these, total variation (TV) regularization [5], as a representative regularization de-noising method, has attracted considerable attention because of its low computational complexity and well-understood mathematical behavior. Since the introduction of TV regularization in the context of image processing, many researchers have recently presented various mature algorithms and extended the applications of TV functional [6-12]. With the observation that similar image patches are ubiquitous in the whole image, a non-local type of average filtering has been proposed [13-15]. Since then, a self-similarity-based non-local method has been widely studied for image regularization [16]. Protter et al.
[17] proposed a non-local total variational regularization model. Gao et al. [18] used the Zernike moment to identify the similarity between image patches. A better image restoration effect can be obtained by using these methods. Other de-noising methods based on non-local information include K-SVD [19] and BM3D [20,21], among others. Beyond these, classical filters such as Wiener filtering [22,23] and Kalman filtering [24] remain in use. The principle of Wiener filtering is to recast the problem as minimizing the mean square error between the original and estimated values. The Expected Patch Log Likelihood (EPLL) image restoration technique proposed by Zoran and Weiss [25], who utilized a mixture model to learn an image patch prior, has attracted a lot of attention. As one of the most popular methods in the field of non-local information de-noising, it is characterized by a higher peak signal-to-noise ratio (PSNR) and a better visual effect. Many scholars have been keen to improve it [26-30], but there is still great potential for improvement in terms of the quality of de-noising. For example, when the noise level is high, the de-noising results of EPLL tend to exhibit the ladder effect in smooth regions. Guided image filtering [31] can establish the filter kernel explicitly, and its guided image can be the input image itself or an image related to the input, while the output image is a local linear transformation of the guided image. When using guided filtering to de-noise an image, we usually use the input image as the guided image to smooth and de-noise the image. In experiments, however, it is found that when the noise level is low, guided filtering can remove the noise and keep the edges well. As the noise level increases, details such as the edges and texture of the image are themselves polluted by noise, so a noisy input used as the guided image degrades the de-noising result instead of providing effective guidance information. In summary, we consider that guided image filtering has a better noise suppression effect in homogeneous regions, whereas its edge-preserving property is poor. The key to guided filtering is the selection of the guided image. The result of EPLL de-noising helps achieve a good structure, but it easily causes the ladder effect in homogeneous areas. For this reason, we propose a method of coupling the EPLL and guided filtering for image de-noising that incorporates these two methods, which complement and promote each other, thus enhancing the effect of image de-noising. The remainder of this paper is organized as follows. Section 2 presents a brief summary of the basic theory of the EPLL model and guided filtering; Section 3 elaborates upon the details of how to integrate the EPLL with guided filtering and analyzes the benefits of such a process; Section 4 presents the experimental results of image de-noising as supporting evidence for the effectiveness of the proposed method; and Section 5 presents some conclusions about the method of coupling EPLL and guided filtering for image de-noising, as well as directions for future studies. 2. Guided Image Filtering and Expected Patch Log Likelihood 2.1 Guided Image Filtering The guided filter supposes that the output image q has a local linear relationship with the guided image I [31].
That is, in the window ωk centered on the pixel k, q satisfies a linear transformation of I: [TeX:] $$q _ { i } = a _ { k } I _ { i } + b _ { k } , \forall i \in \omega _ { k }$$ where (ak, bk) are constant linear coefficients in the square window ωk, whose radius is r. In order to determine the linear coefficients (ak, bk), the following cost function is introduced in [31]: [TeX:] $$\left( a _ { k } , b _ { k } \right) = \sum _ { i \in \omega _ { k } } \left( \left( a _ { k } I _ { i } + b _ { k } - p _ { i } \right) ^ { 2 } + \varepsilon a _ { k } ^ { 2 } \right)$$ where ε is a regularization parameter, [TeX:] $$q _ { i } = p _ { i } - n _ { i } ; p _ { i }$$ is the pixel of the input image, and ni is the noise at this pixel. (ak, bk) can be obtained by minimizing Eq. (2): [TeX:] $$a _ { k } = \frac { \frac { 1 } { | \omega | } \sum _ { i \in \omega _ { k } } I _ { i } p _ { i } - \mu _ { k } \overline { p } _ { k } } { \sigma _ { k } ^ { 2 } + \varepsilon }$$ [TeX:] $$b _ { k } = \overline { p } _ { k } - a _ { k } \mu _ { k }$$ where [TeX:] $$\mu _ { k } \text { and } \sigma _ { k } ^ { 2 }$$ represent the mean and variance of the guided image I in the window ωk, respectively; |ω| is the number of pixels in the window ωk, and p̄k is the mean of the input image p in the window ωk. The output value qi of the pixel i is related to all the windows containing the pixel i. Therefore, in order to obtain a stable qi, we need to average over them, so the final output is: [TeX:] $$q _ { i } = \overline { a } _ { i } I _ { i } + \overline { b } _ { i } = \frac { 1 } { | \omega | } \sum _ { k \in \omega _ { i } } \left( a _ { k } I _ { i } + b _ { k } \right) , \forall i \in I$$ 2.2 Expected Patch Log Likelihood EPLL is a method of restoring an image by using the statistical information of external image patches. The basic idea is to maximize the likelihood probability of the image patches, making each restored patch close to the prior. For an image X and known prior knowledge, EPLL is defined as: [TeX:] $$E P L L _ { P } ( X ) = \sum _ { i } \log p \left( P _ { i } X \right)$$ where Pi is an operator that extracts the image patch centered at the i-th pixel, so PiX is the extracted patch. log p(PiX) is the log-likelihood of the i-th patch under the given prior distribution. For a given degraded image Y, the cost function is: [TeX:] $$f _ { p } ( X | Y ) = \frac { \lambda } { 2 } \| A X - Y \| ^ { 2 } - E P L L _ { p } ( X )$$ where A is a degradation matrix and λ is the regularization parameter. The "Half Quadratic Splitting" [25] method can be used to optimize Eq. (7). A set of auxiliary variables {zi} is introduced to be equal to PiX, yielding the following cost function: [TeX:] $$f _ { p , \beta } \left( X , \left\{ z _ { i } \right\} | Y \right) = \frac { \lambda } { 2 } \| A X - Y \| ^ { 2 } + \sum _ { i } \frac { \beta } { 2 } \left( \left\| P _ { i } X - z _ { i } \right\| ^ { 2 } - \log p \left( z _ { i } \right) \right)$$ As β tends to infinity, {zi} is forced to equal PiX and the solutions of Eq. (8) and Eq. (7) converge. Many popular image priors can be regarded as special cases of the Gaussian mixture model (GMM). The GMM can also be used in EPLL, with a non-Gaussian distribution represented as a combination of several single Gaussian distributions.
Thus, the log likelihood of a given patch PiX is: [TeX:] $$\log p \left( P _ { i } X \right) = \log \left( \sum _ { k = 1 } ^ { K } \pi _ { k } N \left( P _ { i } X | \mu _ { k } , \Sigma _ { k } \right) \right)$$ where K is the number of mixture components, πk is the mixing weight for each mixture component, and μk and Σk are the corresponding mean and covariance matrix. To solve Eq. (8), we choose the Gaussian component kmax with the largest conditional mixing weight for each patch PiX, and then Eq. (8) is minimized by alternately updating zi and X: [TeX:] $$z _ { i } ^ { n + 1 } = \left( \Sigma _ { k _ { \max } } + \frac { 1 } { \beta } I \right) ^ { - 1 } \left( P _ { i } X ^ { n } \Sigma _ { k _ { \max } } + \frac { 1 } { \beta } I \mu _ { k _ { \max } } \right)$$ [TeX:] $$X ^ { n + 1 } = \left( \lambda A ^ { T } A + \beta \sum _ { j } P _ { j } ^ { T } P _ { j } \right) ^ { - 1 } \left( \lambda A ^ { T } y + \beta \sum _ { j } P _ { j } ^ { T } z _ { j } ^ { n + 1 } \right)$$ where μkmax and Σkmax are the mean and covariance matrix of the chosen component kmax, and I is an identity matrix. 3. Proposed Method with the Combination of EPLL and Guided Filtering Guided image filtering has two advantages: it has good edge-preserving properties, so it will not cause a "gradient reversal", and it can also be used for purposes other than smoothing. Moreover, with the help of the guiding image, the output image is more structured. The image obtained by EPLL has a better structure and can provide better auxiliary information for the guided filter. Based on the advantages of the two models, this study proposes a method of coupling the expected patch log likelihood and guided filtering for image de-noising, with the aim of improving the de-noising performance. The algorithm implementation steps (Table 1) are as follows. This method is mainly used to eliminate Gaussian additive noise. It has a very good smoothing effect while maintaining the edge information effectively, which may be attributable to the fact that it performs a second smoothing filtering of homogeneous regions on the basis of EPLL. When the guided image is fixed, the filtering results mainly depend on the coefficients (ak,bk). (Table 1. Algorithm implementation steps) In the iteration process, the input image is the same as the guided image, and Eqs. (3) and (4) can be simplified as: [TeX:] $$a _ { k } = \frac { \sigma _ { k } ^ { 2 } } { \sigma _ { k } ^ { 2 } + \varepsilon }$$ [TeX:] $$b _ { k } = \left( 1 - a _ { k } \right) \mu _ { k }$$ The parameter ε determines whether a pixel is treated as lying in a boundary region (Table 2). (Table 2. Performance of algorithms in different regions) It can be seen from the above analysis that the edge-preserving effect of the guided filter depends on the guided image, while the EPLL model can provide a better guiding image for guided filtering. In turn, the second smoothing of flat regions by the guided filtering can improve the noise suppression effect of the EPLL and avoid the ladder effect. These two methods complement and reinforce each other. The next section proves the validity of the proposed method via experiments. In this section, we discuss the performance of the proposed method. In this study's experiments, the GMM with 200 mixture components was learned from a set of 2 × 10⁶ image patches, which were sampled from the Berkeley Segmentation Database Benchmark (BSDS300).
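As an implementation aside, the guided-filtering step described by Eqs. (3)-(5) can be realized with box filters. The following is a minimal grayscale sketch, not the authors' released code: the function name guided_filter and its arguments are illustrative, images are assumed to be 2-D float arrays, and scipy's uniform_filter is used for the local window means.

import numpy as np
from scipy.ndimage import uniform_filter  # box (mean) filter over the local window

def guided_filter(I, p, r, eps):
    # I: guided image, p: filtering input image (same shape, float)
    # r: window radius, so each window is (2r+1) x (2r+1); eps: regularization parameter
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)       # mu_k, mean of I in each window
    mean_p = uniform_filter(p, size)       # mean of p in each window (p-bar_k)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I ** 2          # sigma_k^2, variance of I in each window
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)             # Eq. (3)
    b = mean_p - a * mean_I                # Eq. (4)
    mean_a = uniform_filter(a, size)       # a-bar_i, averaged over overlapping windows
    mean_b = uniform_filter(b, size)       # b-bar_i
    return mean_a * I + mean_b             # Eq. (5)

When the same image is passed as both I and p, cov_Ip equals var_I and the coefficients reduce to Eqs. (12) and (13), which is the situation analyzed in Table 2.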
In order to verify the effectiveness of the proposed method, it was compared with guided filtering and EPLL in terms of visual effects and numerical results. Gaussian noise with zero mean and standard deviation σ = 15 or σ = 30 was added to the test images. The parameters for EPLL in the experiments were as follows: image patch size √L = 8, regularization parameter λ = L/σ², and penalty parameter β = (1/σ²) × [1 2 4 8 16]. Meanwhile, the parameters of guided filtering were as follows: radius r = 2, and regularization parameter ε = 0.022. The results are as follows: Fig. 1 displays the de-noised results of the three methods on the Couple image with dimensions of 512×512. Here, Fig. 1(a) is the original clean image; Fig. 1(b) is the noisy image generated by adding Gaussian white noise with zero mean and standard deviation σ = 30 to the original image; Fig. 1(c) shows the result of guided filtering, in which the edges, details, and other information are not preserved well; Fig. 1(d) shows the de-noising result of the EPLL model, where it can be seen that mottling occurs in certain areas; and Fig. 1(e) shows the de-noising result of the proposed method. This shows a better visual effect at the boundary of the wall: the boundary is preserved and the transition is more natural in the smooth region. As shown in Fig. 2, the local enlargement of the results of the three algorithms confirms the above analysis. Figs. 3 and 4 show the same result. The proposed method makes the restoration result smoother and preserves more details. Thus, it is reasonable to conclude that the proposed method is superior to EPLL in terms of numerical results, as shown in Table 3. Fig. 1. Image de-noising performance comparison on the Couple image with σ = 30: (a) original image, (b) noisy image, (c) guided filtering, (d) EPLL, and (e) proposed method. Fig. 2. Enlargement of a local region of the Boat image: (a) original image, (b) noisy image, (c) guided filtering, (d) EPLL, and (e) proposed method. Fig. 3. Image de-noising performance comparison on the Plane image with σ = 15: (a) original image, (b) noisy image, (c) guided filtering, (d) EPLL, and (e) proposed method. Fig. 4. Image de-noising performance comparison on the Barbara image with σ = 15: (a) original image, (b) noisy image, (c) guided filtering, (d) EPLL, and (e) proposed method. Based on the complementarity of EPLL and guided filtering, this paper proposes a method of coupling the expected patch log likelihood and guided filtering for image de-noising. It uses the EPLL model to construct the guided image for guided filtering, which can provide better structural information for guided filtering. Meanwhile, by the secondary smoothing of guided image filtering in the homogeneous areas of the image, we can improve the noise suppression effect in those areas and reduce the ladder effect brought about by EPLL. The experimental results show that the proposed method is better than the previous two methods, in terms of both the visual effect and numerical performance. This combination makes full use of the advantages of the two methods while making up for their shortcomings, so that the two complement and promote each other. Of course, there are still some shortcomings in this method. For example, the parameter ε and the number of iterations are set manually, and their values directly determine whether the algorithm will be overly smooth or not. As such, future research will focus on this aspect.
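To make the coupling described in Table 1 concrete, here is a rough sketch of the outer loop. It assumes a hypothetical helper epll_step that performs one half-quadratic EPLL update (Eqs. (10)-(11)) under a pre-trained GMM prior, and it reuses the guided_filter sketch given above; the parameter schedule and the stopping test are placeholders, not the authors' exact settings.

def epll_guided_denoise(y, epll_step, betas, lam, r, eps):
    # y: noisy image; lam: data-fidelity weight (lambda in Eq. (7))
    # betas: increasing penalty parameters (the paper uses multiples of 1/sigma^2: 1, 2, 4, 8, 16)
    # epll_step(x, y, beta, lam): one EPLL half-quadratic update -- hypothetical helper
    x = y.copy()
    for beta in betas:
        x = epll_step(x, y, beta, lam)              # EPLL pre-estimate: keeps structure
        x = guided_filter(I=x, p=x, r=r, eps=eps)   # second smoothing of flat regions
    return x

Using the EPLL estimate as both the guided image and the filtering input mirrors Steps 4-6 of Table 1, where I(n+1) and P(n+1) are both set to X(n+1).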
This work was partly supported by the National Natural Science Foundation of China (Grant No. 61672293). Shunfeng Wang She received the M.S. degree in the Nanjing University of Information Science and Technology, China in 2002. Now she is a Professor at the College of Mathematics and Statistics, Nanjing University of Information Science and Technology. Her research interests cover pattern recognition, and information processing. Jiacen Xie She received the B.S. degree in information and computation science from Nanjing University of Information Science and Technology (NUIST), China in 2011. Now, she is a postgraduate in the College of Math and Statistics, NUIST. Her research interests cover image processing and pattern recognition. Yuhui Zheng He received the Ph.D. degree in the Nanjing University of Science and Technology, China in 2009. Now, he is an associate professor in the School of Computer and Software, Nanjing University of Information Science and technology. His research interests cover information system. Jin Wang He received the B.S. and M.S. degrees from Nanjing University of Posts and Telecommunications, China in 2002 and 2005, respectively. He received Ph.D. degree from Kyung Hee University Korea in 2010. Now, he is a professor in the School of Computer Communication Engineering, Changsha University of Science Technology. His research interests mainly include routing protocol and algorithm design, performance evaluation and optimization for wireless ad hoc and sensor networks. He is a member of the IEEE and ACM. Tao Jiang He received the B.S. degree in information and computation science from Nanjing University of Information Science and Technology (NUIST), China in 2011. Now, he is a postgraduate in the College of Math and Statistics, NUIST. His research interests cover image processing and pattern recognition. 1 R. Molina, J. Numez, F. J. Cortijo, J. Mateos, "Image restoration in astronomy: a Bayesian perspective," IEEE Signal Processing Magazine, 2001, vol. 18, no. 2, pp. 11-29. doi:[[[10.1109/79.916318]]] 2 R. Molina, J. Mateos, A. K. Katsaggelos, M. Vega, "Bayesian multichannel image restoration using compound Gauss-Markov random fields," IEEE Transactions on Image Processing, 2003, vol. 12, no. 12, pp. 1642-1654. doi:[[[10.1109/TIP.2003.818015]]] 3 S. P. Belekos, N. P. Galatsanos, A. K. Katsaggelos, "Maximum a posteriori video super-resolution using a new multichannel image prior," IEEE Transactions on Image Processing, 2010, vol. 19, no. 6, pp. 1451-1464. doi:[[[10.1109/TIP.2010.2042115]]] 4 J. Han, R. Quan, D. Zhang, F. Nie, "Robust object co-segmentation using background prior," IEEE Transactions on Image Processing, 2018, vol. 27, no. 4, pp. 1639-1651. doi:[[[10.1109/TIP.2017.2781424]]] 5 L. I. Rudin, S. Osher, E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, 1992, vol. 60, no. 1-4, pp. 259-268. doi:[[[10.1016/0167-2789(92)90242-f]]] 6 F. Xiao, W. Liu, Z. Li, L. Chen, R. Wang, "Noise-tolerant wireless sensor networks localization via multinorms regularized matrix completion," IEEE Transactions on Vehicular Technology, 2018, vol. 67, no. 3, pp. 2409-2419. doi:[[[10.1109/TVT.2017.2771805]]] 7 J. H. Park, "Efficient approaches to computer vision and pattern recognition," Journal of Information Processing Systems, 2017, vol. 13, no. 5, pp. 1043-1051. doi:[[[10.3745/JIPS.00.0007]]] 8 A. Buades, B. Coll, J. M. 
Morel, "A non-local algorithm for image denoising," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, 2005;pp. 60-65. doi:[[[10.1109/cvpr.2005.38]]] 9 F. Xiao, Z. Wang, N. Ye, R. Wang, X. Y. Li, "One more tag enables fine-grained RFID localization and tracking," IEEE/ACM Transactions on Networking, 2018, vol. 26, no. 1, pp. 161-174. doi:[[[10.1109/TNET.2017.2766526]]] 10 J. Han, D. Zhang, G. Cheng, N. Liu, D. Xu, "Advanced deep-learning techniques for salient and category-specific object detection: a survey," IEEE Signal Processing Magazine, 2018, vol. 35, no. 1, pp. 84-100. doi:[[[10.1109/MSP.2017.2749125]]] 11 Y. Zheng, K. Ma, Q. Yu, J. Zhang, J. Wang, "Regularization parameter selection for total variation model based on local spectral response," Journal of Information Processing Systems, 2017, vol. 13, no. 5, pp. 1168-1182. doi:[[[10.3745/JIPS.02.0072]]] 12 J. Zhang, Q. Yu, Y. Zheng, H. Zhang, J. Wu, "Regularization parameter selection for TV image denoising using spatially adaptive local spectral response," Journal of Internet Technology, 2016, vol. 17, no. 6, pp. 1117-1124. doi:[[[10.6138/JIT.2016.17.6.20160603]]] 13 A. Buades, B. Coll, J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Modeling Simulation, 2005, vol. 4, no. 2, pp. 490-530. doi:[[[10.1137/040616024]]] 14 X. Yao, J. Han, D. Zhang, F. Nie, "Revisiting co-saliency detection: a novel approach based on two-stage multi-view spectral rotation co-clustering," IEEE Transactions on Image Processing, 2017, vol. 26, no. 7, pp. 3196-3209. doi:[[[10.1109/TIP.2017.2694222]]] 15 S. Zhang, H. Jing, "Fast log-Gabor-based nonlocal means image denoising methods," in Proceedings of IEEE International Conference on Image Processing, Paris, France, 2015;pp. 2724-2728. doi:[[[10.1109/icip.2014.7025551]]] 16 Y. Zheng, B. Jeon, J. Zhang, Y. Chen, "Adaptively determining regularization parameters in non-local total variation regularisation for image denoising," Electronics Letters, 2015, vol. 5, no. 2, pp. 144-145. doi:[[[10.1049/el.2014.3494]]] 17 M. Protter, M. Elad, H. Takeda, P. Milanfar, "Generalizing the nonlocal-means to super-resolution reconstruction," IEEE Transactions on Image Processing, 2009, vol. 18, no. 1, pp. 36-51. doi:[[[10.1109/TIP.2008.2008067]]] 18 X. Gao, Q. Wang, X. Li, D. Tao, K. Zhang, "Zernike-moment-based image super resolution," IEEE Transactions on Image Processing, 2011, vol. 20, no. 10, pp. 2738-2747. doi:[[[10.1109/TIP.2011.2134859]]] 19 M. Elad, M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Transactions on Image Processing, 2006, vol. 15, no. 12, pp. 3736-3745. doi:[[[10.1109/TIP.2006.881969]]] 20 K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Transactions on Image Processing, 2007, vol. 16, no. 8, pp. 2080-2095. doi:[[[10.1109/TIP.2007.901238]]] 21 I. Djurovic, "BM3D filter in salt-and-pepper noise removal," EURASIP Journal on Image and Video Processingarticle no. 13, 2016, vol. 2016, no. article 13. doi:[[[10.1186/s13640-016-0113-x]]] 22 M. K. Ozkan, A. T. Erdem, M. I. Sezan, A. M. Tekalp, "Efficient multiframe Wiener restoration of blurred and noisy image sequences," IEEE Transactions on Image Processing, 1992, vol. 1, no. 4, pp. 453-476. doi:[[[10.1109/83.199916]]] 23 F. Baselice, G. Ferraioli, V. Pascazio, G. 
Schirinzi, "Enhanced Wiener Filter for Ultrasound image denoising," Computer Methods and Programs in Biomedicine, 2018, vol. 153, pp. 71-81. doi:[[[10.1007/978-981-10-5122-7_17]]] 24 S. Citrin, M. R. Azimi-Sadjadi, "A full-plane block Kalman filter for image restoration," IEEE Transactions on Image Processing, 1992, vol. 1, no. 4, pp. 488-195. doi:[[[10.1109/83.199918]]] 25 D. Zoran, Y. Weiss, "From learning models of natural image patches to whole image restoration," in Proceedings of IEEE International Conference on Computer Vision, Barcelona, Spain, 2011, pp. 479-486. doi:[[[10.1109/iccv.2011.6126278]]] 26 Y. Zheng, X. Zhou, B. Jeon, J. Shen, H. Zhang, "Multi-scale patch prior learning for image denoising using Student's-t mixture model," Journal of Internet Technology, 2017, vol. 18, no. 7, pp. 1553-1560. doi:[[[10.6138/JIT.2017.18.7.20161120]]] 27 J. Sulam, M. Elad, "Expected patch log likelihood with a sparse prior," in Energy Minimization Methods in Computer Vision and Pattern Recognition. Cham: Springer, 2015, pp. 99-111. doi:[[[10.1007/978-3-319-14612-6_8]]] 28 Y. Zheng, B. Jeon, L. Sun, J. Zhang, and H. Zhang, IEEE Transactions on Circuits and Systems for Video Technology, 2017. http://doi.org/10.1109/TCSVT.2017.2724940 29 S. Wang, J. Xie, Y. Zheng, T. Jiang, S. Xue, "Expected patch log likelihood based on multi-layer prior information learning," Advances in Computer Science and Ubiquitous Computing. Singapore: Springer, 2017, pp. 299-304. doi:[[[10.1007/978-981-10-7605-3_49]]] 30 J. Zhang, J. Liu, T. Li, Y. Zheng, J. Wang, "Gaussian mixture model learning based image denoising method with adaptive regularization parameters," Multimedia Tools and Applications, 2017, vol. 76, no. 9, pp. 11471-11483. doi:[[[10.1007/s11042-016-4214-4]]] 31 K. He, J. Sun, X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, vol. 35, no. 6, pp. 1397-1409. doi:[[[10.1109/TPAMI.2012.213]]] 32 C. L. Tsai, W. C. Tu, S. Y. Chien, "Efficient natural color image denoising based on guided filter," in Proceedings of IEEE International Conference on Image Processing, Quebec, Canada, 2015, pp. 43-47. doi:[[[10.1109/icip.2015.7350756]]]
Table 1. Algorithm implementation steps
Input: corrupted image Y and X(0) = Y; penalty parameter β; regularization parameters λ and ε; the radius r.
Step 1. Choose the most likely Gaussian mixing weights kmax for each patch PiX;
Step 2. Calculate zi(n+1) using (10);
Step 3. Pre-estimate image X(n+1) using (11);
Step 4. Let I(n+1) = X(n+1), calculate μk and σk2;
Step 5. Let P(n+1) = I(n+1), calculate ak, bk using (3) and (4);
Step 6. Calculate q(n+1) using (5) and let q(n+1) = X(n+1);
Repeat Steps 1-6 until the stopping criterion is satisfied.
Output: de-noised image X(n+1).
Table 2. Performance of algorithms in different regions
Boundary region: the pixel value varies greatly, [TeX:] $$\sigma _ { k } ^ { 2 } >> \varepsilon , \text { so } a _ { k } \rightarrow \mathbf { 1 } , b _ { k } \rightarrow \mathbf { 0 } , q \rightarrow I$$ and the edge information of the image is better preserved.
Flat region: the pixel values are almost unchanged, [TeX:] $$\sigma _ { k } ^ { 2 } \ll \varepsilon , \text { so } a _ { k } \rightarrow 0 , b _ { k } \rightarrow \mu _ { k } , q \rightarrow \overline { \mu } _ { k }$$, and the flat regions are smoother.
Table 3.
Image | Noise standard variance | Guided filtering | EPLL | Our method
Boat | σ=15 | 29.99 | 31.75 | 31.99
Boat | σ=30 | 27.34 | 28.29 | 28.61
Plane | σ=15 | 28.75 | 28.97 | 29.54
Barbara | σ=15 | 30.11 | 30.41 | 30.62
Hill | σ=15 | 30.18 | 31.51 | 31.74
Couple | σ=15 | 29.60 | 31.70 | 31.96
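The numerical comparison above is in terms of peak signal-to-noise ratio (the metric named in the abstract and introduction). For reference, a generic PSNR helper, written here only as an illustration and assuming an 8-bit intensity range; it is not code from the paper.

import numpy as np

def psnr(clean, estimate, peak=255.0):
    # standard definition: 10 * log10(peak^2 / mean squared error)
    clean = np.asarray(clean, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    mse = np.mean((clean - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)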
CommonCrawl
Taiwanese Journal of Mathematics, Volume 15, Number 5 (2011), 1979-1998.
ALGORITHMS CONSTRUCTION FOR NONEXPANSIVE MAPPINGS AND INVERSE-STRONGLY MONOTONE MAPPINGS
Yonghong Yao, Yeong-Cheng Liou, and Chia-Ping Chen
In this paper, we construct two algorithms for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for an $\alpha$-inverse-strongly monotone mapping in a Hilbert space. We show that the sequence converges strongly to a common element of the two sets under some mild conditions on the parameters. As special cases of the above two algorithms, we obtain two schemes which both converge strongly to the minimum norm element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for an $\alpha$-inverse-strongly monotone mapping.
First available in Project Euclid: 18 July 2017
https://projecteuclid.org/euclid.twjm/1500406418
doi:10.11650/twjm/1500406418
Primary: 47H05: Monotone operators and generalizations; 47J05: Equations involving nonlinear operators (general) [See also 47H10, 47J25]; 47J25: Iterative procedures [See also 65J15]
Keywords: metric projection; inverse-strongly monotone mapping; nonexpansive mapping; variational inequality; minimum-norm
Citation: Yao, Yonghong; Liou, Yeong-Cheng; Chen, Chia-Ping. ALGORITHMS CONSTRUCTION FOR NONEXPANSIVE MAPPINGS AND INVERSE-STRONGLY MONOTONE MAPPINGS. Taiwanese J. Math. 15 (2011), no. 5, 1979-1998. doi:10.11650/twjm/1500406418.
CommonCrawl
Comparing Relationships with Tables
Decide whether each table could represent a proportional relationship. If the relationship could be proportional, what would the constant of proportionality be?
How loud a sound is, depending on how far away you are:
distance to listener (ft) | level (dB)
The cost of fountain drinks at Hot Dog Hut:
(fluid ounces) 16 $1.49
A taxi service charges $1.00 for the first \(\frac{1}{10}\) mile, then $0.10 for each additional \(\frac{1}{10}\) mile after that. Fill in the table with the missing information, then determine if this relationship between distance traveled and price of the trip is a proportional relationship. (A worked check of this fare rule appears after the last problem below.)
distance traveled (mi) | price (dollars)
\(\frac{9}{10}\) |
\(3\frac{1}{10}\) |
A rabbit and a turtle are in a race. Is the relationship between distance traveled and time proportional for either one? If so, write an equation that represents the relationship.
Turtle's run:
distance (meters) | time (minutes)
1,768.5 | 32.75
Rabbit's run:
distance (meters) | time (minutes)
1,107.5 | 20
For each table, answer: What is the constant of proportionality? \(\frac13\) \(\frac73\) 3 \(7\frac12\) (From Unit 2, Lesson 2.)
Kiran and Mai are standing at one corner of a rectangular field of grass looking at the diagonally opposite corner. Kiran says that if the field were twice as long and twice as wide, then it would be twice the distance to the far corner. Mai says that it would be more than twice as far, since the diagonal is even longer than the side lengths. Do you agree with either of them?
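A worked check for the taxi problem above (a sketch using only the fare rule stated there): compute the price for each listed distance and compare the price-to-distance ratios.
For \(\frac{9}{10}\) mile (9 tenths of a mile): price \(= 1.00 + 8(0.10) = \$1.80\), so \(\text{price} \div \text{distance} = 1.80 \div 0.9 = 2.00\).
For \(3\frac{1}{10}\) miles (31 tenths of a mile): price \(= 1.00 + 30(0.10) = \$4.00\), so \(\text{price} \div \text{distance} = 4.00 \div 3.1 \approx 1.29\).
The ratios are not equal, so the relationship is not proportional and there is no constant of proportionality.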
CommonCrawl
Home » CBSE » Class 11 Find the vector equation of the plane passing through the point (a, b, c) and parallel to the plane . CBSE, Class 12, Exercise 28D, Maths, RS Aggarwal, The Planes Find the vector equation of the plane through the points and parallel to the plane . Show that the planes 2x – y + 6z = 5 and 5x – 2.5y + 15z = 12 are parallel. Find the vector and Cartesian equations of a plane which is at a distance of 7 units from the origin and whose normal vector from the origin is CBSE, Class 12, Exercise 28B, Maths, RS Aggarwal, The Planes Answer: Given, $\begin{array}{l} d = 7\\ \overline n = 3\widehat i + 5\widehat j - 6\widehat k \end{array}$ The unit vector normal to the plane: ... Find the distance of the point (2, 1, 0) from the plane 2x + y – 2z + 5 = 0. CBSE, Class 12, Exercise 28C, Maths, RS Aggarwal, The Planes Answer: Given plane, 2x + y – 2z + 5 = 0 The point is (2, 1, 0). Find the distance of the point (1, 1, 2) from the plane plane . Answer: Given plane, $\overrightarrow r .(2\widehat i - 2\widehat j + 4\widehat k) + 5 = 0$ The cartesian form: 2x – 2y + 4z + 5 = 0 The point is (1, 1, 2). Find the distance of the point (3, 4, 5) from the plane . Answer: Given plane, $\overrightarrow r .(2\widehat i - 5\widehat j + 3\widehat k) = 13$ The cartesian form: 2x – 5y + 3z – 13 = 0 The point is (3, 4, 5). Find the distance of the point from the plane . Answer: Given plane, $\overrightarrow r .(\widehat i + \widehat j + \widehat k) + 17 = 0$ The cartesian form: x + y + z +17 = 0 The point is $(\widehat i + 2\widehat j + 5\widehat k)$ => (1, 2,... Answer: Given plane, $\overrightarrow r .(3\widehat i - 4\widehat j + 12\widehat k) = 9$ The cartesian form: 3x – 4y + 12z – 9 =0 The point is $(2\widehat i - \widehat j - 4\widehat k)$ => (2,... Find the vector and Cartesian equations of a plane which passes through the point (1, 4, 6) and normal vector to the plane is is Answer: Given, A = (1, 4, 6) Find the vector and Cartesian equations of a plane which is at a distance of 6 units from the origin and which has a normal with direction ratios 2, -1, -2. Answer: Given, d = 6 Find the vector and Cartesian equations of a plane which is at a distance of 6/√29 from the origin and whose normal vector from the origin is is Answer: Given, = (x × 2) + (y × (-3)) + (z × 4) = 2x - 3y + 4z The Cartesian equation... Find the vector equation of a plane which is at a distance of 5 units from the origin and which has as the unit vector normal to it. Answer: Given, $\begin{array}{l} d = 5\\ \widehat n = \widehat k \end{array}$ The equation of plane at 5 units distance from the origin $\begin{array}{l} \widehat n \end{array}$ and as a unit... Reduce the equation of the plane 4x – 3y + 2z = 12 to the intercept form, and hence find the intercepts made by the plane with the coordinate axes. CBSE, Class 12, Exercise 28A, Maths, RS Aggarwal, The Planes Answer: Equation of the plane: 4x – 3y + 2z = 12 $\begin{array}{l} \frac{4}{{12}}x - \frac{3}{{12}}y + \frac{2}{{12}}z = 1\\ \frac{x}{3} + \frac{y}{{ - 4}} + \frac{z}{6} = 1 \end{array}$ It is the... Write the equation of the plane whose intercepts on the coordinate axes are 2, – 4 and 5 respectively. Answer: Given, Coordinate axes are 2, - 4, 5 The equation of the variable plane: The required equation of the plane is 10x – 5y + 4z = 20. Find the equation of the plane passing through each group of points: A (-2, 6, -6), B (-3, 10, -9) and C (-5, 0, -6) Answer: Given, A (-2, 6, -6) B (-3, 10, -9) C (-5, 0, -6) ... 3. 
Show that the four points A (0, -1, 0), B (2, 1, -1), C (1, 1, 1) and D (3, 3, 0) are coplanar. Find the equation of the plane containing them. Answer: Given, A (0, -1, 0) B (2, 1, -1) C (1, 1, 1) D (3, 3, 0) 4x – 3 (y + 1) + 2z = 0 4x – 3y + 2z – 3 = 0 Take x = 0, y = 3 and z =... Show that the four points A (3, 2, -5), B (-1, 4, -3), C (-3, 8, -5) and D (-3, 2, 1) are coplanar. Find the equation of the plane containing them. Answer: Let us take, The equation of the plane passing through A (3, 2, -5) a (x – 3) + b (y – 2) + c (z + 5) = 0 It passes through the points B (-1, \4, -3) and C (-3, 8, -5) a (1 – 3) + b (4 – 2)... Find the equation of the plane passing through each group of points: (i) A (2, 2, -1), B (3, 4, 2) and C (7, 0, 6) (ii) A (0, -1, -1), B (4, 5, 1) and C (3, 9, 4) CBSE, Class 12, Exercise 28A, Exercise 28B, Exercise 28C, Exercise 28D, Exercise 28E, Exercise 28F, Exercise 28G, Exercise 28H, Maths, RS Aggarwal, The Planes Answer: (i) Given, A (2, 2, -1) B (3, 4, 2) C (7, 0, 6) ... Find the length of tangent drawn to a circle with radius 8 cm form a point 17 cm away from the center of the circle CBSE, Circles, Class 10, Maths, RS Aggarwal Using properties of determinants prove that: CBSE, Class 12, Determinants, Exercise 6B, Maths, RS Aggarwal Solution: $=\left|\begin{array}{lll} b(b-a) & b-c & c(b-a) \\ a(b-a) & a-b & b(b-a) \\ c(b-a) & c-a & a(b-a) \end{array}\right|$ Taking (b-a) common from $\mathrm{C}_{1},... Solution: Expanding with R1 $\begin{array}{l} =b^{2} c^{2}\left(a^{2} c+a b c-a b c-a^{2} b\right)-b c\left(a^{3} c^{2}+a^{2} b c^{2}-a^{2} b^{2} c-a^{3} b^{2}\right)+(b+c)\left(a^{3} b c^{2}-a^{3}... Solution: $\left|\begin{array}{ccc}a^{2} & b^{2} & c^{2} \\ (a+1)^{2} & (b+1)^{2} & (c+1)^{2} \\ (a-1)^{2} & (b-1)^{2} & (c-1)^{2}\end{array}\right|$... CBSE, Class 12, Determinants, Excercise 6B, Maths, RS Aggarwal Solution: $\left|\begin{array}{ccc} a & b & a x+b y \\ b & c & b x+c y \\ a x+b y & b x+c y & 0 \end{array}\right|$ $\begin{array}{l} \left.=\left(\frac{1}{x... Solution: $\begin{array}{l} \left|\begin{array}{ccc} \mathrm{b}+\mathrm{c} & \mathrm{a} & \mathrm{a} \\ \mathrm{b} & \mathrm{c}+\mathrm{a} & \mathrm{b} \\ \mathrm{c} & \mathrm{c}... Solution: $\begin{array}{l} \left|\begin{array}{ccc} a^{2}+2 a & 2 a+1 & 1 \\ 2 a+1 & a+2 & 1 \\ 3 & 3 & 1 \end{array}\right| \\ =\left|\begin{array}{ccc} a^{2}-1 & a-1... Solution: $\left|\begin{array}{ccc}x+4 & 2 x & 2 x \\ 2 x & x+4 & 2 x \\ 2 x & 2 x & x+4\end{array}\right|$ $=\left|\begin{array}{ccc}5 \mathrm{x}+4 & 5 \mathrm{x}+4... Evaluate : Solution: $\begin{array}{l} \left|\begin{array}{lll} 67 & 19 & 21 \\ 39 & 13 & 14 \\ 81 & 24 & 26 \end{array}\right| \\... If is a matrix such that and then write the value of . CBSE, Class 12, Determinants, Exercise 6A, Maths, RS Aggarwal Solution: Theorem: If Let $A$ be $k \times k$ matrix then $|p A|=p^{k}|A|$. Given: $\mathrm{k}=3$ and $\mathrm{p}=3$. $\begin{array}{l} |3 \mathrm{~A}|=3^{3} \times|\mathrm{A}| \\ =27|\mathrm{~A}|... Prove that CBSE, Class 11, Maths, RS Aggarwal, Trigonometric, or Circular, Functions Answer: = cosx cos2x cos4x cos8x Multiply and divide by 2sinx, we get We know that, sin 2x = 2 sinx cosx Replacing x by 2x, we get sin 2(2x) = 2 sin(2x) cos(2x) or sin 4x = 2 sin 2x cos 2x... 
Answer: Taking sinx common from the numerator and cosx from the denominator Prove that: Answer: = RHS ∴ LHS = RHS Hence Proved Answer: Multiply and divide by 2, we get = RHS ∴ LHS = RHS Hence Proved Prove that cot x – 2cot 2x = tan x Answer: Taking LHS, = cot x – 2cot 2x …(i) We know that, cot x = cos x/ sin x Replacing x by 2x, we get cot 2x = cos 2x/ sin 2x So, eq. (i) becomes = sin x/ cos x = tan x = RHS ∴ LHS = RHS Hence... Prove that cos 2x + 2sin2 x = 1 Answer: Taking LHS = 2 – 1 = 1 = RHS Prove that cosec 2x + cot 2x = cot x Answer: To Prove: cosec 2x + cot 2x = cot x Taking LHS, = cosec 2x + cot 2x …(i) We know that, cosec x = 1 / sin x and cot x = cos x/ sin x Replacing x by 2x, we get = cos x/ sinx = cot... Prove that sin 2x(tan x + cot x) = 2 Answer: Taking LHS, sin 2x(tan x + cot x) We know that: We know that, sin 2x = 2 sinx cosx = 2 = RHS ∴ LHS = RHS Hence Proved Answer: = sin x / cos x = tan x = RHS ∴ LHS = RHS Hence Proved In a four-sided field, the length of the longer diagonal is 128 m. The lengths of perpendiculars from the opposite vertices upon this diagonal are 22.7 m and 17.3 m. Find the area of the field. Areas Related To Circles, CBSE, Class 10, Maths, RS Aggarwal The adjacent sides of a parallelogram are 36 cm and 27 cm in length. If the distance between the shorter sides is 12 cm, find the distance between the longer sides. The diagonals of a rhombus are 48 cm and 20 cm long. Find the perimeter of the rhombus. A parallelogram and a rhombus are equal in area. The diagonals of the rhombus measure 120 m and 44 m. If one of the sides of the || gm is 66 m long, find its corresponding altitude. The cost of fencing a square lawn at 14 per metre is 2800. Find the cost of mowing the lawn at ₹ 54 per 100 m2. The adjacent sides of a ||gm ABCD measure 34 cm and 20 cm and the diagonal AC is 42 cm long. Find the area of the ||gm. Find the area of a trapezium whose parallel sides are 11 cm and 25 cm long and non- parallel sides are 15 cm and 13 cm. Find the area of a rhombus each side of which measures 20 cm and one of whose diagonals is 24 cm. A lawn is in the form of a rectangle whose sides are in the ratio 5:3 and its area is Find the cost of fencing the lawn at ₹ 20 per metre. Find the area of a triangle whose sides are 42 cm, 34 cm and 20 cm. Find the area of a rhombus whose diagonals are 48 cm and 20cm long. The length of the diagonal of a square is 24 cm. Find its area. The longer side of a rectangular hall is 24 m and the length of its diagonal is 26 m. Find the area of the hall. Find the area of an isosceles triangle each of whose equal sides is 13 cm and whose base is 24 cm. Find the area of an equilateral triangle having each side of length 10 cm. (Take The parallel sides of a trapezium are 9.7cm and 6.3 cm, and the distance between them is 6.5 cm. The area of the trapezium is (a) 104 cm2 (b) 78 cm2 (c) 52 cm2 (d) 65 cm2 The sides of a triangle are in the ratio 12: 14 : 25 and its perimeter is 25.5 cm. The largest side of the triangle is (a) 7 cm (b) 14 cm (c) 12.5 cm (d) 18 cm In the given figure ABCD is a trapezium in which AB =40 m, BC=15m,CD = 28m, AD= 9 m and CE = AB. Area of trapezium ABCD is In the given figure ABCD is a quadrilateral in which Find the area of trapezium whose parallel sides are 11 m and 25 m long, and the nonparallel sides are 15 m and 13 m long. The shape of the cross section of a canal is a trapezium. If the canal is 10 m wide at the top, 6 m wide at the bottom and the area of its cross section is 640 m2 , find the depth of the canal. 
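A worked sketch for the last problem above (the canal with a trapezium-shaped cross section), using the standard trapezium area formula with parallel sides $a$ and $b$ and height (here, depth) $d$:
$\text{Area} = \frac{1}{2}(a + b)d \Rightarrow 640 = \frac{1}{2}(10 + 6)d = 8d \Rightarrow d = 80$
So the canal is 80 m deep.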
The parallel sides of trapezium are 12 cm and 9cm and the distance between them is 8 cm. Find the area of the trapezium. The area of rhombus is 480 c m2 , and one of its diagonal measures 48 cm. Find (i) the length of the other diagonal, (ii) the length of each of the sides (iii) its perimeter The perimeter of a rhombus is 60 cm. If one of its diagonal us 18 cm long, find (i) the length of the other diagonal, and (ii) the area of the rhombus. Find the area of the rhombus, the length of whose diagonals are 30 cm and 16 cm. Also, find the perimeter of the rhombus. The adjacent sides of a parallelogram ABCD measure 34 cm and 20 cm, and the diagonal AC measures 42 cm. Find the area of the parallelogram. The area of a parallelogram is 392 m2 . If its altitude is twice the corresponding base, determined the base and the altitude. The adjacent sides of a parallelogram are 32 cm and 24 cm. If the distance between the longer sides is 17.4 cm, find the distance between the shorter sides. Find the area of a parallelogram with base equal to 25 cm and the corresponding height measuring 16.8 cm. Sol: Given: Base = 25 cm Height = 16.8 cm \Area of the parallelogram = Base ´ Height = 25cm ´16.8 cm = 420 cm2 Find the area of the quadrilateral ABCD in which in AB = 42 cm, BC = 21 cm, CD = 29 cm, DA = 34 cm and diagonal BD = 20 cm. Find the perimeter and area of the quadrilateral ABCD in which AB = 17 cm, AD = 9 cm, CD = 12 cm, ACB  90 and AC = 15 cm. Find the area of the quadrilateral ABCD in which AD = 24 cm, BAD  90 and BCD is an equilateral triangle having each side equal to 26 cm. Also, find the perimeter of the quadrilateral. Sol: In the given figure ABCD is quadrilateral in which diagonal BD = 24 cm, AL  BD and CM  BD such that AL = 9cm and CM = 12 cm. Calculate the area of the quadrilateral. The cost of fencing a square lawn at ₹ 14 per meter is ₹ 28000. Find the cost of mowing the lawn at ₹ 54per100 m2 The cost of harvesting a square field at ₹ 900 per hectare is ₹ 8100. Find the cost of putting a fence around it at ₹ 18 per meter. The area of a square filed is 8 hectares. How long would a man take to cross it diagonally by walking at the rate of 4 km per hour? Find the length of the diagonal of a square whose area is 128 cm2 . Also, find its perimeter. CBSE, Class 10, Maths, RS Aggarwal Find the area and perimeter of a square plot of land whose diagonal is 24 m long. The cost of painting the four walls of a room 12 m long at ₹ 30 per m2 is ₹ 7560 per m2 and the cost of covering the floor with the mat at ₹ dimensions of the room. The dimensions of a room are 14 m x 10 m x 6.5 m There are two doors and 4 windows in the room. Each door measures 2.5 m x 1.2 m and each window measures 1.5 m x 1 m. Find the cost of painting the four walls of the room at ₹ 35 per m2 . A 80 m by 64 m rectangular lawn has two roads, each 5 m wide, running through its middle, one parallel to its length and the other parallel to its breadth. Find the cost of gravelling the reads at ₹ 40 per m2 . A carpet is laid on floor of a room 8 m by 5 m. There is border of constant width all around the carpet. If the area of the border is 12 m2 A room 4.9 m long and 3.5 m board is covered with carpet, leaving an uncovered margin of 25 cm all around the room. If the breadth of the carpet is 80 cm, find its cost at ₹ 80 per metre. The length and breadth of a rectangular garden are in the ratio 9:5. A path 3.5 m wide, running all around inside it has an area of 1911m2 . Find the dimensions of the garden. 
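A worked sketch for the last problem above (the rectangular garden with an inside path). It assumes, as the wording "running all around inside it" indicates, that the 3.5 m path borders all four sides, so each inner dimension is 7 m shorter:
$\begin{array}{l} \text{Let the length} = 9x \text{ and the breadth} = 5x. \\ \text{Path area} = 45x^{2} - (9x - 7)(5x - 7) = 45x^{2} - (45x^{2} - 98x + 49) = 98x - 49 \\ 98x - 49 = 1911 \Rightarrow 98x = 1960 \Rightarrow x = 20 \end{array}$
So the garden is 180 m long and 100 m wide (check: $180 \times 100 - 173 \times 93 = 18000 - 16089 = 1911\ \text{m}^2$).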
A footpath of uniform width runs all around the inside of a rectangular field 54m long and 35 m wide. If the area of the path is 420 m2 , find the width of the path. A rectangular plot measure 125 m by 78 m. It has gravel path 3 m wide all around on the outside. Find the area of the path and the cost of gravelling it at ₹ 75 per m2 A rectangular park 358 m long and 18 m wide is to be covered with grass, leaving 2.5 m uncovered all around it. Find the area to be laid with grass. The area of rectangle is 192cm2 and its perimeter is 56 cm. Find the dimensions of the rectangle. A 36-m-long, 15-m-borad verandah is to be paved with stones, each measuring 6dm by 5 dm. How many stones will be required? The floor of a rectangular hall is 24 m long and 18 m wide. How many carpets, each of length 2.5 m and breadth 80 cm, will be required to cover the floor of the hall? A room is 16 m long and 13.5 m broad. Find the cost of covering its floor with 75-m-wide carpet at ₹ 60 per metre. A lawn is in the form of a rectangle whose sides are in the ratio 5 : 3. The area of the lawn is 3375m2 . Find the cost of fencing the lawn at ₹ 65 per metre. The area of a rectangular plot is 462m2is length is 28 m. Find its perimeter One side of a rectangle is 12 cm long and its diagonal measure 37 cm. Find the other side and the area of the rectangle. The length of a rectangular park is twice its breadth and its perimeter is 840 m. Find the area of the park. The perimeter of a rectangular plot of land is 80 m and its breadth is 16 m. Find the length and area of the plot. In the given figure, ABC is an equilateral triangle the length of whose side is equal to 10 cm, and ADC is right-angled at D and BD= 8cm. Find the area of the shaded region Find the area and perimeter of an isosceles right angled triangle, each of whose equal sides measure 10cm. Each of the equal sides of an isosceles triangle measure 2 cm more than its height, and the base of the triangle measure 12 cm. Find the area of the triangle. The base of an isosceles triangle measures 80 cm and its area is 360 cm2. Find the perimeter of the triangle. Find the length of the hypotenuse of an isosceles right-angled triangle whose area is 200cm2 . Also, find its perimeter Find the area of a right – angled triangle, the radius of whose, circumference measures 8 cm and the altitude drawn to the hypotenuse measures 6 cm. The base of a right – angled triangle measures 48 cm and its hypotenuse measures 50 cm. Find the area of the triangle. If the area of an equilateral triangle is 81 11. If the area of an equilateral triangle is 36 Sol: cm2 , find its perimeter. The height of an equilateral triangle is 6 cm. Find its area. Each side of an equilateral triangle is 10 cm. Find (i) the area of the triangle and (ii) the height of the triangle. The length of the two sides of a right triangle containing the right angle differ by 2 cm. If the area of the triangle is 24 c m2 , find the perimeter of the triangle. The difference between the sides at the right angles in a right-angled triangle is 7 cm. the area of the triangle is 60 c m2 . Find its perimeter. The perimeter of a right triangle is 40 cm and its hypotenuse measure 17 cm. Find the area of the triangle. The perimeter of a triangular field is 240m, and its sides are in the ratio 25:17:12. Find the area of the field. Also, find the cost of ploughing the field at ₹ 40 per m2 The sides of a triangle are in the ratio 5:12:13 and its perimeter is 150 m. Find the area of the triangle. 
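A worked sketch for the last problem above (sides in the ratio 5 : 12 : 13, perimeter 150 m), using Heron's formula:
$\begin{array}{l} 5k + 12k + 13k = 150 \Rightarrow k = 5, \text{ so the sides are } 25, 60 \text{ and } 65 \text{ m}. \\ s = \frac{150}{2} = 75 \\ \text{Area} = \sqrt{75(75 - 25)(75 - 60)(75 - 65)} = \sqrt{75 \times 50 \times 15 \times 10} = \sqrt{562500} = 750\ \text{m}^{2} \end{array}$
(The same value follows from noting that $25^{2} + 60^{2} = 65^{2}$, so the triangle is right-angled and its area is $\frac{1}{2} \times 25 \times 60 = 750\ \text{m}^{2}$.)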
Find the area of the triangle whose sides are 18 cm, 24 cm and 30 cm. Also find the height corresponding to the smallest side. Find the areas of the triangle whose sides are 42 cm, 34 cm and 20 cm. Also, find the height corresponding to the longest side. Find the area of triangle whose base measures 24 cm and the corresponding height measure 14.5 cm. In a bulb factory, machines A, B and C manufactures and bulbs respectively. Out of these bulbs and of the bulbs produced respectively by and are found to be defective. A bulb is picked up at random from the total production and found to be defective. Find the probability that this bulb was produced by machine . Bayes's Theorem and its Applications, CBSE, Class 12, Exercise 30A, Maths, RS Aggarwal Let $A$ : Manufactured from machine $A$, B : Manufactured from machine B C: Manufactured from machine C D : Defective bulb We want to find $P(A \mid D)$, i.e. probability of selected defective bulb... An insurance company insured 2000 scooters and 3000 motorcycles. The probability of an accident involving a scooter is , and that of motorcycles is . An insured vehicle met with an accident. Find the probability that the accidented vehicle was a motorcycle. Let $M$ : Motorcycle S: Scooter A : Accident vechicle We want to find $P(M \mid A)$, i.e. probability of accident vehicle was a motorcycle $\begin{array}{l} P(M \mid A)=\frac{P(M) \cdot P(A \mid... A car manufacturing factory has two plants and . Plant manufactures of the cars, and plant , manufactures . At pant of the cars are rated of standard quality, and at plant are rated of standard quality. A car is picked up at random and is found to be of standard quality. A car is picked up at random and is found to be of standard quality. Find the probability that it has come from plant . Let $X$ : Car produced from plant $X$ $Y$ : Car produced from plant $Y$ S: Car rated as standard quality We want to find $P(X \mid S)$, i.e. selected standard quality car is from plant $X$... There are four boxes, and , containing marbles. A contains 1 red, 6 white and 3 black marbles; contains 6 red, 2 white and 2 black marbles; C contains 8 red, 1 white and 1 black marbles; and D contains 6 white and 4 black marbles. One of the boxes is selected at random and a single marble is drawn from it. If the marble is red, what is the probability that it was drawn from the box ? Let $A:$ Ball drawn from bag $A$ B: Ball is drawn from bag B $C:$ Ball is drawn from bag $C$ $D:$ Ball is drawn from bag $D$ BB: Black ball WB : White ball RB : Red ball Assuming all boxes have an... There are 3 bags, each containing 5 white and 3 black balls. Also, there are 2 bags, each containing 2 white and 4 black balls. A white ball is drawn at random. Find the probability that this ball is from a bag of the first group. Let $A$ : the set of first 3 bags $B$ : a set of next 2 bags WB : White ball BB : Black ball Now we can change the problem to two bags, i.e. bag A containing 15 white and 9 black balls( 5 white and... Urn A contains 7 white and 3 black balls; urn B contains 4 white and 6 black balls; urn C contains 2 white and 8 black balls. One of these urns is chosen at random with probabilities and respectively. From the chosen urn, two balls are drawn at random without replacement. Both the balls happen to be white. Find the probability that the balls are drawn are from urn . 
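The posterior being set up in the truncated excerpt above has the following general form. This is only a sketch: the accident probabilities are elided in the excerpt, so they are left symbolic, while the priors come from the stated counts of 2000 scooters and 3000 motorcycles.
$P(M \mid A) = \dfrac{P(M)\, P(A \mid M)}{P(M)\, P(A \mid M) + P(S)\, P(A \mid S)}, \qquad P(M) = \dfrac{3000}{5000} = \dfrac{3}{5}, \quad P(S) = \dfrac{2000}{5000} = \dfrac{2}{5}$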
Let $A:$ Ball is drawn from bag $A$ B : Ball is drawn from bag B $C:$ Ball is drawn from bag $C$ BB: Black ball WB: White ball RB: Red ball Probability of picking 2 white balls fro urn $A=\frac{7... There are three boxes, the first one containing 1 white, 2 red and 3 black balls; the second one containing 2 white, 3 red and 1 black ball and the third one containing 3 white, 1 red and 2 black balls. A box is chosen at random, and from it, two balls are drawn at random. One ball is red and the other, white. What is the probability that they come from the second box? let $A:$ Ball drawn from bag $A$ B : Ball is drawn from bag B $C:$ Ball is drawn from bag $C$ BB: Black ball WB: White ball RB: Red ball Assuming, selecting bags is of equal probability i.e.... Mark the tick against the correct answer in the following: CBSE, Class 12, Inverse Trigonometric Functions, Maths, RS Aggarwal Solution: Option(C) is correct. To Find: The value of $\tan ^{-1} 2+\tan ^{-1} 3$ Since we know that $\tan ^{-1} x+\tan ^{-1} y=\tan ^{-1}\left(\frac{x+y}{1-x y}\right)$ $\begin{array}{l}... Three urns contain 2 white and 3 black balls; 3 white and 2 black balls, and 4 white and 1 black ball respectively. One ball is drawn from an urn chosen at random, and it was found to be white. Find the probability that it was drawn from the first urn. let $\mathrm{A}:$ Ball drawn from bag $\mathrm{A}$ $B:$ Ball is drawn from bag $B$ $C:$ Ball is drawn from bag $C$ BB : Black ball WB : White ball Assuming, selecting bags is of equal probability... Three urns A, B and C contains 6 red and 4 white; 2 red and 6 white; and 1 red and 5 white balls respectively. An urn is chosen at random, and a ball is drawn. If the ball drawn is found to be red, find the probability that the balls was drawn from the first urn . let $A:$ Ball drawn from bag $A$ B : Ball is drawn from bag B $C:$ Ball is drawn from bag $C$ R: Red ball W : White ball Assuming, selecting bags is of equal probability i.e. $\frac{1}{3}$ We want... There are two I and II. The bag I contains 3 white and 4 black balls, and bag II contains 5 white and 6 black balls. One ball is drawn at random from one of the bags and is found to be white. Find the probability that it was drawn from the bag I. Let $\mathrm{W}$ : White ball B : Black ball $\begin{array}{l} X: 1^{\text {st }} \text { bag } \\ Y: 2^{\text {nd }} \text { bag } \end{array}$ Assuming, selecting bags is of equal probability i.e.... A bag A contains 1 white and 6 red balls. Another bag contains 4 white and 3 red balls. One of the bags is selected at random, and a ball is drawn from it, which is found to be white. Find the probability that the ball is drawn is from bag . red balls. Another bag contains 4 white and 3 red balls. One of the bags is selected at random, and a ball is drawn from it, which is found to be white. Find the probability that the ball is drawn is from bag A. Let R : Red ball W : White ball A: Bag A B: Bag B Assuming, selecting bags is of equal probability i.e. $\frac{1}{2}$ We want to find $P(A \mid W)$, i.e. the selected white ball is from bag $A$:... Two groups are competing for the positions on the board of directors of a corporation. The probabilities that the first and the second groups will win are and , respectively. Further, if the first group wins, the probability of introducing a new product is , and when the second groups win, the corresponding probability is . Find the probability that the new product introduced was by the second group. 
Let $F$ : First group S : Second group $N$ : Introducing a new product We want to find $P(S \mid N)$, i.e. new product introduced by the second group $\begin{array}{l} \mathrm{P}(\mathrm{S} \mid... Solution: Option(C) is correct. To Find: The value of $\tan ^{-1} 1+\tan ^{-1} \frac{1}{3}$ Let, $x=\tan ^{-1} 1+\tan ^{-1} \frac{1}{3}$ Since we know that $\tan ^{-1} x+\tan ^{-1} y=\tan... Solution: Option(A) is correct. To Find: The value of $\tan ^{-1}(-1)+\cos ^{-1}\left(\frac{-1}{\sqrt{2}}\right)$ Let, $x=\tan ^{-1}(-1)+\cos ^{-1}\left(\frac{-1}{\sqrt{2}}\right)$ $\Rightarrow... Solution: Option(B) is correct. To Find: The value of $\tan ^{-1} 1+\cos ^{-1}\left(\frac{-1}{2}\right)+\sin ^{-1}\left(\frac{-1}{2}\right)$ Now, let $x=\tan ^{-1} 1+\cos... D. none of these Solution: Option(B) is correct. To Find: The value of $\tan ^{-1}(\sqrt{3})-\sec ^{-1}(-2)$ Let, $x=\tan ^{-1}(\sqrt{3})-\sec ^{-1}(-2)$ $\begin{array}{l} \Rightarrow... Mark the tick against the correct answer in the following: The value of is Solution: Option(B) is correct. To Find: The value of $\sin \left(\cos ^{-1} \frac{3}{5}\right)$ Now, let $x=\cos ^{-1} \frac{3}{5}$ $\Rightarrow \cos x=\frac{3}{5}$ Now , $\sin x=\sqrt{1-\cos ^{2}... Solution: Option(A) is correct. To Find: The value of $\sec ^{-1}\left(\sec \left(\frac{8 \pi}{5}\right)\right)$ Now, let $x=\sec ^{-1}\left(\sec \left(\frac{8 \pi}{5}\right)\right)$ $\Rightarrow... Solution: To Find: The value of $\tan ^{-1}\left(\tan \left(\frac{7 \pi}{6}\right)\right)$ Now, let $x=\tan ^{-1}\left(\tan \left(\frac{7 \pi}{6}\right)\right)$ $\Rightarrow \tan x=\tan... Solution: Option(C) us correct. To Find: The value of $\sin ^{-1}\left(\sin \left(\frac{2 \pi}{3}\right)\right)$ Now, let $x=\sin ^{-1}\left(\sin \left(\frac{2 \pi}{3}\right)\right)$ $\Rightarrow... Mark the tick against the correct answer in the following: The principal value of is Solution: Option(A) is correct. To Find: The Principle value of $\operatorname{cosec}^{-1}(-\sqrt{2})$ Let the principle value be given by $\mathrm{x}$ Now, let... Draw two concentric circles of radii 4 cm and 6 cm. Construct a tangent to the smaller circle from a point on the larger circle. Measure the length of this tangent. Draw a circle of radius 4 cm. Draw tangent to the circle making an angle of 60 with a line passing through the centre. Draw a circle of radius 3.5 cm. Draw a pair of tangents to this circle which are inclined to each other at an angle of 60 . Write the steps of construction. CBSE, Class 10, Constructions, Maths, RS Aggarwal Draw a circle of radius 4.8 cm. Take a point P on it. Without using the centre of the circle, construct a tangent at the point P. Write the steps of construction. Draw a ABC , right-angled at B such that AB = 3 cm and BC = 4cm. Now, Construct a triangle Construct an isosceles triangle whose base is 9 cm and altitude 5cm. Construct another Construct a ABC in which BC = 5cm, C  60 and altitude from A equal to 3 cm. Construct Construct a ABC Sol: in which B= 6.5 cm, AB = 4.5 cm and ABC  60 Draw a line segment AB of length 6.5 cm and divided it in the ratio 4 : 7. Measure each of the two parts. Draw a line segment AB of length 5.4 cm. Divide it into six equal parts. Write the steps of construction. Construct a tangent to a circle of radius 4 cm form a point on the concentric circle of radius 6 cm and measure its length. Also, verify the measurement by actual calculation. Draw a circle of radius 32 cm. 
Draw a tangent to the circle making an angle 30 with a line passing through the centre. Write the steps of construction for drawing a pair of tangents to a circle of radius 3 cm , which are inclined to each other at an angle of 60 . Draw a circle of radius 4.2. Draw a pair of tangents to this circle inclined to each other at an angle of 45 Draw a line segment AB of length 8 cm. Taking A as centre , draw a circle of radius 4 cm and taking B as centre , draw another circle of radius 3 cm. Construct tangents to each circle form the centre of the other circle. Draw a circle with the help of a bangle. Take any point P outside the circle. Construct the pair of tangents form the point P to the circle Draw a circle with center O and radius 4 cm. Draw any diameter AB of this circle. Construct tangents to the circle at each of the two end points of the diameter AB. Draw a circle of radius 3.5 cm. Take two points A and B on one of its extended diameter, each at a distance of 5 cm from its center. Draw tangents to the circle from each of these points A and B. 2. Draw two tangents to a circle of radius 3.5 cm form a point P at a distance of 6.2 cm form its centre. Draw a circle of radius 3 cm. Form a point P, 7 cm away from the centre of the circle, draw two tangents to the circle. Also, measure the lengths of the tangents. Constructions, Maths, RS Aggarwal Draw a right triangle in which the sides (other than hypotenuse) are of lengths 4 cm and 3 Construct an isosceles triangles whose base is 8 cm and altitude 4 cm and then another To construct a triangle similar to Find the eccentricity of an ellipse whose latus rectum is one half of its major axis. CBSE, Class 11, Ellipse, Maths, RS Aggarwal Find the eccentricity of an ellipse whose latus rectum is one half of its minor axis. Find the equation of an ellipse whose eccentricity is , the latus rectum is , and the center is at the origin. Find the equation of an ellipse, the lengths of whose major and mirror axes are units respectively. Find the equation of the ellipse which passes through the point and having its foci at Find the equation of the ellipse with eccentricity , foci on the y-axis, center at the origin and passing through the point Given Eccentricity = \[\frac{3}{4}\] We know that Eccentricity = c/a Therefore,c=\[\frac{3}{4}\]a Find the equation of the ellipse with center at the origin, the major axis on the x-axis and passing through the points Given: Center is at the origin and Major axis is along x – axis So, Equation of ellipse is of the form \[\frac{{{x}^{2}}}{{{a}^{2}}}+\frac{{{y}^{2}}}{{{b}^{2}}}=1\]…(i) Given that ellipse passing... Find the equation of the ellipse whose foci are at Given: Coordinates of foci = \[\left( \mathbf{0},\text{ }\pm \mathbf{4} \right)\] …(i) We know that, Coordinates of foci = \[\left( 0,\text{ }\pm c \right)\] …(ii) The coordinates of the foci are... and e=1/2 Let the equation of the required ellipse be \[\frac{{{x}^{2}}}{{{a}^{2}}}+\frac{{{y}^{2}}}{{{b}^{2}}}=1\] Given: Coordinates of foci = \[\left( \pm 1,\text{ }0 \right)\] …(i) We know that,... Find the equation of the ellipse whose foci are and the eccentricity is Let the equation of the required ellipse be Given: Coordinates of foci = \[\left( \pm 2,\text{ }0 \right)\]…(iii) We know that, Coordinates of foci = \[\left( \pm c,\text{ }0 \right)\]…(iv) ∴ From... Find the equation of the ellipse the ends of whose major and minor axes are respectively. 
Given: Ends of Major Axis = \[\left( \pm \mathbf{4},\text{ }\mathbf{0} \right)\] and Ends of Minor Axis = \[\left( \mathbf{0},\text{ }\pm \mathbf{3} \right)\] Here, we can see that the major axis is... Find the equation of the ellipse whose vertices are the and foci at Given: Vertices = \[(0,\pm 4)\] …(i) The vertices are of the form = (0, ±a) …(ii) Hence, the major axis is along y – axis ∴ From eq. (i) and (ii), we get \[\begin{array}{*{35}{l}} a\text{ }=\text{... Find the equation of the ellipse whose vertices are at Given: Vertices = \[\left( \pm \mathbf{6},\text{ }\mathbf{0} \right)\] …(i) The vertices are of the form = \[\left( \pm a,\text{ }0 \right)\] …(ii) Hence, the major axis is along x – axis ∴ From eq.... Find the (v) length of the latus rectum of each of the following ellipses. Given: \[\mathbf{25}{{\mathbf{x}}^{\mathbf{2}}}+\text{ }\mathbf{4}{{\mathbf{y}}^{\mathbf{2}}}=\text{ }\mathbf{100}\] Divide by \[100\] to both the sides, we get... Find the (iii) coordinates of the foci, (iv) eccentricity Find the (i) lengths of major axes, (ii) coordinates of the vertices CBSE, Class 11, RS Aggarwal Given: \[\mathbf{16}{{\mathbf{x}}^{\mathbf{2}}}+\text{ }{{\mathbf{y}}^{\mathbf{2}}}=\text{ }\mathbf{16}\] Divide by \[16\] to both the sides, we get... Answer: = cot x = RHS ∴ LHS = RHS Hence Proved Given: \[\mathbf{9}{{\mathbf{x}}^{\mathbf{2}}}+\text{ }{{\mathbf{y}}^{\mathbf{2}}}=\text{ }\mathbf{36}\] Divide by \[36\] to both the sides, we get \[\frac{9}{36}{{x}^{2}}+\frac{1}{36}{{y}^{2}}=1\]... Answer: = tan x = RHS ∴ LHS = RHS Hence Proved Given: \[\mathbf{3}{{\mathbf{x}}^{\mathbf{2}}}+\text{ }\mathbf{2}{{\mathbf{y}}^{\mathbf{2}}}=\text{ }\mathbf{18}\]…(i) Divide by \[18\] to both the sides, we get... Answer: Taking LHS = cos x + sin x = RHS ∴ LHS = RHS Hence Proved If cos x = -1/3 , find the value of cos 3x Answer; We know that, cos 3x = 4cos3 x – 3 cosx Putting the values, we get If sinx = 1/6, find the value of sin 3x. Answer: To find: sin 3x We know that, sin 3x = 3 sinx – sin3 x Putting the values, we get Given: \[\frac{{{x}^{2}}}{9}+\frac{{{y}^{2}}}{16}=1\]….(i) Since, \[9<16\] So, above equation is of the form, \[\frac{{{x}^{2}}}{{{b}^{2}}}+\frac{{{y}^{2}}}{{{a}^{2}}}=1\]…(ii) Comparing eq. (i)... Given: \[\frac{{{x}^{2}}}{9}+\frac{{{y}^{2}}}{16}=1\]….(i) Since, \[9<16\] So, above equation is of the form, \[\frac{{{x}^{2}}}{{{b}^{2}}}+\frac{{{y}^{2}}}{{{a}^{2}}}=1\]…(ii) Comparing... If , find the values of tan 2x Answer: We know that: Given: \[\frac{{{x}^{2}}}{4}+\frac{{{y}^{2}}}{25}=1\]…(i) Since, \[4\text{ }<\text{ }25\] So, above equation is of the form, \[\frac{{{x}^{2}}}{{{b}^{2}}}+\frac{{{y}^{2}}}{{{a}^{2}}}=1\]…(ii)... If , find the values of cos 2x If , find the values of sin 2x Answer: We know that Given: \[\mathbf{4}{{\mathbf{x}}^{\mathbf{2}}}+\text{ }\mathbf{9}{{\mathbf{y}}^{\mathbf{2}}}=\text{ }\mathbf{1}\] \[\frac{{{x}^{2}}}{\frac{1}{4}}+\frac{{{y}^{2}}}{\frac{1}{9}}=1\]….(i) Since,...
Diffusive-like redistribution in state-changing collisions between Rydberg atoms and ground state atoms
Philipp Geppert, Max Althön, Daniel Fichtner & Herwig Ott
Subjects: Atomic and molecular collision processes; Techniques and instrumentation; Ultracold gases
Exploring the dynamics of inelastic and reactive collisions on the quantum level is a fundamental goal in quantum chemistry. Such collisions are of particular importance in connection with Rydberg atoms in dense environments, since they may considerably influence both the lifetime and the quantum state of the scattered Rydberg atoms. Here, we report on the study of state-changing collisions between Rydberg atoms and ground state atoms. We employ high-resolution momentum spectroscopy to identify the final states. In contrast to previous studies, we find that the outcome of such collisions is not limited to a single hydrogenic manifold. We observe a redistribution of population over a wide range of final states. We also find that even the decay to states with the same angular momentum quantum number as the initial state, but a different principal quantum number, is possible. We model the underlying physical process in the framework of a short-lived Rydberg quasi-molecular complex, where a charge exchange process gives rise to an oscillating electric field that causes transitions within the Rydberg manifold. The distribution of final states shows a diffusive-like behavior.
The understanding of collisions between Rydberg atoms and ground state atoms has a long history and dates back to seminal work done by Fermi1. Today, such processes are important for low-temperature plasma physics2, astrophysical plasmas3, and ultracold atom experiments, which have found in Rydberg physics a perfect match to explore ultracold chemistry and many-body physics: On the one hand, the high control over the internal and external degrees of freedom in an ultracold atomic gas enables the study of new phenomena in the field of Rydberg physics, such as Rydberg molecules4, Rydberg blockade5, Rydberg antiblockade6,7, and coherent many-body dynamics8. On the other hand, the same control can now be used to study established processes in a detailed fashion, thus unraveling the underlying microscopic physical mechanisms. This way, the state-resolved study of inelastic collisions and molecular decay processes involving Rydberg atoms has become possible.
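For orientation, the energy scales probed in these collisions can be estimated from the hydrogen-like Rydberg formula with quantum defects. The short Python sketch below does this for the initial nP states used here and for the (n − 3) hydrogenic manifold, which turns out to be a prominent decay channel; the quantum defect value is an approximate literature number for rubidium and an assumption on our part, not a value quoted in this article.

```python
# Estimate the energy released when a Rb nP Rydberg atom decays to the
# (n - 3) hydrogenic manifold (illustrative sketch, hydrogen-like formula).
RY_EV = 13.6057      # Rydberg energy in eV
DELTA_P = 2.64       # approximate quantum defect of Rb nP states (assumed literature value)

def binding_energy_eV(n, delta=0.0):
    """Binding energy E = Ry / (n - delta)^2 of a Rydberg state."""
    return RY_EV / (n - delta) ** 2

for n_init in range(20, 28):
    e_init = binding_energy_eV(n_init, DELTA_P)     # initial nP state
    e_final = binding_energy_eV(n_init - 3)         # (n - 3) hydrogenic manifold (delta ~ 0)
    release_meV = (e_final - e_init) * 1e3          # energy set free in the collision
    print(f"{n_init}P -> {n_init - 3}Hy: ~{release_meV:.2f} meV released")
```

For a 25P initial state this gives roughly 0.9 meV, i.e., the release energies are in the millielectronvolt range and thus many orders of magnitude larger than the thermal energy of the ultracold sample.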
Collisions between a Rydberg atom and a ground state atom can have several possible outcomes. Here, we are interested in such collisions of both partners, where the Rydberg atom undergoes a transition to a lower-lying state, while the excess energy is converted into kinetic energy of the atoms. Such collisions have been studied in detail by Schlagmüller et al.9. They are important for the understanding of recombination processes in plasmas, for the quantitative understanding of inelastic processes in Rydberg gases10 and the decay dynamics of ultralong-range Rydberg molecules (ULRMs). The microscopic details of such a collision involve the physics of ULRMs4, where s- and p-wave scattering between the Rydberg electron and the ground state atom determine the potential energy landscape at large internuclear distances. At short internuclear distances, however, the covalent molecular binding mechanisms take over and dominate the molecular dynamics. The total scattering process therefore probes the potential energy landscape at all internuclear distances. Thus, the understanding of such a process needs the modeling of both the ultralong-range potential energy landscape as well as that at short internuclear distances. An experimental in-depth study requires the state-selective detection of the reaction products. Only then, it is possible to access branching ratios and selection rules and one can compare the experimental outcome to effective theoretical models. Magneto-optical trap recoil ion momentum spectroscopy (MOTRIMS)11,12,13,14,15,16,17,18,19,20,21,22 is such a technique, which has been used to perform momentum spectroscopy of atomic and molecular processes with high resolution. Inspired by the MOTRIMS technique, we have developed a high-resolution momentum microscope, which enables the study of inelastic processes involving Rydberg atoms. Here, we use this technique to investigate inelastic collisions between Rydberg atoms and ground state atoms. In order to increase the collision rate, we excite vibrational states of ULRMs with principal quantum numbers between n = 20 and n = 27 as initial state and observe the subsequent dynamics. Ultralong-range Rydberg molecules The interaction of a ground state atom and a Rydberg atom at large internuclear distances is mediated by low-energy scattering between the Rydberg electron and the ground state atom, also denoted as perturber atom. For rubidium, the potential energy at separations larger than the extent of the electronic wave function is given by the energy levels of an isolated Rydberg atom, \({E}_{nl}\propto -1/{(n-{\delta }_{l})}^{2}\), where n is the principal quantum number. The quantum defect δl causes a significant splitting of the potential energies only for angular momentum quantum numbers l ≤ 2. One therefore has to distinguish between energetically isolated low-l (S, P, D) states with significant quantum defects and high-l hydrogenic manifolds. For smaller internuclear distances, the scattering interaction between the Rydberg electron and the ground state atom leads to oscillatory potentials, which support molecular states. The potential energy landscape is shown in Fig. 1. We employ these so-called ULRMs with bond lengths of ≈900 a0 as starting point for our measurements and restrict ourselves to the regime of low principal quantum numbers owing to the n−6 scaling of the outer wells' depths23. As initial state, we chose molecular states near the atomic Rydberg P-resonance (see inset of Fig. 
1), which are excited using a three-photon transition. For a brief introduction to ULRMs, see Methods.
Fig. 1: Adiabatic potential energy curves (PECs) of rubidium ULRMs in the vicinity of the 25P-state. The annotations to the right denote the terms of the asymptotic free Rydberg states. The starting point of our studies is the preparation of rubidium ultralong-range Rydberg molecules (ULRMs), which are bound vibrational states supported by the outermost potential wells at internuclear distances of up to 1000 a0. The inset shows a zoom into the ultralong-range part of the potential energy landscape with a selection of vibrational wavefunctions (gray curves). Here, we specifically excite vibrational states near the atomic resonance, as highlighted in red for the particular case of a 25P-ULRM. As time evolves, the ground state atom tunnels toward the ionic core (green arrows), following the R⁻⁴-interaction-dominated PECs (blue shaded area) up to the region where short-range molecular couplings are dominant. The red arrows indicate possible outcomes of a state-changing collision, where the release energy is translated into kinetic energy. Details of the calculation of such PECs are provided in the "Methods" section.
State-changing collisions
The excited molecular states are only weakly bound and the wave function extends over several wells up to the inner part of the potential landscape. As a consequence, there is a finite probability for the ground state atom to tunnel toward the ionic core of the Rydberg atom. This way, we mimic an inelastic collision between a free ground state atom and a Rydberg atom, up to a vanishingly small mismatch in energy, which stems from the binding energy of the Rydberg molecule. In other words, the outcome of the collision remains unaffected by the presence of ULRMs. However, unlike initially free atoms, they are capable of mediating collisions with well-defined starting conditions, which makes them an ideal test bed to systematically study the dynamics of inelastic collisions in both a controlled and precise way. The molecular dynamics is initially dominated by the low-energy scattering between the electron and the ground state atom Vp(∣r − R∣) (second term in Eq. (8)) and the ion-neutral polarization potential Vc,g(R) ∝ R⁻⁴ (Eq. 9). In the case of alkali atoms, Vp(∣r − R∣) shows a prominent attractive feature, the so-called butterfly potential24,25,26, see Fig. 1. For our initial states, there is a finite probability of adiabatically following the butterfly potential energy curve (PEC), which accelerates the ground state atom toward the ionic core. At shorter internuclear distances, Vc,g(R) takes over and further accelerates the collision process (blue shaded area in Fig. 1). Up to this point, the dynamics can be considered as understood. For even shorter internuclear distances, the ionic core directly interacts with the ground state atom and the Rydberg electron becomes a spectator. As we will detail later on, it is this short-range physics that is mainly responsible for two distinct processes. The first one is associative ionization, which reads for the case of rubidium $$\text{Rb}^{*}+\text{Rb}\to \text{Rb}_{2}^{+}+\text{e}^{-}+\Delta E_{\text{b}},$$ where ΔEb is the release energy due to the chemical bond of the molecular ion.
The second one is a state-changing collision, resulting in an exoergic reaction $$\text{Rb}^{*}({n}_{\text{i}},{l}_{\text{i}})+\text{Rb}\to \text{Rb}^{*}({n}_{\text{f}}\le {n}_{\text{i}},{l}_{\text{f}})+\text{Rb}+\Delta E,$$ where the indices i and f denote the initial and final state, respectively. ΔE is the released energy, which is transformed into kinetic energy of the ground state atom and the Rydberg atom. In the present work, we address this second type of collision by directly measuring the momenta of the Rydberg atoms using high-resolution state-resolved momentum spectroscopy. This method enables a clear identification of the final states (sketched as red arrows in Fig. 1) and makes it possible to investigate the distribution of population after the collision.
Recoil-ion momentum spectroscopy
To measure the momentum distributions of Rydberg atoms with high resolution, we have adapted the MOTRIMS technique11,12,13,14,15,16,17,18,19,20,21,22, included an optical dipole trap, and implemented a so-called reaction microscope. The image at the top of Fig. 2 shows a three-quarter section CAD drawing of the essential parts of our experimental apparatus.
Fig. 2: Recoil-ion momentum spectrometer and experimental sequence. a CAD drawing of the experimental setup in three-quarter section view. Laser-cooled 87Rb atoms are trapped in a crossed optical dipole trap (red beams) and excited by a three-photon transition (depicted as blue beam) to a Rydberg state. By using a high-power CO2 laser pulse (green beam), the Rydberg atoms are photoionized efficiently. The ions then follow two subsequent homogeneous electric fields and traverse a field-free drift tube before hitting the position- and time-sensitive detector. This method allows the measurement of momentum distributions of initially neutral particles with high resolution. The lower panel shows a sketch illustrating the state-changing collision process and the experimental procedure to measure the resulting momenta. b The experiment starts with the photoassociation of an ULRM, where the initial state of the Rydberg electron (blue) is \(\left|{n}_{\text{i}},{l}_{\text{i}}=1\right\rangle\). Subsequently, the Rydberg core (red) and the neutral atom (green) approach each other. c During the inelastic collision, the Rydberg electron changes its state (the final principal quantum number nf is less than or equal to ni). The final angular momentum quantum number lf may be any value between 0 and (nf − 1). The release energy is apportioned equally between the Rydberg atom and the perturber atom, which fly in opposite directions due to momentum conservation. d The Rydberg atom is photoionized by a CO2 laser pulse. e Electric fields guide the ionic core toward the detector without changing the transverse momentum. From the point of impact, the momentum of the Rydberg atom can be inferred.
Each experimental sequence starts with the trapping of precooled 87Rb atoms in a three-dimensional magneto-optical trap (3D MOT). The atoms are then transferred to a crossed optical dipole trap with a wavelength of 1064 nm (illustrated as red beams in Fig. 2) and trapping frequencies of ωx = 2π × 2.8 kHz, ωy = 2π × 1.4 kHz, and ωz = 2π × 3.1 kHz. After a short evaporation, the sample consists of more than 3 × 10⁴ atoms, prepared in the \(|5{\text{S}}_{1/2},F=1\rangle\) ground state, with a temperature of ≈100 μK and a peak density of 1.9 × 10¹³ atoms/cm³.
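These sample parameters can be cross-checked for internal consistency: for a thermal cloud in a harmonic trap, the Gaussian cloud widths follow from the temperature and the trap frequencies, and the peak density from the atom number. The sketch below performs this order-of-magnitude estimate; it is our own consistency check (note that the quoted atom number is only a lower bound), not part of the published analysis.

```python
import math

# Order-of-magnitude check of the peak density of a thermal cloud in a
# harmonic trap: n_peak = N / ((2*pi)^(3/2) * sigma_x * sigma_y * sigma_z).
KB = 1.380649e-23        # Boltzmann constant (J/K)
M_RB87 = 1.443e-25       # mass of 87Rb (kg)

N_ATOMS = 3e4            # "more than 3e4 atoms" -> treated as a lower bound
T = 100e-6               # temperature (K)
omegas = [2 * math.pi * f for f in (2.8e3, 1.4e3, 3.1e3)]   # trap frequencies (rad/s)

sigmas = [math.sqrt(KB * T / (M_RB87 * w ** 2)) for w in omegas]   # cloud widths (m)
volume = (2 * math.pi) ** 1.5 * sigmas[0] * sigmas[1] * sigmas[2]
n_peak = N_ATOMS / volume                                          # peak density (1/m^3)

print("cloud widths (um):", [round(s * 1e6, 1) for s in sigmas])
print(f"peak density >~ {n_peak * 1e-6:.1e} atoms/cm^3")   # same order as the quoted 1.9e13 cm^-3
```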
Rydberg states nP3/2 with principal quantum numbers n between 20 and 27 are addressed via an off-resonant three-photon transition employing the 5P3/2 and 5D5/2 states as intermediate states. The radiation at 780 nm (first step), 776 nm (second step), and 1280–1310 nm (third step) is provided by frequency-stabilized diode laser systems. While two excitation lasers are applied from the same direction, the third one is counterpropagating (depicted as blue beam in Fig. 2). The corresponding detunings amount to δ5P = −60 MHz and δ5D = +45 MHz, respectively. As the photon energy of the dipole trap beams is sufficient to photoionize atoms in the 5D5/2 state (photoionization cross section \(\gtrsim 17\ {\rm{Mb}}\approx 0.6\ {{\rm{a}}}_{0}^{2}\)), the dipole trap is switched off prior to the 1 μs long excitation pulse. Subsequently, the atoms are recaptured, such that we can perform up to 100 experiments per sample without losing too much (≲25%) density. At the bottom of Fig. 2, we illustrate the microscopic physical processes. Starting with the photoassociation of ULRMs, we wait for a total of 2 μs during which the inelastic collisions take place and the Rydberg atom changes its state (Fig. 2a, b). Since the final states are energetically lower than the initial state, the release energy is translated into kinetic energy, which is shared by the Rydberg atom and the ground state atom. Due to momentum conservation, both constituents move in opposite directions. Subsequently, the Rydberg atoms are photoionized by a short pulse from a high-power CO2 laser (Fig. 2c). With a photoionization cross section of tens of megabarns27,28, the ionization process is very efficient. The recoil momentum caused by the photoionization is two orders of magnitude smaller than the typical momenta of the investigated processes, such that the created ion has, to a good approximation, the same momentum as the Rydberg atom. The ion then follows two sections of homogeneous electric fields and traverses a drift tube with zero electric field before hitting a position- and time-sensitive microchannel plate delay-line detector (Fig. 2d). This configuration is referred to as a Wiley–McLaren spectrometer29, which, in particular, provides space and time focusing of the ions, i.e., ions with the same momentum hit the detector at the same position and at the same time, independent of their initial position in the trap. As a result, we are able to measure momentum distributions of initially neutral atoms with resolutions better than 0.1 ℏ/a0, depending on the chosen electric fields.
Momentum spectroscopy of state-changing collisions
The outcome of our experiments is two-dimensional momentum distributions as shown in Fig. 3 for the 25P-state. A large part of the ions accumulates at the center, where the transverse momentum is close to zero. These ions stem from photoionized ULRMs which have not undergone a state-changing collision or from facilitated off-resonantly excited Rydberg atoms6. Around the center, two concentric circular structures are visible. The circular shape arises from a projection of a three-dimensional spherical shell in momentum space onto the surface of the detector. The sharp boundaries of the circles thereby correspond to momentum vectors perpendicular to the spectrometer axis.
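Conceptually, the momentum reconstruction in such a spectrometer is simple: no force acts on the ion transverse to the spectrometer axis, so the transverse momentum follows from the displacement of the impact point relative to the zero-momentum position divided by the total time of flight. The sketch below illustrates this relation; the time of flight and the hit position in the example are made-up placeholder numbers, not values from this experiment.

```python
# Reconstruct the transverse recoil momentum of a Rb+ ion from its impact
# position on the detector (illustrative sketch with placeholder numbers).
HBAR = 1.054571817e-34   # J*s
A0 = 5.29177210903e-11   # Bohr radius (m)
M_RB87 = 1.443e-25       # mass of 87Rb (kg)
P_AU = HBAR / A0         # hbar/a0, the momentum unit used in the figures (kg*m/s)

def transverse_momentum(dx, dy, t_tof):
    """Transverse momentum (in units of hbar/a0) from the displacement (dx, dy)
    of the hit relative to the zero-momentum impact point and the time of
    flight t_tof; transversally the ion moves force-free, so p = m * dx / t."""
    return M_RB87 * dx / t_tof / P_AU, M_RB87 * dy / t_tof / P_AU

# Placeholder example: a hit 0.35 mm off-center after a 10 us time of flight
print(transverse_momentum(0.35e-3, 0.0, 10e-6))   # roughly (2.5, 0.0) hbar/a0
```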
As the initial state is well defined, the energy differences ΔE to each of the lower-lying states and hence the momenta p of the Rb+ ions with the mass mRb can be calculated using $$p=\frac{\hslash \sqrt{2{m}_{\text{Rb}}{{\Delta }}E}}{{a}_{0}{E}_{\text{H}}{m}_{\text{e}}},$$ where ℏ denotes the reduced Planck constant, a0 the Bohr radius, EH the Hartree energy, and me the electron rest mass. This allows us to identify the different shells in the momentum spectra. Fig. 3: Detector image resulting from the decay of 25P-ULRMs. The center of the plot consists of ions with vanishing transverse momentum, stemming mainly from photoionized ULRMs that have not undergone a state-changing collision. The two concentric circles at momenta 2.5 and 5.0 ℏ/a0 are due to the decay into the 22Hy and 21Hy hydrogenic manifolds. The feature in the lower right quadrant is due to technical issues of the detection unit. Sum of 107 experimental runs, normalized to the maximum number of ion counts. We have verified that resonant or even blue-detuned excitations of Rydberg atoms give rise to similar product-state distributions. In this case, however, the fraction of atoms which undergo a state-changing collision is substantially smaller as the excited Rydberg atom has to find a collisional partner during the short evolution time until the ionization pulse is applied. Considering a full detector image, as shown in Fig. 4 for the case of 20P-ULRMs, a large range of final states becomes apparent. Here, we cannot only observe the decay into manifolds as low as n = 12; also states lying in between the manifolds are clearly visible. The deviations from the circular structure are artefacts caused by unshielded rods which hold the electrodes of the spectrometer and are set on a higher electric potential. As a consequence, the ions experience a Coulomb repulsion from four directions and the resulting momentum distribution appears to be pincushion-distorted. This especially applies to ions with large radial momenta moving at large distances from the spectrometer axis. Nevertheless, the final states can unambiguously be identified and evaluated. Fig. 4: Full detector image for state-changing collisions of 20P-ULRMs. Final states with principal quantum numbers down to n = 12 are clearly visible. In addition, the decay to the 18D- and 19P-state can be observed, indicating the presence of low-l final states. The color code is the same as in Fig. 3. Deviations from the circular shape are due to design-related imperfections, causing a repulsion of the ions along the two diagonals. The artifacts in the corners of the detector plane are due to technical issues of the detection unit. We first concentrate our analysis on transitions from the initial state to the two lower-lying manifolds and the low-l states in between. For a quantitative analysis of the momentum spectra we show the radial profile of the momentum distribution in Fig. 5. Since the outcome of our experiments is two-dimensional projections of initially three-dimensional spherical shells, we make use of Abel transformations. To account for the finite thickness of the shells, which corresponds to the momentum uncertainty, we assume for the three-dimensional momentum distribution a Gaussian distribution of width σ and amplitude A, which is shifted isotropically by the radius R of the respective shell $$f(r,R,\sigma )=\frac{A}{\sqrt{2\pi {\sigma }^{2}}}\exp \left(-\frac{{(r-R)}^{2}}{2{\sigma }^{2}}\right),$$ where r2 = x2 + y2 + z2. 
The two-dimensional projection along the z-axis is then given by the Abel transform of Eq. (4) $$F(\rho ,R,\sigma ) =\int \nolimits_{-\infty }^{\infty }f(r,R,\sigma )\,{\text{d}}\,z\\ =\int \nolimits_{\rho }^{\infty }2\cdot f(r,R,\sigma )\cdot \frac{r}{\sqrt{{r}^{2}-{\rho }^{2}}}\,{\text{d}}\,r,$$ in which ρ2 = x2 + y2. Fig. 5: Angular integrated radial profile of the momentum distribution shown in Fig. 3. The integration is performed over a circle segment, where the spectrum shows the least distortion. The center peak is omitted due to scaling purposes. Four peaks are visible. The two most pronounced peaks correspond to manifolds 22Hy and 21Hy. In between, the 23D- and 24P-states are visible. We do not see signatures of a decay to the 25S-state, which should appear between 22Hy and 23D. For higher radii, only the envelopes of the peaks are visible. Due to the azimuthal integration, the final fit function is given by ρ ⋅ F(ρ, R, σ). To account for the appearance of multiple peaks, we fit a sum of peaks to the data. For the radial profile shown in Fig. 5, for instance, we fitted a total of six peaks, of which four are sharp and clearly identifiable. The remaining two peaks are rather small and broad, which is mainly due to the continuously increasing distortion for higher transverse momenta and, hence, larger radii of the underlying momentum distribution. Thus, only the envelope of several peaks rather than individual peaks indicating the final states after the collision can be resolved. In summary, we find good agreement with our experimental data. From the fit, we extract the amplitudes, the momenta, and their uncertainties, encoded in A, R, and σ. Particularly, this allows us to evaluate the relative amplitude for each final state. In addition to the two manifolds visible in Fig. 3, we can identify two more peaks stemming from the 23D and 24P final states. A systematic analysis for the initial quantum numbers nP, with n ∈ {20, 22, 25, 27} is shown in Fig. 6, where we plot the momenta of the fitted peaks in dependence of n. The expected momenta as calculated from the release energy are plotted as solid lines, where the thickness accounts for the spectral width of each final state including fine structure splitting and finite quantum defects of F- and G-states, which we include in the respective energetically close manifolds. In this procedure, we have introduced one global scaling factor to match experiment and theory, while the relative momenta remain conserved. The perfect agreement visible in Fig. 6 allows us to use this scaling factor as a calibration factor for all our momentum spectra. Fig. 6: Momenta resulting from state-changing collisions in dependence of the initial state. Depicted are the first four peaks, deduced from the fits to the respective radial profile (Fig. 5). The theoretically expected momenta are plotted as color-coded lines, where the thickness of the lines accounts for the momentum width (see text). The error bars indicate the momentum uncertainty as given by the fit parameter σ of the Abel transform. In order to evaluate all detectable final states, we also include the molecular ions, which are created at short internuclear distances through associative ionization. Their signal is readily distinguished via their longer time of flight. The respective signals are then normalized to the total number of events \({N}_{\text{tot}}={N}_{{\text{Rb}}_{2}^{+}}+{N}_{{\text{Rb}}^{+}}\). The results of this first part of the analysis are summarized in Table 1. 
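To make the fit model explicit, the sketch below evaluates the azimuthally integrated profile ρ·F(ρ, R, σ) obtained from the Abel projection of the Gaussian shell of Eq. (4), and converts a release energy into the corresponding shell radius. The release energy is taken to be shared equally between the two equal-mass collision partners, so each atom carries p = √(2μΔE) with the reduced mass μ = mRb/2; this reading, the example release energy of 0.9 meV and the shell width of 0.3 ℏ/a0 are our assumptions for illustration, not the authors' analysis code.

```python
import math
from scipy.integrate import quad

HBAR = 1.054571817e-34      # J*s
A0 = 5.29177210903e-11      # Bohr radius (m)
EV = 1.602176634e-19        # J per eV
M_RB87 = 1.443e-25          # mass of 87Rb (kg)
P_AU = HBAR / A0            # atomic unit of momentum, hbar/a0

def shell_radius(delta_E_eV):
    """Recoil momentum (hbar/a0) of each atom for a release energy delta_E,
    shared between the two equal-mass partners (p = sqrt(2*mu*dE), mu = m/2)."""
    mu = M_RB87 / 2.0
    return math.sqrt(2.0 * mu * delta_E_eV * EV) / P_AU

def f_shell(r, R, sigma):
    """Gaussian spherical shell in 3D momentum space, Eq. (4), with amplitude A = 1."""
    return math.exp(-(r - R) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def F_projected(rho, R, sigma):
    """Projection of the shell onto the detector plane, Eq. (5): line-of-sight
    integral of f along z with r = sqrt(rho^2 + z^2)."""
    integrand = lambda z: f_shell(math.hypot(rho, z), R, sigma)
    return 2.0 * quad(integrand, 0.0, R + 10.0 * sigma + 5.0)[0]

def radial_profile(rho, R, sigma):
    """Azimuthally integrated profile rho * F(rho, R, sigma) used as fit model."""
    return rho * F_projected(rho, R, sigma)

R_ring = shell_radius(0.9e-3)                       # assumed 0.9 meV release energy
print(f"expected shell radius ~ {R_ring:.2f} hbar/a0")
print([round(radial_profile(x, R_ring, 0.3), 3) for x in (0.5, R_ring, 3.5)])
```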
Table 1 Relative population of the final states down to the (n − 4)Hy manifold and fraction of molecular ions Rb\({}_{2}^{+}\). All values are normalized to the sum of the detected atomic and molecular ions. The missing population is distributed over lower-lying states.
Inspecting Table 1, several trends are discernible. First, we note that the majority of events appear in the center of the momentum distribution. This is plausible as the signal in the center mainly consists of ions from long-lived, photoionized ULRMs or from facilitated off-resonantly excited and photoionized Rydberg atoms. We find that these processes appear to become less likely for initial states with lower principal quantum numbers, in favor of state-changing collisions. Here, we particularly observed a pronounced decay to the (n − 3) hydrogenic manifold, which corroborates the experimental findings of Schlagmüller et al.9. Beyond that, our results reveal that the set of possible final states is not restricted to the first lower-lying hydrogenic manifold, but extends to numerous lower-lying states as well. The decay to isolated low angular momentum states, including P-states, is also possible, which means that the observed collisions do not necessarily need to alter the angular quantum number. We therefore call these processes state-changing collisions rather than l-changing collisions, as they are commonly referred to in the literature (e.g., refs. 30,31,32). Surprisingly, we could not find any indication of a decay into a final state with l = 0. Evaluating our signal-to-noise ratio, such processes are at least suppressed by a factor of 10. The situation changes for the decay into hydrogenic manifolds, which appear to be more likely than the decay into isolated low-l states. This is plausible as each manifold comprises a large number of available final states. Moreover, the autoionization resonance width of Rydberg atoms becomes narrower for larger angular momentum quantum numbers, thus suppressing the formation of molecular ions33. Of particular interest is also the ratio between state-changing collisions and Rb\({}_{2}^{+}\) formation. Evaluating the distribution listed in Table 1, a clear trend becomes apparent. When starting with a 27P-ULRM, for instance, the occurrence of state-changing collisions is of the same order of magnitude as the formation of molecular ions (6.7% compared to 3.6%). For lower initial states, however, we find a significantly enhanced incidence of state-changing collisions of up to 28.5% (for 20P-ULRMs), exceeding the fraction of molecular ions (2.3%) by more than one order of magnitude. These observations are new compared to previous studies9, where only the decay into one lower-lying manifold could be experimentally observed. However, there are also substantial differences concerning the experimental prerequisites. First of all, in ref. 9, initial states with much higher principal quantum numbers (n ≥ 40) have been used. In this regime, enhanced state mixing gives rise to strong long-range couplings such that the dynamics predominantly occurs at large internuclear distances. A crucial role in this context is played by the direct coupling between the butterfly PEC and the trilobite PEC (see also Fig. 1), which is assumed to be responsible for the decay into the next lower-lying manifold9. For initial states with lower principal quantum numbers, however, this coupling is strongly suppressed and cannot explain the observed redistribution of population.
Instead, the probed dynamics is characterized by short-range interactions (see "Methods" section) rather than adiabatic couplings in the long-range part of the potential energy landscape and must therefore be clearly distinguished from the processes observed by Schlagmüller et al.9. Another relevant difference in this connection is the number of ground state atoms inside the wave function of the Rydberg electron, which could considerably influence the collision process. Due to the low principal quantum numbers of the initial states used in our experiment, we expect this number, on average, to be well below 1. In ref. 9, however, not only the principal quantum numbers of the initial states but also the densities of the atomic sample are higher, which results in an average of up to 1000 ground state atoms within the orbit of the Rydberg electron, bound to one single Rydberg core. Apart from that, it should be noted that not only the principal quantum numbers but also the angular momentum quantum numbers of the initial states are different. In fact, instead of nP-states, the experiments performed by Schlagmüller et al.9 rely on nS-states. Looking at the molecular PECs in Fig. 1, it becomes obvious that the initial dynamics starting from those states is quite different due to the involved quantum defects. Our approach is especially sensitive to small signals at high momenta, thus facilitating the detection of lower-lying states due to a high signal-to-noise ratio. Only this way, it is possible to resolve the redistribution of population as a result of state-changing collisions. Due to major differences concerning both the experimental approach and the probed dynamics, our findings are distinct from previously reported results presented in ref. 9. Short-range dynamics We now turn our attention to the details of the population distribution between the final states. In Fig. 4, one can clearly see the decay into six hydrogenic manifolds when starting with a 20P-ULRM. For simplicity, we restricted ourselves to the manifolds only and neglect low-l states, which are, by comparison, substantially suppressed. The measured population in each manifold is shown in Fig. 7. One can clearly see a continuous decrease of the signal, with substantial weight even at the lowest detectable quantum number (n = 12). In order to explain the wide distribution of final states, we have to look at the microscopic details of the molecular PEC couplings. In ref. 9, it was argued that the decay into the lower-lying manifold is due to a direct coupling of the butterfly PEC with the trilobite PEC. For an initial nP-state with low principal quantum number, the direct coupling between the butterfly state and the trilobite state becomes small and cannot fully explain the strong decay into the next lower-lying manifold and even less the distribution among the other final states observed in our experiment. Considering couplings within the long-range part of the PEC landscape only is therefore not expedient. In fact, to understand this process one has to look in more detail at the molecular dynamics at short internuclear distances. Fig. 7: Distribution of the final state population upon state-changing collisions starting from 20P-ULRMs. The data are normalized to the sum of events that ended up in the respective states. The diffusive model (see text) is shown as green dashed line and reproduces the experimental data well. The error bars indicate systematic uncertainties of the determined populations. Following ref. 
34, inelastic collisions between Rydberg atoms and ground state atoms can often be subdivided into three phases: (1) the approach of the particles until they strongly couple to each other, (2) the formation of a Rydberg quasi-molecular complex at short internuclear distances, and (3) the outcome of the collision, a state-change followed by dissociation or associative ionization. Applying this principle to the present case, the ionic core of the Rydberg atom and the neutral atom first approach each other, following the long-range PECs. When the internuclear distance has reached values ≲30 a0, the subsequent dynamics can be described in the framework of a Rydberg diatomic quasi-molecular complex. This complex consists of two positively charged Rb+ cores, a generalized valence electron stemming from the ground state atom and a Rydberg electron, which is shared by the molecular ionic core Rb\({}_{2}^{+}\) (see Fig. 8). At such short internuclear distances, the so-called dipole resonant mechanism35,36 becomes active. When the ionic core and the ground state atom approach each other, the valence electron starts to tunnel between the two ionic cores. This leads to an oscillating internal dipole moment D\((t)=e\,\text{R}\,\cos (\omega t)\) of the quasi-molecule, where R denotes the distance of the two ionic cores, ω = Δ(R) is the splitting between the gerade and ungerade wave function of the inner valence electron, and e is the elementary charge. The periodic potential leads to an oscillating electric field with dipolar radiation characteristics, which can induce transitions of the Rydberg electron. In this semiclassical picture of the collision process, the varying distance between the ionic core and the ground state atom leads to a time-varying oscillation frequency. It is therefore not surprising that these mechanisms can induce many transitions within and between different Rydberg manifolds. Fig. 8: Rydberg quasi-molecular diatomic complex. The short-range dynamics of the collision between a Rydberg atom and a ground state atom can be described in the framework of a short-lived Rydberg quasi-molecular complex, where tunneling of the inner valence electron between the two ionic cores leads to an oscillating dipole moment D. As a result, the oscillating dipole induces transitions of the Rydberg electron. To illustrate these mechanisms, we show in Fig. 9 a simplified version of the PECs at short internuclear distances. All final states of the collision when starting with 20P-ULRMs are highlighted. Considering only the molecular ion, Rb\({}_{2}^{+}\), we have two PECs, Ug(R) and Uu(R) for the terms \({}^{2}{{{\Sigma }}}_{\,\text{g}\,}^{+}\) and \({}^{2}{{{\Sigma }}}_{\,\text{u}\,}^{+}\). The \({}^{2}{{{\Sigma }}}_{\,\text{g}\,}^{+}\) state is the ground state of the molecular ion and forms a deep potential well. The PEC labeled as \({}^{2}{{{\Sigma }}}_{\,\text{u}\,}^{+}\), however, is predominantly repulsive with a shallow attractive section at larger internuclear distances. The energy difference between both states ℏ × Δ(R) determines the oscillation frequency of the electron when tunneling between the two ionic cores. Fig. 9: Potential energy landscape of a Rydberg quasi-molecular complex. The molecular PECs split into two branches, which belong to the \({}^{2}{{{\Sigma }}}_{\,\text{g}\,}^{+}\) and \({}^{2}{{{\Sigma }}}_{\,\text{u}\,}^{+}\) components of the quasi-molecular ion. The highest-lying PEC of each branch belongs to Ug(R) (Uu(R)) of the molecular ion. 
All other PECs \({U}_{\,\text{g,u}\,}^{(n,l)}(R)\) are shifted by the binding energy of the Rydberg electron. For simplicity, we restrict the plot to hydrogenic manifolds only. The multicolored PECs are relevant for the description of the state-changing collision process when starting with 20P-ULRMs (see also Fig. 4). When the frequency of the oscillating dipole ω = Δ(R) exceeds the binding energy of the Rydberg electron, the complex may undergo associative ionization. The gray dashed line indicates the energy limit as defined by the asymptotic potential of the initial state Ψi. The inclusion of the Rydberg electron is now done in a trivial way by only taking care of its binding energy. This results in copies of the PECs, shifted by the binding energy of the Rydberg electron $${U}_{\,\text{g,u}\,}^{(n,l)}(R)={U}_{\text{g,u}}(R)+{E}_{\text{bind}}(n,l).$$ All other couplings, such as the spin–spin interaction, fine structure splitting, hyperfine structure effects, or the exchange interaction between the two electrons, are so small that they are negligible on the energy scale given by ℏ × Δ(R). The relevant molecular symmetry for all PECs is therefore the gerade and ungerade one of the molecular ionic core. The PECs in Fig. 9 show a plenitude of crossings between the gerade and ungerade states. Due to the resonant dipole mechanism, all of these are avoided and the molecule can undergo transitions between the different PECs, resulting in an effective redistribution of the populations during the collision. It is instructive to look at the coupling strength between the PECs. Based on the semiclassical model introduced above, we can make the following estimate: we calculated the electric field E(t) induced by the oscillating internal dipole at the classical radius of the Rydberg electron's orbit. We further simplify the system by assuming that the resulting electric field is spatially homogeneous across the Rydberg electron wave function. Together with the typical transition matrix elements between neighboring Rydberg states, given by ea0n2, we get the coupling strength ℏΩ. The resulting values of Ω are in the Terahertz range and are therefore comparable to or even higher than the energetic distance of adjacent manifolds. This complicates the molecular dynamics at short internuclear distances even further, as the coupling cannot be considered as a small perturbation to the PECs. Consequently, surface hopping models37 are expected to be not applicable (see also ref. 38). Diffusive-like redistribution of population To account for this strong mixing of the PECs, we adopt an effective model from ref. 38, where the redistribution of population between the Rydberg states is the consequence of a diffusive motion of the Rydberg electron in microwave fields38. This approach has been successfully employed for the description of collisional or thermal ionization processes36,38,39,40,41,42,43,44. As the physical mechanisms in a state-changing collision are the same, it is applicable in our case as well. The stochastic motion of the Rydberg electron is described by a diffusion equation $$\frac{\partial }{\partial t}{{\Phi }}(n,t)={\mathcal{D}}\frac{{\partial }^{2}}{\partial {n}^{2}}{{\Phi }}(n,t),$$ where Φ(n,t) is the distribution of a Rydberg electron in the space of principal quantum numbers n and \({\mathcal{D}}\) is the diffusion coefficient. Prior to the redistribution, the main principal quantum number of the initial state is given by ni. 
Due to the mixing of the butterfly state, we assume the population to be initially in a state with the principal quantum number ni = 18 with probability pi. We then solve Eq. (7) using \({\mathcal{D}}\) as a fit parameter. The resulting distribution is shown in Fig. 7 and agrees well with our experimental data, thus confirming a diffusive-like redistribution between the final states. One might wonder why the diffusive model describes the experimental observation so well, given its simplicity. In fact, the exact microscopic ingredients are much more complex, since the diffusion coefficient \({\mathcal{D}}\) in Eq. (7) is not necessarily a constant34 and the oscillating electric dipole radiation field is far from being homogeneous. Moreover, we have completely ignored the n − 1 angular momentum states of each manifold, which all have different matrix elements, and the fact that the initial butterfly state is made up of a large number of angular momentum states. However, when so many different initial angular momentum states and so many couplings of different strengths between a plenitude of PECs contribute to the system dynamics, a diffusive-like behavior might be, after all, just the most likely one. Our results might therefore be interpreted as a manifestation of the central limit theorem.
In this manuscript, we demonstrate that inelastic collisions between Rydberg and ground state atoms can result in a large range of final states. We give evidence for the decay into low angular momentum states over a large range of principal quantum numbers. We also find pronounced decay into many lower-lying hydrogenic manifolds with substantial weight. The distribution among the manifolds suggests a diffusive-like redistribution between the Rydberg states at short internuclear distances. We give a simplified explanation of this behavior in terms of redistribution of Rydberg states in microwave fields. An ab initio quantum-chemical treatment of the total collision process is a challenging task, given the different interaction mechanisms at short and large internuclear distances. Nevertheless, our results help to model parts of the collisions more accurately. Our results also have implications for the modeling of inelastic processes in many-body Rydberg systems. In the future, it will be interesting to look for effects of alignment in the initial state, where the resulting momentum distribution after the collisions becomes anisotropic. This would exploit the full 3D imaging capability of our momentum spectrometer. We expect that this development will also allow for the study of other dynamical processes in Rydberg systems such as Rydberg–Rydberg dynamics, the direct measurement of the momentum distribution of Rydberg molecules, and the study of other exotic Rydberg matter, such as heavy Rydberg systems45.
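As an illustration of how the diffusion model of Eq. (7) can be evaluated in practice, the following sketch propagates an initial population concentrated at n = 18 with a constant diffusion coefficient, using a simple explicit finite-difference scheme. The diffusion coefficient, time step and number of steps are arbitrary illustration values, not the fitted parameters behind Fig. 7.

```python
import numpy as np

# Minimal explicit finite-difference integration of the diffusion equation
# dPhi/dt = D * d^2(Phi)/dn^2 on a grid of principal quantum numbers (Eq. (7)).
n_grid = np.arange(10, 26)                    # principal quantum numbers considered
phi = np.zeros(len(n_grid))
phi[np.where(n_grid == 18)[0][0]] = 1.0       # population starts near n_i = 18

D = 1.0        # diffusion coefficient (illustration value, units of 1/time)
dt = 0.1       # time step; explicit scheme is stable for D*dt/dn^2 <= 0.5 (dn = 1 here)
steps = 30     # number of time steps (illustration value)

for _ in range(steps):
    lap = np.zeros_like(phi)
    lap[1:-1] = phi[2:] - 2 * phi[1:-1] + phi[:-2]   # discrete second derivative in n
    phi = phi + D * dt * lap                          # edge values held fixed in this sketch
    phi /= phi.sum()                                  # renormalize to relative populations

for n, w in zip(n_grid, phi):
    print(f"n = {int(n):2d}: relative population {w:.3f}")
```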
Methods
ULRMs are bound states between a Rydberg atom and at least one ground state atom. The binding results from low-energy scattering between the Rydberg electron and the ground state atom and can be expressed in the formalism of a Fermi pseudopotential1,24,46,47,48,49 $$V_{\text{e,g}}(|\mathbf{r}-\mathbf{R}|) = V_{s}(|\mathbf{r}-\mathbf{R}|) + V_{p}(|\mathbf{r}-\mathbf{R}|) = 2\pi a_{s}[k(R)]\,\delta(\mathbf{r}-\mathbf{R}) + 6\pi a_{p}[k(R)]\,\mathop{\nabla}\limits^{\leftarrow}\delta(\mathbf{r}-\mathbf{R})\mathop{\nabla}\limits^{\rightarrow},$$ where \(\mathbf{r}\) is the position of the Rydberg electron and \(\mathbf{R}\) the position of the ground state atom with respect to the Rydberg ionic core. The first term describes s-wave interactions, which dominate at sufficiently large internuclear distances. At smaller internuclear distances of a few hundred Bohr radii, the p-wave scattering interaction comes into play. In the case of alkali atoms, it equips the potential energy landscape with an attractive potential, associated with the so-called butterfly PECs24,25,26, which arise from an underlying p-wave shape resonance. Besides the electron–atom scattering, one also has to account for the attractive long-range interaction between the ionic core and the polarizable ground state atom, which is given by $$V_{\text{c,g}}(R)=-\alpha/(2R^{4})$$ with the polarizability α. The effective Hamiltonian for the Rydberg electron therefore reads $${\mathcal{H}}={\mathcal{H}}_{0}(r)+V_{\text{c,g}}(R)+V_{\text{e,g}}(|\mathbf{r}-\mathbf{R}|),$$ where \({\mathcal{H}}_{0}(r)\) is the Hamiltonian of the bare Rydberg atom. By diagonalizing this Hamiltonian in a finite set of basis states, Born–Oppenheimer PECs can be deduced. Since the energy shift due to Vs(∣r − R∣) is proportional to the electron probability density at the position of the perturber, the PECs are oscillatory functions of R with localized wells at large separations, which can support closely spaced bound vibrational states.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
Fermi, E. Sopra lo spostamento per pressione delle righe elevate delle serie spettrali. Il Nuovo Cimento (1924–1942) 11, 157 (1934). Rolston, S. L. & Roberts, J. L. Ultracold neutral plasmas. The Expanding Frontier Of Atomic Physics pp. 73-82 (2003). Klyucharev, A. N., Bezuglov, N. N., Mihajlov, A. A. & Ignjatović, L. M. Influence of inelastic Rydberg atom–atom collisional process on kinetic and optical properties of low-temperature laboratory and astrophysical plasmas. J. Phys.: Conf. Ser. 257, 012027 (2010). Shaffer, J. P., Rittenhouse, S. T. & Sadeghpour, H. R. Ultracold Rydberg molecules. Nat. Commun. 9, 1965 (2018). Urban, E. et al. Observation of Rydberg blockade between two atoms. Nat. Phys. 5, 110 (2009). Weber, T. et al. Mesoscopic Rydberg-blockaded ensembles in the superatom regime and beyond. Nat. Phys. 11, 157 (2015). Amthor, T., Giese, C., Hofmann, C. S. & Weidemüller, M. Evidence of antiblockade in an ultracold Rydberg gas. Phys. Rev. Lett. 104, 013001 (2010). Barredo, D. et al. Coherent excitation transfer in a spin chain of three Rydberg atoms. Phys. Rev. Lett. 114, 113002 (2015). Schlagmüller, M. et al. Ultracold chemical reactions of a single Rydberg atom in a dense gas. Phys. Rev. X 6, 031020 (2016). Goldschmidt, E. A. et al. Anomalous broadening in driven dissipative Rydberg systems. Phys. Rev. Lett. 116, 113001 (2016). Wolf, S. & Helm, H. Ion-recoil momentum spectroscopy in a laser-cooled atomic sample. Phys. Rev. A 62, 043408 (2000). Van der Poel, M., Nielsen, C., Gearba, M.-A. & Andersen, N. Fraunhofer diffraction of atomic matter waves: electron transfer studies with a laser cooled target. Phys. Rev. Lett. 87, 123201 (2001). Turkstra, J. et al. Recoil momentum spectroscopy of highly charged ion collisions on magneto-optically trapped Na. Phys. Rev. Lett. 87, 123202 (2001). Flechard, X., Nguyen, H., Wells, E., Ben-Itzhak, I. & DePaola, B.
Kinematically complete charge exchange experiment in the Cs+ + Rb collision system using a MOT target. Phys. Rev. Lett. 87, 123203 (2001). Nguyen, H., Fléchard, X., Brédy, R., Camp, H. & DePaola, B. Recoil ion momentum spectroscopy using magneto-optically trapped atoms. Rev. Sci. Instrum. 75, 2638 (2004). Blieck, J. et al. A new magneto-optical trap-target recoil ion momentum spectroscopy apparatus for ion-atom collisions and trapped atom studies. Rev. Sci. Instrum. 79, 103102 (2008). DePaola, B., Morgenstern, R. & Andersen, N. MOTRIMS: magneto-optical trap recoil ion momentum spectroscopy. Adv. At., Mol., Opt. Phys. 55, 139 (2008). Schuricke, M. et al. Strong-field ionization of lithium. Phys. Rev. A 83, 023413 (2011). Fischer, D. et al. Ion-lithium collision dynamics studied with a laser-cooled in-ring target. Phys. Rev. Lett. 109, 113202 (2012). Götz, S. et al. Versatile cold atom target apparatus. Rev. Sci. Instrum. 83, 073112 (2012). Hubele, R. et al. Electron and recoil ion momentum imaging with a magneto-optically trapped target. Rev. Sci. Instrum. 86, 033105 (2015). Li, R. et al. Recoil-ion momentum spectroscopy for cold rubidium in a strong femtosecond laser field. J. Instrum. 14, P02022 (2019). Fey, C., Hummel, F. & Schmelcher, P. Ultralong-range Rydberg molecules. Mol. Phys. 118, e1679401 (2020). Hamilton, E. L., Greene, C. H. & Sadeghpour, H. Shape-resonance-induced long-range molecular Rydberg states. J. Phys. B: At., Mol. Opt. Phys. 35, L199 (2002). Chibisov, M., Khuskivadze, A. & Fabrikant, I. Energies and dipole moments of long-range molecular Rydberg states. J. Phys. B: At., Mol. Opt. Phys. 35, L193 (2002). Niederprüm, T. et al. Observation of pendular butterfly Rydberg molecules. Nat. Commun. 7, 1 (2016). Markert, F. et al. AC-Stark shift and photoionization of Rydberg atoms in an optical dipole trap. New J. Phys. 12, 113003 (2010). Gabbanini, C. Assessments of lifetimes and photoionization cross-sections at 10.6 μm of nd Rydberg states of Rb measured in a magneto-optical trap. Spectrochim. Acta Part B: At. Spectrosc. 61, 196 (2006). Wiley, W. & McLaren, I. H. Time-of-flight mass spectrometer with improved resolution. Rev. Sci. Instrum. 26, 1150 (1955). Higgs, C., Smith, K., Dunning, F. & Stebbings, R. A study of l changing in Xe(nf)-neutral collisions at thermal energies. J. Chem. Phys. 75, 745 (1981). Matsuzawa, M. Validity of the impulse approximation in Rydberg-neutral collisions. J. Phys. B: At. Mol. Phys. 17, 795 (1984). Lebedev, V. S. & Fabrikant, I. I. Semiclassical calculations of the l-mixing and n, l-changing collisions of Rydberg atoms with rare-gas atoms. J. Phys. B: At., Mol. Opt. Phys. 30, 2649 (1997). Gallagher, T. F. Rydberg Atoms, Vol. 3 (Cambridge Univ. Press, 2005). Dimitrijević, M. S., Srećković, V. A., Zalam, A. A., Bezuglov, N. N. & Klyucharev, A. N. Dynamic instability of Rydberg atomic complexes. Atoms 7, 22 (2019). Smirnov, V. & Mihajlov, A. Nonelastic collisions involving highly excited atoms. Opt. Spectrosc. 30, 525 (1971). Mihajlov, A., Srećković, V., Ignjatović, L. M. & Klyucharev, A. The chemi-ionization processes in slow collisions of Rydberg atoms with ground state atoms: mechanism and applications. J. Cluster Sci. 23, 47 (2012). Belyaev, A. K., Lasser, C. & Trigila, G. Landau–Zener type surface hopping algorithms. J. Chem. Phys. 140, 224108 (2014). Bezuglov, N. et al. Diffusion ionization of the Rydberg diatomic quasi-molecular complex formed upon collisions of rubidium atoms. Opt. Spectrosc.
95, 515 (2003). Duman, E. & Shmatov, I. Ionization of highly excited atoms in their own gas. Sov. Phys. JETP 51, 1061 (1980). ADS Google Scholar Janev, R. & Mihajlov, A. Resonant ionization in slow-atom-Rydberg-atom collisions. Phys. Rev. A 21, 819 (1980). Mihajlov, A. & Janev, R. Ionisation in atom-Rydberg atom collisions: ejected electron energy spectra and reaction rate coefficients. J. Phys. B: At. Mol. Phys. 14, 1639 (1981). Bezuglov, N. et al. Analysis of Fokker-Planck type stochastic equations with variable boundary conditions in an elementary process of collisional ionization. Opt. Spectrosc. 91, 19 (2001). Bezuglov, N., Borodin, V., Eckers, A. & Klyucharev, A. A quasi-classical description of the stochastic dynamics of a Rydberg electron in a diatomic quasi-molecular complex. Opt. Spectrosc. 93, 661 (2002). Miculis, K. et al. Collisional and thermal ionization of sodium Rydberg atoms: Ii. theory for ns, np and nd states with n= 5–25. J. Phys. B: At., Mol. Opt. Phys. 38, 1811 (2005). Hummel, F., Schmelcher, P., Ott, H. & Sadeghpour, H. R. An ultracold heavy Rydberg system formed from ultra-long-range molecules bound in a stairwell potential. New J. Phys. 22, 063060 (2020). Omont, A. On the theory of collisions of atoms in Rydberg states with neutral particles. J. de Phys. 38, 1343 (1977). Greene, C. H., Dickinson, A. & Sadeghpour, H. Creation of polar and nonpolar ultra-long-range Rydberg molecules. Phys. Rev. Lett. 85, 2458 (2000). Fey, C., Kurz, M., Schmelcher, P., Rittenhouse, S. T. & Sadeghpour, H. R. A comparative analysis of binding in ultralong-range Rydberg molecules. New J. Phys. 17, 055010 (2015). Eiles, M. T. & Greene, C. H. Hamiltonian for the inclusion of spin effects in long-range Rydberg molecules. Phys. Rev. A 95, 042515 (2017). The authors thank the group of Reinhard Dörner (Goethe University of Frankfurt) for helpful discussions regarding the design of the MOTRIMS apparatus and Dominik Arnold for constructing the spectrometer. The authors thank Cihan Sahin for his help on setting up the experimental apparatus. The authors also thank Bergmann Messgeräte Entwicklung KG for the excellent Pockels cell driver and the invaluable technical support. The authors acknowledge fruitful discussions with Peter Schmelcher and Frederic Hummel (University of Hamburg). The authors acknowledge financial support by the German Research Foundation (Deutsche Forschungsgemeinschaft) within the priority program 'Giant Interactions in Rydberg Systems' (DFG SPP 1929 GiRyd, project no. 316211972). Open Access funding enabled and organized by Projekt DEAL. Department of Physics and Research Center OPTIMAS, Technische Universität Kaiserslautern, Kaiserslautern, Germany Philipp Geppert, Max Althön, Daniel Fichtner & Herwig Ott Philipp Geppert Max Althön Daniel Fichtner Herwig Ott P.G., M.A., and D.F. performed the experiment and analyzed the data. H.O. supervised the experiment. P.G. prepared the manuscript. All authors developed the theoretical model and contributed to the data interpretation and manuscript preparation. Correspondence to Herwig Ott. Peer review information Nature Communications thanks Michał Tomza and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Geppert, P., Althön, M., Fichtner, D. et al. Diffusive-like redistribution in state-changing collisions between Rydberg atoms and ground state atoms. Nat Commun 12, 3900 (2021). https://doi.org/10.1038/s41467-021-24146-0
Ternary ionic liquid–water pretreatment systems of an agave bagasse and municipal solid waste blend Jose A. Perez-Pimienta1, Noppadon Sathitsuksanoh2,3, Vicki S. Thompson4, Kim Tran3,6, Teresa Ponce-Noyola5, Vitalie Stavila7, Seema Singh3,6 & Blake A. Simmons3,6 Biotechnology for Biofuels volume 10, Article number: 72 (2017) Cite this article Pretreatment is necessary to reduce biomass recalcitrance and enhance the efficiency of enzymatic saccharification for biofuel production. Ionic liquid (IL) pretreatment has gained significant interest as a pretreatment process that can reduce cellulose crystallinity and remove lignin, key factors that govern enzyme accessibility. There are several challenges that need to be addressed for IL pretreatment to become viable for commercialization, including IL cost and recyclability. In addition, it is unclear whether ILs can maintain process performance when utilizing low-cost, low-quality biomass feedstocks such as the paper fraction of municipal solid waste (MSW), which are readily available in high quantities. One approach to potentially reduce IL cost is to use a blend of ILs at different concentrations in aqueous mixtures. Herein, we describe 14 IL-water systems with mixtures of 1-ethyl-3-methylimidazolium acetate ([C2C1Im][OAc]), 1-butyl-3-methylimidazolium acetate ([C4C1Im][OAc]), and water that were used to pretreat MSW blended with agave bagasse (AGB). A detailed analysis of IL recycling, in terms of the sugar yields of the pretreated biomass and IL stability, was also carried out. Both biomass types (AGB and MSW) were efficiently disrupted by IL pretreatment. The pretreatment efficiency of [C2C1Im][OAc] and [C4C1Im][OAc] decreased when mixed with water above 40%. The AGB/MSW (1:1) blend demonstrated a glucan conversion of 94.1 and 83.0% using IL systems with ~10 and ~40% water content, respectively. The chemical structures of fresh and recycled ILs showed strong similarities, as observed by FTIR and 1H-NMR spectroscopy. The glucan and xylan hydrolysis yields obtained with recycled IL exhibited a slight decrease in pretreatment efficiency (less than 10% in terms of hydrolysis yields compared to that of fresh IL), and a decrease in cellulose crystallinity was observed. Our results demonstrated that mixing ILs such as [C2C1Im][OAc] and [C4C1Im][OAc] and blending the paper fraction of MSW with agricultural residues, such as AGB, may help lower production costs while maintaining high sugar yields. Recycled IL-water mixtures provided results comparable to those of fresh ILs. Both of these results offer the potential of reducing the production costs of sugars and biofuels at biorefineries as compared to more conventional IL conversion technologies. Schematic of ionic liquid (IL) pretreatment of agave bagasse (AGB) and paper-rich fraction of municipal solid waste (MSW) Liquid transportation fuels and value-added products can be obtained from renewable sources such as grasses and agricultural or forestry residues due to their naturally high carbohydrate content. Moreover, these lignocellulosic biomass materials are available at significant levels and can achieve high sugar production with minimal impact on food sources when compared to first-generation technologies [1]. A pretreatment step is a necessary prerequisite to increase biomass digestibility by reducing its recalcitrance.
After this stage, pretreated materials are enzymatically digested into fermentable sugars that are then suitable for biofuel and/or renewable chemical production using fermentation [2]. Various biomass pretreatment technologies have been developed with the general objective to alter or remove hemicellulose and/or lignin, increase surface area and/or decrease the crystallinity of cellulose [3]. In recent years, numerous studies have shown that imidazolium-based ionic liquids (ILs) are attractive as green solvents for biomass pretreatment due to several traits, including high cellulose solubility, low vapor pressure, chemical and thermal stability, non-flammability, and phase behavior. These ILs are relatively benign to the environment when compared to pretreatments that use acids, bases, and/or organic solvents. After IL pretreatment, cellulose can be easily recovered by the addition of an antisolvent, such as water or ethanol [4, 5]. In addition, ionic liquids have been used in the dissolution and partial delignification of corn stover, switchgrass, agave bagasse, softwood, hardwood, and municipal solid waste (MSW) [6–11]. Although IL pretreatment leads to enhanced biomass saccharification, the biggest challenge for commercialization of this technology lies in the relatively high cost of ILs, which can range from $1 up to $800/kg, depending on the purity and source, making it essential to develop comprehensive strategies for improving the overall economics of the biorefineries using IL pretreatment platform [12]. To address the high-cost issue of ILs, we have taken four different approaches into consideration. The first approach entails the use of MSW (which paper mix represents 30% of total) as a lower quality feedstock; therefore by blending a paper-rich fraction of MSW with a higher quality feedstock overall costs can be reduced in a biorefinery scheme [9, 13]. Currently, most biomass conversion studies have focused on the conversion of a single feedstock with little consideration on feedstock diversity and mixed feedstocks. Moreover, biomass availability varies significantly from region to region due to weather conditions and crop varieties and increase the need for a biorefinery that can effectively and efficiently process mixed feedstocks [14, 15]. A second approach for improving the economics of IL pretreatment involves the utilization of aqueous solutions of ILs as opposed to a typical process that uses 100% IL. These aqueous mixtures have significantly lowered viscosities relative to neat ILs, making handling easier and enhancing mass transfer. Previous findings have shown that selected ILs can act effectively in the presence of water, enhancing glucan digestibility due to competitive hydrogen bonding [16–20]. Decreasing IL use without decreasing sugar yields will be reflected in final production costs. A third approach used to minimize associated costs with using ILs to pretreat biomass is to employ ILs combination of acetate (anion) and imidazolium (cation) such as [C2C1Im][OAc] and [C4C1Im][OAc] which demonstrate high lignin removal and cellulose decrystallization in studies where their specific interactions and performance were examined [7, 11, 21, 22]. Imidazolium-based ILs typically have numerous advantages in biomass biorefineries including pretreatment performance independent of biomass type, moderate reaction values (time and temperature), and compatibility with pretreatment reactor construction materials [23]. 
Currently, [C4C1Im][OAc] costs about 80% when compared to [C2C1Im][OAc], which can lead to reduced costs if pretreatment performance can be maintained. Finally, a fourth approach concerns the recyclability and reusability of ILs for several consecutive batches, which will likely be required for commercial use as a biomass pretreatment within a biorefinery. Recycling of ILs would occur after the addition of an antisolvent such as water that precipitates cellulose and allows easy recovery through filtration or centrifugation. A number of reports have studied the recovery and recycling of ILs after biomass pretreatment with different conditions and equipment [24–26], addition of kosmotropic anions (such as phosphate carbonate and sulfate) to form aqueous biphasic systems [27, 28] and from a technoeconomic perspective [29]. Nevertheless, the particular effects on the recycled ILs and its impact on biomass pretreatment have not been completely elucidated. This study aims to assess the effect of a ternary aqueous system by mixing [C2C1Im][OAc], [C4C1Im][OAc], and water at 14 selected ratios for the pretreatment of a 1:1 blend of MSW and agave bagasse (AGB) (Fig. 1). AGB was selected to be blended with MSW due to favorable characteristics as a bioenergy feedstock such as high carbohydrate content, low water inputs, and high productivities in semiarid regions as well as previous studies that demonstrated high sugar yields can be obtained after IL pretreatment [8]. In order to better understand the pretreatment process, changes in chemical structure were examined by Fourier Transform Infrared (FTIR) spectroscopy, 1H NMR, and component characterization. We also examined the effects of recycling [C2C1Im][OAc] and [C4C1Im][OAc] three times on pretreatment performance. To the best of our knowledge, this is the first report that employs a mixture of ILs and water for the pretreatment of mixed feedstock blends. Aqueous ionic liquid systems employed in the pretreatment of agave bagasse (AGB), municipal solid waste (MSW), and an AGB/MSW (1:1) blend Materials and preparation For the MSW, paper waste materials were prepared as in [9], consisting of 15% glossy paper, 25% non-glossy paper, 32% non-glossy cardboard, and 28% glossy cardboard using a process developed by Idaho National Laboratory (INL). It is recognized that this material is not representative of real MSW streams and that there may be contaminants present that will impact pretreatment effectiveness. However, the goal of this study was to examine the effectiveness of the IL systems in this study on the types of paper that would be found in MSW. Destiladora Rubio, a tequila plant from Jalisco, Mexico, donated the AGB. The AGB was milled with a Thomas-Wiley Mini Mill fitted with a 40-mesh screen (Model 3383-L10 Arthur H. Thomas Co., Philadelphia, PA, USA). Both ground biomass samples were stored at 4 °C in a sealed plastic bag prior to their use. The 1:1 blend was prepared by mixing both MSW and AGB in the pretreatment reactor just before the heating process begins. 1-ethyl-3-methylimidazolium acetate [C2C1Im][OAc] and 1-butyl-3-methylimidazolium acetate [C4C1Im][OAc], citric acid, ethanol, glucose, xylose, sulfuric acid, and HPLC grade water were purchased from Sigma–Aldrich. 
Aqueous ionic liquid pretreatment in tube reactors A design of experiments was carried out using Minitab® software (Coventry, UK), utilizing 14 unique aqueous ionic liquid combinations (ranked in order of decreasing cost), composed of two ionic liquids, [C2C1Im][OAc] and [C4C1Im][OAc], plus DI water (constrained to at most 50% when combined with the ILs) at different ratios (Fig. 1). One gram of biomass (dry basis) was mixed with 9 g of the specific ternary aqueous IL solution to give a 10% (w/w) biomass solution. The biomass was loaded in tubular reactors made of 0.75-in diameter × 6-in length Hastelloy (C276) tubes, which were then sealed with stainless steel caps. All pretreatment procedures were run in triplicate in tubular reactors that were heated to the reaction temperature (120 °C) for 3 h in a VWR convection oven [8]. After pretreatment, all reactors were quenched by quickly transferring them to a room temperature water bath until the temperature dropped to 30 °C, followed by a washing step performed as previously described [30]. A total of 42 experiments were carried out, and the recovered product was lyophilized for two days in a FreeZone 12 instrument (Labconco, MO, USA) before compositional analysis. Recycle of ionic liquid and pretreatment The IL/water mixtures obtained from the tube-reactor pretreatments of AGB with pure [C2C1Im][OAc] and [C4C1Im][OAc] were evaporated at 100 °C for 12 h in a drying oven to remove excess water, and then reused to pretreat AGB in tube reactors at 120 °C for 3 h at ambient pressure without any further purification. A total of 3 cycles were performed, in which the solution of each IL was again separated, concentrated, and reused. The recycling pretreatments were conducted in duplicate and the IL/biomass mixture was homogenized using a glass rod. A portion of the recovered biomass from each cycle was stored for compositional analysis and the other portion was used for enzymatic saccharification. For each IL recycle, 500 µL was withdrawn to analyze IL integrity by FTIR and 1H-NMR. Chemical characterization The sugar content of untreated and pretreated biomass samples was determined according to the standard analytical procedures of the National Renewable Energy Laboratory (NREL) LAP 017 using a two-step acid hydrolysis method [31]. Briefly, for all samples, 0.3 g of dry biomass was treated with 3 mL of 72% H2SO4 for 60 min at 30 °C with constant agitation, then diluted with 84 mL of DI water and finally autoclaved at 121 °C for 1 h. The content of acid insoluble lignin (referred to as lignin in the rest of the manuscript) was determined gravimetrically as the solid residue remaining after the two-step hydrolysis. The liquid filtrates were used to determine the carbohydrate concentrations using an Agilent HPLC 1200 series equipped with a Bio-Rad Aminex HPX-87H column and a refractive index detector. Delignification was calculated using the following equation: $$\text{Delignification}\ (\%) = \frac{\text{Initial lignin} - \text{Recovered lignin}}{\text{Initial lignin}} \times 100.$$ Enzymatic saccharification Saccharification of all biomass samples was carried out at 55 °C and 150 rpm for 72 h in 50 mM citrate buffer (pH 4.8) in a rotary incubator with commercial enzyme cocktails, Cellic® CTec2 and HTec2, obtained as a gift from Novozymes. The protein content of the enzymes was determined by bicinchoninic acid (BCA) assay with a Pierce BCA Protein Assay Kit (Thermo Scientific) using BSA as the protein standard.
CTec2 had a protein content of 186.6 ± 2.0 mg/mL, and the protein content of HTec2 was 180.1 ± 1.8 mg/mL. The enzyme activity of CTec2 was determined to be ~80 filter paper units (FPU)/mL. The enzyme loading was normalized to the glucan content (5 g/L) present in the biomass samples to understand the impact of each pretreatment on the response variable of sugar production. Hence, the enzyme concentrations of CTec2 and HTec2 were kept constant at 20 mg protein/g glucan and 2 mg protein/g xylan, respectively. All assays were performed in triplicate. Analysis of saccharified samples Sugar concentrations were monitored using HPLC by taking 50 µL of the saccharification supernatant. The samples were filtered through a 0.45 µm Pall 96-well filter plate, centrifuged (4000 rpm, 5 min), collected in a 96-well Bio-Rad plate and finally covered with pierceable aluminum foil (to prevent vapor losses) in order to monitor glucose and xylose production in all samples with an Agilent HPLC 1200 series equipped with a Bio-Rad Aminex HPX-87H column and a refractive index detector. The glucan conversion was calculated using $$\text{Glucan conversion}\ (\%) = \frac{\text{Glucose conc.}\ (\text{g/mL}) \times \text{Reaction vol.}\ (\text{mL})}{\text{Biomass}\ (\text{g}) \times \text{wt}\%\ \text{cellulose in biomass}} \times \frac{162\ (\text{MW glucan unit})}{180\ (\text{MW glucose unit})} \times 100$$ and is based on the mass of each material used before pretreatment, thus representing an overall process conversion. The xylan conversion was calculated using $$\text{Xylan conversion}\ (\%) = \frac{\text{Xylose conc.}\ (\text{g/mL}) \times \text{Reaction vol.}\ (\text{mL})}{\text{Biomass}\ (\text{g}) \times \text{wt}\%\ \text{xylan in biomass}} \times \frac{132\ (\text{MW xylan unit})}{150\ (\text{MW xylose unit})} \times 100$$ and is based on the difference in molecular weight between xylan and the xylose unit [32]. Attenuated total reflectance (ATR)-FTIR spectroscopy ATR-FTIR was conducted using a Bruker Optics Vertex system with a built-in diamond-germanium ATR single reflection crystal. All samples were pressed uniformly against the diamond surface using a spring-loaded anvil. Sample spectra were obtained in triplicate using an average of 128 scans over the range between 800 and 2000 cm−1 with a spectral resolution of 4 cm−1. Air, water, and the appropriate IL solution were used as backgrounds for the untreated and pretreated biomass samples, respectively. Baseline correction was conducted using the rubber band method following the spectrum minima [5]. Crystallinity measurement XRD diffractograms of untreated AGB and of AGB treated with fresh and recycled ILs ([C2C1Im][OAc] and [C4C1Im][OAc]) were acquired with a PANalytical Empyrean diffractometer equipped with a PIXcel3D detector with Cu Kα radiation. The samples were scanned in the range of 5–50° (2θ) with a step size of 0.026° at 45 kV and 40 mA at ambient temperature. The crystallinity index (CrI) was calculated using Eq. (4) [33] $$\text{CrI} = \frac{I_{002} - I_{\text{am}}}{I_{002}},$$ where I002 is the intensity of the crystalline portion of the biomass at about 2θ = 22.4° and Iam is the intensity of the amorphous portion at 2θ = 16.6°.
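To make the calculations above concrete, the following is a minimal Python sketch of the delignification, glucan/xylan conversion, and crystallinity index formulas as defined in this section. The function names and the example numbers are illustrative only and are not taken from the study's datasets; the 162/180 and 132/150 molecular-weight ratios and the Segal CrI definition follow the equations given above.

```python
# Minimal sketch of the conversion formulas defined above.
# Function names and example values are illustrative, not from the study's data.

def delignification(initial_lignin_g, recovered_lignin_g):
    """Delignification (%) = (initial - recovered) / initial * 100."""
    return (initial_lignin_g - recovered_lignin_g) / initial_lignin_g * 100

def glucan_conversion(glucose_g_per_ml, reaction_vol_ml, biomass_g, wt_frac_glucan):
    """Overall glucan conversion (%), corrected by the 162/180 anhydro-glucose ratio."""
    released = glucose_g_per_ml * reaction_vol_ml          # g glucose released
    available = biomass_g * wt_frac_glucan                 # g glucan in starting biomass
    return released / available * (162.0 / 180.0) * 100

def xylan_conversion(xylose_g_per_ml, reaction_vol_ml, biomass_g, wt_frac_xylan):
    """Overall xylan conversion (%), corrected by the 132/150 anhydro-xylose ratio."""
    released = xylose_g_per_ml * reaction_vol_ml
    available = biomass_g * wt_frac_xylan
    return released / available * (132.0 / 150.0) * 100

def crystallinity_index(i_002, i_am):
    """Segal crystallinity index: (I002 - Iam) / I002."""
    return (i_002 - i_am) / i_002

if __name__ == "__main__":
    # Hypothetical numbers, for illustration only.
    print(f"Delignification: {delignification(0.216, 0.158):.1f} %")
    print(f"Glucan conversion: {glucan_conversion(0.0045, 10.0, 0.10, 0.439):.1f} %")
    print(f"Xylan conversion: {xylan_conversion(0.0012, 10.0, 0.10, 0.141):.1f} %")
    print(f"CrI: {crystallinity_index(1850, 920):.2f}")
```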
Proton nuclear magnetic resonance (1H-NMR) spectroscopy 1H NMR spectra of fresh and recycled ILs were acquired at 25 °C using a Bruker DRX-500 MHz instrument equipped with a Z-gradient inverse TXI 1H/13C/15N 5 mm probe (ns = 128 and d1 = 10.0 s). Chemical shifts were referenced to tetramethylsilane. The NMR spectra were processed using Bruker's Topspin 3.1 (Windows) processing software. The software Minitab 17 was used for analysis of variance (ANOVA) of the experimental results. A 5% probability level (p = 0.05) was used to accept or reject the null hypothesis of significant differences. Duncan's multiple range test at the 5% level was used to analyze the significance of differences in glucan and xylan conversion of the pretreated biomass, as well as in delignification and glucan conversion when recycled ILs were used [34]. Compositional analysis of untreated and pretreated biomass The initial step in decreasing biomass recalcitrance towards fermentable sugars is to pretreat the feedstock for downstream processing (saccharification and fermentation). Previous studies have found that [C2C1Im][OAc] is an effective solvent for solubilizing the AGB plant cell wall, regenerating cellulose while rejecting lignin upon antisolvent addition, with optimal conditions for AGB of 120 °C for 3 h [8, 30]. To provide lower cost biorefinery feedstock inputs, MSW has been used as a blending agent with other feedstocks (e.g., corn stover) in IL pretreatment, offering advantages such as year-round availability, reduced landfill disposal, and the ability to meet overall biorefinery quality specifications [9]. Recently, several studies have examined the impact and effectiveness of pretreatment technologies on mixed lignocellulosic biomass, since feedstock costs remain a large contributor to biofuel production costs and each material responds differently to a given process (e.g., in component removal and sugar yield) [29, 35, 36]. The process flowsheet of the IL-water pretreatment systems is shown in Fig. 2. Figure 3 presents the compositional analysis of the untreated biomass and of all 14 IL-water pretreatment systems using AGB, MSW, and the AGB/MSW (1:1) blend, where three major plant cell wall components (glucan, xylan, and lignin) were monitored. For the untreated AGB, the measured compositional profile of 31.3% glucan, 15.4% xylan, and 21.6% lignin is comparable to other bagasses from Agave tequilana, but relatively lower in glucan content and higher in lignin than other reported agave compositions, which had glucan values above 40% and lignin values under 20% [37, 38]. This difference can potentially be attributed to process conditions during tequila production and/or to environmental conditions of the biomass source, extraction, and post-harvest procedures. The compositional profile of untreated MSW was 54.7% glucan, 12.9% xylan, and 12.5% lignin, similar to that reported by Sun et al. [9] and to the individual compositions of two constituents of MSW (newspaper and office paper) described by Foyle et al. [39]. As expected, intermediate values were obtained for the AGB/MSW (1:1) blend, with 43.9% glucan, 14.1% xylan, and 16.7% lignin.
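As a quick consistency check on the blend values quoted above, the composition expected for a 1:1 (dry-mass) blend is simply the average of the two feedstock compositions. The short sketch below is illustrative only, using the untreated compositions reported in this section, and shows that the predicted values fall close to the measured 43.9% glucan, 14.1% xylan, and 16.7% lignin.

```python
# Mass-weighted average composition of a 1:1 AGB/MSW blend,
# using the untreated compositions reported above (values in wt%).
agb = {"glucan": 31.3, "xylan": 15.4, "lignin": 21.6}
msw = {"glucan": 54.7, "xylan": 12.9, "lignin": 12.5}
measured_blend = {"glucan": 43.9, "xylan": 14.1, "lignin": 16.7}

for component in agb:
    predicted = 0.5 * agb[component] + 0.5 * msw[component]
    print(f"{component}: predicted {predicted:.1f} wt%, "
          f"measured {measured_blend[component]:.1f} wt%")
```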
Process flowsheet of the IL-water pretreatment systems on agave bagasse (AGB), municipal solid waste (MSW), and an AGB/MSW (1:1) blend Compositional analysis of untreated and pretreated biomass under different aqueous ionic liquid systems In order to measure the response of each component to the aqueous IL systems, 100% concentrations of [C2C1Im][OAc], [C4C1Im][OAc], and water were included in the experimental design as systems A, F, and N, respectively. The compositional profiles of the pretreated samples (Fig. 3) indicate that almost all of the systems studied achieved an increase in glucan content (2–24%), with the exception of system N (100% water), for which negative values were obtained for MSW and the AGB/MSW (1:1) blend. System J (~40% water) produced a ~24% glucan increase in AGB relative to the untreated sample, higher than that obtained with system A (18%). A ~9% glucan increase for MSW was achieved in three systems (I, J, and M), comparable to the trend reported by Sun et al. [9], who observed a 6% glucan increase relative to the untreated biomass. Finally, the AGB/MSW (1:1) blend increased in glucan content by 15 and ~10% with systems L and I, respectively. In terms of xylan content, the pretreated AGB showed a trend similar to previous reports, with its content increasing by 1–18%. In contrast, MSW generally showed a xylan reduction of up to 9% in the IL-treated samples, while the AGB/MSW (1:1) blend presented mixed results. One of the most important features of IL pretreatment is the high level of delignification that can be achieved. When compared to the untreated AGB, a significant reduction in lignin content was observed after pretreatment. Lignin content decreased by up to 26.9% with system J (~40% water), comparable to system A (100% [C2C1Im][OAc]), which served as the base control. Nevertheless, slight increases were observed for the IL-treated MSW samples; these differences may be attributed to the nature of the lignin in the two feedstocks. A recent study investigated the [C2C1Im][OAc] dissolution of a corn stover/MSW (1:1) blend at 140 °C for 1 to 3 h, and obtained lignin reductions of 46.2, 69.5, and −0.8% for the blend, corn stover, and MSW, respectively, where a negative number indicates a relative increase in lignin content [9]. Lignin removal from the AGB/MSW (1:1) blend was 15.1% with system A and 14.4% with system J, values that were not statistically different. This represents a cost saving, since system J uses 40% less IL than system A, which is neat IL. In this context, Fu and Mazza [40] reported delignification values of 3.6 and 5.6% for a 1:1 [C2C1Im][OAc]/water mixture and for neat [C2C1Im][OAc], respectively, using triticale straw. Furthermore, Shi et al. [35] showed that high sugar yields can be obtained from mixed lignocellulosic feedstocks, which IL pretreatment is capable of handling with equal efficiency. Sun et al. [9] attribute the difference in delignification to the nature of the lignin in MSW: this paper mix has already gone through a pulping process that removed most of the lignin, and the remaining lignin in MSW is therefore expected to be more recalcitrant than the intact lignin in AGB, making it more difficult to extract.
In summary, system A (neat [C2C1Im][OAc]), as expected, showed the clearest improvement in lignin removal and glucan enrichment among the studied aqueous IL systems and biomass feedstocks, while when only water was used (system N) the process temperature (120 °C) was not high enough to substantially modify the biomass cell wall. The intrinsic variation in cell wall components among the studied materials meant that the extent to which a given IL-aqueous system reduced biomass recalcitrance varied from feedstock to feedstock, with the strongest response observed for AGB. ATR-FTIR analysis Normalized FTIR spectra between 800 and 2000 cm−1 were used to characterize the chemical fingerprints of the feedstocks before and after IL pretreatment (see Additional file 1). For the ATR-FTIR data, seven bands were used to monitor the chemical changes of lignin and carbohydrates, and two bands were used for changes in calcium oxalate intensity in AGB. As expected, the main antisymmetric carbonyl stretching band specific to the oxalate family occurs at 1618 cm−1 for calcium oxalate, and the secondary carbonyl stretching band, the metal-carboxylate stretch, is located at 1317 cm−1. These two bands are observed to decrease with IL pretreatment in all AGB samples, in agreement with a previous report [30]. Calcium oxalate bands are also present, as a large group, in the AGB/MSW (1:1) blend but do not appear in MSW. Only AGB presents the 1745 cm−1 band with appreciable intensity, and a decreasing trend was found in all IL-treated samples. This band is associated with carbonyl C=O stretching, indicating cleavage of lignin and its side chains; its intensity increased slightly only for system N (100% water). The mixture employed to represent MSW (glossy paper, non-glossy paper, non-glossy cardboard, and glossy cardboard) has an untreated spectrum similar to those obtained from newspaper and paper [41–43]. Typically, the bands at 1510 and 1605 cm−1 show the aromatic skeletal vibrations of lignin and are used to reflect the delignification that occurs during IL pretreatment when compared to the untreated spectrum. These bands are assigned to C=O stretching in conjugated p-substituted aryl ketones [44]. In AGB and in some samples of the AGB/MSW (1:1) blend, these bands (1510 and 1605 cm−1) are affected by the broad and intense calcium oxalate peaks, which is not the case for MSW. An increase in the band at 1375 cm−1 (C–H deformation in cellulose and hemicellulose) is observed in the IL-treated samples. Furthermore, a significant increase in band intensity is observed in all samples at 1056 cm−1 (C–O stretch in cellulose and hemicellulose) and at 1235 cm−1 (C–O stretching in lignin and hemicellulose). In addition, the ratio of the crystalline-to-amorphous cellulose peaks at 1098 and 900 cm−1 decreased as a function of IL pretreatment, indicating a reduction of cellulose crystallinity in most of the pretreated samples when compared to the untreated spectrum [6]. Finally, an increase in the band intensity at 900 cm−1 (antisymmetric out-of-plane ring stretch of amorphous cellulose) is observed in the spectra of the IL-treated samples, which reflects the relative increase in cellulose content as a result of the partial removal of both lignin and hemicellulose in AGB, MSW, and the AGB/MSW (1:1) blend.
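A minimal sketch of how such band-intensity comparisons can be made is shown below; it simply reads off the absorbance nearest to two of the wavenumbers discussed above (1098 and 900 cm−1) and reports their ratio as a rough crystalline-to-amorphous indicator. The file handling, normalization, and baseline correction used in the actual study are not reproduced here; `wavenumbers` and `absorbance` are assumed to be a baseline-corrected ATR-FTIR spectrum, and the synthetic spectrum in the example is purely illustrative.

```python
import numpy as np

def band_intensity(wavenumbers, absorbance, target_cm1):
    """Absorbance at the data point closest to a target wavenumber (cm-1)."""
    idx = int(np.argmin(np.abs(wavenumbers - target_cm1)))
    return absorbance[idx]

def crystalline_amorphous_ratio(wavenumbers, absorbance):
    """Ratio of the ~1098 cm-1 (crystalline) to ~900 cm-1 (amorphous) band intensities."""
    i_1098 = band_intensity(wavenumbers, absorbance, 1098.0)
    i_900 = band_intensity(wavenumbers, absorbance, 900.0)
    return i_1098 / i_900

# Illustrative synthetic spectrum (two Gaussian bands on a flat baseline).
wavenumbers = np.arange(800.0, 2000.0, 4.0)          # 4 cm-1 resolution, as in the methods
absorbance = (0.60 * np.exp(-((wavenumbers - 1098.0) / 20.0) ** 2)
              + 0.25 * np.exp(-((wavenumbers - 900.0) / 20.0) ** 2)
              + 0.02)
print(f"I(1098)/I(900) = {crystalline_amorphous_ratio(wavenumbers, absorbance):.2f}")
```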
Comparison of the enzymatic saccharification of aqueous IL-treated biomass Figure 4 shows the 72-h glucan and xylan conversions of AGB, MSW, and the AGB/MSW (1:1) blend. As expected, the untreated samples of all three feedstocks showed conversions under 26% for glucan and 14% for xylan. On the other hand, system N (100% water) displayed sugar conversions similar to the untreated samples, as the process temperature was not high enough to initiate autohydrolysis. Glucan (A) and xylan (B) conversion contour plots for ternary ionic liquid systems of [C2C1Im][OAc], [C4C1Im][OAc] and water. (I) Agave bagasse, (II) municipal solid waste, and (III) AGB/MSW (1:1) blend AGB pretreated with system J had a glucan conversion of 97.6%, similar to system A (94.7%), offering an advantage in terms of IL utilization, since a relatively high water content (~40%) maintained a sugar conversion comparable to neat IL pretreatment (Fig. 4-IA), in line with the high delignification values. For IL-pretreated MSW (Fig. 4-IIA), a glucan conversion of 96.7% was obtained with system J, whereas conversion values above 90% were reached when the neat systems were employed. Among the IL-water mixtures, system D (10.1% water) achieved a high glucan conversion (~93%), compared with 83.1% for system L (50% water). Agave bagasse and MSW reached xylan conversion yields above 87 and 76% for systems A and B, respectively (Fig. 4-IB, IIB). Saccharification of the AGB/MSW (1:1) blend showed a 72-h glucan conversion of 96.8% (system A), 94.1% (system D, ~10% water), and 83.0% (system J, ~40% water) (Fig. 4-IIIA). Hence, a high sugar conversion was obtained from the AGB/MSW (1:1) blend in IL-water systems, with an efficiency equal to that obtained using neat [C2C1Im][OAc]. In terms of xylan conversion of the AGB/MSW (1:1) blend, 92.2% was obtained using system A, while the IL-water systems were in the range of 65–78% (Fig. 4-IIIB). The improved saccharification of the IL-pretreated samples was due to the decreased biomass recalcitrance, achieved by weakening the van der Waals interactions between cell wall polymers and disrupting the covalent linkages between hemicellulose and lignin [36]. Table 1 shows a comparison with selected pretreatments that maximize the enzymatic digestibility of AGB and MSW. Each pretreatment has its own distinctive operating parameters and interactions with the lignocellulosic biomass; IL pretreatment outperformed the other processes, with fast saccharification rates and high sugar yields, a difference that could be attributed to improved substrate availability. Table 1 Comparison of selected pretreatments that maximize enzymatic digestibility of Agave tequilana bagasse (AGB) and municipal solid waste (MSW)—paper mix Overall, all three biomass samples could be efficiently saccharified, with high sugar conversions compared to the untreated samples, and comparable sugar yields were observed for the IL mixtures relative to those obtained with neat ILs. A few reports exist in which IL-water systems have been used to investigate the dissolution of lignocellulosic biomass using imidazolium-based cations. Fu and Mazza [40] studied the [C2C1Im][OAc]-water pretreatment of triticale straw at 150 °C for 90 min and achieved sugar yields of 81% for 50% water and 67% for neat IL, which were lower than the glucan conversion efficiencies of 98% for AGB (40% water) and 83% for MSW (50% water) at 120 °C for 3 h in this study. Similarly, Brandt et al.
[45] applied aqueous solutions of two ionic liquids (1-butyl-3-methylimidazolium methyl sulfate [C4C1Im][MeSO4] and 1-butyl-3-methylimidazolium hydrogen sulfate [C4C1Im][HSO4]) to Miscanthus pulp at 120 °C, and were able to achieve glucan conversions of 85 and 92% using solutions containing 40 and 10% water, respectively. Nonetheless, the [C4C1Im][MeSO4] pretreatment was carried out for 22 h, while the [C4C1Im][HSO4] pretreatment lasted 13 h, considerably longer processing times than the 3 h used in this study. Another paper reported an 88% glucose yield from sugarcane bagasse using a [C4C1Im][Cl] solution containing 20% water and 1.6% H2SO4 at 130 °C for 30 min [46]. In addition, Shi et al. [47] showed that 50–80% [C2C1Im][OAc]-water mixtures at 160 °C can match the performance of neat [C2C1Im][OAc] on switchgrass in terms of glucose yield. Finally, the decreased use of ILs when mixing with up to 40% water will benefit process economics by reducing the costs associated with recycling and handling (the solution being less viscous). This method is also very versatile when employing mixed biomass, due to feedstock flexibility, where MSW can lower costs and reduce the environmental impact of subsequent landfill disposal. IL recycling In order to obtain an affordable and scalable IL conversion technology, an efficient process for the recycling and reuse of the ILs is mandatory. In addition, dissolved lignin and/or xylan could be recovered; hence, added value for the overall process can be attained. The effects of recycled ILs and their impact on biomass pretreatment have not been completely elucidated. By addition of an antisolvent (water), which forms a single phase with the IL, a major fraction of the cellulosic content of the biomass can be recovered from the IL solution. In this study, we used the recovered IL/water mixtures from systems A and F to perform three subsequent recycling steps by IL-pretreating fresh untreated AGB (120 °C, 3 h), concluding with a saccharification step (Fig. 5). The IL recycling was performed to test the imidazolium-based ionic liquids using only AGB (as a more homogeneous sample than MSW), in order to understand the feasibility of pretreatment and possible changes in their molecular structure. Approximately 85–90% of the IL was recovered in each recycling step. Figure 6 presents the 1H-NMR analysis and Additional file 2 shows the FTIR analysis of the 3 series of recycled [C2C1Im][OAc] and [C4C1Im][OAc] used with AGB. Based on both spectra, [C2C1Im][OAc] and [C4C1Im][OAc] appear to retain their structure from the fresh ILs to the recycled ones, as shown by their proton spectra and the distinctive FTIR bands (1175, 1378, and 1574 cm−1). Recycled [C2C1Im][OAc] shows an extra peak at 3.6 ppm, suggesting that the recycled IL contained residual sugars; however, these sugars did not affect its reuse. Their presence is probably due to the relatively severe recycling conditions employed (100 °C, 12 h). Nonetheless, this did not have a significant effect on the biomass crystallinity of AGB pretreated with fresh or recycled ILs. In addition, we observed a methoxyl peak (~2.5 ppm) in the 1H NMR spectra, suggesting that the change in color of the IL is partly due to the presence of lignin. Work flow of ionic liquid recycling of [C2C1Im][OAc] and [C4C1Im][OAc] in agave bagasse 1H-NMR analysis of 3 series of recycled [C2C1Im][OAc] and [C4C1Im][OAc] in agave bagasse.
0 Fresh ionic liquid, 1 1st recycle, 2 2nd recycle, and 3 3rd recycle The ratios of crystalline to amorphous cellulose and disordered components found in the untreated sample and in the samples pretreated with fresh and recycled ILs were used to determine the crystallinity index (CrI), as cellulose crystallinity has been shown to affect enzymatic saccharification. Both AGB samples pretreated with fresh ILs present a transition from the cellulose I polymorph to the cellulose II polymorph, as the (002) peak around 22.1° was shifted to a lower angle (20.6°) after IL pretreatment (see Additional file 3). The CrI of the pretreated samples decreased when compared to the untreated sample. The CrI of the samples generated by the 100% IL processes is higher than that of the samples obtained with recycled ILs, although this assessment could be affected by the interference of sharp crystalline peaks of calcium oxalate at 2θ = 15°, 24.5°, and 30.5° [30]. In terms of glucan conversion, [C2C1Im][OAc] was in the range of ~85 to ~95% in the recycling experiments, while [C4C1Im][OAc] ranged from ~67 to ~71% (Fig. 7). Similarly, Shill et al. [27] showed that a 90% glucan conversion was still maintained after up to 2 recycling steps of [C2C1Im][OAc] at 140 °C and 1 h using Miscanthus. Furthermore, xylan conversion was maintained within a 10% range for both ILs. A significant difference was observed only for the 2nd recycle of [C2C1Im][OAc], which did not occur with [C4C1Im][OAc]. This may be solved with other recycling strategies, such as the one recently applied by Sathitsuksanoh et al. [54], which used alcohols as alternative precipitating agents in the IL pretreatment process. Glucan and xylan conversion of pretreated agave bagasse by recycled [C2C1Im][OAc] and [C4C1Im][OAc] at 72-h saccharification time Ternary IL-water systems for the pretreatment of mixed feedstocks (such as AGB and MSW) enable delignification and sugar conversion at levels similar to 100% IL. Mixing ILs such as [C2C1Im][OAc] and [C4C1Im][OAc], which fall in different price ranges, is an effective way to pretreat biomass while maintaining performance. In addition, the effectiveness of [C2C1Im][OAc] and [C4C1Im][OAc] during biomass pretreatment remains intact with up to 40% water content. MSW presents a relatively higher sugar yield than AGB, whereas the AGB/MSW (1:1) blend shows glucan conversions of 94.1 and 83.0% using IL systems with ~10 and ~40% water content, respectively. Dissolution of biomass cellulose was also efficient using recycled ILs, with only a ~10% decrease in glucan and xylan conversion yields observed for the 2nd IL recycle in comparison with fresh IL. The same held for the cellulose crystallinity of the IL-treated biomass, where comparable results were obtained when pure and recycled ILs were employed. The chemical structures of neat and recycled ILs demonstrate strong similarities in their behavior, as observed by FTIR and 1H-NMR spectroscopy. Altogether, this study highlights the potential of blending in MSW as a low-cost feedstock, since IL-water systems with imidazolium-based IL mixtures yield biomass pretreatment results comparable to those obtained with pure ILs. Finally, the promising IL recycling results indicate that this strategy can be used and further integrated with downstream saccharification and fermentation within a biorefinery scheme to reduce total operating costs.
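As a back-of-the-envelope illustration of what the ~85–90% per-cycle IL recovery reported above implies for solvent make-up, the short sketch below compounds the recovery over three recycling steps. It is purely illustrative; the recovery fractions are the only inputs taken from the text, and a constant per-cycle recovery is an assumption.

```python
# Cumulative IL retained after n recycling steps, assuming a constant
# per-cycle recovery; the 0.85-0.90 range is taken from the text above.
def cumulative_recovery(per_cycle_recovery, n_cycles):
    return per_cycle_recovery ** n_cycles

for recovery in (0.85, 0.90):
    retained = cumulative_recovery(recovery, 3)
    print(f"{recovery:.0%} per cycle -> {retained:.0%} of the original IL "
          f"after 3 recycles ({1 - retained:.0%} make-up required)")
```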
[C2C1Im][OAc]: 1-ethyl-3-methylimidazolium acetate 1-butyl-3-methylimidazolium acetate agave bagasse deionized FTIR: HPLC: high-performance liquid chromatography IL: ionic liquid MSW: Wu H, Mora-Pale M, Miao J, Doherty TV, Linhardt RJ, Dordick JS. Facile pretreatment of lignocellulosic biomass at high loadings in room temperature ionic liquids. Biotechnol Bioeng. 2011;108:2865–75. Jørgensen H, Kristensen JB, Felby C. Enzymatic conversion of lignocellulose into fermentable sugars: challenges and opportunities. Biofuels Bioprod Bioref. 2007;1:119–34. Kumar P, Barrett DM, Delwiche MJ, Stroeve P. Methods for pretreatment of lignocellulosic biomass for efficient hydrolysis and biofuel production. Ind Eng Chem Res. 2009;48:3713–29. Trinh LTP, Lee YJ, Lee J-W, Lee H-J. Characterization of ionic liquid pretreatment and the bioconversion of pretreated mixed softwood biomass. Biomass Bioenerg. 2015;81:1–8. Singh S, Simmons BA, Vogel KP. Visualization of biomass solubilization and cellulose regeneration during ionic liquid pretreatment of switchgrass. Biotechnol Bioeng. 2009;104:68–75. Li C, Knierim B, Manisseri C, Arora R, Scheller HV, Auer M, Vogel KP, Simmons BA, Singh S. Comparison of dilute acid and ionic liquid pretreatment of switchgrass: biomass recalcitrance, delignification and enzymatic saccharification. Bioresour Technol. 2010;101:4900–6. Li C, Cheng G, Balan V, Kent MS, Ong M, Chundawat SPS, daCosta SL, Melnichenko YB, Dale BE, Simmons BA, Singh S. Influence of physico-chemical changes on enzymatic digestibility of ionic liquid and AFEX pretreated corn stover. Bioresour Technol. 2011;102:6928–36. Perez-Pimienta JA, Lopez-Ortega MG, Varanasi P, Stavila V, Cheng G, Singh S, Simmons BA. Comparison of the impact of ionic liquid pretreatment on recalcitrance of agave bagasse and switchgrass. Bioresour Technol. 2013;127:18–24. Sun N, Xu F, Sathitsuksanoh N, Thompson VS, Cafferty K, Li C, Tanjore D, Narani A, Pray TR, Simmons BA, Singh S. Blending municipal solid waste with corn stover for sugar production using ionic liquid process. Bioresour Technol. 2015;186:200–6. Sun N, Rahman M, Qin Y, Maxim ML, Rodriguez H, Rogers RD. Complete dissolution and partial delignification of wood in the ionic liquid 1-ethyl-3-methylimidazolium acetate. Green Chem. 2009;11:646–55. Brandt A, Grasvik J, Hallett JP, Welton T. Deconstruction of lignocellulosic biomass with ionic liquids. Green Chem. 2013;15:550–83. Konda NM, Shi J, Singh S, Blanch HW, Simmons BA, Klein-Marcuschamer D. Understanding cost drivers and economic potential of two variants of ionic liquid pretreatment for cellulosic biofuel production. Biotechnol Biofuels. 2014;7:86. Juneja A, Kumar D, Murthy GS. Economic feasibility and environmental life cycle assessment of ethanol production from lignocellulosic feedstock in Pacific Northwest U.S. J Renew Sustain Energ. 2013;5:023142. Li C, Tanjore D, He W, Wong J, Gardner JL, Thompson VS, Yancey NA, Sale KL, Simmons BA, Singh S. Scale-up of ionic liquid-based fractionation of single and mixed feedstocks. Bioenergy Res. 2015;8(3):982–91. Shi J, Thompson V, Yancey N. Impact of mixed feedstocks and feedstock densification on ionic liquid pretreatment efficiency. Biofuels. 2013;4:63–72. Hou XD, Li N, Zong MH. Facile and simple pretreatment of sugar cane bagasse without size reduction using renewable ionic liquidswater mixtures. ACS Sustaine Chem Eng. 2013;1:519–26. Xia S, Baker GA, Li H, Ravula S, Zhao H. 
Aqueous ionic liquids and deep eutectic solvents for cellulosic biomass pretreatment and saccharification. RSC Adv. 2014;4:10586–96. Swatloski Richard P, Spear Scott K, Holbrey John D, Rogers Robin D. Dissolution of cellulose with ionic liquids. J Am Chem Soc. 2002;124:4974–5. Kohno Y, Ohno H. Ionic liquid/water mixtures: from hostility to conciliation. Chem Comm. 2012;48:7119. Wang Q, Chen Q, Mitsumura N, Animesh S. Behavior of cellulose liquefaction after pretreatment using ionic liquids with water mixtures. J Appl Polym Sci. 2014;131:1–8. Brandt A, Hallett JP, Leak DJ, Murphy RJ, Welton T. The effect of the ionic liquid anion in the pretreatment of pine wood chips. Green Chem. 2010;12:672–9. Parthasarathi R, Balamurugan K, Shi J, Subramanian V, Simmons BA, Singh S. Theoretical insights into the role of water in the dissolution of cellulose using IL/water mixed solvent systems. J Phys Chem B. 2015;119:acs.jpcb.5b02680. George A, Brandt A, Tran K, Zahari SMSNS, Klein-Marcuschamer D, Sun N, Sathitsuksanoh N, Shi J, Stavila V, Parthasarathi R, Singh S, Holmes BM, Welton T, Simmons BA, Hallett JP. Design of low-cost ionic liquids for lignocellulosic biomass pretreatment. Green Chem. 2015;17:1728–34. Qiu Z, Aita GM. Pretreatment of energy cane bagasse with recycled ionic liquid for enzymatic hydrolysis. Bioresour Technol. 2013;129:532–7. Zhang Z, O'Hara IM, Doherty WOS. Pretreatment of sugarcane bagasse by acid-catalysed process in aqueous ionic liquid solutions. Bioresour Technoly. 2012;120:149–56. An Y-X, Zong M-H, Wu H, Li N. Pretreatment of lignocellulosic biomass with renewable cholinium ionic liquids: biomass fractionation, enzymatic digestion and ionic liquid reuse. Bioresour Technol. 2015;192:165–71. Shill K, Padmanabhan S, Xin Q, Prausnitz JM, Clark DS, Blanch HW. Ionic liquid pretreatment of cellulosic biomass: enzymatic hydrolysis and ionic liquid recycle. Biotechnol Bioeng. 2011;108:511–20. Gao J, Chen L, Yuan K, Huang H, Yan Z. Ionic liquid pretreatment to enhance the anaerobic digestion of lignocellulosic biomass. Bioresour Technol. 2013;150:352–8. Klein-Marcuschamer D, Simmons BA, Blanch HW. Techno-economic analysis of a lignocellulosic ethanol biorefinery with ionic liquid pre-treatment. Biofuel Bioprod Bioref. 2011;5:562–9. Perez-Pimienta JA, Lopez-Ortega MG, Chavez-Carvayar JA, Varanasi P, Stavila V, Cheng G, Singh S, Simmons BA. Characterization of agave bagasse as a function of ionic liquid pretreatment. Biomass Bioenerg. 2015;75:180–8. Sluiter A, Hames B, Ruiz RO, Scarlata C, Sluiter J, Templeton D, Energy D of. Determination of structural carbohydrates and lignin in biomass. Biomass Anal Technol Team Lab Anal Proced. 2004;2011(July):1–14. Shill K, Miller K, Clark DS, Blanch HW. A model for optimizing the enzymatic hydrolysis of ionic liquid-pretreated lignocellulose. Bioresour Technol. 2012;126:290–7. Segal L, Creely JJ, Martin AE, Conrad CM. An empirical method for estimating the degree of crystallinity of native cellulose using the X-ray diffractometer. Text Res J. 1959;29:786–94. Duncan DB. Multiple range and multiple F tests. Biometrics. 1955;11:1–42. Shi J, George KW, Sun N, He W, Li C, Stavila V, Keasling JD, Simmons BA, Lee TS, Singh S. Impact of pretreatment technologies on saccharification and isopentenol fermentation of mixed lignocellulosic feedstocks. Bioenergy Res. 2015;8:1004–13. Li C, Sun L, Simmons BA, Singh S. Comparing the recalcitrance of eucalyptus, pine, and switchgrass using ionic liquid and dilute acid pretreatments. Bioenergy Res. 2013;6:14–23. 
Ávila-Lara AI, Camberos-Flores JN, Mendoza-Pérez JA, Messina-Fernández SR, Saldaña-Duran CE, Jimenez-Ruiz EI, Sánchez-Herrera, Leticia M, Pérez-Pimienta JA. Optimization of alkaline and dilute acid pretreatment of agave bagasse by response surface methodology. Front Bioeng Biotechnol. 2015;3:146. Davis SC, Dohleman FG, Long SP. The global potential for Agave as a biofuel feedstock. GCB Bioenergy. 2011;3:68–78. Foyle T, Jennings L, Mulcahy P. Compositional analysis of lignocellulosic materials: evaluation of methods used for sugar analysis of waste paper and straw. Bioresour Technol. 2007;98:3026–36. Fu D, Mazza G. Aqueous ionic liquid pretreatment of straw. Bioresour Technol. 2011;102:7008–11. Subhedar PB, Gogate PR. Alkaline and ultrasound assisted alkaline pretreatment for intensification of delignification process from sustainable raw-material. Ultrason Sonochem. 2014;21:216–25. Polovka M, Polovková J, Vizárová K, Kirschnerová S, Bieliková L, Vrška M. The application of FTIR spectroscopy on characterization of paper samples, modified by Bookkeeper process. Vib Spectrosc. 2006;41:112–7. Smidt E, Böhm K, Schwanninger M. The application of FT-IR spectroscopy in waste management. In: Nikolic GH, editor. Fourier transforms - new analytical approaches and FTIR strategies. Rijeka: InTech; 2011. p. 251–306. Zhao X-B, Wang L, Liu D-H. Peracetic acid pretreatment of sugarcane bagasse for enzymatic hydrolysis: a continued work. J Chem Technol Biotechnol. 2008;83:950–6. Brandt A, Ray MJ, To TQ, Leak DJ, Murphy RJ, Welton T. Ionic liquid pretreatment of lignocellulosic biomass with ionic liquid-water mixtures. Green Chem. 2011;13:2489–99. Zhang Z, O'Hara I, Doherty W. Effects of pH on pretreatment of sugarcane bagasse using aqueous imidazolium ionic liquids. Green Chem. 2013;15:431–8. Shi J, Balamurugan K, Parthasarathi R, Sathitsuksanoh N, Zhang S, Stavila V, Subramanian V, Simmons B, Singh S. Understanding the role of water during ionic liquid pretreatment of lignocellulose: co-solvent or anti-solvent? Green Chem. 2014;16:3830–40. Perez-Pimienta JA, Flores-Gómez CA, Ruiz HA, Sathitsuksanoh N, Balan V, da Costa Sousa L, Dale BE, Singh S, Simmons BA. Evaluation of agave bagasse recalcitrance using AFEX™, autohydrolysis, and ionic liquid pretreatments. Bioresour Technol. 2016;211:216–23. Montella S, Balan V, Sousa C, Gunawan C, Giacobbe S, Pepe O, Faraco V. Saccharification of newspaper waste after ammonia fiber expansion or extractive ammonia. AMB Express. 2016;6:18. Velázquez-Valadez U, Farías-Sánchez JC, Vargas-Santillán A, Castro-Montoya AJ. Tequilana weber agave bagasse enzymatic hydrolysis for the production of fermentable sugars: oxidative-alkaline pretreatment and kinetic modeling. Bioenergy Res. 2016;9:998–1004. Saucedo-Luna J, Castro-Montoya AJ, Martinez-Pacheco MM, Sosa-Aguirre CR, Campos-Garcia J. Efficient chemical and enzymatic saccharification of the lignocellulosic residue from Agave tequilana bagasse to produce ethanol by Pichia caribbica. J Ind Microbiol Biotechnol. 2011;38:725–32. Shi J, Mirvat E, Yang B, Wyman CE. The potential of cellulosic ethanol production from municipal solid waste: a technical and economic evaluation. Berkeley: University of California Energy Institute; 2009. Perez-Pimienta JA, Poggi-Varaldo HM, Ponce-Noyola T, Ramos-Valdivia AC, Chavez-Carvayar JA, Stavila V, Simmons BA. Fractional pretreatment of raw and calcium oxalate-extracted agave bagasse using ionic liquid and alkaline hydrogen peroxide. Biomass Bioenerg. 2016;91:48–55. 
Sathitsuksanoh N, Sawant M, Truong Q, Tan J, Canlas CG, Sun N, Zhang W, Renneckar S, Prasomsri T, Shi J, Çetinkol Ö, Singh S, Simmons BA, George A. How alkyl chain length of alcohols affects lignin fractionation and ionic liquid recycle during lignocellulose pretreatment. Bioenergy Res. 2015;8:973–81. JAP carried out the biomass pretreatment, compositional analysis, ionic liquid recycle, saccharification work, and drafted the manuscript. NS performed the NMR analysis and drafted the NMR-related parts of the manuscript. KT contributed to compositional analysis and sugar analysis work. VT produced the MSW blends. VS performed the XRD analysis, calculated the crystallinity index of the samples, and drafted XRD-related parts of the manuscript. SS and BAS contributed to the original experimental design. TP, SS, and BAS conceived the study, participated in its design, coordination, and drafted the manuscript. All authors suggested modifications to the draft and approved the final manuscript. We have read Biotechnology for Biofuels policy on data and material release, and the data within this manuscript meet those requirements. All authors read and approved the final manuscript. The authors thank Novozymes for the gift of the Cellic® CTec2 and HTec2 enzyme cocktails. The data supporting our findings can be found in this manuscript and in the additional files provided. The authors agree to publish in the journal. This work was part of the DOE Joint BioEnergy Institute (http://www.jbei.org) supported by the US Department of Energy, Office of Science, Office of Biological and Environmental Research, through Contract DE-AC02-05CH11231 between Lawrence Berkeley National Laboratory and the US Department of Energy. NS was partially supported by the National Science Foundation under Cooperative Agreement No. 1355438. Department of Chemical Engineering, Universidad Autónoma de Nayarit, Tepic, Mexico Jose A. Perez-Pimienta Department of Chemical Engineering and Conn Center for Renewable Energy Research, University of Louisville, Louisville, KY, USA Noppadon Sathitsuksanoh Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Joint BioEnergy Institute, 5885 Hollis Street, Emeryville, CA, 94608, USA Noppadon Sathitsuksanoh, Kim Tran, Seema Singh & Blake A. Simmons Biological and Chemical Processing Department, Idaho National Laboratory, Idaho Falls, ID, USA Vicki S. Thompson Department of Biotechnology and Bioengineering, CINVESTAV-IPN, Ciudad de México, Mexico Teresa Ponce-Noyola Biological and Engineering Sciences Center, Sandia National Laboratories, Livermore, CA, USA Kim Tran, Seema Singh & Blake A. Simmons Energy Nanomaterials Department, Sandia National Laboratories, Livermore, CA, USA Vitalie Stavila Kim Tran Seema Singh Blake A. Simmons Correspondence to Blake A. Simmons. Additional file 1. FTIR spectra of untreated and pretreated biomass under different ionic liquid–water systems. Unt: untreated, AGB: agave bagasse, MSW: municipal solid waste, Blend: agave bagasse/municipal solid waste (1:1) blend. FTIR spectra of all untreated and pretreated samples from agave bagasse, municipal solid waste and the agave bagasse/municipal solid waste (1:1) blend between 800 and 2000 cm−1 with a spectral resolution of 4 cm−1. Additional file 2. Chemical changes tracked of fresh and recycled [C2C1Im][OAc] (up) and [C4C1Im][OAc] (down). FTIR spectra of recycled ionic liquids [C2C1Im][OAc] and [C4C1Im][OAc] from three different cycles. Additional file 3. 
XRD spectrum and crystallinity index (CrI) of agave bagasse under different conditions (untreated, fresh IL-pretreated and IL-recycled). XRD diffractograms of untreated and pretreated agave bagasse under different process conditions.

Perez-Pimienta, J.A., Sathitsuksanoh, N., Thompson, V.S. et al. Ternary ionic liquid–water pretreatment systems of an agave bagasse and municipal solid waste blend. Biotechnol Biofuels 10, 72 (2017). https://doi.org/10.1186/s13068-017-0758-4

Keywords: Biomass blend · Ternary system · Biomass pretreatment
Progressive Neural Networks

Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu and Raia Hadsell

Keywords: cs.LG

Abstract: Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.

TLDR; The authors propose Progressive Neural Networks (ProgNN), a new way to do transfer learning without forgetting prior knowledge (as happens with finetuning). ProgNNs train a neural network on task 1, freeze its parameters, and then train a new network on task 2 while introducing lateral connections and adapter functions from network 1 to network 2. This process can be repeated with further columns (networks). The authors evaluate ProgNNs on 3 RL tasks and find that they outperform finetuning-based approaches.

#### Key Points

- Finetuning is a destructive process that forgets previous knowledge. We don't want that.
- Layer h_k in network 3 gets additional lateral connections from layers h_(k-1) in network 2 and network 1. Parameters of those connections are learned, but network 2 and network 1 are frozen during training of network 3.
- Downside: # of parameters grows quadratically with the number of tasks. The paper discusses some approaches to address the problem, but it is not clear how well these work in practice.
- Metric: AUC (average score per episode during training) as opposed to final score. Transfer score = relative performance compared with a single-net baseline.
- Authors use Average Perturbation Sensitivity (APS) and Average Fisher Sensitivity (AFS) to analyze which features/layers from previous networks are actually used in the newly trained network.
- Experiment 1: Variations of the Pong game. The baseline that finetunes only the final layer fails to learn. ProgNN beats other baselines and APS shows re-use of knowledge.
- Experiment 2: Different Atari games. ProgNNs result in positive transfer 8/12 times, negative transfer 2/12 times. Negative transfer may be a result of optimization problems. Finetuning final layers fails again. ProgNN beats other approaches.
- Experiment 3: Labyrinth, 3D Maze. Pretty much the same result as the other experiments.

#### Notes

- It seems like the assumption is that layer k always wants to transfer knowledge from layer (k-1). But why is that true? Networks are trained on different tasks, so the layer representations, or even numbers of layers, may be completely different. And once you introduce lateral connections from all layers to all other layers the approach no longer scales.
- Old tasks cannot learn from new tasks. Unlike humans.
- Gating or residuals for the lateral connections could make sense to allow the network to "easily" re-use previously learned knowledge.
- Why use the AUC metric? I also would've liked to see the final score. Maybe there's a good reason for this, but the paper doesn't explain.
- Scary that finetuning the final layer only fails in most experiments. That's a very commonly used approach in non-RL domains.
- Someone should try this on non-RL tasks.
- What happens to training time and optimization difficulty as you add more columns? Seems prohibitively expensive.

Summary by David Stutz

Rusu et al. propose progressive networks, sets of networks allowing transfer learning over multiple tasks without forgetting. The key idea of progressive networks is very simple. Instead of fine-tuning a model (for transfer learning), the pre-trained model is taken and its weights fixed. Another network is then trained from scratch while receiving features from the pre-trained network as additional input. Specifically, the authors consider a sequence of tasks. For the first task, a deep neural network (e.g. a multi-layer perceptron) is trained. Assuming $L$ layers with hidden activations $h_i^{(1)}$ for $i \leq L$, each layer computes $h_i^{(1)} = f(W_i^{(1)} h_{i-1}^{(1)})$ where $f$ is an activation function and, for $i = 1$, the network input is used. After training until convergence, a second network is trained – now on a different task. The parameters of the first network are fixed, but the second network can use the features of the first one: $h_i^{(2)} = f(W_i^{(2)} h_{i-1}^{(2)} + U_i^{(2:1)}h_{i-1}^{(1)})$. This idea can be generalized to the $k$-th network, which can use the activations from all the previous networks: $h_i^{(k)} = f(W_i^{(k)} h_{i-1}^{(k)} + \sum_{j < k} U_i^{(k:j)} h_{i-1}^{(j)})$. For three networks, this is illustrated in Figure 1.

https://i.imgur.com/ndyymxY.png

Figure 1: An illustration of the feature transfer between networks.

In practice, however, this approach results in an explosion of parameters and computation. Therefore, the authors apply a dimensionality reduction to the $h_{i-1}^{(j)}$ for $j < k$. Additionally, an individual scaling factor is used to account for different ranges used in the different networks (also depending on the input data). Then, the above equation can be rewritten as $h_i^{(k)} = f(W_i^{(k)} h_{i-1}^{(k)} + U_i^{(k)} f(V_i^{(k)} \alpha_i^{(:k)} h_{i-1}^{(:k)}))$. (Note that notation has been adapted slightly, as I found the original notation misleading.) Here, $h_{i-1}^{(:k)}$ denotes the concatenated features from all networks $j < k$. Similarly, for each network, one $\alpha_i^{(j)}$ is learned to scale the features (note that the notation above would imply an element-wise multiplication of the $\alpha_i^{(j)}$'s repeated in a vector, or equivalently a matrix-vector product). $V_i^{(k)}$ then describes a dimensionality reduction; overall, a one-layer perceptron is used to "transfer" features from networks $j < k$ to the current network. The same approach can also be applied to convolutional layers (e.g. a $1 \times 1$ convolution can be used for dimensionality reduction). In experiments, the authors show that progressive networks allow efficient transfer learning (efficient in terms of faster training). Additionally, they study which features are actually transferred. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
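The lateral-connection scheme described above is easy to prototype. The following is a minimal NumPy sketch of the forward pass only (no training loop, and the one-layer adapter with the scaling factors $\alpha$ and projection $V$ is omitted for brevity); the layer sizes, ReLU activation and random initialization are illustrative assumptions made here, not choices taken from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class Column:
    """One progressive-network column: an MLP whose hidden layers may also
    receive activations from previously trained (frozen) columns through
    lateral connection matrices U."""

    def __init__(self, sizes, prev_columns=(), rng=None):
        rng = rng or np.random.default_rng(0)
        self.sizes = sizes
        self.prev = list(prev_columns)      # earlier columns, treated as frozen
        self.W = [rng.normal(0.0, 0.1, (sizes[i + 1], sizes[i]))
                  for i in range(len(sizes) - 1)]
        # Lateral weights: for every layer i >= 1, one matrix per previous column,
        # mapping that column's activation at depth i into this column's layer i+1.
        self.U = [[rng.normal(0.0, 0.1, (sizes[i + 1], col.sizes[i]))
                   for col in self.prev] if i > 0 else []
                  for i in range(len(sizes) - 1)]

    def forward(self, x):
        """Return activations h_0 .. h_L of this column; previous columns are
        re-evaluated here for clarity (their weights never change)."""
        prev_acts = [col.forward(x) for col in self.prev]
        h = [np.asarray(x, dtype=float)]
        for i in range(len(self.W)):
            pre = self.W[i] @ h[i]
            for U_ij, acts_j in zip(self.U[i], prev_acts):
                pre = pre + U_ij @ acts_j[i]          # lateral input from column j
            h.append(relu(pre))
        return h

# Column 1 would be trained on task 1 (training loop omitted), then frozen.
col1 = Column(sizes=[4, 8, 8, 2])
# Column 2 is trained on task 2; only its own W and U would receive gradients.
col2 = Column(sizes=[4, 8, 8, 2], prev_columns=[col1], rng=np.random.default_rng(1))

x = np.ones(4)
print(col2.forward(x)[-1])    # output of column 2, built on column 1's features
```

In a real implementation the earlier columns' activations would be cached rather than recomputed, and the adapter MLP would sit between `acts_j[i]` and `U_ij` exactly as in the rewritten equation above.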
Development of a new testing equipment that combines the working principles of both the split Hopkinson bar and the drop weight testers

Rateb Adas1 & Majed Haiba1

In the current work, a new high strain rate tensile testing equipment is proposed. The equipment uses a pendulum device to generate an impact load and a three-bar mechanism to bring that load to act upon a specially designed specimen. As the standard impact testing apparatus uses a pendulum device and the well-known SHB high strain rate tester adopts the above-mentioned mechanism, the introduced equipment can be dealt with as an impact apparatus in which the base that supports the V-shape specimen is replaced with the three-bar configuration that the traditional SHB uses. In order to demonstrate the applicability of the new tester, virtual design tools were used to determine the most appropriate configuration for it. Then, a detailed design was created, and a full-scale prototype was produced, calibrated, instrumented and tested. The obtained results demonstrate that the new tester is capable of axially straining steel specimens up to failure at a maximum rate of about 250 s−1, which is reasonable when compared with more established high strain rate testers.

Due to the need for optimized products, different uniaxial tensile testing techniques have been introduced to generate data under dynamic conditions. In this context, servo-hydraulic (SH) (Boyce and Crenshaw 2005), split Hopkinson bar (SHB) (Ogawa 1984) and drop weight (DW) (Chan 2009) are some of the most popular testing systems. As shown in Fig. 1, dynamic testing systems are classified based on the achievable strain rate \(\dot{\varepsilon }\). According to what is shown in the figure, each of the above-mentioned systems serves a specific range of \(\dot{\varepsilon }\) and consequently fits a specific set of applications. For the automotive industry there is a clear need to use either a high-speed SH or a DW testing system, as the loading speed associated with this industry corresponds to \(0.01 < \dot{\varepsilon } < 500\) s−1 (Xiao 2008). Due to the lack of access to high-speed SH testers, a growing interest in the DW technology was noticed (Chan 2009; Li and Liu 2009; Mott et al. 2007; Ferrini and Khachonkitkosol 2015). In spite of that, reports of work based on that technology for dynamic tensile tests are relatively scarce due to limitations and complications associated with it (Chan 2009) (the "Literature review of the currently available DW tensile testers" section presents some of these shortcomings). Thus, the feasibility of modifying that technology for less complication and easier operation needs to be explored. Within this framework, the current research presents a new design that attempts to overcome some of the challenges associated with the use of the currently available DW tensile testers. As shown in Fig. 2, the working principle of the proposed tester implements a pendulum device to deliver an impact load to one end of the specimen, which is mounted between the incident and transmitter bars. A striker tube impacts the end of the incident bar so that a stress wave is created and propagated through the incident bar, the specimen and the transmitter bar. Based on the wave propagation theory, the stress and strain in the specimen can be determined from the strain histories measured using strain gauges which are attached to the bars.
Obviously, perfect alignment of the components that the load passes through is essential to ensure pure tensile loading along the gauge section of the specimen.

Fig. 1 Ranges of strain rates covered by dynamic tensile testing systems (Xiao 2008)

Fig. 2 A sketch of the proposed tensile testing machine

Literature review of the currently available DW tensile testers

The Drop Tower Instrument (DTI) is a high strain rate compression tester in which a weight is suspended at a height and dropped onto a specimen, and a force sensor, which is attached to the bottom of the weight, is used to take strain readings. Obviously, a modified form of the above setup is needed if tensile tests are to be accounted for using a traditional DTI. Within this context, a setup that involves two-dog-bone specimens with grip sections at two ends and a curved connecting part was considered by (Chan 2009). As shown in Fig. 3, the dropped weight strikes the center of the specimen during the test, creating tension along the vertically aligned dog bone sections of the sheet.

Fig. 3 The tensile testing specimen considered by (Chan 2009)

Clearly, the above-mentioned tensile testing configuration is problematic, as a bending wave is sent through the specimen, which would create an oscillation and would also add noise to strain measurements. Obviously, replacing the two-dog-bone specimen with a simpler one improves the quality of the measured strain histories, but requires a modified design of the DTI. Within this context, a modified design of the traditional DTI, which is capable of measuring the tensile response of materials, was developed by Mott et al. (2007). As shown in Fig. 4, the device uses a 100 kg drop weight which is raised on a vertical track to a given height and then released. Attached to the bottom of the weight are two round impact bars. These bars engage L levers, which pivot about bearings as the drop weight falls, to pull attached cables. The cables pass around pulleys and are attached to shuttles, which are in turn caused to move in opposite directions on linear bearings on a horizontal track. The tensile force is measured by load cells at each end of the sample, and strain in the specimen is determined by the change in length between marks at either end of the test section. Evidently, Mott's design of the DW tensile tester uses simply shaped, axially loaded specimens, but it implements flexible elements, which require distinct treatment when analyzing the recorded strain histories, and adopts a very complicated mechanism, which reflects negatively on machine calibration, efficiency and expenses.

Fig. 4 A modified design of the tensile testing DTI (Mott et al. 2007)

CAD modeling and design development

The following set of requirements and constraints was taken into account during the design phases of the proposed tester:

- Loading steel specimens up to failure at a maximum strain rate of 300 s−1.
- Respecting the recommended testing practices, specimen geometry, clamping method, measurement devices, and data processing methods, as stated in Borsutzki et al. (2005).
- Minimization in size, ease of specimen mounting and dismounting and efficiency in energy management.
- Axiality of specimen loading and centricity between the components that the impact energy passes through.

In doing so, the model, which is shown in Fig. 5, was initially created using a Multi-Body System (MBS) software (Garcia).
Fig. 5 A kinetic model of the initial design of the proposed tester

As illustrated in the figure, that model includes the following components:

- Rigid elements to represent the pendulum device (the hammer), the incident bar, the transmitter bar and the striker tube.
- A revolute joint to account for the pendulum motion of the hammer.
- A sliding joint to account for the motion of the striker tube along the incident bar.
- Another sliding joint to account for the possible longitudinal motion between the incident bar and the two linear supports which hold it.
- A non-linear spring to stand for the specimen (Shames 1992). A low strain rate tensile test of an aluminum specimen was conducted and the non-linear stiffness of the spring was calculated using the obtained load–deflection data.

Based on the results obtained from the above modeling, the design of the proposed tester was finalized via an iteration process that includes modification, result generation and evaluation steps, in which all design variations were explored. Figure 6a–c illustrates the final design of the tester. This design includes the following components: (1) steel structure, (2) hammer, (3) axis of rotation, (4) striker tube, (5) incident bar, (6) transmitter bar with spherical end, (7) specimen, (8) transmitter bar support with spherical hole, (9) incident bar supports, (10) non-return lock, (11) extension of incident bar, (12) baffle fitted with a rubber damper.

Fig. 6 (a) 3-D illustration of the proposed machine. (b) Frontal section of the proposed machine. (c) An illustration of the transmitter bar, equipped with two symmetrical flat locations for strain gauge installation.

In the above design, and as recommended by Borsutzki et al. (2005), the force history F(t), required to evaluate the stress variation acting on the specimen, is calculated using Eq. (1), in which \(\varepsilon_{e} (t)\) is the elastic strain history (measured using a strain gauge attached to the transmitter bar, as illustrated in Fig. 6), E is the elastic modulus of the transmitter bar material, and A is the cross-sectional area of the transmitter bar (measured at the strain gauge locations):

$$F\left( t \right) = E \cdot \varepsilon_{e} \left( t \right) \cdot A$$

(1)

The structural modeling of the proposed tester

To carry out the structural study of the proposed tester, a finite element model was created using ANSYS software. The created model, which represents one-half of the structure, consists of 114,730 elements of the type "Solide45". For constraining, symmetrical boundary conditions (to represent the missing half of the structure) and contact elements and nodal constraints (to represent the machine-ground interface) were considered. For loading, the relevant load histories, which were estimated from the above-mentioned MBS simulation, were applied as nodal loads at three locations: (1) the hammer articulation, (2) the baffle, and (3) the spherical joint support. Solving the created model and reviewing the obtained results (stress, strain, and displacement contours and histories) enables the following conclusions to be drawn:

- The structure is very stiff, as the maximum deflection does not exceed 5 μm.
- The structure has a minimum safety factor of about 13, as the maximum value of the equivalent stress equals 20.7 MPa.
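As a quick illustration of Eq. (1), the snippet below converts a strain history recorded on the transmitter bar into a force history. The numbers are assumptions made here for the example only: E is taken as 200 GPa for a steel bar and A as the nominal area of a 30 mm round bar (the actual gauge section of the transmitter bar is flattened for the strain gauges, so its true area would be slightly smaller), and the strain trace itself is synthetic.

```python
import numpy as np

# Illustrative values only (not taken from the paper's calibration):
E = 200e9                         # elastic modulus of a steel bar, Pa
A = np.pi * (0.030 / 2) ** 2      # nominal area of a 30 mm round bar, m^2

def force_history(elastic_strain):
    """Eq. (1): F(t) = E * eps_e(t) * A, applied sample by sample."""
    return E * np.asarray(elastic_strain, dtype=float) * A

# Synthetic strain trace: a short pulse peaking at 500 microstrain.
t = np.linspace(0.0, 0.01, 1000)                       # s
eps_e = 500e-6 * np.exp(-((t - 0.004) / 0.001) ** 2)   # dimensionless
F = force_history(eps_e)
print(f"peak force ~ {F.max() / 1e3:.1f} kN")
```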
Producing, installing and preliminary testing of the tester

During the subsequent phases of the work, the standard and non-standard components of the tester were respectively purchased and manufactured; they were then assembled and calibrated in order to account for the above-mentioned axiality and centricity requirements. The obtained hardware, which is shown in Fig. 7, was installed over a solid floor and then subjected to preliminary tests.

Fig. 7 An illustration of the produced tester

In doing so, the following considerations were taken into account:

- Components which are subjected to impact loading were thermally hardened, up to 55 HRC, in order to eliminate energy losses due to local distortions.
- The spherical surfaces of the transmitter bar and the corresponding support were subjected to a set of special treatments (hardening, polishing and greasing) in order to improve the performance of the interface between them. This was essential to assure pure axial loading of the specimen when loaded.
- Components which move relative to each other were equipped with linear bearings in order to eliminate energy losses due to friction.
- The three supports, which hold the bars, were accurately aligned using a standard 30 mm diameter chrome rod, produced by Bosch Rexroth Corp. An identical rod was also used to produce the bars (the raw material which was used to produce the bars is a standard 30 mm chrome rod, produced by Bosch Rexroth Corp).
- The non-return lock was calibrated in a way that maintains the kinetic energy of the striker tube when it passes through it, while preventing the retreat of that tube towards the tested specimen after hitting the extension of the incident bar.
- Twelve M10 × 100 steel screws were used to rigidly install the tester over a solid floor.

One steel specimen was manufactured according to the details shown in Fig. 8. Obviously, the designated design of the specimen is simple, easy to mount and dismount and similar to what the SHB tester uses. Moreover, it respects the related recommendations, as specified in Borsutzki et al. (2005) and in ASTM D1822.

Fig. 8 The designated design of the implemented specimens

For preliminary testing, the specimen was firmly installed by threading it into the corresponding threaded holes of the incident and transmitter bars, respectively. The hammer was then raised up to the highest possible position and freed to hit the striker tube, which moves and hits the extension of the incident bar, causing the specimen to quickly extend up to failure. Due to the satisfactory performance noticed during the stage of the initial testing, it was decided to accept the designated design of the specimen and move ahead to produce and test twenty new steel specimens.

Strain rate determination

The strain rate history \(\dot{\varepsilon }\left( t \right)\), which is associated with a specific tensile test, is usually calculated using Eq. (2) (Borsutzki et al. 2005):

$$\dot{\varepsilon }\left( t \right) = \frac{dl}{l \cdot dt}$$

(2)

in which l is the parallel length of the specimen (l = 9 mm, as shown in Fig. 8) and \(\frac{dl}{dt}\) is the extension history of the tested specimen. Then, the determination of \(\dot{\varepsilon }\left( t \right)\) for a specific tensile test requires accurate measurement of the extension history.
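For a specimen with l = 9 mm, Eq. (2) amounts to differentiating the measured extension history with respect to time and dividing by the parallel length. The sketch below shows this numerically; the LVDT trace used here is synthetic and the sampling parameters are assumptions for the example only.

```python
import numpy as np

L_GAUGE = 9.0e-3   # parallel length of the specimen, m (given as 9 mm in Fig. 8)

def strain_rate(displacement, time):
    """Numerical form of Eq. (2): differentiate the measured extension history
    dl/dt and divide by the parallel length of the specimen."""
    displacement = np.asarray(displacement, dtype=float)
    time = np.asarray(time, dtype=float)
    velocity = np.gradient(displacement, time)   # dl/dt, m/s
    return velocity / L_GAUGE                    # 1/s

# Example with a made-up LVDT trace: 2.1 mm of extension over about 1 ms.
t = np.linspace(0.0, 1.5e-3, 300)
d = 2.1e-3 * np.clip(t / 1.0e-3, 0.0, 1.0)       # ramp to 2.1 mm, then hold
print(f"peak strain rate ~ {strain_rate(d, t).max():.0f} 1/s")
```

With these made-up numbers the peak comes out near 230 s−1, i.e. the same order as the roughly 250 s−1 reported for the prototype, which is only a consistency check on the units, not a reproduction of the measured data.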
In the current research, it was assumed that the extension history of the loaded specimen is identical to the motion history of the incident bar, and that assumption was accepted due to the huge rigidity of the bar when compared with that of the specimen (Borsutzki et al. 2005). Thus, an acquisition system, which involves an LVDT sensor, a processing unit and an industrial computer, was used to record the longitudinal motion of the incident bar when it moves due to specimen elongation and then failure. As shown in Fig. 9, a calibration table was used to accurately locate and rigidly install the sensor. In doing so, the instructions, which are listed in the user manual of the sensor, were carefully respected.

Fig. 9 An illustration of the calibrated sensor

Processing of the recorded histories

Out of the twenty specimens, one arbitrarily selected sample was mounted and then tested while operating the data acquisition system. The recorded motion history of the incident bar is shown in Fig. 10. Clearly, the obtained history needs processing, as inconvenient components and noise due to the 50 Hz electrical interference are superimposed on the useful data (Borsutzki et al. 2005). In an attempt to overcome this matter, several standard filtration algorithms were implemented without satisfactory results. Within this context, Fig. 11 presents the results obtained using a low-pass filter. Unfortunately, the use of the low-pass filtration process was disappointing, as it eliminates the main spike, which represents the most important part of the history.

Fig. 10 The first recorded motion history

Fig. 11 A comparison between the histories before and after the low-pass filtration

For more efficient processing of the history, a modification technique that involves the following steps was adopted, as recommended by Borsutzki et al. (2005):

- To eliminate the noise signal, an inverse of that signal was generated and added to the history, as illustrated in Fig. 12b.
- To eliminate the inconvenient components of the history, the two following processes were considered: zero displacements were eliminated from the after-filtration history, and displacements which were bigger than 2.1 mm were eliminated from the history, as the 2.1 mm limit matched the measured elongation of the broken specimen; see Fig. 13, which illustrates the above-mentioned processing.

Fig. 12 (a) History after noise filtration. (b) History before noise filtration

Fig. 13 The final state of the displacement history

For the statistical validation of the obtained results, the remaining nineteen specimens were tested and the obtained histories were modified using the above-mentioned steps, and then mean values and standard deviations were calculated; see Fig. 14 for the graphical presentation of the final results and Table 1 for samples of the obtained values.

Fig. 14 The statistical presentation of the final results

Table 1 Samples of the calculated extension results

In order to calculate the strain rate history of the performed tests, the history of the above-mentioned mean values was treated using Eq. (2), and Fig. 15 presents the achieved results.
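The processing chain just described is straightforward to reproduce. The sketch below is only an approximation of it: the paper does not state how the "inverse of the noise signal" was generated, so here the 50 Hz component is estimated by a least-squares fit and subtracted, and the cleaned histories are assumed to have been brought to a common length before the point-by-point statistics.

```python
import numpy as np

MAX_EXTENSION = 2.1e-3   # m, measured elongation of the broken specimen

def remove_mains_noise(signal, time, mains_hz=50.0):
    """Estimate the 50 Hz interference by least squares and subtract it
    (one possible way of 'adding the inverse of the noise signal')."""
    w = 2.0 * np.pi * mains_hz
    basis = np.column_stack([np.sin(w * time), np.cos(w * time), np.ones_like(time)])
    coeffs, *_ = np.linalg.lstsq(basis, signal, rcond=None)
    return signal - basis @ coeffs

def keep_useful_part(displacement):
    """Drop zero readings and readings beyond the 2.1 mm failure elongation."""
    d = np.asarray(displacement, dtype=float)
    return d[(d > 0.0) & (d <= MAX_EXTENSION)]

def pooled_statistics(histories):
    """Point-by-point mean and standard deviation across the tested specimens
    (assumes the cleaned histories share a common length)."""
    stacked = np.vstack(histories)
    return stacked.mean(axis=0), stacked.std(axis=0, ddof=1)

# Example with two synthetic specimen records (ramp plus 50 Hz noise).
t = np.linspace(0.0, 0.04, 2000)
raw = [2.2e-3 * np.clip(t / 1.0e-3, 0.0, 1.0) + 5e-5 * np.sin(2 * np.pi * 50 * t + k)
       for k in (0.0, 0.3)]
cleaned = [keep_useful_part(remove_mains_noise(r, t)) for r in raw]
n = min(len(c) for c in cleaned)
mean_history, std_history = pooled_statistics([c[:n] for c in cleaned])
print(mean_history[-1], std_history[-1])
```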
Fig. 15 The strain rate history of the tested specimens

Clearly, Fig. 15 proves the following:

- The current tester is capable of straining steel specimens at a maximum rate of about 250 s−1, which is normal, as the typical value of the maximum strain rate for the traditional DW tester does not exceed 300 s−1; see Fig. 1.
- The current tester is not capable of loading specimens at a constant strain rate, which is also normal, as many well-established DW and SHB testers have the same disadvantage (Chan 2009; Borsutzki et al. 2005).

In order to evaluate the performance of the proposed tester, a comparison between the history obtained using the tester and the corresponding history obtained by Chan (2009), who used an Instron drop tower testing machine and tested samples made of titanium, was accomplished. Clearly, the compared histories, as plotted in Fig. 16, are not identical, mostly due to differences in material behaviors; however, they are similar if matters such as the following are addressed:

Fig. 16 Comparison of strain rate histories

- The maximum achievable values of strain rates.
- The non-constant values of the calculated strain rates.
- The time scale of the specimen loading.

The current work dealt with a new design of high strain rate tensile testing equipment. The working principle of the introduced design includes a pendulum device which delivers an impact axial load to a specially designed specimen which extends up to failure at a particular strain rate. In order to evaluate the achievable straining properties, a detailed design of the proposed machine was developed; the machine was then produced, instrumented and tested. The presented data and the obtained results, as hosted in the current paper, prove the following:

- The proposed tester can be seen as a standard impact testing equipment in which the base that supports the specimen is replaced by the three bars that the traditional SHB uses. Additionally, the new design can also be seen as a modified form of the DW tensile tester.
- The new tensile testing equipment is capable of straining steel specimens up to failure at a maximum rate of 250 s−1.
- The maximum achievable strain rate of the new tester is comparable to that of the DW testers.
- The straining performance of the new testing equipment is not constant, as it increases almost linearly when testing specimens made of structural steel.
- The non-constant straining behavior of the tester is typical when compared with those of well-known high strain rate testing technologies.
- The performance of the new equipment is considerably adequate, as the maximal value of the calculated standard deviations did not exceed twice the nominal value of the corresponding standard deviation (Barford 1987).

Barford NC (1987) Experimental measurements: precision, error and truth, 2nd edn. Wiley, Chichester
Borsutzki M, Cornette D, Kuriyama Y, Uenishi A, Yan B (2005) Recommendations for dynamic tensile testing of sheet steels. International Iron and Steel Institute, Brussels
Boyce BL, Crenshaw TB (2005) Servo-hydraulic methods for mechanical testing in the sub-Hopkinson rate regime up to strain rates of 500 1/s. Sandia National Laboratories, Albuquerque
Chan JJ (2009) Design of fixtures and specimens for high strain-rate tensile testing on a drop tower. BSc thesis, Massachusetts Institute of Technology, Cambridge
Ferrini Sh, Khachonkitkosol L (2015) Design of a cost effective drop tower for impact testing of aerospace material. BSc thesis, Worcester Polytechnic Institute, Worcester
Li G, Liu D (2009) Low strain rate testing based on weight drop impact tester. In: Proceedings of the SEM annual conference, New Mexico, USA
Mott PH, Twigg JN, Roland DF, Schrader HS, Pathak JA, Roiland CM (2007) High-speed tensile test instrument. Rev Sci Instrum 78:045105
Ogawa K (1984) Impact-tension compression test by using a split-Hopkinson bar. Exp Mech 24:81–85
Shames IH (1992) Elastic and inelastic stress analysis. Prentice-Hall Inc, Englewood Cliffs
Xiao X (2008) Dynamic tensile testing of plastic materials. Polym Test 27:164–178

RA and MH equally contributed to the performed tasks, starting from the design stage up to drafting and approving the final manuscript. Both authors read and approved the final manuscript. This work was totally supported by Damascus University. The authors gratefully acknowledge the contributions of numerous colleagues in Damascus University for their help in the experiments, measurements, and valuable discussions. All authors declare that they have no competing interests.

Damascus University, Damascus, Syrian Arab Republic: Rateb Adas & Majed Haiba

Correspondence to Rateb Adas.

Adas, R., Haiba, M. Development of a new testing equipment that combines the working principles of both the split Hopkinson bar and the drop weight testers. SpringerPlus 5, 1155 (2016). doi:10.1186/s40064-016-2770-8

Keywords: High strain rate testing · Dynamic testing · Tensile testing equipment · Strain rate histories
papa rudin pdf

The solutions to Rudin's "papa" book. The author has studied chapters 1–3 and is solving the problems alone, studying the material himself whilst updating the solutions. The author will not update solutions following the thread in Rudin's book but will eventually put all solutions in order. The author does not claim, in any situation, the originality of any content in this solution manual; however, it is nearly impossible to acknowledge every source of this book. If you have any problems or difficulties in understanding, or a suggestion or correction to improve the proof contents, please consider opening a GitHub issue.
Walter Rudin (May 2, 1921 – May 20, 2010) was an Austrian-American mathematician and professor of Mathematics at the University of Wisconsin–Madison. Rudin was born into a Jewish family in Austria in 1921. They fled to France after the Anschluss in 1938. When France surrendered to Germany in 1940, Rudin fled to England and served in the Royal Navy for the rest of World War II. After the war he left for the United States, and earned his B.A. from Duke University in North Carolina in 1947, and two years later earned a Ph.D. from the same institution. After that he was a C.L.E. Moore Instructor at MIT. He remained at the University of Wisconsin–Madison for 32 years. His research interests ranged from harmonic analysis to complex analysis. In 1970 Rudin was an Invited Speaker at the International Congress of Mathematicians in Nice. He received an honorary degree from the University of Vienna in 2006. He and his wife, the mathematician Mary Ellen Rudin, resided in Madison, Wisconsin, in the eponymous Walter Rudin House, a home designed by architect Frank Lloyd Wright. Rudin died on May 20, 2010 after suffering from Parkinson's disease.
Walter Rudin was the author of three textbooks, Principles of Mathematical Analysis, Real and Complex Analysis, and Functional Analysis, informally referred to by students as "Baby Rudin", "Papa Rudin", and "Grandpa Rudin", respectively. In addition to his contributions to complex and harmonic analysis, Rudin was known for these texts, whose widespread use is illustrated by the fact that they have been translated into a total of 13 languages, including Russian, Chinese, and Spanish. He wrote Principles of Mathematical Analysis only two years after obtaining his Ph.D. from Duke University, while he was a C. L. E. Moore Instructor at MIT. The third edition of this well-known text continues to provide a solid foundation in mathematical analysis for undergraduate and first-year graduate students, and Principles, acclaimed for its elegance and clarity, has since become a standard textbook for introductory real analysis courses in the United States. He was awarded the Leroy P. Steele Prize for Mathematical Exposition in 1993 for authorship of the now classic analysis texts Principles of Mathematical Analysis and Real and Complex Analysis. A solutions manual developed by Roger Cooke of the University of Vermont accompanies Principles of Mathematical Analysis.

"Rudin's Real and Complex Analysis is my favorite math book. I've studied it thoroughly as an undergrad/early grad student when I was training to be a research mathematician working in complex and harmonic analysis."
OG Quiz 8

Carried Interest: in a joint interest operation, when a WI owner elects not to participate in drilling the well
Carried party: any party electing not to participate
Carrying party: the working interest owners who agree to pay the carried party's share of costs
Farmout: an arrangement in which the owner of a working interest (the farmor) assigns all or part of the working interest to another party (the farmee) in return for the exploration and development of the property
When the costs of drilling, producing and operating have been recouped from the sale of products on a well.
Reversionary Interest: a variation of a farm-in/out agreement which provides for a share of the working interest to revert back to the farmor at some point in time
Free Well Arrangement: for example, the owner of a working interest transfers a portion of its working interest to a second party in exchange for the second party agreeing to drill (and possibly equip) a well free of cost to the assignor. This will result in the assignor essentially incurring no costs for a well in which it has an interest
Nonconsent: nonconsent operations arise when one or more of the working interest owners do not consent to the drilling, deepening, reworking, or abandonment of a well. Another term frequently used to refer to this arrangement is sole risk.
Promoted vs Promoting Party
Unitization: the combination of leases that have been at least partially developed. In a unitization, the parties enter into a unitization agreement that defines the areas to be unitized and specifies the rights and obligations of each party. One party, known as the unit operator, has the responsibility of operating the unit. The purpose of unitizations is more economical and efficient development and operation. In particular, a unitization may be necessary to conduct secondary or tertiary recovery operations.
Participation factors: the relative amount of the reserves contributed by each party is typically used to determine the parties' participation factors, i.e., percentage of working interests in the unitized property
Contributions to a well must be equalized because the participation factors do not account for the fact that the properties may be in varying stages of development.
Networks and Spatial Economics

Solving Discretely-Constrained Nash–Cournot Games with an Application to Power Markets

Steven A. Gabriel, Sauleh Ahmad Siddiqui, Antonio J. Conejo

This paper provides a methodology to solve Nash–Cournot energy production games allowing some variables to be discrete. Normally, these games can be stated as mixed complementarity problems but only permit continuous variables in order to make use of each producer's Karush–Kuhn–Tucker conditions. The proposed approach allows for more realistic modeling and a compromise between integrality and complementarity to avoid infeasible situations.

Keywords: Nash · Cournot · Integer · Discrete · Game theory · Power market

A.1 Variation 7 formulation

Variation 7 for the example in Section 3.3, where both complementarity and integrality are relaxed, is shown below, where all variables unless specified otherwise are taken to be nonnegative.

$$\begin{array}{lll} & \min \left\{ \underset{p}{\sum }\underset{i}{\sum }(\epsilon _{pi})^{+}+(\epsilon _{pi})^{-}+\underset{p}{\sum }\underset{j}{\sum } (\sigma _{jp}+\tau _{jp})\right\} \\ & 0\leq 2q_{1}(b+\beta _{1})+bq_{2}-(a-\rho _{1})+\lambda _{1}-\eta _{1}\leq M_{11}u_{11}+M_{11}\sigma _{11} \end{array} $$

$$\begin{array}{lll} 0& \leq 2q_{2}(b+\beta _{2})+bq_{1}-(a-\rho _{2})+\lambda _{2}-\eta _{2}\leq M_{12}u_{12}+M_{12}\sigma _{12} \\ 0& \leq -\lambda _{1}q_{\max}+\eta _{1}q_{\min}+\gamma _{1}\leq M_{31}u_{31}+M_{31}\sigma _{31} \\ 0& \leq -\lambda _{2}q_{\max}+\eta _{2}q_{\min}+\gamma _{2}\leq M_{32}u_{32}+M_{32}\sigma _{32} \\ 0& \leq q_{1}\leq M_{11}(1-u_{11})+M_{11}\sigma _{11} \\ 0& \leq q_{2}\leq M_{12}(1-u_{12})+M_{12}\sigma _{12} \\ 0& \leq c_{1}\leq M_{31}(1-u_{31})+M_{31}\sigma _{31} \\ 0& \leq c_{2}\leq M_{32}(1-u_{32})+M_{32}\sigma _{32} \\ 0& \leq -q_{1}+c_{1}q_{\max}\leq M_{21}v_{21}+M_{21}\tau _{21} \\ 0& \leq -q_{2}+c_{2}q_{\max}\leq M_{22}v_{22}+M_{22}\tau _{22} \\ 0& \leq q_{1}-c_{1}q_{\min}\leq M_{41}v_{41}+M_{41}\tau _{41} \\ 0& \leq q_{2}-c_{2}q_{\min}\leq M_{42}v_{42}+M_{42}\tau _{42} \\ 0& \leq -c_{1}+1\leq M_{61}v_{61}+M_{61}\tau _{61} \\ 0& \leq -c_{2}+1\leq M_{62}v_{62}+M_{62}\tau _{62} \\ 0& \leq \lambda _{1}\leq M_{21}(1-v_{21})+M_{21}\tau _{21} \\ 0& \leq \lambda _{2}\leq M_{22}(1-v_{22})+M_{22}\tau _{22} \\ 0& \leq \eta _{1}\leq M_{41}(1-v_{41})+M_{41}\tau _{41} \\ 0& \leq \eta _{2}\leq M_{42}(1-v_{42})+M_{42}\tau _{42} \\ 0& \leq \gamma _{1}\leq M_{61}(1-v_{61})+M_{61}\tau _{61} \\ 0& \leq \gamma _{2}\leq M_{62}(1-v_{62})+M_{62}\tau _{62} \\ u_{jp}& \in \left\{ 0,1\right\} ,v_{jp}\in \left\{ 0,1\right\} \qquad p=1,2 \\ & -M(1-w_{pi})\leq c_{p}-i-\epsilon _{pi}\leq M(1-w_{pi}),\qquad \\ \epsilon _{pi}& =(\epsilon _{pi})^{+}-(\epsilon _{pi})^{-} \\ & \underset{i}{\sum }w_{pi}=1,p=1,2;w_{pi}\in \left\{ 0,1\right\} i=0,1 \end{array}$$
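If the capacity bounds, integrality and relaxation variables are ignored, the first two constraint rows of the formulation reduce to the producers' stationarity conditions, which have a simple interior solution. The sketch below solves that reduced system numerically; the demand and cost parameters are made up for illustration, and the quadratic-cost reading of the parameters is inferred here from the KKT rows shown above rather than taken from the paper.

```python
import numpy as np

# Illustrative parameters: inverse demand p = a - b*(q1 + q2); producer p is
# assumed to have cost rho_p*q_p + beta_p*q_p**2, so that d(profit_p)/dq_p = 0
# gives 2*(b + beta_p)*q_p + b*q_other = a - rho_p, matching the rows above
# with the bound multipliers lambda_p and eta_p set to zero.
a, b = 100.0, 2.0
beta = np.array([1.0, 1.5])
rho = np.array([10.0, 8.0])

A = np.array([[2.0 * (b + beta[0]), b],
              [b, 2.0 * (b + beta[1])]])
rhs = a - rho
q = np.linalg.solve(A, rhs)          # interior Nash-Cournot quantities
price = a - b * q.sum()
print(f"q1 = {q[0]:.2f}, q2 = {q[1]:.2f}, price = {price:.2f}")
```

The full model above goes further: the binaries u, v, w and the big-M terms linearize the complementarity and discreteness conditions, and the objective minimizes the relaxation variables so that the continuous equilibrium is disturbed as little as possible.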
1. Department of Civil and Environmental Engineering and the Applied Mathematics, Statistics, and Scientific Computation Program, University of Maryland, College Park, USA
2. Department of Civil Engineering and the Johns Hopkins Systems Institute, Johns Hopkins University, Baltimore, USA
3. Department of Electrical Engineering, University of Castilla - La Mancha, Ciudad Real, Spain

Gabriel, S.A., Siddiqui, S.A., Conejo, A.J. et al. Netw Spat Econ (2013) 13: 307. https://doi.org/10.1007/s11067-012-9182-2
September 2013, 6(3): 545-556. doi: 10.3934/krm.2013.6.545

Logarithmically improved regularity criteria for the generalized Navier-Stokes and related equations

Jishan Fan 1, Yasuhide Fukumoto 2, and Yong Zhou 3

1. Department of Applied Mathematics, Nanjing Forestry University, Nanjing, 210037
2. Faculty of Mathematics and Mathematical Research Center for Industrial Technology, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
3. Department of Mathematics, Zhejiang Normal University, Jinhua 321004, Zhejiang

Received November 2012; Revised February 2013; Published May 2013

In this paper, logarithmically improved regularity criteria for the generalized Navier-Stokes equations are established in terms of the velocity, vorticity and pressure, respectively. Here $BMO$, the Triebel-Lizorkin and Besov spaces are used, which extend the usual Sobolev spaces considerably. Similar results for the quasi-geostrophic flows and the generalized MHD equations are also listed.

Keywords: Generalized Navier-Stokes equations, Besov spaces, regularity criterion, related equations.

Mathematics Subject Classification: Primary: 35Q35, 35B65; Secondary: 76D0.

Citation: Jishan Fan, Yasuhide Fukumoto, Yong Zhou. Logarithmically improved regularity criteria for the generalized Navier-Stokes and related equations. Kinetic & Related Models, 2013, 6 (3) : 545-556. doi: 10.3934/krm.2013.6.545
doi: 10.3934/cpaa.2015.14.2453 Zujin Zhang. A Serrin-type regularity criterion for the Navier-Stokes equations via one velocity component. Communications on Pure & Applied Analysis, 2013, 12 (1) : 117-124. doi: 10.3934/cpaa.2013.12.117 Minghua Yang, Zunwei Fu, Jinyi Sun. Global solutions to Chemotaxis-Navier-Stokes equations in critical Besov spaces. Discrete & Continuous Dynamical Systems - B, 2018, 23 (8) : 3427-3460. doi: 10.3934/dcdsb.2018284 Vittorino Pata. On the regularity of solutions to the Navier-Stokes equations. Communications on Pure & Applied Analysis, 2012, 11 (2) : 747-761. doi: 10.3934/cpaa.2012.11.747 Igor Kukavica. On partial regularity for the Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2008, 21 (3) : 717-728. doi: 10.3934/dcds.2008.21.717 Hugo Beirão da Veiga. Navier-Stokes equations: Some questions related to the direction of the vorticity. Discrete & Continuous Dynamical Systems - S, 2019, 12 (2) : 203-213. doi: 10.3934/dcdss.2019014 Chongsheng Cao. Sufficient conditions for the regularity to the 3D Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2010, 26 (4) : 1141-1151. doi: 10.3934/dcds.2010.26.1141 Zijin Li, Xinghong Pan. Some Remarks on regularity criteria of Axially symmetric Navier-Stokes equations. Communications on Pure & Applied Analysis, 2019, 18 (3) : 1333-1350. doi: 10.3934/cpaa.2019064 Keyan Wang. On global regularity of incompressible Navier-Stokes equations in $\mathbf R^3$. Communications on Pure & Applied Analysis, 2009, 8 (3) : 1067-1072. doi: 10.3934/cpaa.2009.8.1067 Hui Chen, Daoyuan Fang, Ting Zhang. Regularity of 3D axisymmetric Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2017, 37 (4) : 1923-1939. doi: 10.3934/dcds.2017081 Yukang Chen, Changhua Wei. Partial regularity of solutions to the fractional Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2016, 36 (10) : 5309-5322. doi: 10.3934/dcds.2016033 Houyu Jia, Xiaofeng Liu. Local existence and blowup criterion of the Lagrangian averaged Euler equations in Besov spaces. Communications on Pure & Applied Analysis, 2008, 7 (4) : 845-852. doi: 10.3934/cpaa.2008.7.845 Pavel I. Plotnikov, Jan Sokolowski. Compressible Navier-Stokes equations. Conference Publications, 2009, 2009 (Special) : 602-611. doi: 10.3934/proc.2009.2009.602 Jan W. Cholewa, Tomasz Dlotko. Fractional Navier-Stokes equations. Discrete & Continuous Dynamical Systems - B, 2018, 23 (8) : 2967-2988. doi: 10.3934/dcdsb.2017149 Alessio Falocchi, Filippo Gazzola. Regularity for the 3D evolution Navier-Stokes equations under Navier boundary conditions in some Lipschitz domains. Discrete & Continuous Dynamical Systems, 2021 doi: 10.3934/dcds.2021151 Minghua Yang, Jinyi Sun. Gevrey regularity and existence of Navier-Stokes-Nernst-Planck-Poisson system in critical Besov spaces. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1617-1639. doi: 10.3934/cpaa.2017078 Jishan Fan Yasuhide Fukumoto Yong Zhou
Solve a Linear Recurrence Relation Using Vector Space Technique
Let $V$ be a real vector space of all real sequences \[(a_i)_{i=1}^{\infty}=(a_1, a_2, \dots).\] Let $U$ be a subspace of $V$ defined by \[U=\{(a_i)_{i=1}^{\infty}\in V \mid a_{n+2}=2a_{n+1}+3a_{n} \text{ for } n=1, 2,\dots \}.\] Let $T$ be the linear transformation from $U$ to $U$ defined by \[T\big((a_1, a_2, \dots)\big)=(a_2, a_3, \dots). \] (a) Find the eigenvalues and eigenvectors of the linear transformation $T$. (b) Using the result of (a), find a sequence $(a_i)_{i=1}^{\infty}$ satisfying $a_1=2, a_2=7$. (A sketch of part (a) is given after the problem list below.)
Matrix Representation of a Linear Transformation of Subspace of Sequences Satisfying Recurrence Relation
Let $V$ be the real vector space of all real sequences \[(a_i)_{i=1}^{\infty}=(a_1, a_2, \dots).\] Let $U$ be the subspace of $V$ consisting of all real sequences that satisfy the linear recurrence relation $a_{k+2}-5a_{k+1}+3a_{k}=0$ for $k=1, 2, \dots$. (a) Let \[\mathbf{u}_1=(1, 0, -3, -15, -66, \dots), \qquad \mathbf{u}_2=(0, 1, 5, 22, 95, \dots)\] be vectors in $U$. Prove that $\{\mathbf{u}_1, \mathbf{u}_2\}$ is a basis of $U$ and conclude that the dimension of $U$ is $2$. (b) Let $T$ be a map from $U$ to $U$ defined by \[T\big((a_1, a_2, \dots)\big)=(a_2, a_3, \dots). \] Verify that the map $T$ actually sends a vector $(a_i)_{i=1}^{\infty}\in U$ to a vector $T\big((a_i)_{i=1}^{\infty}\big)$ in $U$, and show that $T$ is a linear transformation from $U$ to $U$. (c) With respect to the basis $\{\mathbf{u}_1, \mathbf{u}_2\}$ obtained in (a), find the matrix representation $A$ of the linear transformation $T:U \to U$ from (b).
Sequences Satisfying Linear Recurrence Relation Form a Subspace
Let $V$ be the real vector space of all real sequences \[(a_i)_{i=1}^{\infty}=(a_1, a_2, \cdots).\] Let $U$ be the subset of $V$ defined by \[U=\{ (a_i)_{i=1}^{\infty} \in V \mid a_{k+2}-5a_{k+1}+3a_{k}=0, k=1, 2, \dots \}.\] Prove that $U$ is a subspace of $V$.
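A brief sketch of part (a) of the first problem above (one standard approach; details are left to the reader): if $T\big((a_i)\big)=\lambda (a_i)$ for a nonzero sequence in $U$, then $a_{n+1}=\lambda a_n$ for all $n$, so the sequence is geometric, $a_n = a_1\lambda^{n-1}$. Substituting into the recurrence $a_{n+2}=2a_{n+1}+3a_n$ gives the characteristic equation
\[
\lambda^2 = 2\lambda + 3 \quad\Longleftrightarrow\quad (\lambda-3)(\lambda+1)=0,
\]
so the eigenvalues are $\lambda = 3$ and $\lambda = -1$, with eigenvectors (up to scaling) the geometric sequences $(1, 3, 9, 27, \dots)$ and $(1, -1, 1, -1, \dots)$. Part (b) then amounts to writing $(a_i)$ as a linear combination of these two sequences and determining the coefficients from $a_1 = 2$, $a_2 = 7$.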
Original Research Paper
Inequalities for the polar derivative of a polynomial
M. H. Gulzar 1, B. A. Zargar 1 & Rubia Akhter 1
The Journal of Analysis, volume 28, pages 923–929 (2020)
Let P(z) be a polynomial of degree n having all its zeros in \(|z|\le 1\); then, according to Turan (Compositio Mathematica 7:89–95, 1939),
$$\begin{aligned} \max \limits _{|z|=1}|P'(z)|\ge \frac{n}{2}\max \limits _{|z|=1}|P(z)|. \end{aligned}$$
In this paper, we shall use the polar derivative and establish a generalisation and an extension of this result. Our results also generalize a variety of other results.
References
Aziz, A. 1988. Inequalities for the polar derivative of a polynomial. Journal of Approximation Theory 55: 183–193.
Aziz, A., and N.A. Rather. 2003. Inequalities for the polar derivative of a polynomial with restricted zeros. Math. Balkanica 17: 15–28.
Aziz, A., and W.M. Shah. 1998. Inequalities for the polar derivative of a polynomial. Indian Journal of Pure and Applied Mathematics 29: 163–173.
Bernstein, S. 1930. Sur la limitation des dérivées des polynômes. Comptes Rendus de l'Académie des Sciences 190: 338–341.
Dubinin, V.N. 2000. Distortion theorems for polynomials on the circle. Matematicheskii Sbornik 191 (12): 1797–1807.
Lax, P.D. 1944. Proof of a conjecture of P. Erdös on the derivative of a polynomial. Bulletin of the American Mathematical Society 50 (8): 509–513.
Shah, W.M. 1996. A generalization of a theorem of P. Turan. Journal of the Ramanujan Mathematical Society 1: 29–35.
Turan, P. 1939. Über die Ableitung von Polynomen. Compositio Mathematica 7: 89–95.
This work was supported by NBHM, India, under the research project number 02011/36/2017/R&D-II.
Department of Mathematics, Kashmir University, Srinagar, 190006, India: M. H. Gulzar, B. A. Zargar & Rubia Akhter
Correspondence to M. H. Gulzar.
Gulzar, M.H., Zargar, B.A. & Akhter, R. Inequalities for the polar derivative of a polynomial. J Anal 28, 923–929 (2020). https://doi.org/10.1007/s41478-020-00222-4
Issue Date: December 2020
Keywords: Polar derivative
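For readers unfamiliar with the terminology used in the abstract above (this definition is standard in the literature and is not quoted from the paper itself): the polar derivative of a polynomial $P(z)$ of degree $n$ with respect to a point $\alpha$ is
$$D_\alpha P(z) = nP(z) + (\alpha - z)P'(z),$$
a polynomial of degree at most $n-1$ that generalizes the ordinary derivative in the sense that $\lim_{\alpha\to\infty} D_\alpha P(z)/\alpha = P'(z)$.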
SAT Practice Test # 6 Chapter Questions
According to figure $1,$ in $2017,$ the cost of which of the following fuels is projected to be closest to the 2009 US average electricity cost shown in figure 2$?$ A) Natural gas B) Wind (onshore) C) Conventional coal D) Advanced nuclear
Math Test - Calculator - Problem 1
Which expression is equivalent to $\left(2 x^{2}-4\right)-\left(-3 x^{2}+2 x-7\right) ?$ $$\begin{array}{l}{\text { A) } 5 x^{2}-2 x+3} \\ {\text { B) } 5 x^{2}+2 x-3} \\ {\text { C) }-x^{2}-2 x-11} \\ {\text { D) }-x^{2}+2 x-11}\end{array}$$
The graph above shows the positions of Paul and Mark during a race. Paul and Mark each ran at a constant rate, and Mark was given a head start to shorten the distance he needed to run. Paul finished the race in 6 seconds, and Mark finished the race in 10 seconds. According to the graph, Mark was given a head start of how many yards? $$\begin{array}{l}{\text { A) } 3} \\ {\text { B) } 12} \\ {\text { C) } 18} \\ {\text { D) } 24}\end{array}$$
Snow fell and then stopped for a time. When the snow began to fall again, it fell at a faster rate than it had initially. Assuming that none of the snow melted during the time indicated, which of the following graphs could model the total accumulation of snow versus time?
A website-hosting service charges businesses a onetime setup fee of $\$ 350$ plus $d$ dollars for each month. If a business owner paid $\$ 1,010$ for the first 12 months, including the setup fee, what is the value of $d$ ? $$\begin{array}{l}{\text { A) } 25} \\ {\text { B) } 35} \\ {\text { C) } 45} \\ {\text { D) } 55}\end{array}$$
$$6 x-9 y>12$$ Which of the following inequalities is equivalent to the inequality above? $$\begin{array}{l}{\text { A) } x-y>2} \\ {\text { B) } 2 x-3 y>4} \\ {\text { C) } 3 x-2 y>4} \\ {\text { D) } 3 y-2 x>2}\end{array}$$
Where Do People Get Most of Their Medical Information? $$\begin{array}{|c|c|}\hline {\text { Source }} & {\text { Percent of }} \\ & {\text { those surveyed }} \\ \hline \text { Doctor } & {63 \%} \\ \hline \text { Internet } & {13 \%} \\ \hline \text { Magazines/brochures } & {9 \%} \\ \hline \text { Pharmacy } & {6 \%} \\ \hline \text { Television } & {2 \%} \\ \hline \text { Other/none of the above } & {7 \%} \\ \hline\end{array}$$ The table above shows a summary of $1,200$ responses to a survey question. Based on the table, how many of those surveyed get most of their medical information from either a doctor or the Internet? $$\begin{array}{l}{\text { A) } 865} \\ {\text { B) } 887} \\ {\text { C) } 912} \\ {\text { D) } 926}\end{array}$$
The members of a city council wanted to assess the opinions of all city residents about converting an open field into a dog park. The council surveyed a sample of 500 city residents who own dogs. The survey showed that the majority of those sampled were in favor of the dog park. Which of the following is true about the city council's survey? A) It shows that the majority of city residents are in favor of the dog park. B) The survey sample should have included more residents who are dog owners. C) The survey sample should have consisted D) The survey sample is biased because it is not representative of all city residents.
The table above shows the flavors of ice cream and the toppings chosen by the people at a party. Each person chose one flavor of ice cream and one topping.
Of the people who chose vanilla ice cream, what fraction chose hot fudge as a topping? $$\begin{array}{l}{\text { A) } \frac{8}{25}} \\ {\text { B) } \frac{5}{13}} \\ {\text { C) } \frac{13}{25}} \\ {\text { D) } \frac{8}{13}}\end{array}$$ The total area of a coastal city is 92.1 square miles, of which 11.3 square miles is water. If the city had a population of $621,000$ people in the year $2010,$ which of the following is closest to the population density, in people per square mile of land area, of the city at that time? $$\begin{array}{ll}{\text { A) }} & {6,740} \\ {\text { B) }} & {7,690} \\ {\text { C) }} & {55,000} \\ {\text { D) }} & {76,000}\end{array}$$ Math Test - Calculator - Problem 10 Between 1497 and $1500,$ Amerigo Vespucci embarked on two voyages to the New World. According to Vespucci's letters, the first voyage lasted 43 days longer than the second voyage, and the two voyages combined lasted a total of $1,003$ days. How many days did the second voyage last? $$7 x+3 y=8$$ $$6 x-3 y=5$$ For the solution $(x, y)$ to the system of equations above, what is the value of $x-y ?$ $$\begin{array}{l}{\text { A) }-\frac{4}{3}} \\ {\text { B) } \frac{2}{3}} \\ {\text { C) } \frac{4}{3}} \\ {\text { D) } \frac{22}{3}}\end{array}$$ Over which of the following time periods is the average growth rate of the sunflower least? $$\begin{array}{l}{\text { A) } \operatorname{Day} 0 \text { to Day } 21} \\ {\text { B) Day } 21 \text { to Day } 42} \\ {\text { C) Day } 42 \text { to Day } 63} \\ {\text { D) Day } 63 \text { to Day } 84}\end{array}$$ The function $h,$ defined by $h(t)=a t+b,$ where $a$ and $b$ are constants, models the height, in centimeters, of the sunflower after $t$ days of growth during a time period in which the growth is approximately linear. What does a represent? A) The predicted number of centimeters the sunflower grows each day during the period B) The predicted height, in centimeters, of the sunflower at the beginning of the period C) The predicted height, in centimeters, of the sunflower at the end of the period D) The predicted total increase in the height of the sunflower, in centimeters, during the period The growth rate of the sunflower from day 14 to day 35 is nearly constant. On this interval, which of the following equations best models the height $h,$ in centimeters, of the sunflower $t$ days after it begins to grow? $$\begin{array}{l}{\text { A) } h=2.1 t-15} \\ {\text { B) } h=4.5 t-27} \\ {\text { C) } h=6.8 t-12} \\ {\text { D) } h=13.2 t-18}\end{array}$$ $$\begin{array}{|c|c|c|c|c|c|}\hline x & {1} & {2} & {3} & {4} & {5} \\ \hline y & {\frac{11}{4}} & {\frac{25}{4}} & {\frac{39}{4}} & {\frac{53}{4}} & {\frac{67}{4}} \\ \hline\end{array}$$ Which of the following equations relates $y$ to $x$ for the values in the table above? $$\begin{array}{l}{\text { A) } y=\frac{1}{2} \cdot\left(\frac{5}{2}\right)^{x}} \\ {\text { B) } y=2 \cdot\left(\frac{3}{4}\right)^{x}} \\ {\text { C) } y=\frac{3}{4} x+2} \\ {\text { D) } y=\frac{7}{2} x-\frac{3}{4}}\end{array}$$ Triangles $A B C$ and $D E F$ are shown above. Which of the following is equal to the ratio $\frac{B C}{A B} ?$ $$\begin{array}{l}{\text { A) } \frac{D E}{D F}} \\ {\text { B) } \frac{D F}{D E}} \\ {\text { C) } \frac{D F}{E F}} \\ {\text { D) } \frac{E F}{D E}}\end{array}$$ Which of the following expresses the riser height in terms of the tread depth? 
$$\begin{array}{l}{\text { A) } h=\frac{1}{2}(25+d)} \\ {\text { B) } h=\frac{1}{2}(25-d)} \\ {\text { C) } h=-\frac{1}{2}(25+d)} \\ {\text { D) } h=-\frac{1}{2}(25-d)}\end{array}$$
Some building codes require that, for indoor stairways, the tread depth must be at least 9 inches and the riser height must be at least 5 inches. According to the riser-tread formula, which of the following inequalities represents the set of all possible values for the riser height that meets this code requirement? $$\begin{array}{l}{\text { A) } 0 \leq h \leq 5} \\ {\text { B) } h \geq 5} \\ {\text { C) } 5 \leq h \leq 8} \\ {\text { D) } 8 \leq h \leq 16}\end{array}$$
An architect wants to use the riser-tread formula to design a stairway with a total rise of 9 feet, a riser height between 7 and 8 inches, and an odd number of steps. With the architect's constraints, which of the following must be the tread depth, in inches, of the stairway? (1 foot = 12 inches) $$\begin{array}{l}{\text { A) } 7.2} \\ {\text { B) } 9.5} \\ {\text { C) } 10.6} \\ {\text { D) } 15}\end{array}$$
What is the sum of the solutions to $(x-6)(x+0.7)=0 ?$ $$\begin{array}{l}{\text { A) }-6.7} \\ {\text { B) }-5.3} \\ {\text { C) } 5.3} \\ {\text { D) } 6.7}\end{array}$$
A study was done on the weights of different types of fish in a pond. A random sample of fish were caught and marked in order to ensure that none were weighed more than once. The sample contained 150 largemouth bass, of which 30$\%$ weighed more than 2 pounds. Which of the following conclusions is best supported by the sample data? A) The majority of all fish in the pond weigh less than 2 pounds. B) The average weight of all fish in the pond is approximately 2 pounds.
Number of States with 10 or More Electoral Votes in 2008: In $2008,$ there were 21 states with 10 or more electoral votes, as shown in the table above. Based on the table, what was the median number of electoral votes for the 21 states?
As part of an experiment, a ball was dropped and allowed to bounce repeatedly off the ground until it came to rest. The graph above represents the relationship between the time elapsed after the ball was dropped and the height of the ball above the ground. After it was dropped, how many times was the ball at a height of 2 feet? $$\begin{array}{l}{\text { A) One }} \\ {\text { B) Two }} \\ {\text { C) Three }} \\ {\text { D) Four }}\end{array}$$
A customer's monthly water bill was $\$ 75.74 .$ Due to a rate increase, her monthly bill is now $\$ 79.86$ . To the nearest tenth of a percent, by what percent did the amount of the customer's water bill increase? $$\begin{array}{l}{\text { A) } 4.1 \%} \\ {\text { B) } 5.1 \%} \\ {\text { C) } 5.2 \%} \\ {\text { D) } 5.4 \%}\end{array}$$
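As a worked check of the system-of-equations item earlier in this problem set ($7x+3y=8$ and $6x-3y=5$) — our own sketch, not part of the test materials — adding the two equations eliminates $y$:
$$7x+3y=8, \quad 6x-3y=5 \;\Longrightarrow\; 13x=13,\; x=1,\; 3y=8-7=1,\; y=\tfrac{1}{3},$$
so $x-y = 1-\tfrac{1}{3} = \tfrac{2}{3}$, which corresponds to choice B.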
Sufficient conditions for smoothing codimension one foliations
by Christopher Ennis
Trans. Amer. Math. Soc. 276 (1983), 311-322
Let $M$ be a compact ${C^\infty }$ manifold. Let $X$ be a ${C^0}$ nonsingular vector field on $M$, having unique integral curves $(p,t)$ through $p \in M$. For $f: M \to {\mathbf {R}}$ continuous, call $Xf(p) = \left . df(p,t)/dt \right |_{t = 0}$ whenever defined. Similarly, call ${X^k}f(p)=X(X^{k-1}f)(p)$. For $0 \leqslant r < k$, a ${C^r}$ foliation $\mathcal {F}$ of $M$ is said to be ${C^k}$ smoothable if there exist a ${C^k}$ foliation $\mathcal {G}$, which ${C^r}$ approximates $\mathcal {F}$, and a homeomorphism $h:M \to M$ such that $h$ takes leaves of $\mathcal {F}$ onto leaves of $\mathcal {G}$.
Definition. A transversely oriented Lyapunov foliation is a pair $(\mathcal {F},X)$ consisting of a ${C^0}$ codimension one foliation $\mathcal {F}$ of $M$ and a ${C^0}$ nonsingular, uniquely integrable vector field $X$ on $M$, such that there is a covering of $M$ by neighborhoods $\{{W_i}\}$, $0 \leqslant i \leqslant N$, on which $\mathcal {F}$ is described as level sets of continuous functions ${f_i}:{W_i} \to {\mathbf {R}}$ for which $X{f_i}(p)$ is continuous and strictly positive. We prove the following theorems.
Theorem 1. Every ${C^0}$ transversely oriented Lyapunov foliation $(\mathcal {F},X)$ is ${C^1}$ smoothable to a ${C^1}$ transversely oriented Lyapunov foliation $(\mathcal {G},X)$.
Theorem 2. If $(\mathcal {F},X)$ is a ${C^0}$ transversely oriented Lyapunov foliation, with $X \in {C^{k - 1}}$ and ${X^j}{f_i}(p)$ continuous for $1 \leqslant j \leqslant k$ and $0 \leqslant i \leqslant N$, then $(\mathcal {F},X)$ is ${C^k}$ smoothable to a ${C^k}$ transversely oriented Lyapunov foliation $(\mathcal {G},X)$.
The proofs of the above theorems depend on a fairly deep result in analysis due to F. Wesley Wilson, Jr. With only elementary arguments we obtain the ${C^k}$ version of Theorem 1.
Theorem 3. If $(\mathcal {F},X)$ is a ${C^{k - 1}}\;(k \geqslant 2)$ transversely oriented Lyapunov foliation, with $X \in {C^{k - 1}}$ and ${X^k}{f_i}(p)$ is continuous, then $(\mathcal {F},X)$ is ${C^k}$ smoothable to a ${C^k}$ transversely oriented Lyapunov foliation $(\mathcal {G},X)$.
References
A. Denjoy, Sur les courbes définies par les équations différentielles à la surface du tore, J. Math. Pures Appl. 11 (1932), 333-375.
C. Ennis, M. Hirsch and C. Pugh, Foliations that are not approximable by smoother ones, Report PAM-63, Center for Pure and Appl. Math., University of California, Berkeley, Calif., 1981.
Jenny Harrison, Unsmoothable diffeomorphisms, Ann. of Math. (2) 102 (1975), no. 1, 85–94. MR 388458, DOI 10.2307/1970975
D. Hart, On the smoothness of generators for flows and foliations, Ph.D. Thesis, University of California, Berkeley, Calif., 1980.
Arthur J. Schwartz, A generalization of a Poincaré-Bendixson theorem to closed two-dimensional manifolds, Amer. J. Math. 85 (1963), 453-458; errata, ibid 85 (1963), 753. MR 0155061
Dennis Sullivan, Hyperbolic geometry and homeomorphisms, Geometric topology (Proc. Georgia Topology Conf., Athens, Ga., 1977), Academic Press, New York-London, 1979, pp. 543–555. MR 537749
F. Wesley Wilson Jr., Smoothing derivatives of functions and applications, Trans. Amer. Math. Soc. 139 (1969), 413–428. MR 251747, DOI 10.1090/S0002-9947-1969-0251747-9
F. Wesley Wilson Jr., Implicit submanifolds, J. Math. Mech. 18 (1968/1969), 229–236.
MR 0229252, DOI 10.1512/iumj.1969.18.18022
Journal: Trans. Amer. Math. Soc. 276 (1983), 311-322 MSC: Primary 57R30; Secondary 57R10, 58F18 MathSciNet review: 684511
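As a toy illustration of the derivative notation $Xf$ used in the abstract above (our example, not the author's): take $M$ the circle with coordinate $\theta$ and $X = \partial/\partial\theta$, so the integral curve through $p$ is $(p,t) = p + t$ (mod $2\pi$). For $f(\theta)=\sin\theta$,
\[
Xf(p) = \left.\frac{d}{dt}\sin(p+t)\right|_{t=0} = \cos p, \qquad X^2 f(p) = X(Xf)(p) = -\sin p .
\]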
DayStarVideo — Your One-Stop location for the latest Video Game Reviews
rectangular diagonal matrix
Posted on December 1, 2020
A matrix is, generally, a collection of numbers, variables or functions arranged in rows and columns. A square matrix is said to be a diagonal matrix if all of its elements except those on the main diagonal are zero; any number of the elements on the main diagonal can also be zero. The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with only the entries of the form d(i,i) possibly non-zero. When a square matrix is partitioned into blocks and every off-diagonal block is zero (Dij = 0 when i is not equal to j), D is called a block diagonal matrix. For comparison, a row matrix is a matrix with just one row, and a matrix whose numbers of rows and columns differ (for example, 2 rows and 3 columns) is a rectangular matrix. For a 4 x 4 input matrix with elements A00 A01 A02 A03 / A10 A11 A12 A13 / A20 A21 A22 A23 / A30 A31 A32 A33, the primary diagonal is A00, A11, A22, A33.
Note that the diagonal of a rectangle is a different, geometric notion: the line segment that connects opposite corners (vertices) of the rectangle. According to the Pythagorean theorem, its length can be found from the side lengths, and an online diagonal-of-a-rectangle calculator does this from the width and height.
Properties of diagonal matrices: (i) If A and B are diagonal, then C = AB is diagonal. Further, C can be computed more efficiently than naively doing a full matrix multiplication: c_ii = a_ii b_ii, and all other entries are 0. (ii) Diagonal matrices of the same order give a diagonal matrix under addition as well as multiplication.
Creating diagonal matrices (Octave/MATLAB): the most common and easiest way to create a diagonal matrix is the built-in function diag. The expression diag (v), with v a vector, will create a square diagonal matrix with elements on the main diagonal given by the elements of v, and size equal to the length of v. diag (v, m, n) can be used to construct a rectangular diagonal matrix.
Pseudo-inverse: when D is an m × n (rectangular) diagonal matrix, its Moore–Penrose pseudo-inverse D+ is an n × m (rectangular) diagonal matrix whose non-zero entries are the reciprocals 1/d_k of the non-zero diagonal entries of D. Thus a matrix A having SVD A = U Σ V^T has A+ = V Σ+ U^T. Relatedly, for a general rectangular matrix A, the rank of A equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ of the reduced SVD.
NumPy: numpy.diagonal(a, offset=0, axis1=0, axis2=1) returns specified diagonals; if a is 2-D and not a matrix, a 1-D array of the same type as a containing the diagonal is returned (in order to maintain backward compatibility, and this behavior occurs even if the input array is a vector at run time). With numpy.diag, if v is a 1-D array the result is a 2-D array with v on the k-th diagonal, where k > 0 selects diagonals above the main diagonal and k < 0 selects diagonals below it; this is also how to convert a column or row matrix to a diagonal matrix in Python.
Reader questions collected with this post: "How to convert diagonal matrix to rectangular matrix" (asked 5 years, 9 months ago, active 1 year, 4 months ago, viewed 612 times): suppose you have the diagonal matrix $\left( \begin{array}{cc} a & 0 \\ 0 & \{b,c\} \end{array} \right)$ — how can it be converted to a rectangular one? One commenter notes that one can take a diagonal of the largest non-singular square submatrix to be the "main diagonal". Another question concerns a setting where m >> n and M is constant throughout the course of the algorithm, with only the elements of a diagonal matrix D changing, and asks how eigenvalues and eigenvectors can be handled in that case, and how to display truly diagonal matrices. A LaTeX question (MWE with \documentclass{article} and \usepackage{amsmath,xcolor}) asks how to draw a rectangle around the principal diagonal elements (red colored) of a matrix. Finally, a physics question about a rectangular plate: one approach, which also works for a general body in three dimensions, is to start with the moment of inertia I around the center of the plate expressed as a matrix — for a rectangular plate a simple diagonal matrix — from which the moment of inertia around any axis n is I_n = n^T I n, where n^T is the transpose of n.
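To make the rectangular-diagonal and pseudo-inverse points above concrete, here is a small NumPy sketch (our own illustration, not from the original post; the shapes and values are arbitrary):

    import numpy as np

    # Build a 4x2 rectangular diagonal matrix D: only entries D[i, i] are non-zero.
    D = np.zeros((4, 2))
    np.fill_diagonal(D, [3.0, 5.0])   # fills D[0, 0] = 3 and D[1, 1] = 5

    # Its pseudo-inverse is the 2x4 rectangular diagonal matrix with 1/3 and 1/5
    # on its diagonal.
    D_pinv = np.linalg.pinv(D)

    # Check a defining property of the Moore-Penrose pseudo-inverse.
    assert np.allclose(D @ D_pinv @ D, D)

    # diag() on a 1-D vector gives a square diagonal matrix (cf. Octave's diag(v));
    # NumPy has no diag(v, m, n) form for rectangular shapes.
    v = np.array([3.0, 5.0])
    print(np.diag(v))        # 2x2 square diagonal matrix
    print(np.diagonal(D))    # extracts the main diagonal: [3. 5.]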
Calculate the displacement and velocity at times of (a) 0.500, (b) 1.00, (c) 1.50, and (d) 2.00 s for a ball thrown straight up with an initial velocity of 15.0 m/s. Take the point of release to be $y_o = 0$.
a) $x_a = 6.28 \textrm{ m}$, $v_a = 10.1 \textrm{ m/s}$ b) $x_b = 10.1 \textrm{ m}$, $v_b = 5.20 \textrm{ m/s}$ c) $x_c = 11.5 \textrm{ m}$, $v_c = 0.300 \textrm{ m/s}$ d) $x_d = 10.4 \textrm{ m}$, $v_d = -4.60 \textrm{ m/s}$
OpenStax College Physics Solution, Chapter 2, Problem 41 (Problems & Exercises) (4:48)
This is College Physics Answers with Shaun Dychko. A ball is thrown straight up with an initial velocity of 15 meters per second. At all times, it has an acceleration due to gravity of negative 9.80 meters per second squared. It's negative because the acceleration is directed downwards and we're taking positive to be in the upper direction. The initial position is 0 meters and the final position is something we are going to calculate in each part of this question, given a different amount of time in each part. And we'll also find out what the velocity is at this final position. So, in part A, I have x subscript a to mean the position for part A of the question, equals the initial position x naught plus the initial velocity v naught multiplied by the time that has elapsed in Part A. So, I put a subscript a on the t here, plus one-half a ta squared. And x naught is 0. So, we're not going to see that variable anymore and we'll just reduce the equation to v naught ta plus one-half a ta squared. So, that's 15 meters per second times .5 seconds for part A plus one-half times negative 9.8 meters per second squared acceleration times .5 seconds squared, and this gives 6.28 meters, which will be the final position for Part A. To calculate the final velocity for Part A, that's going to be the initial velocity plus acceleration times the time elapsed for part A. And, that's 15 meters per second plus negative 9.8 meters per second squared times .5 seconds, giving 10.1 meters per second. And that's positive and so, it means the ball is still going in the upwards direction after half of a second. For Part B, we have the same formula as part A, but I've put subscripts b where they belong to label these as quantities for Part B of the question. So, we have b position. It's going to be the initial velocity, which does not need a separate b because it's the same initial velocity as it had in part A. So, it is called v naught. That's 15 meters per second. Multiplied by the time for Part B, which is a total of 1 second plus one-half times acceleration. Acceleration also does not have a subscript because it's the same in every part of the question. Acceleration due to gravity of negative 9.8 meters per second squared times 1 second squared, which gives 10.1 meters. And then, the final velocity after 1 second will be 15 plus negative 9.8 meters per second squared times a second, which is 5.20 meters per second. So, we can see that it is slowing down as time progresses. So, at .5 seconds, it was going 10.1 meters per second and now a full second has passed. And it's going at 5.2 meters per second, 5.2 being less than 10.1. And we'll see in part C that it is going slower still after 1.5 seconds.
So, in part C, plugging in the numbers, we get 11.5 meters as the position and the speed is .3 meters per second. And then for Part D, we have all the quantities for part D here. The time being 2.0 seconds and otherwise initial velocity and acceleration being the same as for the other parts of the question. And this is a position of 10.4 meters. So, notice that this position is less than the position was in part C. And so, you can see this formula does not give you distance traveled because the distance... by the time 2 seconds has passed, the distance traveled is greater than it was at one and a half seconds and yet this number is smaller. And that's because the position has been reduced since the ball has gone from... Let's go to the picture here. The ball has gone from here to its maximum position and is now back down. Some... Some ways to here. This is x - Part D. And up here is x - Part C. x - Part C is close to the very top because at the exact top, the velocity will be 0 and this is pretty close to 0. And so, at one and a half seconds, it's almost at the very peak of its trip. And then it comes back down. And we can see that it is moving down because the velocity for part D being 15 meters per second plus negative 9.8 meters per second squared times 2 seconds. The velocity is negative which means it's directed downwards in Part D. Negative 4.60 meters per second.
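The arithmetic in this solution can be reproduced with a few lines of code (our own check, using the same kinematic relations y = v0*t + (1/2)*a*t^2 and v = v0 + a*t):

    # Position and velocity of the ball at the requested times, taking y0 = 0.
    v0 = 15.0      # initial velocity, m/s (upward positive)
    a = -9.80      # acceleration due to gravity, m/s^2

    for t in (0.500, 1.00, 1.50, 2.00):
        y = v0 * t + 0.5 * a * t**2
        v = v0 + a * t
        print(f"t = {t:.2f} s:  y = {y:.2f} m,  v = {v:.2f} m/s")

    # The printed values agree with the worked answers above to within rounding:
    # roughly 6.28 m and 10.1 m/s, 10.1 m and 5.2 m/s, 11.5 m and 0.3 m/s,
    # and 10.4 m and -4.6 m/s.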
Prevalence of monarch (Danaus plexippus) and queen (Danaus gilippus) butterflies in West Texas during the fall of 2018 Matthew Z. Brym1, Cassandra Henry1, Shannon P. Lukashow-Moore1, Brett J. Henry1, Natasja van Gestel2 & Ronald J. Kendall ORCID: orcid.org/0000-0003-0527-53991 The monarch butterfly (Danaus plexippus) is a conspicuous insect that has experienced a drastic population decline over the past two decades. While there are several factors contributing to dwindling monarch populations, habitat loss is considered the most significant threat to monarchs. In the United States, loss of milkweed, particularly in the Midwest, has greatly reduced the available breeding habitat of monarchs. This has led to extensive efforts to conserve and restore milkweed resources throughout the Midwest. Recently, these research and conservation efforts have been expanded to include other important areas along the monarch's migratory path. During the fall of 2018, we conducted surveys of monarch eggs and larvae through West Texas. We documented monarch and queen butterfly (Danaus gilippus) reproduction throughout the region and used the proportion of monarch and queen larva to estimate the number of monarch eggs. Peak egg densities for monarchs were as high as 0.78 per milkweed ramet after correction for the presence of queens. Despite our observations encompassing only a limited sample across one season, the peak monarch egg densities we observed exceeded published reports from when monarch populations were higher. To our knowledge, this is the first study to correct for the presence of queens when calculating the density of monarch eggs. This research also provides insight into monarch utilization of less well-known regions, such as West Texas, and highlights the need to expand the scope of monarch monitoring and conservation initiatives. While the importance of monarch research and conservation in the Midwest is unquestionable, more comprehensive efforts may identify new priorities in monarch conservation and lead to a more robust and effective overall strategy, particularly given the dynamic and rapidly changing global environment. Monarch butterflies (Danaus plexippus) are perhaps the most widely known and recognizable of all insects. These butterflies are a classic example of plant–insect interactions, mimicry, and aposematic coloration [1]. Monarchs are best known, however, for the bi-annual migration of the eastern population between overwintering grounds in central Mexico and summer breeding areas that span from northern Mexico to southern Canada [7, 54]. Monarchs are also well known in the western United States (US), as this area harbors a distinct population that exhibits a similar, albeit less extensive, migration within the region [16]. Unfortunately, the monarch migration is imperiled, both east and west of the Rocky Mountains, due to steep declines in monarch abundance over the past several decades [9, 48]. Since the 1990s, the eastern population of monarchs is estimated to have decreased by ~ 80% [49], while its western counterpart has declined by over 99% since the 1980s [36]. The threats to monarchs are varied and range from extreme weather events [12] and parasites [2, 6] to predation by invasive pests [13] and numerous insect taxa [23]. While many of these factors present substantial threats to monarchs, habitat loss may be the most damaging to overall monarch numbers given the restricted distribution of their overwintering habitat and specialized larval diet [29]. 
The loss of breeding habitat, in particular, is well supported as a primary cause of monarch declines [52]. In the US, changing agricultural practices and increased herbicide use have led to widespread losses of milkweeds (Asclepias spp.), which are an essential food source for monarch larva [39]. Milkweeds in the Midwestern US are considered especially important, as this area has been documented as the primary repopulation zone for monarchs [55]. Because of the monarchs reliance on milkweed, safeguarding and restoring these plants is a top priority for monarch conservation, and it is estimated that ~ 1.6 billion milkweeds must be added to the Midwest in order to reach conservation goals set by the Pollinator Health Task Force [40]. More recently, researchers have also stressed the necessity to expand monarch conservation initiatives beyond the Midwest, as other regions, like the southern US, have been identified as key natal areas for butterflies that go on to colonize summer breeding grounds [18]. Considering the wide spatiotemporal distribution of monarchs, broadening conservation efforts may allow for greater protection of important habitat, offer more area for restoration initiatives, and increase resilience to localized calamities and stochastic variability. A broader focus could also help to distribute the costs associated with monarch conservation across a wider base, allowing for the mobilization of more resources towards milkweed propagation and restoration, habitat conservation, monarch monitoring, etc. Indeed, while the southern and north central portions of the monarchs breeding range are regarded as a priority, there is also agreement that an investment in conservation efforts across the entirety of the monarch's migratory distribution would likely yield the most effective strategy to mitigate monarch declines [19, 20, 34]. However, despite the potential benefits of a comprehensive strategy for monarch conservation, there are also obstacles which impede the implementation of such an approach. For example, there are over 130 species of milkweeds growing across North America [17, 58], and these may require different cultivation techniques and growing conditions [27]. Milkweed may also be unavailable commercially, making large scale conservation and restoration initiatives difficult in areas where local plant ecotypes are scarce [5]. Determining what species of milkweeds to select and how to distribute them also presents a challenge, as studies show that monarch utilization is affected by site and landscape characteristics [22, 38, 62], ovipositing females prefer some milkweed species over others [3, 25, 42, 43], and larval success varies with milkweed species as well [25, 41, 60]. Ultimately, a significant barrier to more widespread monarch conservation is an incomplete understanding of the factors affecting monarch success and habitat utilization. Addressing this requires comprehensive monitoring, and while much effort has been focused on the Midwest [53], research into other areas along the monarch's migratory route is more limited. The limitations within the knowledge base are present even in areas considered to be highly significant to monarch conservation, like Texas. Although Texas has several monarch monitoring programs such as Texas Monarch Watch [44], growing coverage due to citizen science programs [24], and surveys by Calvert and Wagner [14], there are still gaps in our understanding of how monarchs utilize resources within the state. 
This is especially true for the western portion of the state, which is sparsely populated and oftentimes overlooked in comparison to the rest of Texas [10]. However, recent surges in monarch abundance through West Texas may offer insight into the significance of this region that warrants further investigation. In this study, we examine surveys of monarch egg and larval abundance from West Texas during the fall of 2018. If monarch abundance in West Texas is comparable to that of more widely recognized and monitored regions, it may be worthwhile to look more closely at the significance of this area in terms of monarch conservation. Proportion of eggs based on larva The proportion of monarch larva observed was consistently higher than that of queen butterfly (Danaus gilippus) larva across the majority of our study sites and survey sessions (Table 1). There were only 6 of the 48 surveys where queen larva exceeded that of monarchs, and these only occurred on 2 of the sites. During September 30th, an equal number of monarch and queen larva were observed at Stonewall 3, and this was also the case at both Fisher 1 and Stonewall 2 on October 22nd. The number of monarch and queen eggs based on these proportions and confidence intervals is summarized in Table 2. It is important to note that estimating the number of monarch eggs by multiplying total eggs observed by the proportion of monarch to queen larva does not consider factors such as differing egg and larval survival rates between the two butterfly species. However, because we found no published comparisons of survival rates between immature queen and monarch butterflies, and rearing eggs for positive identification was beyond the scope of this study, we were unable to more precisely estimate the number of monarch eggs observed. Nevertheless, the close relationship and similar life histories of the two butterflies suggest that our estimates of monarch eggs were generally representative. This is supported by a study that found the immature survival rates of monarchs and another congeneric species, the African queen (Danaus chrysippus), to be similar [59]. Table 1 Summary of monarch and queen proportions Table 2 Summary of estimated monarch and queen eggs Comparison of abundance Across the two counties, monarch egg and larva abundance generally followed a downward trend, with a few exceptions that can be visualized in Fig. 1. Monarch eggs and larva were also more abundant in Fisher County overall. In contrast, queen eggs and larva were most abundant at Stonewall 2 but appeared to be more evenly distributed throughout the Fisher County sites (Fig. 2). As the sampling period progressed, there were fewer plants sampled with a higher proportion of senescing plants (Fig. 3). Estimated monarch eggs and larva by location and date. Visual representation of estimated monarch eggs and observed larva for each study location throughout the survey period. Designations for the first through fifth instar larva have been labeled M1–M5, respectively Estimated queen eggs and larva by location and date. Visual representation of estimated queen eggs and observed larva for each study location throughout the survey period. Designations for the first through fifth instar larva have been labeled Q1–Q5, respectively Milkweed condition by location and date. 
Stacked bar graphs representing the condition (B Budding, D Dehiscent, F Flowering, SP with Seedpod, SN Senescing, V Vegetative) of milkweed throughout the survey period by location
Peak abundance and maximum average density of monarch eggs occurred during the first survey on September 14th, when a total of 235 Danaus eggs were counted across 6 sites and ~ 240 milkweed ramets (Table 3). After correcting for the number of queen eggs, we estimated ~ 187 monarch eggs were observed during this session, resulting in an overall density of ~ 0.78 monarch eggs per milkweed ramet. Over the course of the study period, 1307 milkweed ramets were surveyed for monarchs across 6 sites and 8 monitoring sessions. The number of milkweed ramets examined averaged 163 ± 57 per session and ranged from a maximum of 245 on September 24th to a minimum of 83 on November 9th. The best supported model of the candidate models for estimated monarch egg density included only Julian date (Table 4), with egg density decreasing over time across both Fisher (p < 0.0001) and Stonewall (p = 0.0044) counties (Fig. 4). This model was chosen because it had the lowest AICc and an Akaike weight (wi) of 0.768. Because the second-best model had an AICc difference (Δi) of 3.01 relative to the best model, we did not use model averaging (the standard definitions of these quantities are given after the figure caption below).
Table 3 Summary of monarch egg and larva surveys
Table 4 Summary of candidate models
Monarch egg density as a function of Julian date. Temporal trends for monarch egg densities. The trends were significant for both Fisher and Stonewall County based on the best fitting GAMM model; the blue region represents the 95% confidence bands of the fitted line. The "geom_jitter" function was used in R to account for overplotting and allow for easier visualization of data points
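For reference (these are the standard model-selection definitions, not taken from the article itself): for candidate model i, the AICc difference and Akaike weight are
\[
\Delta_i = \mathrm{AICc}_i - \mathrm{AICc}_{\min}, \qquad w_i = \frac{\exp(-\Delta_i/2)}{\sum_{j}\exp(-\Delta_j/2)},
\]
so a weight of 0.768 for the Julian-date model and a difference of 3.01 for the runner-up indicate that most, though not all, of the model weight falls on the Julian-date model.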
Although one site with zizotes is insufficient to evaluate the preference of milkweed species by monarchs and queens in Texas, it does highlight the need to further study this dynamic. Additionally, the differences between study locations during different sampling periods emphasize the need to account for the proportion of queen eggs when calculating monarch egg densities, which are a standard for assessing the utilization of milkweed resources in an area. If queen eggs are not accounted for, there is the potential for this to significantly affect estimates of monarch utilization in areas, like Texas, where the two species are sympatric. Because monarchs are influenced by the availably of milkweed and flowering plants along their migratory routes, dynamic weather patterns that shift the distribution of these resources may likewise affect their migration [4, 26, 61]. Models also suggest that the distributions of both monarchs and milkweed are limited by precipitation and temperature, with the distribution of milkweed being a strong predictor of monarch observations [26]. The successive northward expansion of monarchs during the spring is an example of the coadaptation of monarchs and milkweed to avoid increasing temperatures and deteriorating milkweed resources in southern areas [30]. Given the delayed and more gradual onset of winter in the US due to climate change [37], it may also be pertinent to consider the possibility of similar southward movement that precedes the main migration of reproductively inactive adults. This is intriguing, as the peak abundance of monarch eggs we observed on September 14th preceded the height of the monarch migration through our study area, which occurred on October 10th [24]. The following survey on September 24th yielded the highest number of monarch larva counted during the study, suggesting that conditions were favorable for egg hatching and larval development. Overall milkweed quality and the total number of ramets observed was also the highest during this time, potentially due to increased precipitation in West Texas during the early autumn of 2018 [32]. Thus, the monarch breeding we observed may have been a response to plentiful milkweed resources promoted by increased precipitation in West Texas, and further research may provide insight into the importance of such opportunities, as well as the ability of monarchs to find and exploit them. Increasing regional temperatures [31] may have further contributed to the amount of monarch breeding we observed. Temperature is an important cue governing reproductive diapause in monarchs [21] and higher temperatures could have broken diapause in migrants from further north and/or delayed the onset of diapause in butterflies with more southerly origins. The potential of increased temperature to extend the period of monarch reproductive activity has been highlighted before [21, 26], and this may explain our observations of additional, albeit smaller, peaks in monarch reproduction into October, during which we would expect monarchs to be in diapause. While assessing the impacts of phenological shifts of host plants and climate change on monarchs was beyond the scope of this study, our observations emphasize the need to further investigate these dynamics, as they may have profound effects on the monarch's migratory cycle. 
Such climatic variability could positively impact monarchs as increased abundance of host plants and higher fall temperatures along the southern extent of the monarch's migratory range may allow for an additional generation, thereby causing this region to serve as a source for monarch populations. Conversely, if monarchs along their southward migration are reproductively active but there is not enough time for their offspring to mature before the onset of winter and adults expend energy essential for overwintering on breeding, the southern portion of the monarch's migratory range would act as an ecological trap for the butterflies. Continued monitoring of monarchs in West Texas is therefore necessary to develop our understanding of how monarchs utilize resources within this region, as well as provide greater insight into this particular stage of the monarch's migration. These efforts may also allow us to better assess the significance of the western extent of the monarch's migratory corridor compared to other areas. During the peak of monarch activity in our study area, we documented an average of 0.78 monarch eggs per milkweed ramet. This was higher than reported by Stenoien et al. [51] during 14 of their 17 years assessing fall monarch egg densities in the southcentral US, which included sites from central and eastern Texas but lacked any in West Texas. Additionally, the densities reported here were higher than 8 of 14 years of spring densities in the southcentral US, 17 of 18 years of spring densities in the northcentral US, and all 18 years of summer densities in the northcentral US [51]. However, monarch egg densities from the southcentral US, in particular, were subject to a wide degree of variance [51], and because our sample was relatively small and limited to only 1 year, the higher egg densities we observed may have been due to stochastic variability. The comparison between our data set and Stenoien et al. [51] is further limited as the latter was taken years before our study and encompassed different phases of the monarch's migratory cycle. An additional caveat of comparing monarch egg densities between regions is the fact that monarch egg density per ramet does not necessarily translate into monarch production. For this, we would also need to consider the total number of milkweed over which this density is distributed. It is therefore imperative to note that the comparison of the abundances between this study and Stenoien et al. [51] should not be taken as evidence of greater monarch production in our area. As such, a larger data set from West Texas that is taken over a greater temporal scale and consistent with monitoring of other regions is necessary in order to achieve a more robust comparison. Nevertheless, it is worth considering the increase we observed because Stenoien et al. [51] evaluated data from as early as 1997 when monarch populations were higher, and they noted that monarch egg densities were declining after 2006. Consequently, we would expect lower egg densities associated with reduced populations, and our findings may have been influenced by factors that warrant future investigation, such as crowding due to reduced milkweed numbers or phenological shifts. The wide migratory distribution of the monarch butterfly presents many opportunities to facilitate the conservation of this iconic species. 
Unfortunately, in many areas, like West Texas, the potential to benefit monarch conservation is undermined by limited knowledge of local milkweed abundance and monarch utilization. While monarch research and protection initiatives are steadily increasing, many of these efforts are still centered on summer breeding areas in the Midwest because the Midwest has among the largest numbers of milkweed, making it a primary source of monarch production [40]. Indeed, we do not dispute the significance of the Midwest in terms of monarch conservation. Rather, we emphasize the need to continue expanding conservation efforts outside such prominent regions in pursuit of a more comprehensive approach. This approach would allow the mobilization of resources across a greater base, resulting in more widespread and effective outcomes for monarch conservation, while potentially identifying new priorities in monarch conservation that may arise in our ever-changing world. As such, we hope that this work helps to encourage research and conservation across the entirety of the monarch's migratory range.

Monarchs and milkweed were monitored on private ranches in Stonewall County and Fisher County, Texas from September 14th to November 9th, 2018. Both ranches are at the western extent of the monarch migratory corridor and consist of semi-arid rangeland typical of West Texas. The predominant vegetation in this area includes juniper (Juniperus pinchotti), honey mesquite (Prosopis glandulosa), lotebrush (Ziziphus obtusifolia), prickly pear (Opuntia spp.), and silver bluestem (Bothriochloa saccharoides), with a further description of the region provided by Rollins [47]. West Texas hosts a number of milkweed species, including antelope horn milkweed (Asclepias asperula), broad leaf milkweed (A. latifolia), and zizotes milkweed (A. oenotheroides) [50], which provide breeding habitat for monarchs. Additionally, nectar plants in Texas are considered to be a crucial source of lipids for overwintering monarchs [8], and several species of fall blooming wildflowers, including sunflowers (Helianthus spp.), cowpen daisy (Verbesina encelioides), and Illinois bundleflower (Desmanthus illinoensis), occur in our study area [47].

Surveys of monarchs were based on methods utilized by the Monarch Larva Monitoring Project [28]. Monitoring was conducted every 5–10 days at 3 sites per ranch, apart from the final survey which was separated from the previous session by an 11-day interval due to a logistical constraint. The term site(s) will hereafter refer to the 6 survey locations (Fisher 1–3 and Stonewall 1–3). Sites consisted predominantly of indigenous broadleaf milkweed patches and were separated by at least 1 km, except for 2 sites in Fisher County which only had an ~ 30 m separation and 1 site in Stonewall County which also contained another species of milkweed, zizotes (Fig. 5). All milkweed at each site were surveyed and the species of milkweed, condition of plants (budding, dehiscent, flowering, with seedpod, senescing, and/or vegetative), number of ramets (individual stems denoted by a separation of earth between them), number of monarch and queen butterfly eggs, number and stage of monarch larvae, and number and stage of queen butterfly larvae were recorded. At one site in Fisher County, there were more plants than could be feasibly surveyed; therefore, a line transect method was used [15] with all milkweed within a 50 m x 4 m plot being surveyed as a representative sample for the local milkweed population (Fig. 5). 
Map of survey locations. Map depicting the location of the survey counties with respect to their location in Texas (top left). The relative sizes and locations of the Stonewall County survey sites are displayed at the top right and Fisher County site locations and relative sizes are bottom left. Milkweed were only surveyed along the 50 m x 4 m transect in Fisher 1 due to the immense size of the plot. This figure was created by the authors using ArcMap version 10.8 (https://desktop.arcgis.com/en/arcmap/).

It should be noted that for Stonewall 1 on September 14th the number of ramets was not recorded for some milkweed if there were no eggs or larvae present, but the presence of those milkweed was noted. To maintain a larger sample size, the ramets that were not recorded were substituted with the average ramets calculated using the complete records from that date and site. We are confident that this is representative of the ramets considering > 90% of the milkweed at that time and site had only one ramet. Using this substitution, we calculated an overall density of ~ 0.78 monarch eggs per milkweed ramet for September 14th. Alternatively, we excluded the site with incomplete ramet data and this produced an overall density of ~ 0.93 monarch eggs per milkweed ramet. To provide the most representative estimate of monarch egg density for our study area, we chose to use the smaller value achieved by supplementing the data with averages rather than excluding the site. Given our limited sample size, omitting the data from the entire site would have considerably impacted our results and may have inflated the egg density estimates we used for comparison.

Because monarchs are sympatric with queen butterflies in our study areas [35] and the eggs of the 2 species appear identical, we corrected for the number of queen eggs during each survey to prevent over-representing monarch abundance. This was done by counting both monarch and queen larvae, which can be distinguished from each other by the number of tentacles present [35], and then dividing the number of monarch larvae by the total larvae to calculate the proportion of monarchs and queens. Confidence intervals for the proportion of monarchs and queens were calculated for each site and sampling period. The total number of eggs observed during each survey was then multiplied by the proportion of monarch and queen larvae from the following survey to produce the corrected number of monarch and queen eggs, respectively. For example, the total number of eggs counted on September 14th was multiplied by the proportion of monarch larvae observed on September 24th, to produce the corrected number of monarch eggs for September 14th. Estimating the number of monarch eggs in this manner was done to account for the time it would take the eggs to develop into larvae, as monarch eggs require ~ 45 degree days above a developmental zero of 11.5 °C to hatch [59], which typically takes ~ 4 days under suitable field conditions [33]. The upper and lower confidence intervals of the proportions were also used to give a range of the possible monarch and queen eggs.

Comparisons of abundance

To illustrate variability between sites and changes in egg and larval distributions over time, stacked bar graphs of all monarch larval stages and estimated monarch eggs for each sampling period and site were generated in RStudio (version 1.2.5033; [45]) using the ggplot2 package [56]. The same was done for queen butterflies. 
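The queen-egg correction described above amounts to scaling each survey's raw egg count by the monarch fraction of larvae found on the following visit. A minimal sketch of that calculation is given below; the function name and the counts in the usage example are hypothetical illustrations, not data or code from this study.

```python
def monarch_egg_correction(total_eggs, monarch_larvae_next, queen_larvae_next):
    """Split a raw egg count into estimated monarch and queen eggs.

    total_eggs: eggs counted on survey t (monarch and queen eggs look identical).
    monarch_larvae_next, queen_larvae_next: larvae counted on survey t+1,
    identified to species by the number of tentacles.
    """
    total_larvae = monarch_larvae_next + queen_larvae_next
    if total_larvae == 0:
        return None, None  # no larvae available to estimate the species proportion
    p_monarch = monarch_larvae_next / total_larvae
    monarch_eggs = total_eggs * p_monarch
    queen_eggs = total_eggs * (1 - p_monarch)
    return monarch_eggs, queen_eggs

# Illustrative numbers only: 40 eggs on one survey, and 18 monarch vs 6 queen
# larvae on the next survey, give an estimated 30 monarch and 10 queen eggs.
print(monarch_egg_correction(40, 18, 6))
```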
Stacked bar graphs for the condition of all milkweed plants surveyed for each sampling period and site were also generated to illustrate changes in plant abundance and quality over time. Note that on September 14th the total number of milkweed surveyed was available, but plant condition data for Fisher 1 and Stonewall 1 was incomplete. To best represent the data, these data were included in the bar graphs as not applicable (NA).

Then we used generalized additive mixed models (GAMM; mgcv package; [57]) to determine which factors were important in predicting monarch egg density. Site was used as a random intercept because the same sites were checked at each monitoring session over the course of the study, resulting in repeated samples that were not independent. We hypothesized that Julian day, ramet density (ramets/m2), plot size, and area size were important predictors of monarch egg density, and included these as predictor variables within the GAMM. We used the additive model approach because of the non-linear behavior of monarch egg density across time. Because of the overdispersion of the non-zero density data, the additive model performed better with a negative binomial distribution than a Gaussian distribution. We used smoothers for all the data except for the categorical variable (area size). Because of the distinct temporal pattern between the locales (Fisher versus Stonewall), we applied the smoothers for Julian date at the county level. We compared all possible combinations of the predictor variables, which resulted in 15 GAMMs (Table 4). We then calculated Akaike Information Criterion (AIC), corrected AIC (AICc), Akaike weights ($w_i$), and evidence ratios ($E_i$) to select the best model [11].

The AIC is calculated from the maximum likelihood estimate of the model, $L$, and the number of fitted parameters, $k$. The equation for AIC is as follows (1): $$AIC = -2\ln(L) + 2k$$ We then corrected AIC to account for the number of observations relative to the number of fitted parameters, where AIC and $k$ are as before (1) and $n$ is the sample size. The equation for AICc is as follows (2): $$AIC_c = AIC + \frac{2k(k+1)}{n - k - 1}$$ We calculated Akaike weights for each model ($w_i$) from the difference in AICc values between the best model (i.e., with lowest AICc) and all other models in the candidate set ($\Delta_i$), where $N$ is the total number of candidate models. The $w_i$ have values ranging between 0 and 1 and can be interpreted as the probability that a given model is the best-predicting model among the candidate models considered. The equation for $w_i$ is as follows (3): $$w_{i} = \frac{\exp\left(-0.5\Delta_{i}\right)}{\sum_{n = 1}^{N} \exp\left(-0.5\Delta_{n}\right)}$$ Lastly, the evidence ratio ($E_i$) is a measure of how much more likely the best model (with weight $w_{best}$) is compared to all other models. For example, if the next-best model has $E_i$ of 2 then the first (best) model is twice as likely to be the best approximating model. The evidence ratio can be computed based on the Akaike weights as follows (4): $$E_{i} = \frac{w_{best}}{w_{i}}$$

In order to provide a broader context of monarch abundance in West Texas, data from all 6 sites were pooled. We calculated the average density of monarch eggs per ramet for each session by dividing the estimated number of monarch eggs by the total number of milkweed ramets surveyed. 
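As an illustration of the model-selection arithmetic in equations (1)–(4) above, the following minimal sketch computes AICc, Akaike weights, and evidence ratios for a small candidate set; the model names, log-likelihoods, parameter counts, and sample size are invented for the example and are not values from this study.

```python
import math

# Hypothetical candidate models: name -> (maximized log-likelihood, k fitted parameters)
candidates = {
    "julian_day_only":            (-210.4, 4),
    "julian_day + ramet_density": (-205.1, 5),
    "day + density + plot_size":  (-204.8, 6),
}
n = 66  # illustrative sample size

def aicc(loglik, k, n):
    aic = -2.0 * loglik + 2.0 * k                   # equation (1)
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)  # equation (2)

scores = {m: aicc(ll, k, n) for m, (ll, k) in candidates.items()}
best_score = min(scores.values())
deltas = {m: s - best_score for m, s in scores.items()}        # delta_i relative to the best model
raw = {m: math.exp(-0.5 * d) for m, d in deltas.items()}
weights = {m: r / sum(raw.values()) for m, r in raw.items()}   # Akaike weights, equation (3)
w_best = max(weights.values())
evidence = {m: w_best / w for m, w in weights.items()}         # evidence ratios, equation (4)

for m in candidates:
    print(f"{m}: AICc={scores[m]:.1f}  w={weights[m]:.3f}  E={evidence[m]:.1f}")
```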
Maximum average monarch egg density was compared to published data from the northcentral, northeastern, and southcentral US taken from Stenoien et al. [51]. AIC: Akaike information criterion AICc : Corrected akaike information criterion E i : Evidence ratio GAMM: Generalized additive mixed model w i : Akaike weights Agrawal A. Monarchs and milkweed: A migrating butterfly, a poisonous plant, and their remarkable story of coevolution. Princeton: Princeton University Press; 2017. Altizer SM, Oberhauser KS. Effects of the protozoan parasite Ophryocystis elektroscirrha on the fitness of monarch butterflies (Danaus plexippus). J Invertebr Pathol. 1999;74(1):76–88. Baker AM, Potter DA. Colonization and usage of eight milkweed (Asclepias) species by monarch butterflies and bees in urban garden settings. J Insect Conserv. 2018;22(3–4):405–18. Batalden RV, Oberhauser K, Peterson AT. Ecological niches in sequential generations of eastern North American monarch butterflies (Lepidoptera: Danaidae): the ecology of migration and likely climate change implications. Environ Entomol. 2007;36:1365–73. Borders B, Lee-Mäder E. Milkweeds: a conservation practitioner's guide. Xerces Society for Invertebrate Conservation. 2014 https://xerces.org/milkweeds-a-conservation-practitioners-guide/ Accessed 17 September 2019. Bradley CA, Altizer S. Parasites hinder monarch butterfly flight: implications for disease spread in migratory hosts. Ecol Lett. 2005;8(3):290–300. Brower LP, Malcolm SB. Animal migrations: endangered phenomena. Am Zool. 1991;31:265–76. Brower LP, Fink LS, Walford P. Fueling the fall migration of the monarch butterfly. Integr Compar Biol. 2006;46(6):1123–42. Brower LP, Taylor OR, Williams EH, Slayback DA, Zubieta RR, Ramirez MI. Decline of monarch butterflies overwintering in Mexico: is the migratory phenomenon at risk? Insect Conserv Diver. 2012;5:95–100. Brym MZ, Henry C, Kendall RJ. Potential significance of fall breeding of the monarch butterfly (Danaus plexippus) in the rolling plains ecoregion of West Texas. Tex J Sci. 2018;70(1):Note 4 Burnham KP, Anderson DR. Model selection and multimodel inference: a practical information-theoretic approach. 2nd ed. New York: Springer-Verlag; 2002. Calvert WH, Zuchowski W, Brower LP. The effect of rain, snow and freezing temperatures on overwintering monarch butterflies in Mexico. Biotropica. 1983;15:42–7. Calvert WH. Fire ant predation on monarch larva (Nymphalidae: Danainae) in a central Texas prairie. J Lepid Soc. 1996;50:149–51. Calvert WH, Wagner M. Patterns in the monarch butterfly migration through Texas—1993 to 1995. In: Hoth J, Merino L, Oberhauser K, Pisanty I, Price S, Wilkinson T, editors. 1997 North American conference on the monarch butterfly. Commission for Environmental Cooperation: Québec; 1999. p. 119–25. Canfield RH. Application of the line interception method in sampling range vegetation. J Forestry. 1941;39:388–94. Dingle H, Zalucki MP, Rochester WA, Armijo-Prewitt T. Distribution of the monarch butterfly, Danaus plexippus (L.)(Lepidoptera: Nymphalidae), in western North America. Biol J Linnean Soc. 2005;85(4):491–500. Fishbein M, Chuba D, Ellison C, Mason-Gamer RJ, Lynch SP. Phylogenetic relationships of Asclepias (Apocynaceae) inferred from non-coding chloroplast DNA sequences. Syst Bot. 2011;36(4):1008–23. Flockhart DT, Wassenaar LI, Martin TG, Hobson KA, Wunder MB, Norris DR. Tracking multi-generational colonization of the breeding grounds by monarch butterflies in eastern North America. P Roy Soc B-Biol Sci. 
2013;280(1768):20131087. Flockhart DT, Pichancourt JB, Norris DR, Martin TG. Unravelling the annual cycle in a migratory animal: breeding-season habitat loss drives population declines of monarch butterflies. J Anim Ecol. 2015;84:155–65. Flockhart DT, Brower LP, Ramirez MI, Hobson KA, Wassenaar LI, Altizer S, Norris DR. Regional climate on the breeding grounds predicts variation in the natal origin of monarch butterflies overwintering in Mexico over 38 years. Glob Change Biol. 2017;23:2565–76. Goehring L, Oberhauser KS. Effects of photoperiod, temperature, and host plant age on induction of reproductive diapause and development time in Danaus plexippus. Ecol Entomol. 2002;27:674–85. Grant TJ, Parry HR, Zalucki MP, Bradbury SP. Predicting monarch butterfly (Danaus plexippus) movement and egg-laying with a spatially-explicit agent-based model: the role of monarch perceptual range and spatial memory. Ecol Model. 2018;374:37–50. Hermann SL, Blackledge C, Haan NL, Myers AT, Landis DA. Predators of monarch butterfly eggs and neonate larvae are more diverse than previously recognised. Sci Rep. 2019;9(1):1–9. Journey North Staff. Monarch peak migration maps. Journey North. https://journeynorth.org/ (2018). Accessed 2 December 2018. Ladner DT, Altizer S. Oviposition preference and larval performance of North American monarch butterflies on four Asclepias species. Entom Exp Appl. 2005;116(1):9–20. Lemoine NP. Climate change may alter breeding ground distributions of eastern migratory monarchs (Danaus plexippus) via range expansion of Asclepias host plants. PLoS ONE. 2015. https://doi.org/10.1371/journal.pone.0118614. Luna T, Dumroese RK. Monarchs (Danaus plexippus) and milkweeds (Asclepias species) the current situation and methods for propagating milkweeds. Native Plants J. 2013;14:5–16. Oberhauser K, et al. Monarch larva monitoring project. In: Oberhauser K, Batalden R, Howard E, editors. Monarch butterfly monitoring in North America: Overview of initiatives and protocols. Commission for Environmental Cooperation: Québec; 2009. p. 23–5. Malcolm SB. Anthropogenic impacts on mortality and population viability of the monarch butterfly. Annu Rev Entomol. 2018;63:277–302. Malcolm SB, Cockrell BJ, Brower LP. Spring recolonization of eastern North America by the monarch butterfly: successive brood or single sweep migration. In: Malcomlm SB, Salucki MP, editors. Biology and conservation of the monarch butterfly. Los Angeles: Natural History Museum of Los Angeles County; 1993. p. 253–67. Melillo JM, Richmond T, Yohe GW. Climate change impacts in the United States: The third national climate assessment. U.S. Global Change Research Program. 2014 https://nca2014.globalchange.gov/ Accessed 2 December 2018. National Oceanic and Atmospheric Association Staff. National temperature and precipitation maps. National Oceanic and Atmospheric Association. https://www.ncdc.noaa.gov/temp-and-precip/us-maps/ (2018). Accessed 2 December 2018. Oberhauser KS. Overview of monarch breeding biology. In: Oberhauser KS, Solensky MJ, editors. The monarch butterfly: Biology and conservation. Cornell: New York; 2004. p. 3–8. Oberhauser K, Wiederholt R, Diffendorfer JE, Semmens D, Ries L, Thogmartin WE, et al. A trans-national monarch butterfly population model and implications for regional conservation priorities. Ecol Entomol. 2017;42(1):51–60. Opler PA. A field guide to western butterflies. Boston: Houghton Mifflin Harcourt; 1999. Pelton EM, Schultz CB, Jepsen SJ, Black SH, Crone EE. 
Western monarch population plummets: status, probable causes, and recommended conservation actions. Front Ecol Evol. 2019;7:258. Peñuelas J, Filella I. Phenology feedbacks on climate change. Science. 2009;324(5929):887–8. Pitman GM, Flockhart DT, Norris DR. Patterns and causes of oviposition in monarch butterflies: implications for milkweed restoration. Biol Cons. 2018;217:54–65. Pleasants JM, Oberhauser KS. Milkweed loss in agricultural fields because of herbicide use: effect on the monarch butterfly population. Insect Conserv Divers. 2013;6:135–44. Pleasants J. Milkweed restoration in the Midwest for monarch butterfly recovery: estimates of milkweeds lost, milkweeds remaining and milkweeds that must be added to increase the monarch population. Insect Conserv Divers. 2017;10:42–53. Pocius VM, Debinski DM, Pleasants JM, Bidne KG, Hellmich RL, Brower LP. Milkweed matters: monarch butterfly (Lepidoptera: Nymphalidae) survival and development on nine Midwestern milkweed species. Environ Entomol. 2017;46(5):1098–105. Pocius VM, Debinski DM, Pleasants JM, Bidne KG, Hellmich RL. Monarch butterflies do not place all of their eggs in one basket: oviposition on nine Midwestern milkweed species. Ecosphere. 2018;9(1):e02064. Pocius VM, Pleasants JM, Debinski DM, Bidne KG, Hellmich RL, Bradbury SP, Blodgett SL. Monarch butterflies show differential utilization of nine Midwestern milkweed species. Front Ecol Evol. 2018. https://doi.org/10.3389/fevo.2018.00169. Quinn M. Texas monarch watch. Texas Monarch Watch. http://texasento.net/dplex.htm (2018). Accessed 18 September 2019. RStudio Team. RStudio: Integrated Development for R. RStudio, Inc., Boston, MA URL http://www.rstudio.com/ (2019). Accessed 9 February 2020. Robinson RA, Crick HQ, Learmonth JA, Maclean IM, Thomas CD, et al. Travelling through a warming world: climate change and migratory species. Endanger Species Res. 2009;7:87–99. Rollins D. Quails on the Rolling Plains. In: Brennan L, editor. Texas quails: Ecology and management, Texas A&M University Press, Texas; 2007. p. 117–141. Schultz CB, Brown LM, Pelton E, Crone EE. Citizen science monitoring demonstrates dramatic declines of monarch butterflies in western North America. Biol Conserv. 2017;214:343–6. Semmens BX, Semmens DJ, Thogmartin WE, Wiederholt R, López-Hoffman L, Diffendorfer JE, et al. Quasi-extinction risk and population targets for the Eastern, migratory population of monarch butterflies (Danaus plexippus). Sci Rep. 2016;6(1):1–7. Singhurst J, Hutchins B, Holmes WC. Identification of milkweeds in Texas. Texas Parks and Wildlife Department. 2015. https://tpwd.texas.gov/publications/pwdpubs/media/pwd_rp_w7000_1803.pdf. Accessed 2 Dec 2018. Stenoien C, Nail KR, Oberhauser KS. Habitat productivity and temporal patterns of monarch butterfly egg densities in the eastern United States. Ann Entomol Soc Am. 2015;108:670–9. Stenoien C, Nail KR, Zalucki JM, Parry H, Oberhauser KS, Zalucki MP. Monarchs in decline: a collateral landscape-level effect of modern agriculture. Insect Sci. 2018;25(4):528–41. Thogmartin WE, LópezHoffman L, Rohweder J, Diffendorfer J, Drum R, et al. Restoring monarch butterfly habitat in the Midwestern US:'all hands on deck'. Environ Res Lett. 2017;12:5. Urquhart FA, Urquhart NR. Autumnal migration routes of the eastern population of the monarch butterfly (Danaus p. plexippus L.; Danaidae; Lepidoptera) in North America to the overwintering site in the Neovolcanic Plateau of Mexico. Can J Zool. 1978;56:1759–64. Wassenaar LI, Hobson KA. 
Natal origins of migratory monarch butterflies at wintering colonies in Mexico: new isotopic evidence. Proc Natl Acad Sci. 1998;95:15436–9. Wickham H. ggplot2: Elegant graphics for data analysis. New York: Springer-Verlag; 2016. Wood SN. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. J R Stat Soc B. 2011;73:3–36. Woodson RE. The North American species of Asclepias L. Ann Missouri Bot Gard. 1954;41(1):1–211. Zalucki MP. Temperature and rate of development. Aust J Entomol. 1982;21(4):241–6. Zalucki MP, Malcolm SB, Paine TD, Hanlon CC, Brower LP, Clarke AR. It's the first bites that count: survival of first-instar monarchs on milkweeds. Austral Ecol. 2001;26(5):547–55. Zalucki MP, Rochester WA, Oberhauser K, Solensky M. Spatial and temporal population dynamics of monarchs down-under: lessons for North America. In: Oberhauser KS, Solensky MJ, editors. The monarch butterfly: Biology and conservation. New York: Cornell; 2004. p. 219–28. Zalucki MP, Parry HR, Zalucki JM. Movement and egg laying in monarchs: to move or not to move, that is the equation. Austral Ecol. 2016;41(2):154–67. We thank those at our study ranches for their hospitality and property access. In particular, we would like to thank Brad and Melissa Ribelin for their enthusiasm and assistance with this project, as well as their unwavering dedication to the conservation of not only monarchs, but all wildlife. We appreciate the advice and field assistance of David Berman, members of the Wildlife Toxicology Laboratory, and those at the Rolling Plains Quail Research Ranch. We would also like to express our gratitude to the reviewers of this manuscript; their insight was instrumental to improving this paper. Finally, we thank BASF for the financial support necessary to make all of this work possible. This study was made possible with the financial support of BASF. The funding body did not play any additional role in the design, collection, analysis, data interpretation, and/or writing of this study. The Wildlife Toxicology Laboratory, Texas Tech University, Box 43290, Lubbock, TX, 79409-3290, USA Matthew Z. Brym, Cassandra Henry, Shannon P. Lukashow-Moore, Brett J. Henry & Ronald J. Kendall The Department of Biological Sciences, Texas Tech University, Lubbock, TX, USA Natasja van Gestel Matthew Z. Brym Cassandra Henry Shannon P. Lukashow-Moore Brett J. Henry Ronald J. Kendall MB and RK conceived the study. MB and CH collected the data presented in the manuscript. NVG, MB, CH, SLM, and BH performed statistical analyses. All authors contributed to the writing and revision of the final manuscript. All authors read and approved the final manuscript. Correspondence to Ronald J. Kendall. Ethical committee approval was not required for this study as it was conducted in the US where research involving animals is regulated by the Animal Welfare Act of 1966 and overseen by the Institutional Animal Care and Use Committee, which does not require approval for studies involving invertebrates, with the exception of cephalopods. Furthermore, while monarch and queen butterflies were observed in this study, these were not interfered with in any way. Brym, M.Z., Henry, C., Lukashow-Moore, S.P. et al. Prevalence of monarch (Danaus plexippus) and queen (Danaus gilippus) butterflies in West Texas during the fall of 2018. BMC Ecol 20, 33 (2020). https://doi.org/10.1186/s12898-020-00301-x Danaus plexipuus Egg correction
Permissivism and social choice: a response to Blessenohl

In a recent paper discussing Lara Buchak's risk-weighted expected utility theory, Simon Blessenohl notes that the objection he raises there to Buchak's theory might also tell against permissivism about rational credence. I offer a response to the objection here. In his objection, Blessenohl suggests that credal permissivism gives rise to an unacceptable tension between the individual preferences of agents and the collective preferences of the groups to which those agents belong. He argues that, whatever brand of permissivism about credences you tolerate, there will be a pair of agents and a pair of options between which they must choose such that both agents will prefer the first to the second, but collectively they will prefer the second to the first. He argues that this consequence tells against permissivism. I respond that this objection relies on an equivocation between two different understandings of collective preferences: on the first, they are an attempt to summarise the collective view of the group; on the second, they are the preferences of a third-party social chooser tasked with making decisions on behalf of the group. I claim that, on the first understanding, Blessenohl's conclusion does not follow; and, on the second, it follows but is not problematic.

It is well known that, if two people have different credences in a given proposition, there is a sense in which the pair of them, taken together, is vulnerable to a sure-loss set of bets.* That is, there is a bet that the first will accept and a bet that the second will accept such that, however the world turns out, they'll end up collectively losing money. Suppose, for instance, that Harb is 90% confident that Ladybug will win the horse race that is about to begin, while Jay is only 60% confident. Then Harb's credences should lead him to buy a bet for £80 that will pay out £100 if Ladybug wins and nothing if she loses, while Jay's credences should lead him to sell that same bet for £70 (assuming, as we will throughout, that the utility of £$n$ is $n$). If Ladybug wins, Harb ends up £20 up and Jay ends up £30 down, so they end up £10 down collectively. And if Ladybug loses, Harb ends up £80 down while Jay ends up £70 up, so they end up £10 down as a pair. So, for individuals with different credences in a proposition, there seems to be a tension between how they would choose as individuals and how they would choose as a group. Suppose they are presented with a choice between two options: on the first, $A$, both of them enter into the bets just described; on the second, $B$, neither of them do. We might represent these two options as follows, where we assume that Harb's utility for receiving £$n$ is $n$, and the same for Jay:$$A = \begin{pmatrix} 20 & -80 \\ -30 & 70 \end{pmatrix}\ \ \ B = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$The top left entry is Harb's winnings if Ladybug wins, the top right is Harb's winnings if she loses; the bottom left is Jay's winnings if she wins, and the bottom right is Jay's winnings if she loses. So, given a matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, each row represents a gamble---that is, an assignment of utilities to each state of the world---and each column represents a utility distribution---that is, an assignment of utilities to each individual. 
So $\begin{pmatrix} a & b \end{pmatrix}$ represents the gamble that the option bequeaths to Harb---$a$ if Ladybug wins, $b$ if she loses---while $\begin{pmatrix} c & d \end{pmatrix}$ represents the gamble bequeathed to Jay---$c$ if she wins, $d$ if she loses. And $\begin{pmatrix} a \\ c \end{pmatrix}$ represents the utility distribution if Ladybug wins---$a$ to Harb, $c$ to Jay---while $\begin{pmatrix} b \\ d \end{pmatrix}$ represents the utility distribution if she loses---$b$ to Harb, $d$ to Jay. Summing the entries in the first column gives the group's collective utility if Ladybug wins, and summing the entries in the second column gives their collective utility if she loses.

Now, suppose that Harb cares only for the utility that he will gain, and Jay cares only about his own utility; neither cares at all about the other's welfare. Then each prefers $A$ to $B$. Yet, considered collectively, $B$ results in greater total utility for sure: for each column, the sum of the entries in that column in $B$ (that is, $0$) exceeds the sum in that column in $A$ (that is, $-10$). So there is a tension between what the members of the group unanimously prefer and what the group prefers. Now, to create this tension, I assumed that the group prefers one option to another if the total utility of the first is sure to exceed the total utility of the second. But this is quite a strong claim. And, as Blessenohl notes, we can create a similar tension by assuming something much weaker. Suppose again that Harb is 90% confident that Ladybug will win while Jay is only 60% confident that she will. Now consider the following two options:$$A' = \begin{pmatrix} 20 & -80 \\ 0 & 0 \end{pmatrix}\ \ \ B' = \begin{pmatrix} 5 & 5 \\ 25 & -75 \end{pmatrix}$$In $A'$, Harb pays £$80$ for a £$100$ bet on Ladybug, while in $B'$ he receives £$5$ for sure. Given his credences, he should prefer $A'$ to $B'$, since the expected utility of $A'$ is $10$, while for $B'$ it is $5$. And in $A'$, Jay receives £0 for sure, while in $B'$ he pays £$75$ for a £$100$ bet on Ladybug. Given his credences, he should prefer $A'$ to $B'$, since the expected utility of $A'$ is $0$, while for $B'$ it is $-15$. But again we see that $B'$ will nonetheless end up producing greater total utility for the pair---$30$ vs $20$ if Ladybug wins, and $-70$ vs $-80$ if Ladybug loses. But we can argue in a different way that the group should prefer $B'$ to $A'$. This different way of arguing for this conclusion is the heart of Blessenohl's result.

In what follows, we write $\preceq_H$ for Harb's preference ordering, $\preceq_J$ for Jay's, and $\preceq$ for the group's. First, we assume that, when one option gives a particular utility $a$ to Harb for sure and a particular utility $c$ to Jay for sure, then the group should be indifferent between that and the option that gives $c$ to Harb for sure and $a$ to Jay for sure. That is, the group should be indifferent between an option that gives the utility distribution $\begin{pmatrix} a \\ c\end{pmatrix}$ for sure and an option that gives $\begin{pmatrix} c \\ a\end{pmatrix}$ for sure. 
Blessenohl calls this Constant Anonymity:

Constant Anonymity For any $a, c$,$$\begin{pmatrix} a & a \\ c & c \end{pmatrix} \sim \begin{pmatrix} c & c \\ a & a \end{pmatrix}$$

This allows us to derive the following:$$\begin{pmatrix} 20 & 20 \\ 0 & 0 \end{pmatrix} \sim \begin{pmatrix} 0 & 0 \\ 20 & 20 \end{pmatrix}\ \ \ \text{and}\ \ \ \begin{pmatrix} -80 & -80 \\ 0 & 0 \end{pmatrix} \sim \begin{pmatrix} 0 & 0 \\ -80 & -80 \end{pmatrix}$$And now we can introduce our second principle:

Preference Dominance For any $a, b, c, d, a', b', c', d'$, if$$\begin{pmatrix} a & a \\ c & c \end{pmatrix} \preceq \begin{pmatrix} a' & a' \\ c' & c' \end{pmatrix}\ \ \text{and}\ \ \begin{pmatrix} b & b \\ d & d \end{pmatrix} \preceq \begin{pmatrix} b' & b' \\ d' & d' \end{pmatrix}$$then$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \preceq \begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix}$$

Preference Dominance says that, if the group prefers obtaining the utility distribution $\begin{pmatrix} a \\ c\end{pmatrix}$ for sure to obtaining the utility distribution $\begin{pmatrix} a' \\ c'\end{pmatrix}$ for sure, and prefers obtaining the utility distribution $\begin{pmatrix} b \\ d\end{pmatrix}$ for sure to obtaining the utility distribution $\begin{pmatrix} b' \\ d'\end{pmatrix}$ for sure, then they prefer obtaining $\begin{pmatrix} a \\ c\end{pmatrix}$ if Ladybug wins and $\begin{pmatrix} b \\ d\end{pmatrix}$ if she loses to obtaining $\begin{pmatrix} a' \\ c'\end{pmatrix}$ if Ladybug wins and $\begin{pmatrix} b' \\ d'\end{pmatrix}$ if she loses. Preference Dominance, combined with the indifferences that we derived from Constant Anonymity, gives$$\begin{pmatrix} 20 & -80 \\ 0 & 0 \end{pmatrix} \sim \begin{pmatrix} 0 & 0 \\ 20 & -80 \end{pmatrix}$$And then finally we introduce a closely related principle:

Utility Dominance For any $a, b, c, d, a', b', c', d'$, if $a < a'$, $b < b'$, $c < c'$, and $d < d'$, then$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \prec \begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix}$$

This simply says that if one option gives more utility than another to each individual at each world, then the group should prefer the first to the second. So$$\begin{pmatrix} 0 & 0 \\ 20 & -80 \end{pmatrix} \prec \begin{pmatrix} 5 & 5 \\ 25 & -75 \end{pmatrix}$$Stringing these together, we have$$A' = \begin{pmatrix} 20 & -80 \\ 0 & 0 \end{pmatrix} \sim \begin{pmatrix} 0 & 0 \\ 20 & -80 \end{pmatrix} \prec \begin{pmatrix} 5 & 5 \\ 25 & -75 \end{pmatrix} = B'$$And thus, assuming that $\preceq$ is transitive, while Harb and Jay both prefer $A'$ to $B'$, the group prefers $B'$ to $A'$.

More generally, Blessenohl proves an impossibility result. Add to the principles we have already stated the following:

Ex Ante Pareto If $A \preceq_H B$ and $A \preceq_J B$, then $A \preceq B$.

Egoism For any $a, b, c, d, a', b', c', d'$,$$\begin{pmatrix} a & b \\ c' & d' \end{pmatrix} \sim_H \begin{pmatrix} a & b \\ c & d \end{pmatrix} \sim_J \begin{pmatrix} a' & b' \\ c & d \end{pmatrix}$$That is, Harb cares only about the utilities he obtains from an option, and Jay cares only about the utilities that he obtains. And finally:

Individual Preference Divergence There are $a, b, c, d$ such that$$\begin{pmatrix} a & b \\ a & b \end{pmatrix} \prec_H \begin{pmatrix} c & d \\ c & d \end{pmatrix} \succ_J \begin{pmatrix} a & b \\ a & b \end{pmatrix}$$

Then Blessenohl shows that there are no preferences $\preceq_H$, $\preceq_J$, and $\preceq$ that satisfy Individual Preference Divergence, Egoism, Ex Ante Pareto, Constant Anonymity, Preference Dominance, and Utility Dominance.** And yet, he claims, each of these is plausible. He suggests that we should give up Individual Preference Divergence, and with it permissivism and risk-weighted expected utility theory.

Now, the problem that Blessenohl identifies arises because Harb and Jay have different credences in the same proposition. But of course impermissivists agree that two rational individuals can have different credences in the same proposition. So why is this a problem specifically for permissivism? The reason is that, for the impermissivist, if two rational individuals have different credences in the same proposition, they must have different evidence. 
And for individuals with different evidence, we wouldn't necessarily want the group preference to preserve unanimous agreement between the individuals. Instead, we'd want the group to choose using whichever credences are rational in the light of the joint evidence obtained by pooling the evidence held by each individual in the group. And those might render one option preferable to the other even though each of the individuals, with their less well informed credences, prefers the second option to the first. So Ex Ante Pareto is not plausible when the individuals have different evidence, and impermissivism is therefore safe. To see this, consider the following example: There are two medical conditions, $X$ and $Y$, that affect racehorses. If they have $X$, they're 90% likely to win the race; if they have $Y$, they're 60% likely; if they have both, they're 10% likely to win. Suppose Harb knows that Ladybug has $X$, but has no information about whether she has $Y$; and suppose Jay knows Ladybug has $Y$ and no information about $X$. Then both are rational. And both prefer $A$ to $B$ from above. But we wouldn't expect the group to prefer $A$ to $B$, since the group should choose using the credence it's rational to have if you know both that Ladybug has $X$ and that she has $Y$; that is, the group should choose by pooling the individuals' evidence to give the group evidence, and then choose using the probabilities relative to that. And, relative to that evidence, $B$ is preferable to $A$.

The permissivist, in contrast, cannot make this move. After all, for them it is possible for two rational individuals to disagree even though they have exactly the same evidence, and therefore the same pooled evidence. Blessenohl considers various ways the permissivist or the risk-weighted expected utility theorist might answer his objection, either by denying Ex Ante Pareto or Preference or Utility Dominance. He considers each response unsuccessful, and I tend to agree with his assessments. However, oddly, he explicitly chooses not to consider the suggestion that we might drop Constant Anonymity. I'd like to suggest that we should consider doing exactly that.

I think Blessenohl's objection relies on an ambiguity in what the group preference ordering $\preceq$ represents. On one understanding, it is no more than an attempt to summarise the collective view of the group; on another, it represents the preferences of a third party brought in to make decisions on behalf of the group---the social chooser, if you will. I will argue that Ex Ante Pareto is plausible on the first understanding, but Constant Anonymity isn't; and Constant Anonymity is plausible on the second understanding, but Ex Ante Pareto isn't.

Let's treat the first understanding of $\preceq$. On this, $\preceq$ represents the group's collective opinions about the options on offer. So just as we might try to summarise the scientific community's view on the future trajectory of Earth's average surface temperature or the mechanisms of transmission for SARS-CoV-2 by looking at the views of individual scientists, so might we try to summarise Harb and Jay's collective view of various options by looking at their individual views. Understood in this way, Constant Anonymity does not look plausible. Its motivation is, of course, straightforward. If $a < c$ and$$\begin{pmatrix} a & a \\ c & c \end{pmatrix} \prec \begin{pmatrix} c & c \\ a & a \end{pmatrix}$$then the group's collective view unfairly and without justification favours Harb over Jay. 
And if$$\begin{pmatrix} a & a \\ c & c \end{pmatrix} \succ \begin{pmatrix} c & c \\ a & a \end{pmatrix}$$then it unfairly and without justification favours Jay over Harb. So we should rule out both of these. But this doesn't entail that the group preference should be indifferent between these two options. That is, it doesn't entail that we should have$$\begin{pmatrix} a & a \\ c & c \end{pmatrix} \sim \begin{pmatrix} c & c \\ a & a \end{pmatrix}$$After all, when you compare two options $A$ and $B$, there are four possibilities: (1) $A \preceq B$ and $B \preceq A$---that is, $A \sim B$; (2) $A \preceq B$ and $B \not \preceq A$---that is, $A \prec B$; (3) $A \not \preceq B$ and $B \preceq A$---that is, $A \succ B$; (4) $A \not \preceq B$ and $B \not \preceq A$---that is, $A$ and $B$ are incomparable. The argument for Constant Anonymity rules out (2) and (3), but it does not rule out (4). What's more, it's easy to see that, if we weaken Constant Anonymity so that it requires (1) or (4) rather than requiring (1), then we see that all of the principles are consistent with it. So introduce Weak Constant Anonymity:

Weak Constant Anonymity For any $a, c$, either$$\begin{pmatrix} a & a \\ c & c \end{pmatrix} \sim \begin{pmatrix} c & c \\ a & a \end{pmatrix}$$or$$\begin{pmatrix} a & a \\ c & c \end{pmatrix}\ \ \text{and}\ \ \begin{pmatrix} c & c \\ a & a \end{pmatrix}\ \ \text{are incomparable}$$

Then define the preference ordering $\preceq^*$ as follows:$$A \preceq^* B \Leftrightarrow \left ( A \preceq_H B\ \&\ A \preceq_J B \right )$$Then $\preceq^*$ satisfies Ex Ante Pareto, Weak Constant Anonymity, Preference Dominance, and Utility Dominance. And indeed $\preceq^*$ seems a very plausible candidate for the group preference ordering understood in this first way: where Harb and Jay disagree, it simply has no opinion on the matter; it has opinions only where Harb and Jay agree, and then it shares their shared opinion. On the understanding of $\preceq$ as summarising the group's collective view, if $\begin{pmatrix} a & a \\ c & c \end{pmatrix} \sim \begin{pmatrix} c & c \\ a & a \end{pmatrix}$ then the group collectively thinks that this option $\begin{pmatrix} a & a \\ c & c \end{pmatrix}$ is exactly as good as this option $\begin{pmatrix} c & c \\ a & a \end{pmatrix}$. But the group absolutely does not think that. Indeed, Harb and Jay both explicitly deny it, though for opposing reasons. So Constant Anonymity is false.

Let's turn next to the second understanding. On this, $\preceq$ is the preference ordering of the social chooser. Here, the original, stronger version of Constant Anonymity seems more plausible. After all, unlike the group itself, the social chooser should have the sort of positive commitment to equality and fairness that the group definitively does not have. As we noted above, Harb and Jay unanimously reject the egalitarian assessment represented by $\begin{pmatrix} a & a \\ c & c \end{pmatrix} \sim \begin{pmatrix} c & c \\ a & a \end{pmatrix}$. They explicitly both think that these two options are not equally good---if $a < c$, then Harb thinks the second is strictly better, while Jay thinks the first is strictly better. So, as we argued above, we take the group view to be that they are incomparable. But the social chooser should not remain so agnostic. She should overrule the unanimous rejection of the indifference relation between them and accept it. But, having thus overruled one unanimous view and taken a different one, it is little surprise that she will reject other unanimous views, such as Harb and Jay's unanimous view that $A'$ is better than $B'$ above. That is, it is little surprise that she should violate Ex Ante Pareto. After all, her preferences are not only informed by a value that Harb and Jay do not endorse; they are informed by a value that Harb and Jay explicitly reject, given our assumption of Egoism. 
This is the value of fairness, which is embodied in the social chooser's preferences in Constant Anonymity and rejected in Harb's and Jay's preferences by Egoism. If we require of our social chooser that they adhere to this value, we should not expect Ex Ante Pareto to hold.

* See Philippe Mongin's 1995 paper 'Consistent Bayesian Aggregation' for wide-ranging results in this area.

** Here's the trick: if$$\begin{pmatrix} a & b \\ a & b \end{pmatrix} \prec_H \begin{pmatrix} c & d \\ c & d \end{pmatrix}\ \ \text{and}\ \ \begin{pmatrix} a & b \\ a & b \end{pmatrix} \succ_J \begin{pmatrix} c & d \\ c & d \end{pmatrix}$$Then let$$A' = \begin{pmatrix} c & d \\ a & b \end{pmatrix}\ \ \ B' = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$Then $A' \succ_H B'$ and $A' \succ_J B'$, but $A' \sim B'$.

Published by Richard Pettigrew at 12:31 pm
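As a quick numerical check of the Harb and Jay example above, here is a minimal sketch; it assumes only the credences (0.9 and 0.6), the pound-for-utility identification, and the payoff matrices for $A'$ and $B'$ as defined in the post, and it is an illustration rather than part of the original argument.

```python
# Credences that Ladybug wins.
p_harb, p_jay = 0.9, 0.6

# Options as payoff matrices: row 0 = Harb, row 1 = Jay; column 0 = win, column 1 = lose.
A_prime = [(20, -80),  # Harb buys the bet: pays 80 for a 100 payout
           (0, 0)]     # Jay gets nothing either way
B_prime = [(5, 5),     # Harb gets 5 for sure
           (25, -75)]  # Jay pays 75 for a 100 payout

def expected_utility(gamble, p_win):
    win, lose = gamble
    return p_win * win + (1 - p_win) * lose

# About 10 vs 5: Harb prefers A'.
print(expected_utility(A_prime[0], p_harb), expected_utility(B_prime[0], p_harb))
# About 0 vs -15: Jay prefers A'.
print(expected_utility(A_prime[1], p_jay), expected_utility(B_prime[1], p_jay))

# Total utility for the pair in each state of the world.
totals_A = [A_prime[0][i] + A_prime[1][i] for i in range(2)]  # [20, -80]
totals_B = [B_prime[0][i] + B_prime[1][i] for i in range(2)]  # [30, -70]
print(totals_A, totals_B)  # B' gives the pair more total utility whether Ladybug wins or loses
```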
View source for Hierarchical clustering ← Hierarchical clustering {| class="wikitable" !Copyright notice <!-- don't remove! --> |- | This article ''Agglomerative hierarchical clustering (=Hierarchical Clustering)'' was adapted from an original article by Fionn Murtagh, which appeared in ''StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies''. The original article ([<nowiki>http://statprob.com/encyclopedia/HierarchicalClustering.html</nowiki> StatProb Source], Local Files: [[Media:hierarchical_clustering.pdf|pdf]] | [[Media:hierarchical_clustering.tex|tex]]) is copyrighted by the author(s), the article has been donated to ''Encyclopedia of Mathematics'', and its further issues are under ''Creative Commons Attribution Share-Alike License'. All pages from StatProb are contained in the [[:Category:Statprob|Category StatProb]]. |- |} {{MSC|62H30}} <!-- \documentclass[10pt]{article} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsfonts} \begin{document} --> <center>'''Hierarchical Clustering'''</center> <center> Fionn Murtagh Department of Computing and Mathematics, University of Derby, and Department of Computing, Goldsmiths University of London. </center> <!-- \maketitle --> Hierarchical clustering algorithms can be characterized as ''greedy'' (Horowitz and Sahni, 1979). A sequence of irreversible algorithm steps is used to construct the desired data structure. Assume that a pair of clusters, including possibly singletons, is merged or agglomerated at each step of the algorithm. Then the following are equivalent views of the same output structure constructed on $n$ objects: a set of $n-1$ partitions, starting with the fine partition consisting of $n$ classes and ending with the trivial partition consisting of just one class, the entire object set; a binary tree (one or two child nodes at each non-terminal node) commonly referred to as a dendrogram; a partially ordered set (poset) which is a subset of the power set of the $n$ objects; and an ultrametric topology on the $n$ objects. For background, the reader is referred to Benz&eacute;cri (1979), Lerman (1981), Murtagh and Heck (1987), Jain and Dubes (1988), Arabie et al. (1996), Mirkin (1996), Gordon (1999), Jain, Murty and Flynn (1999), and Xu and Wunsch (2005). One could say with justice that Sibson (1973), Rohlf (1982) and Defays (1977) are part of the prehistory of clustering. Their $O(n^2)$ implementations of the single link method and of a (non-unique) complete link method have been widely cited. In the early 1980s a range of significant improvements were made to the Lance-Williams, or related, dissimilarity update schema (de Rham, 1980; Juan, 1982), which had been in wide use since the mid-1960s. Murtagh (1983, 1985) presents a survey of these algorithmic improvements. The algorithms, which have the potential for ''exactly'' replicating results found in the classical but more computationally expensive way, are based on the construction of ''nearest neighbor chains'' and ''reciprocal'' or mutual NNs (NN-chains and RNNs). A NN-chain consists of an arbitrary point ($a$ in Fig. 1); followed by its NN ($b$ in Fig. 1); followed by the NN from among the remaining points ($c$, $d$, and $e$ in Fig. 1) of this second point; and so on until we necessarily have some pair of points which can be termed reciprocal or mutual NNs. (Such a pair of RNNs may be the first two points in the chain; and we have assumed that no two dissimilarities are equal.) 
<!-- \begin{figure}[tb] <center> \begin{minipage}[t]{5.0cm} \setlength{\unitlength}{1cm} \begin{picture}(3,3) \put(7,4){\circle*{0.15}} \put(5,4){\circle*{0.15}} \put(4,4){\circle*{0.15}} \put(2,4){\circle*{0.15}} \put(-1,4){\circle*{0.15}} \put(7,3){e} \put(5,3){d} \put(4,3){c} \put(2,3){b} \put(-1,3){a} \put(-1,4){\vector(1,0){3}} \put(2,4){\vector(1,0){2}} \put(4,4){\vector(1,0){1}} \put(5,4){\vector(-1,0){1}} \end{picture} \end{minipage} </center> \caption{Fig 1. Five points, showing NNs and RNNs.} \label{fig1} \end{figure} --> <span id="Fig1"> [[File:hierarchical_clustering_graphics.png|thumb|upright=3|center|frame| Fig 1. Five points, showing NNs and RNNs. ([[Media:hierarchical_clustering_graphics.png|eps]]) ]] </span> In constructing a NN-chain, irrespective of the starting point, we may agglomerate a pair of RNNs as soon as they are found. What guarantees that we can arrive at the same hierarchy as if we used traditional "stored dissimilarities" or "stored data" algorithms (Anderberg, 1973)? Essentially this is the same condition as that under which no inversions or reversals are produced by the clustering method. This would be where $s$ is agglomerated at a lower criterion value (i.e. dissimilarity) than was the case at the previous agglomeration between $q$ and $r$. Our ambient space has thus contracted because of the agglomeration. This is due to the algorithm used -- in particular the agglomeration criterion -- and it is something we would normally wish to avoid. This is formulated as: $$ \mbox{ Inversion impossible if: } \ d(i,j) < d(i,k) {\rm \ or\ \ } d(j,k) \Rightarrow d(i,j) < d(i \cup j,k)$$ This is Bruynooghe's ''reducibility property'' (Bruynooghe, 1977; see also Murtagh, 1985, 1992). Using the Lance-Williams dissimilarity update formula, it can be shown that the minimum variance method does not give rise to inversions; neither do the (single, complete, average) linkage methods; but the median and centroid methods cannot be guaranteed not to have inversions. To return to Fig. 1, if we are dealing with a clustering criterion which precludes inversions, then $c$ and $d$ can justifiably be agglomerated, since no other point (for example, $b$ or $e$) could have been agglomerated to either of these. The processing required, following an agglomeration, is to update the NNs of points such as $b$ in Fig. 1 (and on account of such points, this algorithm was dubbed ''algorithme des c&eacute;libataires'' in de Rham, 1980). The following is a summary of the algorithm: <!-- \vspace{.25in} \noindent --> '''NN-chain algorithm''' <!-- \begin{description} --> '''Step 1: ''' Select a point (i.e. an object in the input data set) arbitrarily. '''Step 2: ''' Grow the NN-chain from this point until a pair of RNNs are obtained. '''Step 3: ''' Agglomerate these points (replacing with a cluster point, or updating the dissimilarity matrix). '''Step 4: ''' From the point which preceded the RNNs (or from any other arbitrary point if the first two points chosen in Steps 1 and 2 constituted a pair of RNNs), return to Step 2 until only one point remains. <!-- \end{description} --> In Murtagh (1983, 1985) and Day and Edelsbrunner (1984), one finds discussions of $O(n^2)$ time and $O(n)$ space implementations of Ward's minimum variance (or error sum of squares) method and of the centroid and median methods. The latter two methods are termed the UPGMC and WPGMC criteria (respectively, unweighted and weighted pair-group method using centroids) by Sneath and Sokal (1973). 
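To make the NN-chain schema concrete, here is a minimal illustrative sketch in Python for the single link criterion, whose Lance-Williams update of $d(i \cup j, k)$ is simply $\min(d(i,k), d(j,k))$. The data structures (a dictionary of pairwise dissimilarities keyed by unordered pairs) and the example distances are assumptions chosen for brevity; this toy code is not one of the $O(n^2)$ implementations cited above, and it assumes no tied dissimilarities.

<pre>
from itertools import combinations

def single_link_nn_chain(labels, dissim):
    """Toy NN-chain agglomeration with the single link criterion.

    labels: list of object identifiers.
    dissim: dict mapping frozenset({i, j}) -> dissimilarity for every pair.
    Returns the merges as (cluster_a, cluster_b, height), in order.
    """
    clusters = [frozenset([x]) for x in labels]
    d = {frozenset([frozenset([i]), frozenset([j])]): dissim[frozenset([i, j])]
         for i, j in combinations(labels, 2)}
    merges, chain = [], []
    while len(clusters) > 1:
        if not chain:
            chain = [clusters[0]]          # Step 1: arbitrary starting cluster
        while True:                        # Step 2: grow the chain until RNNs appear
            tip = chain[-1]
            nn = min((c for c in clusters if c != tip),
                     key=lambda c: d[frozenset([tip, c])])
            if len(chain) > 1 and nn == chain[-2]:
                break                      # tip and chain[-2] are reciprocal NNs
            chain.append(nn)
        b, a = chain.pop(), chain.pop()    # Step 3: agglomerate the RNN pair
        height = d[frozenset([a, b])]
        merged = a | b
        clusters.remove(a)
        clusters.remove(b)
        for c in clusters:                 # single link Lance-Williams update
            d[frozenset([merged, c])] = min(d[frozenset([a, c])],
                                            d[frozenset([b, c])])
        clusters.append(merged)
        merges.append((set(a), set(b), height))
        # Step 4: continue growing from whatever precedes the merged pair
    return merges

# Example with five points on a line (cf. Fig. 1); the coordinates are illustrative only.
pts = {"a": 0.0, "b": 1.0, "c": 3.0, "d": 7.0, "e": 12.0}
pairs = {frozenset([i, j]): abs(pts[i] - pts[j]) for i, j in combinations(pts, 2)}
print(single_link_nn_chain(list(pts), pairs))
</pre>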
Now, a problem with the cluster criteria used by these latter two methods is that the reducibility property is not satisfied by them. This means that the hierarchy constructed may not be unique as a result of inversions or reversals (non-monotonic variation) in the clustering criterion value determined in the sequence of agglomerations. Murtagh (1984) describes $O(n^2)$ time and $O(n^2)$ space implementations for the single link method, the complete link method and for the weighted and unweighted group average methods (WPGMA and UPGMA). This approach is quite general vis &agrave; vis the dissimilarity used and can also be used for hierarchical clustering methods other than those mentioned. Day and Edelsbrunner (1984) prove the exact $O(n^2)$ time complexity of the centroid and median methods using an argument related to the combinatorial problem of optimally packing hyperspheres into an $m$-dimensional volume. They also address the question of metrics: results are valid in a wide class of distances including those associated with the Minkowski metrics. The construction and maintenance of the nearest neighbor chain as well as the carrying out of agglomerations whenever reciprocal nearest neighbors meet, both offer possibilities for parallelization, and implementation in a distributed fashion. Work in chemoinformatics and information retrieval can be found in Willett (1989), Gillet et al. (1998) and Griffiths et al. (1984). Ward's minimum variance criterion is favored. For in depth discussion of data encoding and normalization as a preliminary stage of hierarchical clustering, see Murtagh (2005). Finally, as an entry point into the ultrametric view of clustering, and how hierarchical clustering can support constant time, or $O(1)$, proximity search in spaces of arbitrarily high ambient dimensionality, thereby setting aside Bellman's famous curse of dimensionality, see Murtagh (2004). ====References==== {| |- |valign="top"|{{Ref|1}}||valign="top"| Anderberg, M.R. (1973), ''Cluster Analysis for Applications''. Academic Press, New York. |- |valign="top"|{{Ref|2}}||valign="top"| Arabie, P., Hubert, L.J. and De Soete, G. (1996), Eds., ''Clustering and Classification'', World Scientific, Singapore. |- |valign="top"|{{Ref|3}}||valign="top"| Benz&eacute;cri J.P. (1979), ''L'Analyse des Donn&eacute;es. I. La Taxinomie'', Dunod, Paris (3rd ed.). |- |valign="top"|{{Ref|4}}||valign="top"| Bruynooghe, M. (1977), ''M&eacute;thodes nouvelles en classification automatique des donn&eacute;es taxinomiques nombreuses'', Statistique et Analyse des Donn&eacute;es, no. 3, 24--42. |- |valign="top"|{{Ref|5}}||valign="top"| Day, W.H.E. and Edelsbrunner, H. (1984), ''Efficient algorithms for agglomerative hierarchical clustering methods'', Journal of Classification, 1, 7--24. |- |valign="top"|{{Ref|6}}||valign="top"| Defays, D. (1977), ''An efficient algorithm for a complete link method'', Computer Journal, 20, 364--366. |- |valign="top"|{{Ref|7}}||valign="top"| Gillet, V.J., Wild, D.J., Willett, P. and Bradshaw, J. (1998), ''Similarity and dissimilarity methods for processing chemical structure databases'', Computer Journal, 41, 547--558. |- |valign="top"|{{Ref|8}}||valign="top"| Gordon, A.D. (1999), Classification, 2nd ed., Champman and Hall. |- |valign="top"|{{Ref|9}}||valign="top"| Griffiths, A., Robinson, L.A. and Willett, P. (1984), ''Hierarchic agglomerative clustering methods for automatic document classification'', Journal of Documentation, 40, 175--205. 
|- |valign="top"|{{Ref|10}}||valign="top"| Horowitz, E. and Sahni, S. (1979), ''Fundamentals of Computer Algorithms'', Chapter 4 The Greedy Method, Pitman, London. |- |valign="top"|{{Ref|11}}||valign="top"| Jain, A.K. and Dubes, R.C. (1988), ''Algorithms for Clustering Data'', Prentice-Hall, Englewood Cliffs. |- |valign="top"|{{Ref|12}}||valign="top"| Jain A.K., Murty, M.N. and Flynn P.J. (1999), ''Data clustering: a review'', ACM Computing Surveys, 31, 264--323. |- |valign="top"|{{Ref|13}}||valign="top"| Juan, J. (1982), ''Programme de classification hi&eacute;rarchique par l'algorithme de la recherche en cha&icirc;ne des voisins r&eacute;ciproques'', Les Cahiers de l'Analyse des Donn&eacute;es, VII, 219--225. |- |valign="top"|{{Ref|14}}||valign="top"| Lerman I.C. (1981), ''Classification et Analyse Ordinale des Donn&eacute;es'' Dunod, Paris. |- |valign="top"|{{Ref|15}}||valign="top"| Mirkin B. (1996), ''Mathematical Classification and Clustering'' Kluwer, Dordrecht. |- |valign="top"|{{Ref|16}}||valign="top"| Murtagh, F. (1983), ''A survey of recent advances in hierarchical clustering algorithms'', Computer Journal, 26, 354--359. |- |valign="top"|{{Ref|17}}||valign="top"| Murtagh, F. (1985), ''Multidimensional Clustering Algorithms'', Physica-Verlag, W&uuml;rzburg. |- |valign="top"|{{Ref|18}}||valign="top"| Murtagh, F. and Heck, A. (1987), ''Multivariate Data Analysis'', Kluwer Academic, Dordrecht. |- |valign="top"|{{Ref|19}}||valign="top"| Murtagh, F. (1992), ''Comments on `Parallel algorithms for hierarchical clustering and cluster validity''', IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 1056--1057. |- |valign="top"|{{Ref|20}}||valign="top"| Murtagh F. (2004), ''On ultrametricity, data coding, and computation'', Journal of Classification, 21, 167--184. |- |valign="top"|{{Ref|21}}||valign="top"| Murtagh F. (2005), ''Correspondence Analysis and Data Coding with Java and R'', Chapman and Hall, Boca Raton. |- |valign="top"|{{Ref|22}}||valign="top"| de Rham, C. (1980), ''La classification hi&eacute;rarchique ascendante selon la m&eacute;thode des voisins r&eacute;ciproques'', Les Cahiers de l'Analyse des Donn&eacute;es, V, 135--144. |- |valign="top"|{{Ref|23}}||valign="top"| Rohlf, F.J. (1982), ''Single link clustering algorithms'', in P.R. Krishnaiah and L.N. Kanal, Eds., ''Handbook of Statistics'', Vol. 2, North-Holland, Amsterdam, 267--284. |- |valign="top"|{{Ref|24}}||valign="top"| Sibson, R. (1973), ''SLINK: an optimally efficient algorithm for the single link cluster method'', The Computer Journal, 16, 30--34. |- |valign="top"|{{Ref|25}}||valign="top"| Sneath, P.H.A. and Sokal, R.R. (1973), ''Numerical Taxonomy'', W.H. Freeman, San Francisco. |- |valign="top"|{{Ref|26}}||valign="top"| Willett, P. (1989), ''Efficiency of hierarchic agglomerative clustering using the ICL distributed array processor'', Journal of Documentation, 45, 1--45. |- |valign="top"|{{Ref|27}}||valign="top"| Rui Xu and Wunsch D. (2005), ''Survey of clustering algorithms'', IEEE Transactions on Neural Networks, 16, 645--678. |- |} <!-- \end{document} --> <references /> [[Category:Statprob]] Template:MSC (view source) Template:MSCwiki (view source) Template:MSN HOST (view source) Template:Ref (view source) Return to Hierarchical clustering. Hierarchical clustering. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Hierarchical_clustering&oldid=38459 Retrieved from "https://encyclopediaofmath.org/wiki/Hierarchical_clustering"
Page theorem

2010 Mathematics Subject Classification: Primary: 11M06 Secondary: 11N13 [MSN][ZBL]

Page's theorem on the zeros of Dirichlet $L$-functions. Let $L(s,\chi)$ be a Dirichlet L-function, $s = \sigma + i t$, with $\chi$ a Dirichlet character modulo $d$, $d \ge 3$. There are absolute positive constants $c_1,\ldots,c_8$ such that a) $L(s,\chi) \ne 0$ for $\sigma > 1 - c_1/\log(dt)$, $t \ge 3$; b) $L(s,\chi) \ne 0$ for $\sigma > 1 - c_2/\log(d)$, $0 < t < 5$; c) for complex $\chi$ modulo $d$, \begin{equation}\label{1} L(s,\chi) \ne 0\ \ \text{for}\ \ \sigma > 1 - \frac{c_3}{\log d}\,,\ |t| \le 5\,; \end{equation} d) for real primitive $\chi$ modulo $d$, \begin{equation}\label{2} L(s,\chi) \ne 0\ \ \text{for}\ \ \sigma > 1 - \frac{c_4}{\sqrt{d}\log^2 d}\,; \end{equation} e) for $2 \le d \le D$ there exists at most one $d=d_0$, $d_0 \ge (\log^2 D)/(\log\log^8 D)$ and at most one real primitive $\psi$ modulo $d$ for which $L(s,\psi)$ can have a real zero $\beta_1 > 1- c_6/\log D$, where $\beta_1$ is a simple zero; and for all $\beta$ such that $L(\beta,\psi) =0$, $\beta > 1 - c_6/\log D$ with a real $\psi$ modulo $d$, one has $d \equiv 0 \pmod {d_0}$.

Page's theorem on $\pi(x;d,l)$, the number of prime numbers $p \le x$, $p \equiv l \pmod d$ for $0 < l \le d$, where $l$ and $d$ are relatively prime numbers. With the symbols and conditions of Section 1, on account of a)–c) and e) one has $$ \pi(x;d,l) = \frac{\mathrm{li}(x)}{\phi(d)} - E \frac{\chi(l)}{\phi(d)}\sum_{n \le x} \frac{n^{\beta_1 - 1}}{\log n} + O\left({x \exp\left({-c_7 \sqrt{\log x}}\right)}\right) \ , $$ where $E=1$ or $0$ in accordance with whether $\beta_1$ exists or not for a given $d$; because of (2), for any $d \le (\log x)^{1-\delta}$ one has for a given $\delta>0$, \begin{equation}\label{3} \pi(x;d,l) = \frac{\mathrm{li}(x)}{\phi(d)} + O\left({x \exp(-c_8 \sqrt{\log x})}\right) \ . \end{equation} This result is the only one (1983) that is effective in the sense that if $\delta$ is given, then one can state numerical values of $c_8$ and the constant appearing in the symbol $O$. Replacement of the bound in (2) by the Siegel bound: $L(\sigma,\chi) \ne 0$ for $\sigma > 1-c(\epsilon)d^{-\epsilon}$, $\epsilon > 0$, extends the range of (*) to essentially larger $d$, $d \le (\log x)^A$ for any fixed $A$, but the effectiveness of the bound in (3) is lost, since for a given $\epsilon > 0$ it is impossible to estimate $c_8(\epsilon)$ and $O_\epsilon$. A. Page established these theorems in [1].

[1] A. Page, "On the number of primes in an arithmetic progression" Proc. London Math. Soc. Ser. 2, 39 : 2 (1935) pp. 116–141
[2] A.A. Karatsuba, "Fundamentals of analytic number theory", Moscow (1975) (In Russian)
[3] K. Prachar, "Primzahlverteilung", Springer (1957)

Page theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Page_theorem&oldid=50874 This article was adapted from an original article by A.F. Lavrik (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article