Stork Enamine Problems
Stork Enamine Reaction Explained: Ketones cannot be directly alkylated or acylated, but when treated with secondary amines they are converted into enamines, which can then react with various electrophilic reagents. The Stork enamine reaction therefore has wide application in the synthesis of α-substituted aldehydes and ketones. In the formation of the enamine, the lone pair of electrons on the secondary amine acts as a nucleophile and attacks the electrophilic carbon atom of the carbonyl compound in the presence of an acid catalyst; the generated intermediate then loses water to yield the enamine. Writing a mechanism for this reaction provides a good test of one's understanding of acid-catalyzed processes.

Enamines can be considered as nitrogen enolates, in other words α,β-unsaturated amines, and the nucleophilic character of the α-carbon of an enamine makes alkylation, acylation, and 1,4-(Michael) addition possible. This dual nucleophilic character is the basis of the Stork enamine reaction, named after its inventor, Gilbert Stork: the alkylation or acylation of enamines with electrophilic reagents such as alkyl halides, acyl halides, or acceptor-substituted alkenes. The enamine acts as an effective enol synthon, mediating alkylation or acylation at the α-carbon of the carbonyl compound (i.e., aldehyde or ketone); the enamine-based alkylation is referred to as Stork alkylation, and the comparable acylation is termed Stork acylation. The required reaction conditions are mild, the method works well for both aldehydes and ketones, and enamines are commonly used because they are easily prepared and are neutral. The synthesis of enamines from carbonyl compounds and secondary amines was first reported by Mannich and Davidson.

Worked example: reaction of the pyrrolidine enamine of cyclohexanone with 3-buten-2-one, followed by enamine hydrolysis and base treatment, yields the product indicated. The first step of the sequence is the Michael addition of the enamine: the lone pair on the enamine α-carbon attacks the electrophilic β-carbon of the α,β-unsaturated ketone, forming a new carbon-carbon bond. Hydrolysis then regenerates the ketone, and hydroxide deprotonates the most acidic α-hydrogen (the one with the least steric hindrance in the way of the attacking base), setting up an intramolecular carbonyl condensation that closes a ring. In deciding which α-carbon should attack, you would prefer to make six-membered rings, perhaps five-, perhaps seven-membered rings; counting the carbons (1-2-3-4-5-6) shows that attack from the favored carbon closes a six-membered ring, whereas attack from the alternative carbon would give a four-membered cyclobutane ring, which carries too much ring strain and is not wanted. After the final deprotonation of the acidic proton and loss of water, the product is the α,β-unsaturated ketone with the methyl group coming off the carbonyl-bearing ring.

Practice problems:
Problem: Show how one could use the Stork enamine synthesis to carry out the following transformation. In each reaction box, place the best reagent and conditions from the list below.
Problem: Predict the products of reaction of an enamine with the following Michael acceptors: (a) $\mathrm{H}_{2} \mathrm{C}=\mathrm{CHCO}_{2} \mathrm{Et}$ (b) $\mathrm{H}_{2} \mathrm{C}=\mathrm{CHCHO}$ (c) $\mathrm{CH}_{3} \mathrm{CH}=\mathrm{CHCOCH}_{3}$.
Problem: The Darzens reaction involves a two-step, base-catalyzed condensation of ethyl chloroacetate with a ketone to yield an epoxy ester. The first step is a carbonyl condensation reaction, and the second step is an $\mathrm{S}_{\mathrm{N}} 2$ reaction. Write both steps, and show their mechanisms.
Problem: Intramolecular Diels-Alder reactions are possible when a substrate contains both a 1,3-diene and a dienophile. With this in mind, draw the product of each intramolecular Diels-Alder reaction.
Problem: What condensation products would you expect to obtain by treatment of the following substances with sodium ethoxide in ethanol? (a) Ethyl butanoate (b) Cycloheptanone (c) 3,7-Nonanedione (d) 3-Phenylpropanal
April 2017, 37(4): 2243-2257. doi: 10.3934/dcds.2017097
Wave breaking and global existence for the periodic rotation-Camassa-Holm system
Ying Zhang
School of Mathematics and Statistics, Tianshui Normal University, Tianshui 741001, China
Received September 2016 Revised November 2016 Published December 2016
Fund Project: This work is supported by the National Natural Science Foundation of China (No. 11561059).
The rotation-two-component Camassa-Holm system with the effect of the Coriolis force in the rotating fluid is a model of equatorial water waves. In this paper we consider its periodic Cauchy problem. The precise blow-up scenarios of strong solutions and several conditions on the initial data that produce blow-up of the induced solutions are described in detail. Finally, a sufficient condition for global solutions is established.
Keywords: Rotation-two-component Camassa-Holm system, blow-up, wave-breaking.
Mathematics Subject Classification: 35B30, 35B44, 35G25.
Citation: Ying Zhang. Wave breaking and global existence for the periodic rotation-Camassa-Holm system. Discrete & Continuous Dynamical Systems - A, 2017, 37 (4) : 2243-2257. doi: 10.3934/dcds.2017097
Nano Idea
Energy Transfer in Mixed Convection MHD Flow of Nanofluid Containing Different Shapes of Nanoparticles in a Channel Filled with Saturated Porous Medium
Gul Aaiza1,
Ilyas Khan2 &
Sharidan Shafie1
Energy transfer in mixed convection unsteady magnetohydrodynamic (MHD) flow of an incompressible nanofluid inside a channel filled with a saturated porous medium is investigated. The channel, with non-uniform wall temperatures, is taken in a vertical direction under the influence of a transverse magnetic field. Based on the physical boundary conditions, three different flow situations are discussed. The problem is modelled in terms of partial differential equations with physical boundary conditions. Four different shapes of nanoparticles of equal volume fraction are used in the conventional base fluids ethylene glycol (EG, C2H6O2) and water (H2O). Solutions for velocity and temperature are obtained and discussed graphically in various plots. It is found that viscosity and thermal conductivity are the most prominent parameters responsible for the different results of velocity and temperature. Due to its higher viscosity and thermal conductivity, C2H6O2 is regarded as a better conventional base fluid than H2O.
Thermal conductivity plays a vital role in heat transfer enhancement. Conventional heat transfer fluids such as water, ethylene glycol (EG), kerosene oil and lubricant oils have poor thermal conductivities compared to solids. Solid particles, on the other hand, have higher thermal conductivities than conventional heat transfer fluids. Choi [1], in his pioneering work, indicated that when a small amount of nanoparticles is added to common base fluids, it significantly increases the thermal conductivity of the base fluids as well as their convective heat transfer rate. These mixtures are known as nanofluids. More exactly, nanofluids are suspensions of nano-sized particles in base fluids. Usually nanofluids contain different types of nanoparticles such as oxides, metals and carbides in common base fluids like water, EG, propylene glycol and kerosene oil. Some specific applications of nanofluids are found in various electronic equipment, energy supply, power generation, air conditioning and production. Vajjha and Das [2] for the first time used an EG (60 %) and water (40 %) mixture as the base fluid for the preparation of alumina (Al2O3), copper oxide (CuO) and zinc oxide (ZnO) nanofluids. At the same temperature and concentration, they found that the CuO nanofluid possesses higher thermal conductivity than the Al2O3 and ZnO nanofluids. Naik and Sundar [3] took 70 % propylene glycol and 30 % water and prepared a CuO nanofluid. As expected, they found that the CuO nanofluid has better thermal conductivity and viscosity properties than the base fluid. Recently, Mansur et al. [4] studied nanofluids in MHD stagnation-point flow past a permeable sheet for stretching and shrinking cases. They obtained numerical solutions using the bvp4c program in MATLAB and computed results for the embedded parameters.
The ability of nanoparticles to enhance the thermal conductivity of base fluids, together with the numerous applications of nanofluids in industry, has attracted the interest of researchers to conduct further studies. Several of them are performing experimental work and some are using numerical computations; however, very few studies are available on the analytic side. Perhaps this is because analytic solutions are not always convenient to obtain. Among the various attempts, we mention here those made in [5–15].
The quality of a nanofluid depends not only on the type of nanoparticles but also on their shapes. Researchers usually use nanoparticles of spherical shape. However, in terms of applications and significance, spherical shaped nanoparticles are limited. For this reason, non-spherical shaped nanoparticles are chosen in this study. More exactly, this study incorporates four different shapes of nanoparticles, namely cylinder, platelet, blade and brick. Furthermore, the nanofluids literature reveals that non-spherical shaped nanoparticles carry a number of key desirable properties that make them a main focus of current research, especially in cancer therapy. Recent investigations show that cylindrical shaped nanoparticles are seven times more deadly than traditional spherical shaped nanoparticles in the delivery of drugs to breast cancer cells. To the best of the authors' knowledge, analytic studies on different shapes of nanoparticles contained in EG or water as the base fluid have not been reported yet. Timofeeva et al. [16] studied the problem of Al2O3 nanofluids containing differently shaped nanoparticles, but they conducted this study experimentally together with theoretical modelling. More exactly, they investigated various shapes of Al2O3 nanoparticles in a base fluid mixture of EG and water of equal volumes. By using the Hamilton and Crosser model, they noted appreciable enhancements in the effective thermal conductivities due to particle shapes. Loganathan et al. [17] considered spherical nanoparticles and analyzed radiation effects on an unsteady natural convection flow of nanofluids past an infinite vertical plate. They concluded that the velocity of the spherical silver (Ag) nanofluid is less than that of the copper (Cu), titanium dioxide (TiO2) and Al2O3 spherical nanofluids due to its greater viscosity. Recently, Asma et al. [18] obtained exact solutions for free convection flow of nanofluids with ramped wall temperature by taking five different types of spherical shaped nanoparticles.
Heat transfer due to convection arises in many physical situations. Convection is of three types: free convection, forced convection and mixed convection. Buoyancy-induced convection is called free convection, whereas forced convection is caused by an external pressure gradient or object motion. Mixed convection arises when free and forced convection occur simultaneously. The most typical and common situation where mixed convection is almost always realized is flow in a channel caused by heating or cooling of the channel walls. In such a flow situation, the buoyancy force causes free convection whereas the external pressure gradient or the non-homogeneous boundary conditions on the velocity result in forced convection. Sebdani et al. [19] studied heat transfer of an Al2O3-water nanofluid in mixed convection flow inside a square cavity. Fan et al. [20] investigated mixed convection heat transfer in a horizontal channel filled with nanofluids. Tiwari and Das [21] and Sheikhzadeh et al. [22] analyzed laminar mixed convection flow of a nanofluid in two-sided lid-driven enclosures. Further, magnetic fields in nanofluids have numerous applications, for example in the polymer industry and metallurgy where hydromagnetic techniques are used. Nadeem and Saleem [23] examined the unsteady flow of a rotating MHD nanofluid in a rotating cone in the presence of a magnetic field. Al-Salem et al. [24] investigated MHD mixed convection flow in a linearly heated cavity. The effects of variable viscosity and variable thermal conductivity on MHD flow and heat transfer over a non-linear stretching sheet were investigated by Prasad et al. [25]. The problem of Darcy-Forchheimer mixed convection heat and mass transfer in fluid-saturated porous media in the presence of thermophoresis was presented by Rami et al. [26]. The effect of radiation and magnetic field on the mixed convection stagnation-point flow over a vertical stretching sheet in a porous medium bounded by a stretching vertical plate was presented by Hayat et al. [27]. A few other studies on mixed convection and nanofluids are given in [28–38].
Based on the above literature, the present investigation is concerned with radiative heat transfer in mixed convection MHD flow of different shapes of Al2O3 nanoparticles in an EG-based nanofluid in a channel filled with a saturated porous medium. The focus of this work is the effect of different parameters on cylinder-shaped nanofluids. The fluid is assumed to be electrically conducting and the no-slip condition is considered at the boundary of the channel. Three different flow situations are discussed. In the first case, both of the bounding walls of the channel are at rest. Fluid motion is originated by the buoyancy force together with an external pressure gradient of oscillatory form applied in the flow direction. In the second case, the upper wall of the channel is set into oscillatory motion, whereas the third case extends this idea to when both of the channel walls are given oscillatory motions. Analytical solutions are obtained for the velocity and temperature profiles. Results for skin friction and Nusselt number are computed. Graphical results for the velocity field and temperature distributions are displayed for various parameters of interest and discussed in detail.
Formulation and Solution of the Problem
Consider oscillatory flow of an incompressible nanofluid in a channel filled with a saturated porous medium. The fluid is assumed electrically conducting under the influence of a uniform magnetic field of strength B0 applied in a direction transverse to the flow. The magnetic Reynolds number is assumed small enough so that the effect of the induced magnetic field can be neglected. It is assumed that the external electric field is zero and that the electric field due to polarization is negligible. The no-slip condition is imposed at the boundary walls and a radiation term is included in the energy equation. The x-axis is taken along the flow and the y-axis is taken normal to the flow direction. The mixed convection is caused by the buoyancy force together with an external pressure gradient applied along the x-direction. Under the usual Boussinesq approximation, the governing equations of momentum and energy are as follows:
$$ {\rho}_{nf}\frac{\partial u}{\partial t}=-\frac{\partial p}{\partial x}+{\mu}_{nf}\frac{\partial^2u}{\partial {y}^2}-\left(\sigma {B}_0^2+\frac{\mu_{nf}}{k_1}\right)\;u+{\left(\rho \beta \right)}_{nf}g\left(T-{T}_0\right), $$
$$ {\left(\rho {c}_p\right)}_{nf}\frac{\partial T}{\partial t}={k}_{nf}\frac{\partial^2T}{\partial {y}^2}-\frac{\partial q}{\partial y}, $$
where u = u(y, t) denotes the fluid velocity in the x-direction, T = T(y, t) is the temperature, ρnf is the density of the nanofluid, μnf is the dynamic viscosity of the nanofluid, σ is the electrical conductivity of the base fluid, k1 > 0 is the permeability of the porous medium, (ρβ)nf is the thermal expansion coefficient of the nanofluid, g is the acceleration due to gravity, (ρcp)nf is the heat capacitance of the nanofluid, knf is the thermal conductivity of the nanofluid, and q is the radiative heat flux in the x-direction. The first term on the right-hand side of Eq. (1) denotes the external pressure gradient.
In this study, the Hamilton and Crosser model [28] is used for the thermal conductivity and dynamic viscosity, being valid for both spherical and non-spherical shaped nanoparticles. According to this model:
$$ {\mu}_{nf}=\kern1em {\mu}_f\left(1+a\phi +b{\phi}^2\right), $$
$$ \frac{k_{nf}}{k_f}=\frac{k_s+\left(n-1\right){k}_f+\left(n-1\right)\left({k}_s-{k}_f\right)\phi }{k_s+\left(n-1\right){k}_f-\left({k}_s-{k}_f\right)\phi }, $$
In equations (1) and (2), the density ρnf, thermal expansion coefficient (ρβ)nf, heat capacitance (ρcp)nf and thermal conductivity of the nanofluid are derived by using the relations given by [17, 18] as follows:
$$ \begin{array}{c}{\rho}_{nf}=\left(1-\phi \right){\rho}_f+\phi {\rho}_s,\kern0.48em {\left(\rho \beta \right)}_{nf}=\left(1-\phi \right){\left(\rho \beta \right)}_f+\phi {\left(\rho \beta \right)}_s\\ {}{\left(\rho {c}_p\right)}_{nf}=\left(1-\phi \right){\left(\rho {c}_p\right)}_f+\phi {\left(\rho {c}_p\right)}_s,\kern0.36em \end{array} $$
where ϕ is the nanoparticle volume fraction, ρf and ρs are the densities of the base fluid and the solid nanoparticles, βs and βf are the volumetric coefficients of thermal expansion of the solid nanoparticles and base fluid, (cp)s and (cp)f are the specific heat capacities of the solid nanoparticles and base fluid at constant pressure, and a and b are constants that depend on the particle shape, as given in Table 1 [16].
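As a quick numerical check of Eq. (5), the following minimal Python sketch evaluates the mixture rules for the effective density, thermal expansion and heat capacitance. The property values in the example call are illustrative placeholders only; the paper takes them from Table 3 below.

```python
# Mixture rules of Eq. (5) for the effective nanofluid properties.
# The numerical values in the example are illustrative placeholders;
# the paper takes them from Table 3.

def mixture_density(phi, rho_f, rho_s):
    # rho_nf = (1 - phi) * rho_f + phi * rho_s
    return (1.0 - phi) * rho_f + phi * rho_s

def mixture_rho_beta(phi, rho_f, beta_f, rho_s, beta_s):
    # (rho*beta)_nf = (1 - phi) * (rho*beta)_f + phi * (rho*beta)_s
    return (1.0 - phi) * rho_f * beta_f + phi * rho_s * beta_s

def mixture_heat_capacitance(phi, rho_f, cp_f, rho_s, cp_s):
    # (rho*cp)_nf = (1 - phi) * (rho*cp)_f + phi * (rho*cp)_s
    return (1.0 - phi) * rho_f * cp_f + phi * rho_s * cp_s

if __name__ == "__main__":
    phi = 0.04                                    # 4 % volume fraction
    rho_f, cp_f, beta_f = 1115.0, 2430.0, 6.5e-4  # EG (placeholder values)
    rho_s, cp_s, beta_s = 3970.0, 765.0, 0.85e-5  # Al2O3 (placeholder values)
    print(mixture_density(phi, rho_f, rho_s))
    print(mixture_rho_beta(phi, rho_f, beta_f, rho_s, beta_s))
    print(mixture_heat_capacitance(phi, rho_f, cp_f, rho_s, cp_s))
```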
Table 1 Constants a and b empirical shape factors
The n appearing in Eq. (4) is the empirical shape factor given by n = 3/Ψ, where Ψ is the sphericity defined as the ratio between the surface area of the sphere and the surface area of the real particle with equal volumes. The values of Ψ for different shape particles are given in Table 2 [16].
Table 2 Sphericity Ψ for different shapes nanoparticles
In addition to the above, some physical properties of the base fluids and nanoparticles are given in Table 3, as mentioned by [17] and [18].
Table 3 Thermophysical properties of water and nanoparticles
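A minimal sketch of Eqs. (3) and (4) is given below. It assumes the shape constants a, b and the sphericity Ψ are supplied from Tables 1 and 2; the cylinder-shape numbers and fluid properties used in the example call are assumptions for illustration only, not the tabulated values.

```python
# Effective dynamic viscosity (Eq. 3) and Hamilton-Crosser thermal
# conductivity (Eq. 4).  Shape constants a, b and sphericity psi are
# read from Tables 1 and 2 of the paper; the example values below are
# illustrative assumptions.

def effective_viscosity(mu_f, phi, a, b):
    # mu_nf = mu_f * (1 + a*phi + b*phi**2)
    return mu_f * (1.0 + a * phi + b * phi ** 2)

def effective_conductivity(k_f, k_s, phi, psi):
    # Hamilton-Crosser model with empirical shape factor n = 3/psi
    n = 3.0 / psi
    num = k_s + (n - 1.0) * k_f + (n - 1.0) * (k_s - k_f) * phi
    den = k_s + (n - 1.0) * k_f - (k_s - k_f) * phi
    return k_f * num / den

if __name__ == "__main__":
    k_f, k_s = 0.253, 40.0          # EG and Al2O3 conductivities (assumed)
    mu_f, phi = 1.57e-2, 0.04       # EG viscosity in Pa*s (assumed)
    a, b, psi = 13.5, 904.4, 0.62   # cylinder shape, values assumed here
    print("mu_nf =", effective_viscosity(mu_f, phi, a, b))
    print("k_nf  =", effective_conductivity(k_f, k_s, phi, psi))
```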
Following Makinde and Mhone [31], the temperatures of both plates, T0 and Tw, are assumed high enough to produce radiative heat transfer. Thus, the radiative heat flux is given by
$$ \frac{\partial q}{\partial y}=-4{\alpha}^2\left(T-{T}_0\right), $$
where α is the radiation absorption coefficient.
Substituting Eq. (6) into Eq. (2), gives
$$ {\left(\rho {c}_p\right)}_{nf}\frac{\partial T}{\partial t}={k}_{nf}\frac{\partial^2T}{\partial {y}^2}+4{\alpha}^2\left(T-{T}_0\right), $$
where α is the mean radiation absorption coefficient.
Introducing the following dimensionless variables
$$ \begin{array}{l}{x}^{\ast }=\frac{x}{d},\;{y}^{\ast }=\frac{y}{d},\kern0.5em {u}^{\ast }=\frac{u}{U_0},\kern0.62em {t}^{\ast }=\frac{t{U}_0}{d},\kern0.62em {p}^{\ast }=\frac{d}{\mu {U}_0}p,\;\\ {}\kern0.5em {T}^{\ast }=\frac{T-{T}_0}{T_w-{T}_0},\kern0.62em {\omega}^{\ast }=\frac{d\omega }{U_0},\kern0.24em \frac{\partial {p}^{\ast }}{\partial {x}^{\ast }}=\lambda exp\left(i{\omega}^{*}{t}^{*}\right)\end{array} $$
into Eqs. (1) and (7) gives (the * symbol is dropped for convenience)
$$ \begin{array}{c}\left[\left(1-\phi \right)+\phi \frac{\rho_s}{\rho_f}\right]\;Re\frac{\partial u}{\partial t}=\lambda \varepsilon exp\left(i\omega t\right)+\left(1+a\phi +b{\phi}^2\right)\frac{\partial^2u}{\partial {y}^2}-{M}^2u\\ {}-\frac{\left(1+a\phi +b{\phi}^2\right)u}{K}+\left[\left(1-\phi \right)+\phi \frac{{\left(\rho \beta \right)}_s}{{\left(\rho \beta \right)}_f}\right]GrT,\;\end{array} $$
$$ Pe\frac{\phi_4}{\lambda_n}\frac{\partial T}{\partial t}=\frac{\partial^2T}{\partial {y}^2}+\frac{N^2}{\lambda_n}T, $$
$$ \begin{array}{c}Re=\frac{U_0d}{v_f},\kern0.62em {M}^2=\frac{\sigma {B}_0^2{d}^2}{\mu_f},\;K=\frac{k_1}{d^2},\kern0.62em Gr=\frac{g{\beta}_f{d}^2\left({T}_w-{T}_0\right)}{\nu_f{U}_0},\\ {}Pe=\frac{U_0d{\left(\rho {c}_p\right)}_f}{k_f},\;{N}^2=\frac{4{d}^2{\alpha}^2}{k_f},\kern0.62em {\lambda}_n=\frac{k_{nf}}{k_f}.\end{array} $$
are the Reynolds number, the magnetic parameter (also called the Hartmann number), the permeability parameter, the thermal Grashof number, the Peclet number, the radiation parameter and the thermal conductivity ratio, respectively, with
$$ {\phi}_1=\left(1-\phi \right)+\phi \frac{\rho_s}{\rho_f},\kern0.36em {\phi}_2=\left(1+a\phi +b{\phi}^2\right),\kern0.24em {\phi}_3=\left(1-\phi \right)+\phi \frac{{\left(\rho \beta \right)}_s}{{\left(\rho \beta \right)}_f}, $$
$$ {\phi}_4=\left[\left(1-\phi \right)+\phi \frac{{\left(\rho {c}_p\right)}_s}{{\left(\rho {c}_p\right)}_f}\right]. $$
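The dimensionless groups and the coefficients ϕ1-ϕ4 can be evaluated as in the sketch below; here ϕ3 is written in the form consistent with the buoyancy coefficient of Eq. (9), and all physical inputs are illustrative placeholders rather than the values behind the figures.

```python
from math import sqrt

# Dimensionless groups defined after Eq. (10) and the coefficients
# phi_1..phi_4.  phi_3 is written in the form consistent with the
# buoyancy coefficient of Eq. (9).  All inputs are illustrative.

def dimensionless_groups(U0, d, nu_f, sigma, B0, mu_f, k1, g, beta_f,
                         dT, rho_f, cp_f, k_f, alpha, k_nf):
    Re = U0 * d / nu_f                              # Reynolds number
    M = sqrt(sigma * B0 ** 2 * d ** 2 / mu_f)       # Hartmann number
    K = k1 / d ** 2                                 # permeability parameter
    Gr = g * beta_f * d ** 2 * dT / (nu_f * U0)     # thermal Grashof number
    Pe = U0 * d * rho_f * cp_f / k_f                # Peclet number
    N = sqrt(4.0 * d ** 2 * alpha ** 2 / k_f)       # radiation parameter
    lam_n = k_nf / k_f                              # conductivity ratio
    return Re, M, K, Gr, Pe, N, lam_n

def phi_coefficients(phi, rho_f, rho_s, rho_beta_f, rho_beta_s,
                     rho_cp_f, rho_cp_s, a, b):
    phi1 = (1.0 - phi) + phi * rho_s / rho_f
    phi2 = 1.0 + a * phi + b * phi ** 2
    phi3 = (1.0 - phi) + phi * rho_beta_s / rho_beta_f
    phi4 = (1.0 - phi) + phi * rho_cp_s / rho_cp_f
    return phi1, phi2, phi3, phi4
```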
In order to solve Eqs. (9) and (10), we consider the following three cases.
Case-I: Flow Inside a Channel with Stationary Walls
In the first case, the flow inside a channel of width d filled with nanofluids is considered. Both of the walls of the channel are kept stationary at y = 0 and y = d. The upper wall of the channel is assumed maintained at constant temperature T w and the lower wall has uniform temperature T 0. Thus, the boundary conditions are
$$ u\left(0,\;t\right)=0,\kern1.12em u\left(d,\;t\right)=0, $$
$$ T\left(0,\;t\right)={T}_0,\kern1.12em T\left(d,\;t\right)={T}_w. $$
In dimensionless form Eqs. (11) and (12) are
$$ T\left(0,\;t\right)=0;\kern1.62em T\left(1,\;t\right)=1;\kern1.62em t>0, $$
$$ u\left(0,\;t\right)=0;\kern1.62em u\left(1,\;t\right)=0,\kern0.62em t>0. $$
After simplification, Eqs. (9) and (10), take the forms
$$ {a}_0\frac{\partial u}{\partial t}=\lambda \varepsilon exp\left(i\omega t\right)+{\phi}_2\frac{\partial^2u}{\partial {y}^2}-{m}_0^2u+{a}_1T, $$
$$ {b_0}^2\frac{\partial T}{\partial t}=\frac{\partial^2T}{\partial {y}^2}+{b_1}^2T, $$
$$ {a}_0={\phi}_1Re,\kern0.62em {m}_0^2={M}^2+\frac{\phi_2}{K},\kern0.24em {a}_1={\phi}_3Gr,\kern0.24em {b_0}^2=\frac{Pe{\phi}_4}{\lambda_n},\kern0.62em {b_1}^2=\frac{N^2}{\lambda_n}. $$
Now to solve Eqs. (15) and (16) with boundary conditions (13) and (14), the perturbed solutions are taken of the forms:
$$ u\left(y,\;t\right)=\left[{u}_0(y)+\varepsilon exp\left(i\omega t\right)\;{u}_1(y)\right], $$
$$ T\left(y,\;t\right)=\left[{T}_0(y)+\varepsilon exp\left(i\omega t\right)\;{T}_1(y)\right], $$
for velocity and temperature respectively.
Using Eqs. (17) and (18) into Eqs. (15) and (16), we obtain the following system of ordinary differential equations
$$ \frac{d^2{u}_0(y)}{d{y}^2}-{m}_1^2{u}_0(y)=-{a}_2{T}_0(y), $$
$$ \frac{d^2{u}_1}{d{y}^2}-{m}_2^2{u}_1(y)=-\lambda, $$
$$ \frac{d^2{T}_0(y)}{d{y}^2}+{b_1}^2{T}_0(y)=0, $$
$$ \frac{d^2{T}_1(y)}{d{y}^2}+{m}_3^2{T}_1(y)=0, $$
$$ {m}_1=\sqrt{\frac{m_0^2}{\phi_2}},\;{a}_2=\frac{a_1}{\phi_2},\;{m}_2=\sqrt{\frac{m_0^2+i\omega {a}_0}{\phi_2}},\kern0.62em {m}_3=\sqrt{{b_1}^2-{b_0}^2i\omega }. $$
The associated boundary conditions (13) and (14) reduce to
$$ {u}_0(0)=0;\kern0.62em {u}_0(1)=0, $$
$$ {T}_0(0)=0;\kern0.62em {T}_0(1)=1, $$
$$ {T}_1(0)=0;\kern0.62em {T}_1(1)=0. $$
Solutions of Eqs. (21) and (22) under boundary conditions (25) and (26) yield
$$ {T}_0(y)=\frac{sin{b}_1y}{sin{b}_1}, $$
$$ {T}_1(y)=0. $$
Eq. (18) using Eqs. (27) and (28), gives
$$ T\left(y,\;t\right)=T(y)=\frac{sin{b}_1y}{sin{b}_1}. $$
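Equation (29) is straightforward to evaluate numerically; a small sketch follows, with the parameter values assumed for illustration only.

```python
from math import sin, sqrt

# Temperature profile of Eq. (29): T(y) = sin(b1*y) / sin(b1),
# with b1**2 = N**2 / lambda_n.  Parameter values are assumed.

def temperature_profile(y, N, lam_n):
    b1 = N / sqrt(lam_n)
    return sin(b1 * y) / sin(b1)

if __name__ == "__main__":
    N, lam_n = 1.5, 1.1      # radiation parameter and k_nf/k_f (assumed)
    for y in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(y, round(temperature_profile(y, N, lam_n), 4))
```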
Eqs. (19) and (20), using Eq. (27) under boundary conditions (23) and (24), give
$$ {u}_0(y)={c}_1 sinh\left({m}_1y\right)+{c}_2 cosh{m}_1y+\frac{a_2}{\left({b}_1^2+{m}_1^2\right)}\frac{sin{b}_1y}{sin{b}_1}, $$
$$ {u}_1(y)={c}_3 sinh{m}_2y+{c}_4 cosh{m}_2y+\frac{\lambda }{m_2^2{\phi}_2}, $$
with arbitrary constants
$$ {c}_1=-\frac{a_2}{sinh{m}_1\left({b}_1^2+{m}_1^2\right)},\;{c}_2=0,\kern0.62em {c}_3=\frac{\lambda }{m_2^2{\phi}_2 sinh{m}_2}\left( cosh{m}_2-1\right),\kern1.12em {c}_4=-\frac{\lambda }{m_2^2{\phi}_2}. $$
Finally, substituting Eqs. (30)-(32) into Eq. (17), we obtain:
$$ \begin{array}{c}u\left(y,\;t\right)=-\frac{a_2 sin h{m}_1y}{\left({b}_1^2+{m}_1^2\right) sin h{m}_1}+\frac{a_2 sin{b}_1y}{\left({b}_1^2+{m}_1^2\right) sin{b}_1}\\ {}+\varepsilon \exp \left(i\omega t\right)\;\left[\frac{\lambda \left( cosh{m}_2-1\right) sin h{m}_2y}{m_2^2{\phi}_2 sin h{m}_2}-\frac{\lambda }{m_2^2{\phi}_2}\left( cosh{m}_2y-1\right)\right].\end{array} $$
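For readers who wish to reproduce the velocity curves, the sketch below evaluates Eq. (33) directly, treating m2 as a complex number and taking the real part of the oscillatory contribution; the coefficient values in the example call are illustrative placeholders, not those used in the figures.

```python
import cmath
from math import sin, sinh, sqrt

# Direct evaluation of the Case-1 velocity field, Eq. (33).  m2 is
# complex, so the oscillatory part is computed with complex arithmetic
# and its real part is taken.  phi_1..phi_3 would normally come from
# the mixture rules and the Hamilton-Crosser model given earlier; the
# example values are illustrative only.

def velocity_case1(y, t, *, Re, M, K, Gr, N, lam_n,
                   phi1, phi2, phi3, lam=1.0, eps=0.01, omega=0.2):
    a0 = phi1 * Re
    m0_sq = M ** 2 + phi2 / K
    a2 = phi3 * Gr / phi2
    b1 = N / sqrt(lam_n)
    m1 = sqrt(m0_sq / phi2)
    m2 = cmath.sqrt((m0_sq + 1j * omega * a0) / phi2)

    steady = (-a2 * sinh(m1 * y) / ((b1 ** 2 + m1 ** 2) * sinh(m1))
              + a2 * sin(b1 * y) / ((b1 ** 2 + m1 ** 2) * sin(b1)))
    osc = (lam * (cmath.cosh(m2) - 1) * cmath.sinh(m2 * y)
           / (m2 ** 2 * phi2 * cmath.sinh(m2))
           - lam / (m2 ** 2 * phi2) * (cmath.cosh(m2 * y) - 1))
    return steady + (eps * cmath.exp(1j * omega * t) * osc).real

if __name__ == "__main__":
    pars = dict(Re=1.0, M=1.0, K=1.0, Gr=0.1, N=0.1, lam_n=1.1,
                phi1=1.10, phi2=1.58, phi3=1.02)
    for y in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(y, round(velocity_case1(y, 5.0, **pars), 6))
```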
Case-2: Flow Inside a Channel with Oscillating Upper Plate
Here the upper wall of the channel (at y = d ) is set into oscillatory motion while the lower wall (at y = 0), is held stationary. The first boundary condition is the same as in Case-1, whereas the second boundary condition in dimensionless form modifies to
$$ u\left(1,\;t\right)=H(t)\varepsilon \exp \left(i\omega t\right);\kern0.62em t>0, $$
where H(t) is the Heaviside step function.
By using the same procedure as in Case-1, the solution is obtained as
$$ \begin{array}{c}u\left(y,\;t\right)=-\frac{a_2 sin h{m}_1y}{\left({b}_1^2+{m}_1^2\right) sin h{m}_1}+\frac{a_2 sin{b}_1y}{\left({b}_1^2+{m}_1^2\right) sin{b}_1}\\ {}+\varepsilon exp\left(i\omega t\right)\;\left[\begin{array}{c}\hfill {\scriptscriptstyle \frac{sinh{m}_2y}{sinh{m}_2}}\left\{H(t)+{\scriptscriptstyle \frac{\lambda }{m_2^2{\phi}_2}}\left( cosh{m}_2-1\right)\right\}-{\scriptscriptstyle \frac{\lambda }{m_2^2{\phi}_2}} cosh{m}_2y+{\scriptscriptstyle \frac{\lambda }{m_2^2{\phi}_2}}\hfill \\ {}\hfill \hfill \end{array}\right].\end{array} $$
Case-3: Flow Inside a Channel with Oscillating Upper and Lower Plates
In this case both of the channel walls are set into oscillatory motions. The dimensionless form of the boundary conditions is
$$ u\left(0,\;t\right)=u\left(1,\;t\right)=H(t)\varepsilon exp\left(i\omega t\right);\kern0.62em t>0. $$
The resulting expression for velocity is obtained as:
$$ \begin{array}{c}u\left(y,\;t\right)=-\frac{a_2 sin h{m}_1y}{sinh{m}_1\left({b}_1^2+{m}_1^2\right)}+\frac{a_2 sin{b}_1y}{\left({b}_1^2+{m}_1^2\right) sin{b}_1}\\ {}+\varepsilon exp\left(i\omega t\right)\;\left[\begin{array}{c}\hfill {\scriptscriptstyle \frac{\left( cosh{m}_2-1\right) sin h{m}_2y}{sinh{m}_2}}\left({\scriptscriptstyle \frac{\lambda }{m_2^2{\phi}_2}}-H(t)\right)+\left(H(t)-{\scriptscriptstyle \frac{\lambda }{m_2^2{\phi}_2}}\right) cosh{m}_2y+{\scriptscriptstyle \frac{\lambda }{m_2^2{\phi}_2}}\hfill \\ {}\hfill \hfill \end{array}\right].\end{array} $$
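The solutions of the three cases share the same steady part and differ only in the oscillatory contribution. The sketch below writes the oscillatory parts of Eqs. (35) and (37) in the notation of the Case-1 snippet above, with H(t) = 1 for t > 0; it is a sketch only.

```python
import cmath

# Oscillatory parts of Eq. (35) (oscillating upper wall) and Eq. (37)
# (both walls oscillating).  The steady part is identical to Case 1,
# and H(t) is the Heaviside step, equal to 1 for t > 0.

def oscillatory_part_case2(y, m2, phi2, lam, H=1.0):
    c = lam / (m2 ** 2 * phi2)
    return (cmath.sinh(m2 * y) / cmath.sinh(m2)
            * (H + c * (cmath.cosh(m2) - 1))
            - c * cmath.cosh(m2 * y) + c)

def oscillatory_part_case3(y, m2, phi2, lam, H=1.0):
    c = lam / (m2 ** 2 * phi2)
    return ((cmath.cosh(m2) - 1) * cmath.sinh(m2 * y) / cmath.sinh(m2)
            * (c - H)
            + (H - c) * cmath.cosh(m2 * y) + c)

# The full velocity in either case is
#   u = steady_part(y) + (eps * cmath.exp(1j*omega*t) * osc).real,
# with osc taken from the appropriate function above.
```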
Nusselt Number and Skin-friction
The dimensionless expressions for the Nusselt number and skin friction, evaluated from Eqs. (29), (33), (35) and (37), are as follows:
$$ Nu=-\frac{b_1}{sin{b}_1}, $$
$$ {\tau}_1={\tau}_1(t)=\frac{a_2{m}_1}{\left({b}_1^2+{m}_1^2\right) sin h{m}_1}-\frac{b_1{a}_2}{\left({b}_1^2+{m}_1^2\right) sin{b}_1}+\varepsilon exp\left(i\omega t\right)\;\left[\frac{\lambda \left(1- cosh{m}_2\right)}{m_2{\phi}_2 sinh{m}_2}\right], $$
$$ \begin{array}{c}{\tau}_2={\tau}_2(t)=\frac{a_2{m}_1}{\left({b}_1^2+{m}_1^2\right) sin h{m}_1}-\frac{b_1{a}_2}{\left({b}_1^2+{m}_1^2\right) sin{b}_1}\\ {}+\varepsilon \exp \left(i\omega t\right)\;\left[\frac{m_2}{sinh{m}_2}\left\{H(t)+\frac{\lambda }{m_2^2{\phi}_2}\Big( cosh{m}_2-1\right\}\right],\;\end{array} $$
$$ \begin{array}{c}{\tau}_3={\tau}_3(t)=\frac{a_2{m}_1}{sinh{m}_1\left({b}_1^2+{m}_1^2\right)}-\frac{a_2{b}_1}{\left({b}_1^2+{m}_1^2\right) sin{b}_1}\\ {}+\varepsilon exp\left(i\omega t\right)\;\left[\frac{m_2\left( cosh{m}_2-1\right)}{sinh{m}_2}\left(H(t)-\frac{\lambda }{m_2^2{\phi}_2}\right)\right].\end{array} $$
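Equations (38) and (39) can likewise be evaluated directly; a short sketch follows, again with illustrative parameter values.

```python
import cmath
from math import sin, sinh, sqrt

# Nusselt number (Eq. 38) and Case-1 skin friction (Eq. 39).
# Parameter values in the example are illustrative placeholders.

def nusselt(N, lam_n):
    b1 = N / sqrt(lam_n)
    return -b1 / sin(b1)

def skin_friction_case1(t, *, m1, m2, b1, a2, phi2,
                        lam=1.0, eps=0.01, omega=0.2):
    steady = (a2 * m1 / ((b1 ** 2 + m1 ** 2) * sinh(m1))
              - b1 * a2 / ((b1 ** 2 + m1 ** 2) * sin(b1)))
    osc = lam * (1 - cmath.cosh(m2)) / (m2 * phi2 * cmath.sinh(m2))
    return steady + (eps * cmath.exp(1j * omega * t) * osc).real

if __name__ == "__main__":
    print("Nu =", nusselt(N=1.5, lam_n=1.1))
```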
Graphical Results and Discussion
Influence of the radiation effect on heat transfer in mixed convection MHD flow of nanofluids inside a channel filled with a saturated porous medium is studied. Based on the boundary conditions, three different cases are discussed. Four different shapes of Al2O3 nanoparticles, namely cylinder, platelet, brick and blade, are suspended in the conventional base fluids EG and water. The governing partial differential equations with the imposed boundary conditions are solved analytically using a perturbation technique. Expressions for velocity and temperature are obtained on the basis of the Hamilton and Crosser model [28]. The physics of the problem is studied using various graphs and discussed in detail for the embedded parameters. The constants a and b (called empirical shape factors) are chosen from Table 1, and numerical values of the sphericity Ψ are chosen from Table 2. It should be noted that the a and b coefficients vary significantly with particle shape. Four different shapes of nanoparticles (platelet, blade, cylinder and brick) of equal volume fraction are used in the numerical computation, as given in Table 3. Figures 1-25 are sketched for velocity profiles whereas Figs. 26-30 are plotted for temperature profiles. Figures 1-9 are plotted for the case when the flow is inside a channel with stationary walls, Figs. 10-17 are sketched for the flow situation inside a channel with an oscillating upper wall, and Figs. 18-25 are drawn when both walls of the channel execute the same oscillating motion.
Velocity profile of different shapes of Al 2 O 3 nanoparticles in EG-based nanofluid when Gr = 0.1, N = 0.1, φ = 0.04, λ = 1, K = 1, M = 1, t = 5, ω = 0.2.
Velocity profile of different shapes of Al 2 O 3 nanoparticles in water-based nanofluid when Gr = 0.1, N = 0.1, φ = 0.04, λ = 1, M = 1, K = 1, t = 5, ω = 0.2.
Velocity comparison graph of EG and water-based nanofluids when Gr = 0.1, N = 0.1, φ = 0.04, λ = 1, M = 1, K = 1, t = 5, ω = 0.2.
Velocity profile of different nanoparticles in EG-based nanofluid when Gr = 0.1, N = 0.1, φ = 0.04, λ = 1, M = 1, t = 5, K = 2, ω = 0.2.
Velocity profile of different ϕ of Al 2 O 3 in EG-based nanofluid when Gr = 0.1, N = 0.1, λ = 1, M = 1, K = 1, t = 5, ω = 0.2.
Velocity profile for different values of N in EG-based nanofluid when Gr = 0.1, φ = 0.04, λ = 1, M = 1, K = 1, t = 10, ω = 0.2.
Velocity profile for different values of M in EG-based nanofluid when Gr = 0.1, N = 0.1, φ = 0.04, λ = 1, K = 1, t = 10, ω = 0.2.
Velocity profile for different values of Gr in EG-based nanofluid when N = 0.1, φ = 0.04, λ = 1, M = 1, K = 1, t = 10, ω = 0.2.
Velocity profile of different value of K in EG-based nanofluid when Gr = 0.1, N = 0.1, φ = 0.04, λ = 0.01, M = 1, t = 5, ω = 0.2.
Velocity profile of different shapes of Al 2 O 3 nanoparticles in EG-based nanofluid when Gr = 0.1, N = 0.1, φ = 0.04, λ = 0.01, M = 1, K = 1, t = 5, ω = 0.2.
Velocity profile of different shapes of Al 2 O 3 nanoparticles in water-based nanofluid when Gr = 0.1, N = 0.1, φ = 0.04, λ = 0.01, M = 1, t = 5, K = 1, ω = 0.2.
Velocity comparison graph of EG and water-based nanofluids when Gr = 0.1, N = 0.1, φ = 0.04, λ = 0.01, M = 1, K = 1, t = 5, ω = 0.2.
Velocity profile of different ϕ of Al 2 O 3 in EG-based nanofluid when Gr = 0.1, N = 0.1, φ = 0.04, λ = 0.01, M = 1, K = 1, t = 5, ω = 0.2.
Velocity profile for different values of N in EG-based nanofluid when Gr = 1, φ = 0.04, λ = 0.01, M = 1, K = 0.2, t = 10, ω = 0.2.
Velocity profile for different values of M in EG-based nanofluid when Gr = 1, N = 0.1, φ = 0.04, λ = 0.001, K = 1, t = 10, ω = 0.2.
Velocity profile for different values of Gr in EG-based nanofluid when N = 0.1, φ = 0.04, λ = 0.01, M = 1, K = 0.2, t = 10, ω = 0.2.
Velocity profile for different values of K in EG-based nanofluid when N = 0.1, φ = 0.04, λ = 0.01, M = 1, t = 10, ω = 0.2.
Velocity profile of different shapes of Al 2 O 3 nanoparticles in EG-based nanofluid when Gr = 1, N = 0.1, φ = 0.04, λ = 0.01, M = 1, K = 1, t = 10, ω = 0.2.
Velocity profile of different shapes of Al 2 O 3 nanoparticles in water-based nanofluid when Gr = 1, N = 0.1, φ = 0.04, λ = 0.01, M = 1, K = 1, t = 10, ω = 0.2.
Velocity comparison graph of EG and water-based nanofluids when Gr = 1, N = 0.1, φ = 0.04, λ = 0.01, M = 1, K = 1, t = 10, ω = 0.2.
Velocity profile for different values of ϕ in EG-based nanofluid when Gr = 1, N = 0.1, λ = 0.01, M = 1, K = 1, t = 10, ω = 0.2.
Velocity profile for different values of N in EG-based nanofluid when Gr = 1, φ = 0.04, λ = 0.01, M = 1, K = 1, t = 10, ω = 0.2.
Velocity profile for different values of M in EG-based nanofluid when Gr = 1, N = 0.1, φ = 0.04, λ = 0.01, K = 1, t = 10, ω = 0.2.
Velocity profile for different values of Gr in EG-based nanofluid when N = 0.1, φ = 0.04, λ = 0.01, M = 1, K = 1, t = 10, ω = 0.2.
Temperature profile of different shapes of Al 2 O 3 nanoparticles in EG-based nanofluid when N = 1.5, t = 1.
Temperature profile of different shapes of Al 2 O 3 nanoparticles in water-based nanofluid when N = 1.5, t = 1.
Temperature comparison graph of EG and water-based nanofluid when N = 1.5, t = 1.
Temperature profile for different ϕ in EG-based nanofluid when N = 1.5, t = 1.
Temperature profile for different values of N in EG based nanofluid when t = 1.
The influence of different shapes of Al2O3 nanoparticles on the velocity of EG-based nanofluids is shown in Fig. 1. It can be seen from this figure that the blade-shaped Al2O3 nanoparticles give the highest velocity, followed by the brick, platelet and cylinder shapes. The influence of the shapes on the velocity of nanofluids is due to the strong dependence of viscosity on particle shape for volume fractions ϕ < 0.1. The present results show that elongated nanoparticles like cylinders and platelets have the highest viscosities compared to square-shaped nanoparticles like bricks and blades. The obtained results agree well with the experimental results reported by Timofeeva et al. [16]. A very small deviation is observed in the present study, where the cylinder shape has the highest viscosity, whereas in the experimental results reported by Timofeeva et al. [16] the platelet has the highest viscosity. Timofeeva et al. [16] compared their results with the Hamilton and Crosser model [28] and found them to be identical. In the present work, we have used the Hamilton and Crosser model [28] and found that our analytical results also match the experimental results of Timofeeva et al. [16].
Figure 2 is plotted to examine the effect of the different shapes of Al2O3 nanoparticles on the velocity of water-based nanofluids. It is clearly seen that the cylinder-shaped Al2O3 nanoparticles give the lowest velocity, followed by platelet, brick and blade. According to the Hamilton and Crosser model [28], suspensions of elongated and thin particles (high shape factor n) should have higher thermal conductivities, provided the particle-to-fluid conductivity ratio ks/kf is greater than 100. It is also mentioned by Colla et al. [32] that the thermal conductivity and viscosity increase with increasing particle concentration, due to which the velocity decreases. Therefore, the cylinder-shaped Al2O3 nanoparticles have the highest thermal conductivity, followed by platelet, brick and blade. Timofeeva et al. [16] explained that when the sphericity of nanoparticles is below 0.6, the negative contribution of heat flow resistance at the solid–liquid interface increases much faster than the particle shape contribution. Thus, the overall thermal conductivity of the suspension starts decreasing below a sphericity of 0.6, whereas it keeps increasing in the Hamilton and Crosser model [28] due to the sole contribution of the particle shape parameter n. Furthermore, the flow in this research is single phase, therefore the negative contribution of heat flow resistance is neglected. Timofeeva et al. [16] used the model \( {k}_{nf}/{k}_f=1+\left({c}_k^{shape}+{c}_k^{surface}\right)\;\phi \) for finding the thermal conductivity of nanoparticle suspensions. According to this model, the coefficients \( {c}_k^{shape} \) and \( {c}_k^{surface} \) reflect contributions to the effective thermal conductivity due to particle shape (positive effect) and due to surface resistance (negative effect), respectively. The particle shape coefficient \( {c}_k^{shape} \) was calculated by the Hamilton and Crosser equation.
A comparison of the EG-based nanofluid with the water-based nanofluid is made in Fig. 3. It is found that the velocity of the water-based nanofluid is greater than that of the EG-based nanofluid. The viscosity and thermal conductivity of the EG- and water-based nanofluids are also predicted by the Hamilton and Crosser model [28] for the same ϕ. This result shows that the EG-based nanofluid has greater viscosity and thermal conductivity than the water-based nanofluid.
The effect of different nanoparticles on the velocity of nanofluids is presented in Fig. 4. From this figure, it is noted that the cylinder-shaped Al2O3 nanofluid has the highest velocity, followed by the Fe3O4, TiO2, Cu and silver nanofluids. This shows that the cylinder-shaped silver nanofluid has the highest viscosity and thermal conductivity compared to the Cu, TiO2, iron oxide Fe3O4 and Al2O3 nanofluids. One can see from this result that the cylinder-shaped silver nanofluid is a better quality fluid than the magnetite cylinder-shaped Fe3O4 nanofluid. This result is supported by the Hamilton and Crosser model [28]: the viscosity and thermal conductivity of a nanofluid are also affected by the nanoparticle volume fraction φ, i.e. the viscosity and thermal conductivity increase with increasing φ, and therefore the velocity decreases with increasing φ. This figure further shows that the viscosity of the Al2O3 nanofluid at φ less than 0.1 increases nonlinearly with nanoparticle concentration. This result is identical to the experimental result reported by Colla et al. [32].
The effect of different φ of cylinder-shaped Al2O3 nanoparticles on the velocity of the cylinder-shaped Al2O3 nanofluid is shown in Fig. 5. It is clear from this figure that the velocity of the nanofluid decreases as φ increases. This is because the fluid becomes more viscous with increasing φ, which leads to a decrease in the velocity of the nanofluid. The thermal conductivity of the nanofluid also increases with increasing φ. Colla et al. [32] also reported this behaviour experimentally. Results for different values of the radiation parameter N are presented in Fig. 6. It is found that the velocity increases with increasing N. This result agrees well with the result obtained by Makinde and Mhone [31]. Physically, this means that increasing N increases the amount of heat energy transferred to the fluid.
The graphical results of velocity for different values of the magnetic parameter M are shown in Fig. 7. With increasing M, the velocity of the nanofluid decreases. An increasing transverse magnetic field applied to the electrically conducting fluid gives rise to a resistive force called the Lorentz force, which is similar to a drag force; increasing the value of M increases this drag force, which tends to slow down the fluid. The drag force is maximum near the channel walls and minimum in the middle of the channel. Therefore, the velocity is maximum in the middle of the channel and minimum at the boundaries.
The velocity profile for different values of the Grashof number Gr is plotted in Fig. 8. It is found that an increase in Gr leads to an increase in the velocity. Increasing Gr increases the temperature gradient, which leads to an increase in the buoyancy force; therefore, the velocity increase with Gr is due to the enhancement of the buoyancy force. Figure 9 is prepared for the permeability parameter K. It is found that the velocity increases with increasing K due to the smaller friction force. More exactly, increasing K reduces the fluid friction with the channel walls and the velocity is enhanced.
In the second case, Figs. 10, 11, 12, 13, 14, 15, 16, and 17 are plotted for the flow situation in which the upper wall is oscillating and the lower wall is at rest. For the last case, when both boundaries are oscillating, Figs. 18, 19, 20, 21, 22, 23, 24, and 25 are plotted. From all these graphs (Figs. 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, and 25), we found that they are qualitatively similar but quantitatively different from Figs. 1, 2, 3, 4, 5, 6, 7, 8, and 9.
The effect of different particle shapes on the temperature of the nanofluid is shown in Figs. 26 and 27. The temperature in the present work differs for different shapes because of the different viscosities and thermal conductivities of the different shapes of nanoparticles. It should be noted that the thermal conductivity increases with the increase of temperature, whereas the viscosity decreases with the increase of temperature. It is clear that elongated nanoparticle shapes such as cylinder and platelet have the minimum temperature because of their greater viscosity and thermal conductivity, whereas blade has the highest temperature due to its lowest viscosity and thermal conductivity. The brick shape lies in the lowest temperature range, although it has low viscosity; this is due to the shear-thinning behavior with temperature. Further, the cylinder shape also shows shear-thinning behavior, but the effect is less prominent here. All the other shapes, such as platelet and blade, show Newtonian behavior, with viscosity independent of shear rate. This shear-thinning behavior was also studied experimentally by Timofeeva et al. [16].
Figure 28 shows a comparison of water- and EG-based nanofluids. It is found that both are temperature dependent and that the variation occurs at the same rate for both fluids. This means that the effect of temperature on the thermal conductivity and viscosity of the different base nanofluids occurs at the same rate. Figure 29 is plotted in order to see the effect of φ on the temperature of the EG-based nanofluid. It is observed that the temperature of the fluid increases with the increase of φ due to the shear-thinning behavior; the viscosity of cylinder-shaped nanoparticles shows shear-thinning behavior at the highest concentration. This was also shown experimentally by Timofeeva et al. [16]. The graphical results of temperature for different values of the radiation parameter N are shown in Fig. 30. It is clear from this figure that the temperature of the cylinder-shaped nanoparticles in the EG-based nanofluid becomes more sinusoidal with the increase of N. Increasing N corresponds to a cooler or denser fluid, or a decrease in the energy transported to the fluid. The cylinder-shaped nanofluid has a temperature-dependent viscosity due to its shear-thinning behavior.
In this paper, the effects of radiative heat transfer in mixed convection MHD flow of different shapes of Al2O3 nanoparticles in ethylene glycol- and water-based nanofluids in a channel filled with a saturated porous medium are analyzed. The channel, with non-uniform wall temperatures, is taken in the vertical direction under the influence of a transverse magnetic field. The governing partial differential equations are solved by a perturbation technique for three different flow situations, and analytic solutions are obtained. The influence of the different shapes of nanoparticles, namely platelet, blade, cylinder, and brick of equal volume, on the velocity and temperature of the nanofluids is determined. Elongated particles such as cylinder and platelet result in higher viscosity at the same volume fraction due to the structural limitation of rotational and translational Brownian motion. The shear-thinning behavior of the cylinder- and blade-shaped nanoparticles is also studied in this work. The viscosities and thermal conductivities of the nanofluids are shown to depend on the particle shape, volume fraction, and base fluid. The concluding remarks are as follows:
The velocity of the nanofluid decreases with the increase of the volume fraction of nanoparticles due to the increase of viscosity and thermal conductivity.
The velocity of the EG-based nanofluid is found to be lower than that of the water-based nanofluid because the viscosity of the base fluid affects the Brownian motion of the nanoparticles.
Elongated particles such as cylinder and platelet shapes have lower velocities compared with blade and brick shapes due to their higher viscosity.
The velocity of the nanofluid decreases with the increase of the magnetic parameter due to the increase of the drag force, which tends to slow down the motion of the fluid.
The velocity of the nanofluid increases with the increase of the thermal Grashof number: as the thermal Grashof number increases, the temperature gradient increases, which enhances the buoyancy force.
Choi SUS (1995) Enhancing thermal conductivity of fluids with nanoparticle, in: D.A. Siginer, H.P. Wang (Eds.), Developments and Applications of Non-Newtonian Flows. ASME FED 66:99–105
Vajjha RS, Das DK (2009) Experimental determination of thermal conductivity of three nanofluids and development of new correlations. Int J Heat Mass Transf 52:4675–4682
Naik MT, Sundar LS (2011) Investigation into thermophysical properties of glycol based CuO nanofluid for heat transfer applications. World Acad Science Engineer Technology 59:440–446
Mansur S, Ishak A, Pop I (2015) The magnetohydrodynamic stagnation point flow of a nanofluid over a stretching/shrinking sheet with suction. PLoS One 10(3):e0117733
Ellahi R, Hassan M, Zeeshan A (2015) Shape effects of nanosize particles in Cu-H2O nanofluid on entropy generation. Int J Heat Mass Transf 81:449–456
Sheikholeslami M, Ganji DD, Javed MY, Ellahi R (2015) Effect of thermal radiation on magnetohydrodynamics nanofluid flow and heat transfer by means of two phase model. J Magn Magn Mater 374:36–43
Rashidi S, Dehghan M, Ellahi R, Riaz M, Jamal-Abad MT (2015) Study of stream wise transverse magnetic fluid flow with heat transfer around an obstacle embedded in a porous medium. J Magn Magn Mater 378:128–137
Noreen SA, Raza M, Ellahi R (2014) Influence of heat generation and heat flux in peristalsis with interaction of nanoparticles. The European Physical Journal Plus 129:185
Ellahi R, Hassan M, Soleimani S (2014) A study of natural convection heat transfer in a nanofluid filled enclosure with elliptic inner cylinder. International Journal for Numerical Methods for Heat and Fluid Flow 24(8):1906–1927
Sheikholeslami M, Ellahi R, Ashorynejad HR, Domairry G, Hayat T (2014) Effects of heat transfer in flow of nanofluids over a permeable stretching wall in a porous medium. Computational and Theoretical Nanoscience 11(2):486–496
Sheikholeslami M, Bandpy MG, Ellahi R, Zeeshan A (2014) Simulation of CuO-water nanofluid flow and convective heat transfer considering Lorentz forces. J Magn Magn Mater 369:69–80
Noreen SA, Rahman SU, Ellahi R, Nadeem S (2014) Nano fluid flow in tapering stenosed arteries with permeable walls. Int J Therm Sci 85:54–61
Ellahi R (2013) The effects of MHD and temperature dependent viscosity on the flow of non-Newtonian nanofluid in a pipe: Analytical solutions. Applied Mathematical Modeling 37:1451–1467
Ellahi R, Raza M, Vafai K (2012) Series solutions of non-Newtonian nanofluids with Reynolds' model and Vogel's model by means of the homotopy analysis method. Math Comput Model 55:1876–1891
Wang XB, Zhou PL, Peng FX (2003) A fractal model for predicting the effective thermal conductivity of liquid with suspension of nanoparticles. International Journal of Heat and Mass Transfer 46(14):2665–2672
Timofeeva EV, Jules RL, Dileep S (2009) Particle shape effect on thermophysical properties of alumina nanofluids. J Appl Phys 106:014304
Loganathan P, Chand PN, Ganesan P (2013) Radiation effects on an unsteady natural convection flow of a nanofluids past an infinite vertical plate. NANO 08:1350001. doi:10.1142/S179329201350001X [10 pages]
Asma K, Khan I, Sharidan S (2015) Exact solutions for free convection flow of nanofluids with ramped wall temperature. The European Physical Journal Plus 130:57–71
Sebdani S, Mahmoodi M, Hashemi S (2012) Effect of nanofluid variable properties on mixed convection in a square cavity. Int J Therm Sci 52:112–126
Fan T, XU H, Pop I (2013) Mixed convection heat transfer in horizontal channel filled with nanofluids. International Journal of springer plus 34:339–350
Tiwari RK, Das MK (2007) Heat transfer augmentation in a two-sided lid-driven differentially heated square cavity utilizing nanofluids. Int J Heat Mass Transf 50:9–10
Sheikhzadeh GA, Hajialigol N, Qomi ME, Fattahi A (2012) Laminar mixed convection of Cu-water nano-fluid in two sided lid-driven enclosures. Journal of Nanostructures 1:44–53
Nadeem S, Saleem S (2014) Unsteady mixed convection flow of nanofluid on a rotating cone with magnetic field. Apply Nanoscience 4:405–414
Al-Salem K, Oztop HF, Pop I, Varol Y (2012) Effect of moving lid direction on MHD mixed convection in a linearly heated cavity. Int J Heat Mass Transf 55:1103–1112
Prasad KV, Vajravelu K, Datti PS (2010) The effect of variable fluid properties on the MHD flow and heat transfer over a non-linear stretching sheet. International Journal of Thermal Science 49:603–610
Rami JY, Fawzi A, Abu-Al-Rub F (2011) Darcy-Forchheimer mixed convection heat and mass transfer in fluid saturated porous media. International Journal of Numerical Methods for Heat & Fluid Flow 11:600–618
Hayat T, Abbas Z, Pop I, Asghar S (2010) Effect of radiation and magnetic field on the mixed convection stagnation-point flow over a vertical stretching sheet in a porous medium. Int J Heat Mass Transf 53:466–474
Hamilton RL, Crosser OK (1962) Thermal conductivity of heterogeneous two-component systems. Journal of Industrial & Engineering Chemistry Fundamentals 1:187–191
Turkyilmazoglu M (2014) Unsteady convection flow of some nanofluids past a moving vertical flat plate with heat transfer. J Heat Transf 136:031704–031711
Zeeshan A, Ellahi R, Hassan M (2014) Magnetohydrodynamic flow of water/ethylene glycol based nanofluids with natural convection through porous medium. European Physical Journal Plus 129:261
Makinde OD, Mhone PY (2005) Heat transfer to MHD oscillatory flow in a channel filled with porous medium. Romanian Journal of Physics 50:931–938
Colla L, Fedele L, Scattolini M, Bobbo S (2012) Water-based Fe2O3 nanofluid characterization: thermal conductivity and viscosity measurements and correlation. Advances in Mechanical Engineering, Article ID 674947, 8 pages
Noreen AS, Raza M, Ellahi R (2015) Influence of induced magnetic field and heat flux with the suspension of carbon nanotubes for the peristaltic flow in a permeable channel. J Magn Magn Mater 381:405–415
Ellahi R, Aziz S, Zeeshan A (2013) Non Newtonian nanofluids flow through a porous medium between two coaxial cylinders with heat transfer and variable viscosity. Journal of Porous Media 16(3):205–216
Noreen AS, Raza M, Ellahi R (2014) Interaction of nano particles for the peristaltic flow in an asymmetric channel with the induced magnetic field. The European Physical Journal - Plus 129:155–167
Sheikholeslami M, Ellahi R (2015) Three dimensional mesoscopic simulation of magnetic field effect on natural convection of nanofluid. Int J Heat Mass Transf 89:799–808
Ellahi R, Hassan M, Zeeshan A (2015) Study on magnetohydrodynamic nanofluid by means of single and multi-walled carbon nanotubes suspended in a salt water solution. IEEE Trans Nanotechnol 14(4):726–734
Sheikholeslami M, Ellahi R (2015) Simulation of ferrofluid flow for magnetic drug targeting using Lattice Boltzmann method. Journal of Zeitschrift Fur Naturforschung A, Verlag der Zeitschrift für Naturforschung 70:115–124
The authors are grateful to the reviewers for their excellent comments to improve the quality of the present article. The authors would also like to acknowledge the Research Management Center-UTM for the financial support through vote numbers 4 F109 and 03 J62 for this research.
Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi, 81310, UTM, Skudai, Malaysia
Gul Aaiza & Sharidan Shafie
Basic Engineering Sciences Department, College of Engineering, Majmaah University, Majmaah, 11952, Saudi Arabia
Ilyas Khan
Correspondence to Sharidan Shafie.
GA and IK modelled and solved the problem. SS participated in the sequence alignment and drafted the manuscript. All authors read and approved the final manuscript.
Aaiza, G., Khan, I. & Shafie, S. Energy Transfer in Mixed Convection MHD Flow of Nanofluid Containing Different Shapes of Nanoparticles in a Channel Filled with Saturated Porous Medium. Nanoscale Res Lett 10, 490 (2015). https://doi.org/10.1186/s11671-015-1144-4
Mixed convection
Nanofluid
Cylindrical shaped nanoparticles
MHD flow
Porous medium
Analytical solutions
CMOS true-time delay IC for wideband phased-array antenna
Kim, Jinhyun;Park, Jeongsoo;Kim, Jeong-Geun 693
https://doi.org/10.4218/etrij.2018-0113
This paper presents a true-time delay (TTD) using a commercial 0.13-μm CMOS process for wideband phased-array antennas without the beam squint. The proposed TTD consists of four wideband distributed gain amplifiers (WDGAs), a 7-bit TTD circuit, and a 6-bit digital step attenuator (DSA) circuit. The T-type attenuator with a low-pass filter and the WDGAs are implemented for a low insertion-loss error between the reference and time-delay states and for a flat gain performance. The overall gain and return losses are >7 dB and >10 dB, respectively, at 2 GHz-18 GHz. The maximum time delay of 198 ps with a 1.56-ps step and the maximum attenuation of 31.5 dB with a 0.5-dB step are achieved at 2 GHz-18 GHz. The RMS time-delay and amplitude errors are <3 ps and <1 dB, respectively, at 2 GHz-18 GHz. An output P1 dB of <-0.5 dBm is achieved at 2 GHz-18 GHz. The chip size is 3.3 × 1.6 mm², including pads, and the DC power consumption is 370 mW for a 3.3-V supply voltage.
New low-complexity segmentation scheme for the partial transmit sequence technique for reducing the high PAPR value in OFDM systems
Jawhar, Yasir Amer;Ramli, Khairun Nidzam;Taher, Montadar Abas;Shah, Nor Shahida Mohd;Audah, Lukman;Ahmed, Mustafa Sami;Abbas, Thamer 699
Orthogonal frequency division multiplexing (OFDM) has been the overwhelmingly prevalent choice for high-data-rate systems due to its superior advantages compared with other modulation techniques. In contrast, a high peak-to-average-power ratio (PAPR) is considered the fundamental obstacle in OFDM systems since it drives the system to suffer from in-band distortion and out-of-band radiation. The partial transmit sequence (PTS) technique is viewed as one of several strategies that have been suggested to diminish the high PAPR trend. The PTS relies upon dividing an input data sequence into a number of subblocks. Hence, three common types of the subblock segmentation methods have been adopted - interleaving (IL-PTS), adjacent (Ad-PTS), and pseudorandom (PR-PTS). In this study, a new type of subblock division scheme is proposed to improve the PAPR reduction capacity with a low computational complexity. The results indicate that the proposed scheme can enhance the PAPR reduction performance better than the IL-PTS and Ad-PTS schemes. Additionally, the computational complexity of the proposed scheme is lower than that of the PR-PTS and Ad-PTS schemes.
Joint optimization of beamforming and power allocation for DAJ-based untrusted relay networks
Yao, Rugui;Lu, Yanan;Mekkawy, Tamer;Xu, Fei;Zuo, Xiaoya 714
Destination-assisted jamming (DAJ) is usually used to protect confidential information against untrusted relays and eavesdroppers in wireless networks. In this paper, a DAJ-based untrusted relay network with multiple antennas installed is presented. To increase the secrecy, a joint optimization of beamforming and power allocation at the source and destination is studied. A matched-filter precoder is introduced to maximize the cooperative jamming signal by directing cooperative jamming signals toward untrusted relays. Then, based on generalized singular-value decomposition (GSVD), a novel transmitted precoder for confidential signals is devised to align the signal into the subspace corresponding to the confidential transmission channel. To decouple the precoder design and optimal power allocation, an iterative algorithm is proposed to jointly optimize the above parameters. Numerical results validate the effectiveness of the proposed scheme. Compared with other schemes, the proposed scheme shows significant improvement in terms of security performance.
Discrete bacterial foraging optimization for resource allocation in macrocell-femtocell networks
Lalin, Heng;Mustika, I Wayan;Setiawan, Noor Akhmad 726
Femtocells are good examples of the ultimate networking technology, offering enhanced indoor coverage and higher data rate. However, the dense deployment of femto base stations (FBSs) and the exploitation of subcarrier reuse between macrocell base stations and FBSs result in significant co-tier and cross-tier interference, thus degrading system performance. Therefore, appropriate resource allocations are required to mitigate the interference. This paper proposes a discrete bacterial foraging optimization (DBFO) algorithm to find the optimal resource allocation in two-tier networks. The simulation results showed that DBFO outperforms the random-resource allocation and discrete particle swarm optimization (DPSO) considering the small number of steps taken by particles and bacteria.
Modeling and cost analysis of zone-based registration in mobile cellular networks
Jung, Jihee;Baek, Jang Hyun 736
This study considers zone-based registration (ZBR), which is adopted by most mobile cellular networks. In ZBR, a user equipment (UE) registers its location area (or zone) in a network database (DB) whenever it enters a new zone. Even though ZBR is implemented in most networks for a UE to keep only one zone (1ZR), it is also possible for a UE to keep multiple zones. Therefore, a ZBR with two zones (2ZR) is investigated, and some mathematical models for 2ZR are presented. With respect to ZBR with three zones (3ZR), several studies have been reported, but these employed computer simulations owing to the complexity of the cases, and there have been no reports on a mathematical 3ZR model to analyze its performance. In this study, we propose a new mathematical model for 3ZR for the first time, and analyze the performance of 3ZR using this model. The numerical results for various scenarios show that, as the UE frequently enters zones, the proposed 3ZR model outperforms 1ZR and 2ZR. Our results help determine the optimal number of zones that a UE keeps, and minimize the signaling cost for radio channels in mobile cellular networks.
Exploring the dynamic knowledge structure of studies on the Internet of things: Keyword analysis
Yoon, Young Seog;Zo, Hangjung;Choi, Munkee;Lee, Donghyun;Lee, Hyun-woo 745
A wide range of studies in various disciplines has focused on the Internet of Things (IoT) and cyber-physical systems (CPS). However, it is necessary to summarize the current status and to establish future directions because each study has its own individual goals independent of the completion of all IoT applications. The absence of a comprehensive understanding of IoT and CPS has disrupted an efficient resource allocation. To assess changes in the knowledge structure and emerging technologies, this study explores the dynamic research trends in IoT by analyzing bibliographic data. We retrieved 54,237 keywords in 12,600 IoT studies from the Scopus database, and conducted keyword frequency, co-occurrence, and growth-rate analyses. The analysis results reveal how IoT technologies have been developed and how they are connected to each other. We also show that such technologies have diverged and converged simultaneously, and that the emerging keywords of trust, smart home, cloud, authentication, context-aware, and big data have been extracted. We also unveil that the CPS is directly involved in network, security, management, cloud, big data, system, industry, architecture, and the Internet.
Low-power heterogeneous uncore architecture for future 3D chip-multiprocessors
Dorostkar, Aniseh;Asad, Arghavan;Fathy, Mahmood;Jahed-Motlagh, Mohammad Reza;Mohammadi, Farah 759
Uncore components such as on-chip memory systems and on-chip interconnects consume a large amount of energy in emerging embedded applications. Few studies have focused on next-generation analytical models for future chip-multiprocessors (CMPs) that simultaneously consider the impacts of the power consumption of core and uncore components. In this paper, we propose a convex-optimization approach to design heterogeneous uncore architectures for embedded CMPs. Our convex approach optimizes the number and placement of memory banks with different technologies on the memory layer. In parallel with hybrid memory architecting, optimizing the number and placement of through silicon vias as a viable solution in building three-dimensional (3D) CMPs is another important target of the proposed approach. Experimental results show that the proposed method outperforms 3D CMP designs with hybrid and traditional memory architectures in terms of both energy delay products (EDPs) and performance parameters. The proposed method improves the EDPs by an average of about 43% compared with SRAM design. In addition, it improves the throughput by about 7% compared with dynamic RAM (DRAM) design.
Enhanced technique for Arabic handwriting recognition using deep belief network and a morphological algorithm for solving ligature segmentation
Essa, Nada;El-Daydamony, Eman;Mohamed, Ahmed Atwan 774
Arabic handwriting segmentation and recognition is an area of research that has not yet been fully understood. Dealing with Arabic ligature segmentation, where the Arabic characters are connected and unconstrained naturally, is one of the fundamental problems when dealing with the Arabic script. Arabic character-recognition techniques consider ligatures as new classes in addition to the classes of the Arabic characters. This paper introduces an enhanced technique for Arabic handwriting recognition using the deep belief network (DBN) and a new morphological algorithm for ligature segmentation. There are two main stages for the implementation of this technique. The first stage involves an enhanced technique of the Sari segmentation algorithm, where a new ligature segmentation algorithm is developed. The second stage involves the Arabic character recognition using DBNs and support vector machines (SVMs). The two stages are tested on the IFN/ENIT and HACDB databases, and the results obtained proved the effectiveness of the proposed algorithm compared with other existing systems.
Fast 3D reconstruction method based on UAV photography
Wang, Jiang-An;Ma, Huang-Te;Wang, Chun-Mei;He, Yong-Jie 788
3D reconstruction of urban architecture, land, and roads is an important part of building a "digital city." Unmanned aerial vehicles (UAVs) are gradually replacing other platforms, such as satellites and aircraft, in geographical image collection; the reason for this is not only lower cost and higher efficiency, but also higher data accuracy and a larger amount of obtained information. Recent 3D reconstruction algorithms have a high degree of automation, but their computation time is long and the reconstruction models may have many voids. This paper decomposes the object into multiple regional parallel reconstructions using the clustering principle, to reduce the computation time and improve the model quality. It is proposed to detect the planar area under low resolution, and then reduce the number of point clouds in the complex area.
Novel graphene-based optical MEMS accelerometer dependent on intensity modulation
Ahmadian, Mehdi;Jafari, Kian;Sharifi, Mohammad Javad 794
This paper proposes a novel graphene-based optical microelectromechanical systems (MEMS) accelerometer based on intensity modulation and the optical properties of graphene. The designed sensing system includes a multilayer graphene finger, a laser diode (LD) light source, a photodiode, and integrated optical waveguides. The proposed accelerometer provides several advantages, such as negligible cross-axis sensitivity, appropriate linearity behavior in the operation range, a relatively broad measurement range, and a significantly wider bandwidth when compared with other important contributions in the literature. Furthermore, the functional characteristics of the proposed device are designed analytically, and are then confirmed using numerical methods. Based on the simulation results, the functional characteristics are as follows: a mechanical sensitivity of 1,019 nm/g, an optical sensitivity of 145.7 %/g, a resonance frequency of 15,553 Hz, a bandwidth of 7 kHz, and a measurement range of ±10 g. Owing to the obtained functional characteristics, the proposed device is suitable for several applications in which high sensitivity and wide bandwidth are required simultaneously.
Sensor array optimization techniques for exhaled breath analysis to discriminate diabetics using an electronic nose
Jeon, Jin-Young;Choi, Jang-Sik;Yu, Joon-Boo;Lee, Hae-Ryong;Jang, Byoung Kuk;Byun, Hyung-Gi 802
Disease discrimination using an electronic nose is achieved by measuring the presence of a specific gas contained in the exhaled breath of patients. Many studies have reported the presence of acetone in the breath of diabetic patients. These studies suggest that acetone can be used as a biomarker of diabetes, enabling diagnoses to be made by measuring acetone levels in exhaled breath. In this study, we perform a chemical sensor array optimization to improve the performance of an electronic nose system using Wilks' lambda, sensor selection based on a principal component (B4), and a stepwise elimination (SE) technique to detect the presence of acetone gas in human breath. By applying five different temperatures to four sensors fabricated from different synthetic materials, a total of 20 sensing combinations are created, and three sensing combinations are selected for the sensor array using optimization techniques. The measurements and analyses of the exhaled breath using the electronic nose system together with the optimized sensor array show that diabetic patients and control groups can be easily differentiated. The results are confirmed using principal component analysis (PCA).
How do multilevel privacy controls affect utility-privacy trade-offs when used in mobile applications?
Kim, Seung-Hyun;Ko, In-Young 813
In existing mobile computing environments, users need to choose between their privacy and the services that they can receive from an application. However, existing mobile platforms do not allow users to perform such trade-offs in a fine-grained manner. In this study, we investigate whether users can effectively make utility-privacy trade-offs when they are provided with a multilevel privacy control method that allows them to recognize the different quality of service that they will receive from an application by limiting the disclosure of their private information in multiple levels. We designed a research model to observe users' utility-privacy trade-offs in accordance with the privacy control methods and other factors such as the trustworthiness of an application, quality level of private information, and users' privacy preferences. We conducted a user survey with 516 participants and found that, compared with the existing binary privacy controls, both the service utility and the privacy protection levels were significantly increased when the users used the multilevel privacy control method. | CommonCrawl |
Complexity analysis and performance of double hashing sort algorithm
Hazem M. Bahig, ORCID: orcid.org/0000-0001-9448-6168
Journal of the Egyptian Mathematical Society volume 27, Article number: 3 (2019) Cite this article
Sorting an array of n elements represents one of the leading problems in different fields of computer science such as databases, graphs, computational geometry, and bioinformatics. A large number of sorting algorithms have been proposed based on different strategies. Recently, a sequential algorithm, called double hashing sort (DHS) algorithm, has been shown to exceed the quick sort algorithm in performance by 10–25%. In this paper, we study this technique from the standpoints of complexity analysis and the algorithm's practical performance. We propose a new complexity analysis for the DHS algorithm based on the relation between the size of the input and the domain of the input elements. Our results reveal that the previous complexity analysis was not accurate. We also show experimentally that the counting sort algorithm performs significantly better than the DHS algorithm. Our experimental studies are based on six benchmarks; the percentage of improvement was roughly 46% on the average for all cases studied.
Sorting is a fundamental and serious problem in different fields of computer science such as databases [1], computational geometry [2, 3], graphs [4], and bioinformatics [5]. For example, to determine a minimum spanning tree for a weighted connected undirect graph, Kruskal designed a greedy algorithm and started the solution by sorting the edges according to weight in increasing order [6].
Additionally, there are several reasons why the sorting problem is important from an algorithmic standpoint. The first is that solving a problem on data sorted according to certain criteria should be more efficient, in running time, than solving it on unsorted data. For example, searching for an element in an unsorted array requires O(n) time, whereas searching requires O(log n) time when the array is sorted [6]. The second reason is that the sorting problem has been solved by many algorithms using different strategies such as brute force, divide-and-conquer, randomization, distribution, and advanced data structures [6, 7]. Insertion sort, bubble sort, and selection sort are examples of sorting algorithms using brute force; merge sort and quick sort algorithms are examples of the divide-and-conquer strategy. Radix sort and flash sort algorithms are examples of sorting using the distribution technique; heap sort is an example of using an advanced data structure. Additionally, randomization techniques have been used in many sorting algorithms, such as the randomized shell sort algorithm [8]. The third reason is that the lower bound for the sorting problem, Ω(n log n), was determined based on the comparison model. Based on this lower bound, sorting algorithms are classified into two groups: (1) optimal algorithms such as merge sort and heap sort and (2) non-optimal algorithms such as insertion sort and quick sort. The fourth reason is that when the input data of the sorting problem are taken from the domain of integers [1, m], different strategies have been suggested to reduce the sorting time from O(n log n) to linear, O(n). These strategies are not based on the comparison model; examples of this kind of sorting are counting sort and bucket sort [6].
The sorting problem has been studied thoroughly, and many research papers have focused on designing fast and optimal algorithms [9,10,11,12,13,14,15,16]. Also, some studies have focused on implementing these algorithms to obtain an efficient sorting algorithm on different platforms [15,16,17]. Additionally, several measurements have been suggested to compare and evaluate these sorting algorithms according to the following criteria [6, 7, 18]: (1) Running time, which is equal to the total number of operations done by the algorithm and is computed for three cases: (i) best case, (ii) worst case, and (iii) average case; (2) the number of comparisons performed by the algorithm; (3) data movements, which are equal to the total number of swaps or shifts of elements in the array; (4) in place, which means that the extra memory required by the algorithm is constant; and (5) stable, which means that the order of equal data elements in the output array is similar in which they appear in the input data.
Recently, a new sorting algorithm has been designed and called the double hashing sort (DHS) algorithm [19]. This algorithm is based on using the hashing strategy in two steps; the hash method used in the first step is different than in the second step. Based on these functions, the elements of the input array are divided into two groups. The first group is already sorted, and the second group will be sorted using a quick sort algorithm. The authors in [19] studied the complexity of the algorithm and calculated three cases of running time and storage of the algorithm. In addition, the algorithm was implemented and compared with a quick sort algorithm experimentally. The results reveal that the DHS algorithm is faster than the quick sort algorithm.
In this paper, we study the DHS algorithm from three viewpoints. The first involves reevaluating the complexity analysis of the DHS algorithm based on the relation between the size of the input array and the range of the input elements; we then prove that the time complexity differs from that calculated in [19] for most cases. The second involves proving that a previous algorithm, the counting sort algorithm, has a time complexity less than or equal to that of the DHS algorithm. The third involves showing that the DHS algorithm performs worse than the counting sort algorithm from a practical point of view.
The results of these studies are as follows: (1) the previous complexity analysis of the DHS algorithm was not accurate; (2) we calculated the corrected analysis of the DHS algorithm; (3) we proved that the counting sort algorithm is faster than the DHS algorithm from theoretical and practical points of view. Additionally, the percentage of improvement was roughly 46% on the average for all cases studied.
The remainder of this work is organized into four sections. In the "Comments on DHS algorithm" section, we discuss briefly the DHS algorithm, its analysis, and then, we provide some commentary about the analysis of DHS algorithm that was introduced by [19]. In the "Complexity analysis of DHS algorithm" section, we analyze the DHS algorithm using different methods. Also, we show that a previous algorithm exhibits a time complexity less than that of the DHS algorithm in most cases. We prove experimentally that the DHS algorithm is not as fast as the previous algorithm in the "Performance evaluation" section. Finally, our conclusions are presented in the "Conclusions" section.
Comments on DHS algorithm
The aim of this section is to give some comments about the DHS algorithm. We first briefly review its main stages and complexity analysis, and then give some comments about that analysis.
DHS algorithm
The DHS algorithm is based on using two hashing functions to classify the input elements into two main groups. The first group contains all elements that have repetitions greater than one; the second group contains all elements in the input array that have no repetition. The first hashing function is used to compute the number of elements in each block and to determine the boundaries of each block. The second hashing function is used to give a virtual index to each element. Based on the values of the indices, the algorithm divides the input into the two groups described previously and sorts the second group using a quick sort algorithm. The algorithm consists of three main stages [19]. The first stage involves determining the number of elements belonging to each block, assuming that the number of blocks is nb. The block number of each element ai can be determined using ⌈ai/sb⌉, where sb is the size of the block and equals ⌈(Max(A) − Min(A) + 1)/nb⌉. The second stage determines a virtual index for each element that belongs to the block bi, ∀ 1 ≤ i ≤ nb; the values of the indices are integers and float numbers according to the equations in [19]. The final stage involves classifying the virtual indices into two separate arrays, EqAr and GrAr. The EqAr array represents all elements that have repetitions greater than 1; the GrAr array represents all input elements that have no repetition. The EqAr array stores all virtual integer indices and their repetitions; the GrAr array stores all virtual float indices. The algorithm sorts only the GrAr array, using a quick sort algorithm.
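The following is a minimal C sketch of this first stage only, under our own assumptions: the elements are integers, the number of blocks nb is supplied by the caller, and the 1-based hash ⌈ai/sb⌉ from the text is implemented in an equivalent 0-based form by shifting each element by Min(A). The array names blockCount and blockStart are illustrative and are not taken from [19].

```c
/* Stage 1 of the DHS algorithm (sketch): count how many input elements fall
   into each of the nb blocks and derive each block's starting boundary.
   sb = ceil((Max(A) - Min(A) + 1) / nb), as in the text; the block of a[i]
   is computed as (a[i] - min) / sb, a 0-based equivalent of ceil(a[i]/sb). */
void dhs_stage1(const int *a, int n, int nb, int *blockCount, int *blockStart)
{
    int min = a[0], max = a[0];
    for (int i = 1; i < n; i++) {              /* find Min(A) and Max(A)     */
        if (a[i] < min) min = a[i];
        if (a[i] > max) max = a[i];
    }
    int sb = (max - min + 1 + nb - 1) / nb;    /* ceil((max - min + 1) / nb) */

    for (int b = 0; b < nb; b++) blockCount[b] = 0;
    for (int i = 0; i < n; i++)                /* count elements per block   */
        blockCount[(a[i] - min) / sb]++;

    blockStart[0] = 0;                         /* block boundaries via       */
    for (int b = 1; b < nb; b++)               /* exclusive prefix sums      */
        blockStart[b] = blockStart[b - 1] + blockCount[b - 1];
}
```

The second and third stages (virtual indexing and the EqAr/GrAr split) depend on the specific equations of [19] and are therefore not sketched here.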
The running times of the first and the second stages are always O(n), because we scan an array of size n. The running time of the third stage varied from one case to another; the running time of the DHS algorithm is based mainly on the third stage. Based on the concept of complexity analysis for the running time of the algorithm, we have three cases: best, worst, and average. The running time of the DHS algorithm is based on the size of the array, n, and the maximum element, m, of the input array. The authors [19] analyzed the running time of the DHS algorithm as follows:
Best case: this case occurs if the elements of the input array are well distributed and either n or m is small. In this case, the running time of the third phase is O(n).
Worst case: this case occurs if the value of n is large and m is small. Therefore, the number of elements that belong to the GrAr array is large, and the running time of the DHS algorithm is O(n + x log m), where x ≪ n.
Average case: the authors in [19] do not specify the values of n and m in this case. The running time of the DHS algorithm is O(n + x log m), where x ≪ n.
In this part, we provide two main comments about the DHS algorithm. The first category of comments is related to the theoretical analysis of the DHS algorithm; the second category of comments is related to the data generated in the practical study.
For the first category, we have the following three notes on the running times reported for the DHS algorithm.
The first note is that the running time calculated for the best case is correct when m is small; the running time calculation is not correct when n is small. When n is small and m is large, the number of repetitions in the input array is very small in general. Therefore, most of the elements belong to the GrAr array. This situation implies that the DHS algorithm uses the quick sort algorithm on the GrAr array. Therefore, the running time of the third phase is O(n log n), not O(n).
Example 1 Let n = 10, m = n^2 = 100, and the elements of A are well distributed as follows:
It is clear that the value of n is small compared with m. Therefore, in general, the number of elements that belong to the EqAr array is very small compared with the GrAr array that contains most of the input elements.
The second note is that the calculated running time for the worst case is not correct if the value of n is large and m is small. This situation implies that the number of repetitions in the input array is large because the n elements of the input array belong to a small range. Therefore, the maximum number of elements belonging to the GrAr array is less than m, say α. On the other hand, the array EqAr contains n − α elements. Therefore, the statement "the number of elements belong to the GrAr array is large" [19] is not accurate. It should be small since m is small. Therefore, the calculated running time for the worst case of the DHS algorithm, O(n + x log m), is not accurate in the general case.
Example 2 Let m = 4, n = m^2 = 16, and A is given as follows
It is clear that the number of non-repeated elements is 3 and the GrAr array contains only 3 different elements, 2, 3, and 4; the EqAr array contains 13 elements from 16.
The third note is that the authors do not specify when the average case occurs or why its running time is O(n + x log m).
For the second category, the data results in [19] for the three cases reveal that the percentage of elements in the EqAr array (repeated elements) is at least 65%, which is too high and does not represent the general or average case. This means that the input data used to measure the performance of the DHS algorithm do not represent various types of input. For example, in the average case with n = 100,000 and a range of elements equal to 100,000, the number of elements in the EqAr array is 70,030 [19].
Complexity analysis of DHS algorithm
In this section, we study the complexity of the DHS algorithm using another method of analysis. The DHS algorithm is based on dividing the elements of an input array into many slots; each slot contains elements in a specific range. Therefore, we mainly analyze the DHS algorithm based on the relation between the size of the array, n, and the domain of the elements in the array, m. There are three cases for the relation between n and m.
Case 1: O(m) < O(n). In this case, the range of values for the elements of the input array is small compared with the number of elements in A. This case can be formed as A = (a1, a2, …, an), where ai < m and m < n. We use big Oh notation to illustrate that the difference between n and m is significant. For example, let \( m=\sqrt{n} \) and m = log n and if n = 10,000, then m = 100 and 4, respectively.
Case 2: O(m) = O(n). In this case, the range of the values for the elements of the input array is equal to the number of elements. This case can be formed as A = (a1, a2, …, an), where ai ≤ m, n ≈ m, and m = α n ± β such that α and β are constant. For example, let m = 2n and m = n + 25; if n = 1000, then m = 2000 and 1025, respectively.
Case 3: O(n) < O(m). In this case, the range of the values for the elements of the input array is greater than the number of elements. This case can be formed as A = (a1, a2, …, an), where ai < m, m > n. For example, let m = n^k, where k > 1. If n = 100 and k = 3, then m = 1,000,000.
Now, we study the complexity of the DHS algorithm in terms of three cases.
Case 1: O(m) < O(n). The value of m is small compared with the input size n; the array contains many repeated elements. In this case, the maximum number of slots is m, and there is no need to map the elements of the input array to n slots such as mapping sort algorithm [20], where the index of the element ai is calculated using the equation: ⌊((ai − Min(A)) × n)/(Max(A) − Min(A))⌋.
The solution to this case can be found using an efficient previous sorting algorithm called the counting sort (CS) algorithm [6]. Therefore, there is no need to use the insertion sort, quick sort, or merge sort algorithms as in [19, 20] to sort un-repeated elements. The main idea of the CS algorithm is to calculate the number of elements less than each integer i ∈ [1, m] and then to use this value to place the element aj at its correct location in the array A, ∀ 1 ≤ j ≤ n. The CS algorithm consists of three steps. The first step scans the input array A and computes the number of times each element occurs within A. The second step calculates, for each i ∈ [1, m], the starting location in the output array by updating the array C using the prefix-sum algorithm; the prefix sum of the array C computes \( C[i] = \sum_{j=1}^{i} C[j] \). The final step places each i ∈ [1, m] and its repetitions in the output array using the array C.
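The following is a minimal C sketch of these three steps, assuming the keys lie in [1, m] and that enough memory is available for the auxiliary count array; this variant uses the cumulative counts as end positions and fills the output from right to left, which yields the same stable result as allocating starting positions.

```c
#include <stdlib.h>
#include <string.h>

/* Counting sort for integer keys in [1, m], following the three steps above. */
void counting_sort(int *a, int n, int m)
{
    int *c = calloc((size_t)m + 1, sizeof(int)); /* step 1: repetition counts */
    int *out = malloc((size_t)n * sizeof(int));

    for (int j = 0; j < n; j++)
        c[a[j]]++;
    for (int i = 2; i <= m; i++)                 /* step 2: prefix sums, so   */
        c[i] += c[i - 1];                        /* c[i] = #elements <= i     */
    for (int j = n - 1; j >= 0; j--)             /* step 3: stable placement  */
        out[--c[a[j]]] = a[j];

    memcpy(a, out, (size_t)n * sizeof(int));
    free(out);
    free(c);
}
```

The two scans over A and the single scan over C give the O(n + m) running time quoted below.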
Additionally, the running time of the CS algorithm is O(n + m) = O(n), because O(m) < O(n). The running time of the CS algorithm does not depend on the distribution of the elements, uniform and non-uniform, over the range m. Also, the CS algorithm is independent of how many repeated and unrepeated elements are found in the input array.
The following example illustrates how to use the CS algorithm in this case; there is no need to distribute the input into two arrays, EqAr and GrAr, as in the DHS Algorithm.
Example 3 Let m = 5, n = m^2 = 25, and let the elements of the input array A be as in Fig. 1a. As a first step, we calculate the repetition array C, where C[i] represents the number of repetitions of the integer i ∈ [1, m] in the input array A, as in Fig. 1b. It is clear that the number of repetitions of the integer "1" is 6, while the integer "4" has zero repetitions. In the second step, we calculate the prefix sum of C as in Fig. 1c, where the prefix sum for C[i] is equal to \( \sum_{j=1}^{i} C[j] \). In the last step, the integer 1 is located at positions 1–6, the integer 2 is located at positions 7–14, and so on. Therefore, the output array is as shown in Fig. 1d.
Tracing of the CS algorithm in case of O(m) < O(n). a Input array A. b Count array C. c Prefix-sum for C. d Sorted array A
Remark Sometimes the value of m cannot fit in memory because the storage of the machine is limited. In that case, we can divide the input array into k (< m) buckets, where bucket number i contains the elements in the range [(i − 1)m/k + 1, i m/k], 1 ≤ i ≤ k. For a uniform distribution, each bucket contains approximately n/k elements. Therefore, the running time to sort each bucket is O(n/k + k). Hence, the overall running time is O(k(n/k + k)) = O(n + k^2) = O(n). For non-uniform distributions, the number of elements in bucket i is n_i such that \( \sum_{i=1}^{k} n_i = n \). Therefore, the overall running time is O(n + k) = O(n).
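A hedged C sketch of this bucketed variant is given below, reusing the counting_sort function from the earlier sketch; the gathering pass simply rescans the array once per bucket for clarity (a single distribution pass would avoid the extra scans), and the function name is illustrative.

```c
#include <stdlib.h>

void counting_sort(int *a, int n, int m);      /* from the earlier sketch    */

/* Bucketed counting sort (sketch): split the key range [1, m] into k
   sub-ranges of width about m/k, gather the elements of each sub-range,
   sort them with counting_sort over the narrow range, and write them back
   in order. Keys are shifted into [1, width] before each call. */
void bucketed_counting_sort(int *a, int n, int m, int k)
{
    int width = (m + k - 1) / k;               /* sub-range width ~ m/k      */
    int *buf = malloc((size_t)n * sizeof(int));
    int pos = 0;
    for (int b = 0; b < k; b++) {
        int lo = b * width + 1, hi = (b + 1) * width;
        int cnt = 0;
        for (int i = 0; i < n; i++)            /* gather bucket b            */
            if (a[i] >= lo && a[i] <= hi)
                buf[cnt++] = a[i] - lo + 1;    /* shift keys into [1, width] */
        if (cnt > 0)
            counting_sort(buf, cnt, width);
        for (int i = 0; i < cnt; i++)          /* write back, re-shifted     */
            a[pos++] = buf[i] + lo - 1;
    }
    free(buf);
}
```

Only an auxiliary count array of size about m/k is allocated inside counting_sort at any one time, which is the memory saving this remark is after.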
Case 2: O(m) = O(n). The value of m is approximately equal to the input size n. If the elements of the array are distributed uniformly, then the number of repetitions for the elements of the array is constant. In this case, we have two comments about the DHS algorithm. The first comment is that there is no need to construct two different arrays, GrAr and EqAr. The second comment is that there is no need to use the quick sort algorithm in the sorting because we can sort the array using the CS algorithm.
If the distribution of the elements for the input array is non-uniform, then the number of repetitions for the elements of the array is varied. Let the total number of repetitions for all the elements of the input array be φ(n). Therefore, the array EqAr contains φ(n) elements; the array GrAr contains n − φ(n) elements. The running time for executing the DHS algorithm is O(n + (n − φ(n)) log (n − φ(n)) ), where the first term represents the running time for the first two stages and the second term represents applying the quick sort algorithm on the GrAr array. In the average case, we have n/2 repeated elements, so the running time of the DHS algorithm is O((n/2) log (n/2) ) = O(n log n). In this case, the CS algorithm is better than the DHS algorithm. On the other side, if φ(n) ≈ n, then the running time of the DHS algorithm is O(n).
Example 4 Let m = 30, n = 25. Fig. 2 shows how the CS algorithm can be used instead of the DHS algorithm in the case of a uniform distribution.
Tracing of the CS algorithm in case of O(m) = O(n). a Input array A. b Count array C. c Prefix-sum for C. d Sorted array A
Case 3: O(n) < O(m). The value of m is large compared with the input size n, so the elements of the input array are distinct or the number of repetitions in the input array is constant in general. The DHS and CS algorithms are not suitable for this case. Reasons for not considering these strategies include the following:
All of these algorithms require a large amount of storage to map the elements according to the number of slots. For example, if m = n^2 and n = 10^6 (this value is small for many applications), then m = 10^12, which is large.
If the machine being used contains a large amount of memory, then the running time of the DHS algorithm is O(n log n). However, the main drawbacks of the DHS algorithm are that (1) the output of the second hashing function is not unique, and (2) the equations used to differentiate between repeated and non-repeated elements are not accurate, which means that an element with certain repetitions and another element without repetition can have the same virtual indices generated by the suggested equations. Therefore, merge sort and quick sort are better than the DHS algorithm.
In the case of CS, the algorithm will scan an auxiliary array of size m to allocate the elements at the correct positions in the output. Therefore, the running time is O(m), where O(m) > O(n). If m = n^2, then the running time is O(n^2), which is greater than that of the merge sort algorithm, O(n log n).
From the analysis of the DHS algorithm in the three cases based on the relation between m and n, there is a previous sorting algorithm, the counting sort algorithm, with a time complexity no greater than that of the DHS algorithm.
In this section, we study the performance of the DHS and CS algorithms from a practical point of view based on the relation between m and n for the two cases O(m) < O(n) and O(m) = O(n). Note that neither algorithm is suitable in the case of O(n) < O(m).
Platforms and benchmarks setting
The algorithms were implemented using C language and executed on a computer consisting of a processor with a speed of 2.4 GHz and a memory of 16 GB. The computer ran the Windows operating system.
The comparison between the algorithms is based on a set of varied benchmarks to assess the behavior of the algorithms for different cases. We build six functions as follows.
Uniform distribution [U]: the elements of the input are generated as a uniform random distribution. The elements were generated by calling the subroutine random() in the C library to generate a random number.
Duplicates [D]: the elements in the input are generated as a uniform random distribution. The method then selects log n elements from the beginning of the array and assigns them to the last log n elements of the array.
Sorted [S]: similar to method [U] such that the elements are sorted in increasing order.
Reverse sorted [RS]: similar to method [U] such that the elements are sorted in decreasing order.
Nearly sorted [NS]: similar to [S]; we then select 5% random pairs of element swaps.
Gaussian [G]: the elements of the input are generated by taking the integer value of the average of four calls to the subroutine random() (see the sketch after this list).
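The following is a brief, hedged C sketch of how two of these benchmarks could be generated; we assume the C library's rand() stands in for the random() subroutine mentioned above and that values are drawn from [1, m]. The function names are illustrative.

```c
#include <stdlib.h>

/* [U] Uniform benchmark: n independent values drawn uniformly from [1, m]. */
void gen_uniform(int *a, int n, int m)
{
    for (int i = 0; i < n; i++)
        a[i] = rand() % m + 1;
}

/* [G] Gaussian benchmark: each value is the integer average of four uniform
   draws, which approximates a Gaussian shape (central-limit effect). */
void gen_gaussian(int *a, int n, int m)
{
    for (int i = 0; i < n; i++) {
        long s = 0;
        for (int k = 0; k < 4; k++)
            s += rand() % m + 1;
        a[i] = (int)(s / 4);
    }
}
```

The sorted [S], reverse sorted [RS], nearly sorted [NS], and duplicates [D] inputs are derived from the uniform data as described above.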
In the experiment, we have three parameters affecting the running time of both algorithms. The first two parameters are the size of the array n and the domain of the input m; the third parameter is the data distribution (six benchmarks). Based on the relation between n and m, say O(m) < O(n), we fixed the size of the array n and adopted different values of m, m_i, such that m_i satisfies O(m) < O(n). For example, let n = 10^8, and the values of m are m_1 = 10^6, m_2 = 10^5, m_3 = 10^4, m_4 = 10^3, and m_5 = 10^2. For each fixed value of n and m_i, we generated six different input data sets based on the six benchmarks (U, D, S, RS, NS, G). For each benchmark, the running time of an algorithm was the average time over 50 instances, and the time was measured in milliseconds. Therefore, the running time for the algorithm Alg, using the parameters n and m, and a certain type of data distribution, is given by the following equation.
$$ \frac{1}{n_m}\sum \limits_{i=1}^{n_m}\left(\frac{1}{50}\sum \limits_{j=1}^{50}{t}_i\left(n,{m}_i, dd, Alg\right)\right) $$
m_i is one of the values of m such that m_i satisfies either O(m) < O(n) or O(m) = O(n). In the experiment, if n = 10^x, then 10^2 ≤ m_i ≤ 10^(x−2).
n_m is the number of different values of m_i. In the experiment, if n = 10^x, then n_m = x − 3, because 10^2 ≤ m_i ≤ 10^(x−2).
dd is the type of data distribution used in the experiment, and the value of dd is one of six benchmarks (U, D, S, RS, NS, G).
Alg is either the CS or DHS algorithm.
ti is the running time for the Alg algorithm using the parameters n, mi, and the data distribution dd.
In our experiments for both cases, we choose the value of n equal to 10^8, 10^7, 10^6, and 10^5, because the running times of both algorithms are very small when n is less than 10^5.
Experimental results
The results of implementing the methodology to measure the running time of the CS and DHS algorithms, considering all parameters that affect the execution times, are shown in Figs. 3 and 4. Each figure consists of four subfigures (a), (b), (c), and (d) for n = 10^5, 10^6, 10^7, and 10^8, respectively. Also, each subfigure consists of six pairs of bars; each pair represents the running times of the CS and DHS algorithms for a certain type of data distribution. Figure 3 illustrates the running times of the CS and DHS algorithms in the case of O(m) < O(n) and shows that the CS algorithm is faster than the DHS algorithm for all values of n and all benchmarks. The difference in running time between the algorithms varies from one type of data distribution to another. For example, the running times of the CS algorithm for the six benchmarks are 7.7, 9.7, 8.2, 9.8, 8.2, and 6.1 milliseconds, while the running times of the DHS algorithm for the same benchmarks are 11.2, 12.4, 10.6, 15, 13.9, and 14.1 milliseconds in the case of n = 10^6. In general, the maximum difference between the two algorithms occurs in the case of a Gaussian distribution.
Running time for the CS and DHS algorithms in case of O(m) < O(n). a n = 10^5. b n = 10^6. c n = 10^7. d n = 10^8
Running time for the CS and DHS algorithms in case of O(m) = O(n). a n = 10^5. b n = 10^6. c n = 10^7. d n = 10^8
Similarly, Fig. 4 illustrates the running times of the CS and DHS algorithms in the case of O(m) = O(n) and shows that the CS algorithm is faster than the DHS algorithm for all values of n and all benchmarks.
Table 1 lists data pertaining to the performance improvements of the CS algorithm in two points of view (i) range of improvements, and (ii) mean of improvements. In the case of a range of improvements, we fix the size of the array n and calculate the percentage of improvement for each benchmark. Then, we record the range of improvements from the minimum to maximum values as in the second and fourth columns in Table 1 for O(m) < O(n) and O(m) = O(n), respectively. In the case of the mean of improvements, we take the mean value for the percentage of improvements for all data distributions, as in the third and fifth columns. The results of applying these measurements are as follows.
In the case of O(m) < O(n), the CS algorithm performed 45–80%, 21.5–56.5%, 20–57%, and 21–47% faster than the DHS algorithm for n = 10^5, 10^6, 10^7, and 10^8, respectively. For example, for n = 10^8, the percentages of improvement of the CS algorithm for the data distributions [U], [D], [S], [RS], [NS], and [G] are 21%, 27.2%, 24.4%, 23.9%, 24.9%, and 47%, respectively. Therefore, the range of improvement for the CS algorithm is 21–47% when n = 10^8. Additionally, based on the percentage of improvement calculated for each data distribution and a fixed value of n, we can calculate the mean improvement, which equals 62.5%, 34.7%, 28.3%, and 28.1% for n = 10^5, 10^6, 10^7, and 10^8, respectively. For example, for n = 10^8, the mean improvement is approximately 28%.
In the case of O(m) = O(n), the CS algorithm performed 88.7–96.8%, 58.5–60.7%, 20–53.8%, and 24.8–43.5% faster than the DHS algorithm for n = 10^5, 10^6, 10^7, and 10^8, respectively. For example, for n = 10^7, the percentages of improvement of the CS algorithm for the data distributions [U], [D], [S], [RS], [NS], and [G] are 20.5%, 20%, 24.4%, 47.4%, 46.3%, and 53.8%, respectively. Therefore, the range of improvement for the CS algorithm is 20–53.8% when n = 10^7. Similarly, we can compute the mean improvements, which are equal to 91.9%, 59.9%, 35.4%, and 33.6% for n = 10^5, 10^6, 10^7, and 10^8, respectively.
Table 1 Range of improvements for the CS and DHS algorithms
From previous results, the CS algorithm performed 38.4% and 55.2% faster than the DHS algorithm for O(m) < O(n) and O(m) = O(n), respectively. Therefore, the percentage of improvement for the CS algorithm was roughly 46% on the average for all cases studied.
The sorting problem is to rearrange the elements of a given array in increasing order. This problem is important in a variety of computer science applications, and it is used as a subroutine in many computer applications. In this work, we studied the complexity analysis and measured performance of the double hashing sort (DHS) algorithm. The results of this study are (1) the previous complexity analysis of the DHS algorithm was not accurate; (2) we calculated the corrected analysis of this algorithm based on the relation between size of the input array n and domain of the input elements m; (3) there is a previous sorting algorithm called counting sort algorithm that is faster than the DHS algorithm in the case of O(m) ≤ O(n) from theoretical and practical points of view; and (4) our experimental studies are based on six benchmarks; the percentage of improvement was roughly 46% on the average for all cases studied.
CS: Counting sort
DHS: Double hashing sort
NS: Nearly sorted
RS: Reverse sorted
U: Uniform distribution
Graefe, G.: Implementing sorting in database systems. ACM Comput. Surv. 38(3), 10 (2006)
Abam, M., Berg, M.: Kinetic sorting and kinetic convex hulls. Comput. Geom. 37(1), 16–26 (2007)
Ezra, E., Mulzer, W.: Convex hull of points lying on lines in O(n log n) time after preprocessing. Comput. Geom. 46(4), 417–434 (2013)
Kim, D.: Sorting on graphs by adjacent swaps using permutation groups. Comput. Sci. Rev. 22, 89–105 (2016)
Shao, M., Lin, Y., Moret, B.: Sorting genomes with rearrangements and segmental duplications through trajectory graphs. BMC Bioinform. 14(Suppl 15), S9 (2013)
Cormen, T., Leiserson, C., Rivest, R., Stein, C.: Introduction to algorithms. 3rd ed. MIT Press, Cambridge, England (2009)
Knuth, D.: The art of computer programming, vol. 3: sorting and searching, 2nd edn. Addison-Wesley, Reading (1973)
Goodrich, M.: Randomized shellsort: a simple data-oblivious sorting algorithm. J. ACM. 58(6), 27 (2011)
Aumüller, M., Dietzfelbinger, M.: Optimal partitioning for dual-pivot quicksort. ACM Trans. Algorithms. 12(2), 18 (2016)
Brodal, G., Fagerberg, R., Moruz, G.: On the adaptiveness of quicksort. ACM J. Exp. Algorithmics. 12, 3.2 (2008)
Cook, C., Kim, D.: Best sorting algorithm for nearly sorted lists. Commun. ACM. 23(11), 620–624 (1980)
Diekert, V., Wei, A.: QuickHeapsort: modifications and improved analysis. Theory Comput. Syst. 59(2), 209–230 (2009)
Mohammed, A., Amrahov, S., Celebi, F.: Bidirectional conditional insertion sort algorithm; An efficient progress on the classical insertion sort. Futur. Gener. Comput. Syst. 71, 102–112 (2017)
Wickremesinghe, R., Arge, L., Chase, J., Scott Vitter, J.: Efficient sorting using registers and caches. J. Exp. Algorithm. 7, 9 (2002)
Cole, R., Ramachandran, V.: Resource Oblivious Sorting on Multicores. ACM Trans. Parallel Comput. 3(4), 23 (2017)
Stehle, E., Jacobsen, H.: A memory bandwidth-efficient hybrid radix sort on GPUs. In: SIGMOD '17 Proceedings of the 2017 ACM International Conference on Management of Data, Chicago, Illinois, USA, pp. 417–43 (2017)
Elmasry, A., Hammad, A.: Inversion-sensitive sorting algorithms in practice. ACM J. Exp. Algorithmics. 13, 11 (2009)
Franceschini, G., Geffert, V.: An in-place sorting with O(n log n) comparisons and O(n) moves. J. ACM. 52(4), 515–537 (2005)
Omar, Y., Osama, H., Badr, A.: Double hashing sort algorithm. Comput. Sci. Eng. 19(2), 63–69 (2017)
Osama, H., Omar, Y., Badr, A.: Mapping Sorting Algorithm, pp. 48–491. SAI Computing Conference 2016, London (2016)
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Computer Science Division, Department of Mathematics, Faculty of Science, Ain Shams University, Cairo, 11566, Egypt
Hazem M. Bahig
College of Computer Science and Engineering, Hail University, Hail, Kingdom of Saudi Arabia
The author read and approved the final manuscript.
Correspondence to Hazem M. Bahig.
Hazem Bahig received the B.Sc. degree in Pure Mathematics and Computer Science from the Faculty of Science, Ain Shams University, in 1990. He received the M.Sc. and Ph.D. degrees in Computer Science in 1997 and 2003, respectively, from the Computer Science Division, Ain Shams University, Cairo, Egypt. He is currently working in the College of Computer Science and Engineering, Hail University, KSA. His current research interests include high performance computing, design and analysis of algorithms, and e-learning systems for algorithms.
The online version of this article was revised: the inline tables were originally omitted and have been added post-publication.
Bahig, H.M. Complexity analysis and performance of double hashing sort algorithm. J Egypt Math Soc 27, 3 (2019). https://doi.org/10.1186/s42787-019-0004-2
Performance of algorithm
Complexity analysis
Mathematics Subject Classification
11Y16 | CommonCrawl |
Volume of Cone and Cylinder
The volume of a solid is the amount of space it occupies, measured in cubic units (in³, cm³, m³, and so on).

Volume of a cylinder. A cylinder is a three-dimensional shape with a circular cross section; it is the solid obtained by rotating a rectangle about one of its sides. Its volume is the area of the circular base multiplied by the height, V = πr²h, where r is the radius of the base and h is the height. For example, a cylinder 5 inches in diameter and 6 inches high has radius 2.5 in, so V = π × 2.5² × 6 ≈ 117.8 cubic inches, and a cylinder with a diameter of 8 m and a height of 9 m has V = π × 4² × 9 ≈ 452.4 m³.

Surface area of a cylinder. The surface is a composite of regular plane shapes: two congruent circles (the top and the bottom) and the rectangle obtained by unrolling the curved lateral surface, so S = 2πr² + 2πrh.

Volume of a cone. A cone tapers smoothly from a circular base to a point called the apex. Its volume is exactly one third of the volume of the cylinder with the same base and height, V = (1/3)πr²h. This can be demonstrated by filling a cone with sand and pouring it into a cylinder of equal base and height: the cylinder is full after three pours. For example, a waffle cone of height 10 cm and radius 5 cm holds (1/3)π × 5² × 10 ≈ 261.8 cm³, and a styrofoam model of a volcano shaped like a cone with base diameter 48 cm and height 12 cm has volume (1/3)π × 24² × 12 ≈ 7238 cm³. The same one-third rule relates a pyramid to the prism with the same base and height, and the volume formulas are unchanged for oblique cones and cylinders, which lean to one side.

Surface area of a cone. S = πr² + πrl, where l = √(r² + h²) is the slant height; the curved (lateral) surface alone has area πrl, which for a right circular cone equals the perimeter of the base times one half of the slant height.

Volume and surface area of a sphere. V = (4/3)πr³ and S = 4πr². A hemisphere has half the volume of the corresponding sphere.

Cone, sphere and cylinder together. For a cone, a sphere and a cylinder sharing the same radius r and the same height 2r (so that the sphere fits exactly inside the cylinder), the volumes are in the ratio 1 : 2 : 3 — the cone holds one third of the cylinder and the sphere two thirds of it. Archimedes valued this result so highly that his tombstone was engraved with the image of a sphere within a cylinder.

Frustum of a cone. Cutting a right circular cone with a plane parallel to its base leaves a frustum (also called a truncated right circular cone), with a larger circle at the bottom and a smaller circle at the top; its volume is determined by the two radii and the height.

These formulas appear in many applied problems. In calculus, volumes of solids of revolution, obtained by rotating a plane region about an axis, are computed by integration. The displacement of an engine is the combined swept volume of its cylinders, (π/4)D²HN, for bore diameter D, stroke length H and N cylinders. Optimization exercises ask, for example, for the dimensions of the right circular cylinder of greatest volume that can be inscribed in a right circular cone of radius 5 cm and height 12 cm. Composite solids, such as a grain storage tank shaped like a cylinder capped by a half sphere, are handled by adding the volumes of the pieces.
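The formulas above are easy to check numerically. The following short Python sketch (the function names are illustrative, not taken from any particular textbook) computes the three volumes, reproduces two of the worked examples, and confirms the 1 : 2 : 3 ratio for a shared radius r and height 2r.

```python
import math

def cylinder_volume(radius, height):
    """V = pi * r^2 * h."""
    return math.pi * radius ** 2 * height

def cone_volume(radius, height):
    """A cone holds one third of the matching cylinder: V = (1/3) * pi * r^2 * h."""
    return cylinder_volume(radius, height) / 3

def sphere_volume(radius):
    """V = (4/3) * pi * r^3."""
    return 4 / 3 * math.pi * radius ** 3

print(round(cylinder_volume(2.5, 6), 1))   # cylinder 5 in across, 6 in high -> 117.8
print(round(cone_volume(5, 10), 1))        # waffle cone, r = 5 cm, h = 10 cm -> 261.8

r = 1.0                                    # cone : sphere : cylinder = 1 : 2 : 3
print(cone_volume(r, 2 * r), sphere_volume(r), cylinder_volume(r, 2 * r))
```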
On the 70th birthday of J. Moser
Arnold V. I.
Higher dimensional continued fractions
The higher-dimensional analogue of a continued fraction is the polyhedral surface bounding the convex hull of the semigroup of the integer points in a simplicial cone of Euclidean space. The article describes some conjectures and theorems extending to such higher-dimensional continued fractions the Lagrange theorem on quadratic irrationals and the Gauss–Kuzmin statistics.
Citation: Arnold V. I., Higher dimensional continued fractions, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 10-17
DOI:10.1070/RD1998v003n03ABEH000076
Dullin H. R., Richter P. H., Veselov A. P.
Action variables of the Kovalevskaya top
An explicit formula for the action variables of the Kovalevskaya top as Abelian integrals of the third kind on the Kovalevskaya curve is found. The linear system of differential equations of Picard–Fuchs type, describing the dependence of these variables on the integrals of the Kovalevskaya system, is presented in explicit form. The results are based on the formula for the actions derived by S.P.Novikov and A.P.Veselov within the theory of algebro-geometric Poisson brackets on the universal bundle of hyperelliptic Jacobians.
Citation: Dullin H. R., Richter P. H., Veselov A. P., Action variables of the Kovalevskaya top, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 18-31
Goriely A., Tabor M.
The role of complex-time singularities in chaotic dynamics
The analysis of complex-time singularities has proved to be the most useful tool for the analysis of integrable systems. Here, we demonstrate its use in the analysis of chaotic dynamics. First, we show that the Melnikov vector, which gives an estimate of the splitting distance between invariant manifolds, can be given explicitly in terms of local solutions around the complex-time singularities. Second, in the case of exponentially small splitting of invariant manifolds, we obtain sufficient conditions on the vector field for the Melnikov theory to be applicable. These conditions can be obtained algorithmically from the singularity analysis.
Citation: Goriely A., Tabor M., The role of complex-time singularities in chaotic dynamics, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 32-44
Verhulst F., Huveneers R.
Evolution towards symmetry
The dynamics of time-dependent evolution towards symmetry in Hamiltonian systems poses a difficult problem, as the analysis has to be global in phase space. For systems with one and two degrees of freedom this leads to the presence of one and two global adiabatic invariants, respectively, and also to the persistence of asymmetric features over a long time.
Citation: Verhulst F., Huveneers R., Evolution towards symmetry, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 45-55
Benettin G., Fasso F., Guzzo M.
Nekhoroshev-stability of $L_4$ and $L_5$ in the spatial restricted three-body problem
We show that $L_4$ and $L_5$ in the spatial restricted circular three-body problem are Nekhoroshev-stable for all but a few values of the reduced mass up to the Routh critical value. This result is based on two extensions of previous results on Nekhoroshev-stability of elliptic equilibria, namely to the case of "directional quasi-convexity", a notion introduced here, and to a (non-convex) steep case. We verify that the hypotheses are satisfied for $L_4$ and $L_5$ by means of numerically constructed Birkhoff normal forms.
Citation: Benettin G., Fasso F., Guzzo M., Nekhoroshev-stability of $L_4$ and $L_5$ in the spatial restricted three-body problem, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 56-72
Treschev D. V., Zubelevich O. E.
Invariant tori in Hamiltonian systems with two degrees of freedom in a neighborhood of a resonance
An estimate is obtained for the difference of the frequencies on two invariant curves bounding a resonance zone of an area-preserving, close-to-integrable map. Analogous results for Hamiltonian systems are presented.
Citation: Treschev D. V., Zubelevich O. E., Invariant tori in Hamiltonian systems with two degrees of freedom in a neighborhood of a resonance, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 73-81
Sevryuk M. B.
Invariant sets of degenerate Hamiltonian systems near equilibria
For any collection of $n \geqslant 2$ numbers $\omega_1,\ldots,\omega_n$, we prove the existence of an infinitely differentiable Hamiltonian system of differential equations $X$ with $n$ degrees of freedom that possesses the following properties: 1) $0$ is an elliptic (provided that all the $\omega_i$ are different from zero) equilibrium of system $X$ with eigenfrequencies $\omega_1,\ldots,\omega_n$; 2) system $X$ is linear up to a remainder flat at $0$; 3) the measure of the union of the invariant $n$-tori of system $X$ that lie in the $\varepsilon$-neighborhood of $0$ tends to zero as $\varepsilon\to 0$ faster than any prescribed function. Analogous statements hold for symplectic diffeomorphisms, reversible flows, and reversible diffeomorphisms. The results obtained are discussed in the context of the standard theorems in the KAM theory, the well-known Russmann and Anosov–Katok theorems, and a recent theorem by Herman.
Citation: Sevryuk M. B., Invariant sets of degenerate Hamiltonian systems near equilibria, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 82-92
Chenciner A.
Collisions totales, Mouvements Complètement Paraboliques et Réduction des Homothéties Dans le Problème des $n$ corps
We study the properties of the n-body problem that stem from the homogeneity of the potential, and we recover within a common conceptual framework various results of Sundman, McGehee and Saari. The results are not new, but it seemed to us that this presentation illuminates them pleasantly. We consider potentials of Newtonian type, homogeneous of degree $2\kappa$ in the configuration. In order not to have to distinguish several cases in the inequalities, we assume, which includes the Newtonian case, that $-1<\kappa<0$.
Citation: Chenciner A., Collisions totales, Mouvements Complètement Paraboliques et Réduction des Homothéties Dans le Problème des $n$ corps, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 93-106
Celletti A., Chierchia L.
Construction of stable periodic orbits for the spin-orbit problem of celestial mechanics
Birkhoff periodic orbits associated to spin-orbit resonances in Celestial Mechanics and in particular to the Moon–Earth and Mercury–Sun systems are considered. A general method (based on a quantitative version of the Implicit Function Theorem) for the construction of such orbits with particular attention to "effective estimates" on the size of the perturbative parameters is presented and tested on the above mentioned systems. Lyapunov stability of the periodic orbits (for small values of the perturbative parameters) is proved by constructing KAM librational invariant surfaces trapping the periodic orbits.
Citation: Celletti A., Chierchia L., Construction of stable periodic orbits for the spin-orbit problem of celestial mechanics, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 107-121
Lenz K. E., Lomeli H. E., Meiss J. D.
Quadratic volume preserving maps: an extension of a result of Moser
A natural generalization of the Henon map of the plane is a quadratic diffeomorphism that has a quadratic inverse. We study the case when these maps are volume preserving, which generalizes the family of symplectic quadratic maps studied by Moser. In this paper we obtain a characterization of these maps for dimension four and less. In addition, we use Moser's result to construct a subfamily of such maps in n dimensions.
Citation: Lenz K. E., Lomeli H. E., Meiss J. D., Quadratic volume preserving maps: an extension of a result of Moser, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 122-131
Pedroni M., Vanhaecke P.
A Lie algebraic generalization of the Mumford system, its symmetries and its multi-hamiltonian structure
In this paper we generalize the Mumford system which describes for any fixed $g$ all linear flows on all hyperelliptic Jacobians of dimension $g$. The phase space of the Mumford system consists of triples of polynomials, subject to certain degree constraints, and is naturally seen as an affine subspace of the loop algebra of $\mathfrak{sl}(2)$. In our generalizations to an arbitrary simple Lie algebra $\mathfrak{g}$ the phase space consists of $\mathrm{dim}\,\mathfrak{g}$ polynomials, again subject to certain degree constraints. This phase space and its multi-Hamiltonian structure is obtained by a Poisson reduction along a subvariety $N$ of the loop algebra $\mathfrak{g}((\lambda^{-1}))$ of $\mathfrak{g}$. Since $N$ is not a Poisson subvariety for the whole multi-Hamiltonian structure we prove an algebraic Poisson reduction theorem for reduction along arbitrary subvarieties of an affine Poisson variety; this theorem is similar in spirit to the Marsden–Ratiu reduction theorem. We also give a different perspective on the multi-Hamiltonian structure of the Mumford system (and its generalizations) by introducing a master symmetry; this master symmetry can be described on the loop algebra $\mathfrak{g}((\lambda^{-1}))$ as the derivative in the direction of $\lambda$ and is shown to survive the Poisson reduction. When acting (as a Lie derivative) on one of the Poisson structures of the system it produces a next one; similarly, when acting on one of the Hamiltonians (in involution) or their (commuting) vector fields it produces a next one. In this way we arrive at several multi-Hamiltonian hierarchies, built up by a master symmetry.
Citation: Pedroni M., Vanhaecke P., A Lie algebraic generalization of the Mumford system, its symmetries and its multi-hamiltonian structure, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 132-160
Jalnapurkar S. M., Marsden J. E.
Stabilization of relative equilibria II
In this paper, we obtain feedback laws to asymptotically stabilize relative equilibria of mechanical systems with symmetry. We use a notion of stability "modulo the group action" developed by Patrick [1992]. We deal with both internal instability and instability of the rigid motion. The methodology is that of potential shaping, but the system is allowed to be internally underactuated, i.e., have fewer internal actuators than the dimension of the shape space.
Citation: Jalnapurkar S. M., Marsden J. E., Stabilization of relative equilibria II, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 161-179
Simó C.
Invariant curves of analytic perturbed nontwist area preserving maps
Area preserving maps close to integrable but not satisfying the twist condition are studied. The existence of invariant curves is proved, but they are no longer graphs with respect to the angular variable. Beyond the generic, codimension 1 case, several higher codimension cases are studied. Meandering curves, higher order meandering and labyrinthic curves show up. Several examples illustrate that this behavior occurs in very simple families of maps.
Citation: Simó C., Invariant curves of analytic perturbed nontwist area preserving maps, Regular and Chaotic Dynamics, 1998, vol. 3, no. 3, pp. 180-195
Capacitors are passive devices that are used in almost all electrical circuits for rectification, coupling and tuning. Also known as condensers, a capacitor is simply two electrical conductors separated by an insulating layer called a dielectric. The conductors are usually thin layers of aluminum foil, while the dielectric can be made up of many materials including paper, mylar, polypropylene, ceramic, mica, and even air. Electrolytic capacitors have a dielectric of aluminum oxide which is formed through the application of voltage after the capacitor is assembled. Characteristics of different capacitors are determined by not only the material used for the conductors and dielectric, but also by the thickness and physical spacing of the components.
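The description above notes that capacitance is set by the conductor area, the dielectric material and the spacing between the plates. As a rough, hedged illustration of that relationship (the parallel-plate formula and the polypropylene permittivity value are standard physics assumptions, not taken from this page), a quick estimate can be sketched in Python:

```python
# Rough illustration of how plate area, spacing and dielectric set capacitance.
# Uses the standard parallel-plate formula C = e0 * er * A / d (not from this page).

EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2: float, spacing_m: float, relative_permittivity: float) -> float:
    """Return capacitance in farads for an ideal parallel-plate capacitor."""
    return EPSILON_0 * relative_permittivity * area_m2 / spacing_m

# Example: 2 cm x 30 cm foil, 10 um polypropylene film (er ~ 2.2 is an assumed value)
c = parallel_plate_capacitance(area_m2=0.02 * 0.30, spacing_m=10e-6, relative_permittivity=2.2)
print(f"{c * 1e9:.1f} nF")  # prints roughly 11.7 nF
```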
The Leslie speaker system is a combined amplifier and loudspeaker which rotates a large chamber in front of the speaker to produce a unique audio effect commonly associated with Hammond organs.
Capacitor - Mallory, 630V, 150s, Axial Lead
Tubular metalized polyester film construction
±10% tolerance
Axial leads
Non-Inductive, Self Healing
Please Note: Color of capacitor may vary from one production run to another.
Capacitance options: .001 µF ($0.99), .0022 µF ($0.81), .0047 µF ($0.90), .01 µF ($1.05), .022 µF ($1.05), .047 µF ($1.55), .1 µF ($1.25), .22 µF ($1.95), .47 µF ($2.85)
Capacitor - F&T, 160V, 33µF, Axial Lead Electrolytic
.47" diameter x 1.15" height. Operating temperature range -40°C to 85°C.
Axial lead electrolytic. Made in Germany.
Founded in 1948 by Heinz Fischer and Alfred Tausche, F&T Capacitors has over 60 years of experience in producing some of the finest capacitors available today. F&T Caps are known for their great tone and reliability.
Capacitor - F&T, 100V, 100µF, Axial Lead
0.41" dia. x 1.21" long. Operating temperature range -40°C to 105°C.
Capacitor - F&T, 500V, Type A, Axial Lead
Operating temperature range -40°C to 85°C. Axial lead electrolytic. Made in Germany. Founded in 1948 by Heinz Fischer and Alfred Tausche, F&T Capacitors has over 60 years of experience in producing some of the finest capacitors available today. F&T Caps are known for their great tone and reliability.
Capacitance options (µF): 10 µF ($6.40), 22 µF ($3.95), 30 µF ($7.50), 47 µF ($7.85)
Capacitor - F&T, 475V, 16µF, Axial Lead
16uF/475V
Axial lead electrolytic
0.63" dia. x 1.50" long
Operating temperature range -40°C to 85°C
Capacitor - F&T, 350V, 100µF, Axial Lead Electrolytic
F&T axial lead electrolytic capacitor
100 uF/ 350V
.98" diameter x 1.47" length
Founded in 1948 by Heinz Fischer and Alfred Tausche, F&T Capacitors has over 60 years of experience in producing some of the finest capacitors available today. F&T Caps are known for their great tone and reliability
80 uF/ 450V
Capacitor - F&T, 300V, 220µF, Electrolytic
Electrolytic F&T axial lead electrolytic capacitor
Capacitor - F&T, 450V, 8/8 µF, Axial Lead Electrolytic
Made in Germany. 1" diameter x 1.5" length. Used in Vox AC30 Founded in 1948 by Heinz Fischer and Alfred Tausche, F&T Capacitors has over 60 years of experience in producing some of the finest capacitors available today. F&T Caps are known for their great tone and reliability.
Capacitor - F&T, 500V, 16/16µF, Axial Lead Electrolytic
Axial lead two section electrolytic 16/16µF, 500V. Made in Germany. Founded in 1948 by Heinz Fischer and Alfred Tausche, F&T Capacitors has over 60 years of experience in producing some of the finest capacitors available today. F&T Caps are known for their great tone and reliability.
100uF/450V
Capacitor - F&T, Multi-Section, Electrolytic
Made in Germany. 1-⅜" x 2". Clamp required. Similar to LCR.
Capacitance options (µF): 100/100 µF, 500V ($18.95); 16/16 µF, 450V ($9.95); 32/32 µF, 500V ($9.95); 50/50 µF, 500V ($11.85)
Capacitor - F&T, 25V, 25/25µF, Electrolytic
Made in Germany. 1" diameter x 2" length. Founded in 1948 by Heinz Fischer and Alfred Tausche, F&T Capacitors has over 60 years of experience in producing some of the finest capacitors available today. F&T Caps are known for their great tone and reliability.
Capacitor - F&T, 450V, 33/33 µF, Electrolytic
Capacitor - CE Mfg., 475V, 30/30/30/10µF, Electrolytic
Capacitor - Electrolytic, 30/30/30/10 µF @ 475 VDC. Drop-in replacement for the Leslie 4-section 30µF/30µF/30µF/10µF @ 475V. For use in Leslie 122, 147, and many others. CE Manufacturing capacitors are hand made in the USA using original Mallory equipment. All CE manufactured can capacitors have a two year warranty against manufacturing defects.
September 2017, Volume 7, Issue 5, pp 2309–2320
Optimization study for Pb(II) and COD sequestration by consortium of sulphate-reducing bacteria
Anamika Verma
Narsi R. Bishnoi
First Online: 06 April 2016
In this study, the minimum inhibitory concentration (MIC) of Pb(II) ions was first analysed to determine the optimum concentration of Pb(II) ions at which the growth of the sulphate-reducing consortium (SRC) was maximum; 80 ppm of Pb(II) ions was found to be the minimum inhibitory concentration for the SRC. The influence of electron donors such as lactose, sucrose, glucose and sodium lactate was examined to identify the best carbon source for the growth and activity of the sulphate-reducing bacteria, and sodium lactate was found to be the prime carbon source for the SRC. Optimization of the operating parameters was then carried out using the Box–Behnken design model of response surface methodology to explore the effectiveness of three independent variables, namely pH (5.0–9.0), temperature (32–42 °C) and time (5.0–9.0 days), on the dependent variables, i.e. protein content, precipitation of Pb(II) ions, and removal of COD by the SRC biomass. Maximum removal of COD and Pb(II) was observed to be 91 and 98 %, respectively, at pH 7.0, temperature 37 °C and incubation time 7 days. According to the response surface analysis and analysis of variance, the experimental data were well fitted by the quadratic model, and the interactive influence of pH, temperature and time on Pb(II) and COD removal was highly significant. A high regression coefficient between the variables and the response (r² = 0.9974) corroborates the good description of the experimental data by the second-order polynomial regression model. SEM and Fourier transform infrared analyses were performed to investigate the morphology of the PbS precipitates, the sorption mechanism and the functional groups involved in Pb(II) binding in the metal-free and metal-loaded biomass of the SRC.
Keywords: Pb(II), MIC, Box–Behnken design, Protein, COD
Industrialization and urbanization have resulted in a phenomenal increase in metallic contents in the environment and has emerged as a worldwide environmental problem. Heavy metals are innate constituents of the earth's crust. Some are fundamental micronutrients for life, but at elevated concentrations they induce rigorous poisoning. Heavy metals are recalcitrant and in no way degrade in environment, but are only transformed and transferred (Satyawali et al. 2011; Hashim et al. 2011; Barka et al. 2013). Lead has no known biological functions. It is referred as a cumulative poison as it leads to biomagnifications at various trophic levels in food chains (Dauvin 2008; Flora et al. 2008; Lombardi et al. 2010). Lead is a mutagenic and teratogenic metal and induces stringent toxic effects such as cancer, hepatitis, neurodegenerative impairment, encephalopathy, renal failure, anaemia, and reproductive damage in living beings (Shahid et al. 2012; Ghazy and Gad 2014). Diverse industrial processes such as lead smelting, refining and manufacturing industries, battery manufacturing, printing and pigment, metal plating and finishing, ceramic and glass industries, and iron and steel manufacturing units are key sources of lead contamination in wastewater (Yurtsever and Sengil 2008; Sahu et al. 2013). Being a precarious neurotoxic metal, 10 µg/L of Pb(II) has been recommended by WHO as secure permissible level in drinking water (Watt et al. 2000; Naik and Dubey 2013). Inorganic form of lead is reviewed as a metabolic poison and enzyme inhibitor; still, organic forms of Pb(II) are extremely noxious (Anayurt et al. 2009; Javanbakht et al. 2011). Several methodologies such as chemical oxidation, electrocoagulation, electrodeposition, filtration, adsorption, chemical precipitation, solvent exchange, photo-degradation and membrane separation technologies have been explored long ago to mitigate recalcitrant heavy metals from adulterated wastewater (Verma et al. 2013; Rasool et al. 2013; Kumar et al. 2014; Zewaila and Yousef 2015). But these conventional methods proved unproductive due to their technical or economical constraints (Demir and Arisoy 2007; Cibati et al. 2013; Verma et al. 2015). Biological mechanisms impart a leading edge to current physico-chemical methods to wipe out noxious heavy metals from polluted waste water as they are eco-friendly, cost effective and do not produce colossal quantities of sludge (Cirik et al. 2013; Wang et al. 2014). Recently, biogenic sulphate reduction has been developing as an innovative bioprocess to remediate sulphate and heavy metals from effluents (Barrera et al. 2014; Lee et al. 2014; Sanchez-Andrea et al. 2014). Sulphate-reducing bacteria (SRB) metabolize organic matter in rigorous anaerobic environment using sulphate as an electron acceptor and subsequently results in generation of hydrogen sulphide and bicarbonate. This biogenic sulphide quickly reacts with heavy metal ions and finally transforms them into insoluble metal sulphides (Bratkova et al. 2013; Hao et al. 2014). SRB also possess other budding advantages such as they can reduce heavy metals directly by enzymatic approach and also endowed with high extracellular metal-binding capacity to accomplish bioremediation (Bridge et al. 1999; Pagnanelli et al. 2010; Sahinkaya et al. 2011; Wang et al. 2014). The present work aims at the study of influence of electron donors and optimization of bioprecipitation process to explore perfect conditions for efficient removal of Pb(II) and COD from simulated waste water. 
Therefore, the objective of current research is to present an effective method for treating Pb(II)-loaded wastewater with COD removal.
Origin of sulphate-reducing consortium
The sulphate-reducing consortium (SRC) used in this research was derived from sludge of an electroplating industry SKH metals LTD., Manesar, Gurgaon district, Haryana, India. Sample was stored at anaerobic conditions in sealed plastic bag and was instantly brought to lab. From the sample, 5 g of sludge was added to 500 ml master culture flask filled with culture media containing modified Postgate media (g/L): Na2SO4 1.0; KH2PO4 0.5; NH4Cl 2.0; FeSO4 0.005; CaCl2 0.06; sodium citrate 0.3; yeast extract 0.1; sodium lactate 15 ml at pH 7 (Singh et al. 2011). Then master culture flask was purged with high-purity nitrogen to curtail the concentration of dissolved oxygen. The consortium was incubated for 2 months at 37 °C. The presence of black precipitates and foul odour of hydrogen sulphide was examined which indicates the presence of sulphate-reducing microbial consortia proficient in precipitating metal ions.
Experiments were carried out in batch mode using 120-mL serum vials containing 100 mL of modified Postgate growth medium with pH 7 and were sealed with aluminum crimps and butyl rubber stoppers. After that 2 mL supernatant of SRC (5000 mg/L VSS) was injected in serum vials and were incubated at 37 °C in static position and were flushed with pure nitrogen gas (99 %) to set up anaerobic environment. For optimization experiments, incubation time was adjusted according to the experiments designed by Box–Behnken design model as shown in Table 2. Minimum inhibitory concentration (MIC) of Pb(II) ions for isolated sulphate-reducing bacterial consortium was analysed by implementing heavy metals tolerance assay. Lead nitrate (PbNO3)2; 1.598 g was dissolved in double distilled water (1 L) to obtain Pb(II) solution at 1000 mg/L. It was used as stock solution. Pb(II) varied in concentration ranging from 10 to 110 mg/L. Optical density (OD), metal removal (%) and protein content were studied to find out the minimum inhibitory concentration of Pb(II) ions. Different carbon sources like lactose, sucrose, glucose and sodium lactate were optimized using modified Postgate growth medium to ascertain best electron donor for anaerobic sulphate reduction. 3.0 % of total carbon content was supplemented from each carbon source. MIC and carbon source optimization experiments were performed at pH 7, temperature 37 °C and incubation time 7 days. All experiments were conducted in triplicate and average values were determined.
Diagnostic techniques
At defined time intervals, 5 mL of sample was collected from cultures using sterile and N2-purged syringe for analysis. OD, redox potential (Eh) and pH of withdrawn samples was measured instantly using EUTECH Instruments (pH and Eh Tutor). To prepare the cell-free supernatant, samples in vials were centrifuged at 5000 rpm for 10 min at 4 °C. Supernatant was further used for investigation of other parameters. The soluble chemical oxygen demand in the supernatant was measured by the Spectralab COD Digester (2015 M) and COD Titrator (CT-15) using the platinum combined electrode. Lowry's method was performed to calculate protein content (Lowry et al. 1951; Bhatia et al. 2011) and OD was analysed at λ max 600 nm by UV–visible Spectrophotometer (T80 UV/VIS). Oxidation reduction potential (ORP) was monitored using pH 1500 Cyberscan (EUTECH Instruments). The ORP measurements as millivolt (mV) were carried out at room temperature. Total residual Pb(II) was quantified by atomic absorption spectrophotometer (Shimadzu AA-6300, Japan). Percent removal of Pb(II) was determined using the following equation:
$${\text{Removal}}\;\% = \frac{C_{i} - C_{f}}{C_{i}} \times 100,$$
where $C_{i}$ is the initial heavy metal concentration and $C_{f}$ is the final heavy metal concentration.
Morphology of Pb(II) sulphide precipitates was explored with scanning electron microscopy (SEM) using (JSM-E510LV, JEOL) in high-vacuum mode (accelerating voltage 10 kV) to study the morphology of metal loaded and unloaded biomass. The samples were prepared using phosphate buffer and 2 % glutaraldehyde and were kept overnight at 4 °C for fixation. Sodium cacodylate buffer (0.1 mol/L) was applied to wash the fixed granules and then samples were dewatered with a graded ethanol series, i.e. 10, 25, 50, 75, 90 and 100 %. Finally dewatered samples were dried to perform SEM.
Infrared analysis was accomplished to explain sorption mechanism to recognize the existing functional groups on SRC using PerkinElmer spectrum BX FTIR system (Beaconfield Buckinghamshire HP9 1QA) within range of 400–4000 cm−1 furnished with diffuse reflectance accessory. Fourier transform infrared (FTIR) measurements were carried out by the KBr technique. The samples were assorted with potassium bromide (KBr) and mounted beneath the spectrometer apparatus.
Box–Behnken design model
The optimization of biosorption process was performed using Box–Behnken design model (Bezerra et al. 2008) and was standardized on the basis of Design Expert software (Stat Ease, 9.0.4 trial version). In the present design, effect of individual variables, i.e. pH (5.0–9.0), temperature (32–42 °C) and time (5.0–9.0 days) were studied on responses, i.e. protein, Pb(II) precipitation and COD sequestration. Table 1 shows minimum and maximum level of each independent variable of the selected experimental design. Total 17 experiments were designed and conducted to study the simultaneous effect of these variables. The obtained responses and experiments run are shown in Table 2.
[Table 1. The experimental domain factors and coded levels for the Box–Behnken model: pH (5.0–9.0), temperature (32–42 °C) and contact time (5–9 days).]
[Table 2. Experimental design in terms of coded factors and results of the Box–Behnken model, with responses protein (mg/mL), Pb removal (%) and COD removal (%).]
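As a rough sketch (not from the paper) of how a three-factor, 17-run Box–Behnken matrix like the one summarized above can be written out in coded units, the snippet below builds the conventional 12 edge-midpoint runs plus 5 centre points; the factor ranges follow Table 1, and everything else is illustrative:

```python
from itertools import combinations, product

# Coded Box-Behnken design for k = 3 factors: each pair of factors at +/-1 with the
# third held at 0, plus replicated centre points (5 here, giving 17 runs in total).
def box_behnken(k: int = 3, center_points: int = 5):
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

# Map coded levels back to the actual ranges used in this study (Table 1).
levels = {"pH": (5.0, 7.0, 9.0), "temperature_C": (32, 37, 42), "time_days": (5.0, 7.0, 9.0)}

for run in box_behnken():
    actual = {name: vals[code + 1] for (name, vals), code in zip(levels.items(), run)}
    print(run, actual)
```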
To predict the most favourable conditions for removal of Pb(II) and COD, the quadratic model for the optimal point was expressed according to Eq. (2):
$$Y_{i} = a_{0} + \sum_{i = 1}^{k} a_{i} X_{i} + \sum_{i = 1}^{k} a_{ii} X_{i}^{2} + \sum_{i = 1}^{k - 1} \sum_{j = i + 1}^{k} a_{ij} X_{i} X_{j} + e,$$
where $Y_i$ (i = 1–3) is the predicted response, i.e. protein concentration (mg/mL) and % removal of Pb(II) ions and COD using the SRC biomass, $a_0$ is the constant coefficient, $X_i$ and $X_j$ are the coded experimental variables correlated to the response, $e$ is the error of the model and $k$ is the number of variables studied (Nair and Ahammed 2015). The second-order polynomial equation (Eq. (3)) includes the independent variables, coded as A, B and C, and is expressed as follows:
$$Y\;(\%\ \text{removal of metal ions}) = a_{0} + a_{1} A + a_{2} B + a_{3} C + a_{11} A^{2} + a_{22} B^{2} + a_{33} C^{2} + a_{12} AB + a_{13} AC + a_{23} BC$$
In the present study, protein concentration (mg/mL) and the percentage removals of Pb(II) and COD were analysed using Eq. (3), together with ANOVA, to obtain the interactions between the process variables and the responses. The quality of fit of the polynomial model was expressed by the coefficient of determination r², and the statistical significance was checked by the F test in the program.
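A minimal sketch of how a second-order polynomial of the form of Eq. (2) can be fitted by ordinary least squares; the design points and responses below are synthetic placeholders (loosely echoing the COD coefficients reported later), not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_design_matrix(X):
    """Columns: 1, A, B, C, A^2, B^2, C^2, AB, AC, BC for coded factors A, B, C."""
    A, B, C = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), A, B, C, A**2, B**2, C**2, A*B, A*C, B*C])

# 17 synthetic runs in coded units and a synthetic response (placeholders only)
X = rng.uniform(-1, 1, size=(17, 3))
true_coef = np.array([66.0, 14.5, 11.5, 10.5, -16.1, -7.6, 0.4, 9.3, 13.8, 5.8])
y = quadratic_design_matrix(X) @ true_coef + rng.normal(scale=1.0, size=17)

# Ordinary least-squares estimate of the Eq. (2) coefficients and the r^2 of the fit
D = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
r2 = 1 - np.sum((y - D @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
print(np.round(coef, 2), round(r2, 4))
```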
Minimum inhibitory concentration (MIC) of Pb(II) ions
The minimum inhibitory concentration (MIC) of Pb(II) ions for the SRC was investigated using Postgate growth medium modified with varied concentrations of the heavy metal ranging from 10 to 110 mg/L. The cultures were incubated at 37 °C for a period of 7 days to monitor growth of the SRB consortium. At 80 ppm of Pb(II) ions, the maximum OD, protein content and Pb(II) sequestration were found to be 0.98, 0.88 mg/mL and 99.8 %, respectively, as shown in Fig. 1. The MIC of Pb(II) ions for this consortium was therefore taken to be 80 ppm.
Fig. 1 Determination of the MIC for Pb(II) at pH 7, temperature 37 °C and contact time 7 days
Influence of electron donors on Pb(II) and COD removal
The sulphate-reducing bacterial consortium was cultured for Pb(II) removal. The modified Postgate growth medium was tested with different carbon sources, namely lactose, sucrose, glucose and sodium lactate, each at 3.0 % total carbon content. Experiments were conducted to compare the pH profile, ORP (oxidation reduction potential), soluble COD removal and Pb(II) removal efficiency in the presence of the four carbon sources (Fig. 2). The lowest Pb(II) removals, 59.2 and 70.3 %, were obtained with lactose and sucrose, respectively. The substrates lactose, sucrose and glucose were fermented and favoured the fermentative bacteria over the sulphate-reducing bacteria. Thus lactose, sucrose and glucose reduced the pH of the media and developed acidic conditions. This confirmed that fermentation had occurred when the consortium was supplemented with these carbon sources. The maximum Pb(II) removal achieved was 99.2 % with sodium lactate as carbon source. It can be concluded that when sodium lactate was used as the carbon source at temperature 37 °C and time 7 days, the ORP, pH and soluble COD removal attained were −398 mV, 8.06, and 55.5 %, respectively. Sodium lactate was therefore considered the most efficient carbon source for this sulphate-reducing bacterial consortium.
Fig. 2 Effect of carbon sources on various parameters at temperature 37 °C and contact time 7 days
Optimization with response surface methodology
The interactive effect of different variables, i.e. pH (5.0–9.0), temperature (32–42 °C) and time (5–9 days) on responses, protein concentration (mg/mL), COD removal and bioprecipitation of Pb(II) with SRB consortium were studied by Box–Behnken design matrix and results are described in Table 3.
[Table 3. Analysis of variance for the RSM variables fitted to the quadratic model, with columns df (degree of freedom), mean square, F value and Prob > F; rows cover the model terms, lack of fit and pure error, and r² and adjusted r² are reported for protein, COD removal and Pb(II) removal.]
Validation of response surface models and statistical analysis
Data analysis with the Box–Behnken design model identified the optimum conditions for COD removal and bioprecipitation of Pb(II) with the SRB consortium and also examined the interactive effects of the independent variables on the responses. On the basis of the quadratic polynomial equations (Eqs. (4)–(6)), the correlations between the independent variables and the responses were studied: the regression coefficients were calculated and the data were fitted to a second-order polynomial equation to evaluate protein concentration (mg/mL), % COD removal and bioprecipitation of Pb(II) with the SRB consortium.
$$\text{Protein concentration (mg/mL)} = 0.54 + 0.11A + 0.08B + 0.06C + 0.05AB + 0.05AC + 0.03BC - 0.12A^{2} - 0.12B^{2} - 0.06C^{2}$$
$$\%\ \text{COD removal} = 66 + 14.5A + 11.5B + 10.5C + 9.25AB + 13.75AC + 5.75BC - 16.125A^{2} - 7.625B^{2} + 0.375C^{2}$$
$$\%\ \text{Pb(II) removal} = 73.0 + 12.25A + 7.25B + 9.5C + 8.25AB + 13.75AC + 2.75BC - 11.625A^{2} - 4.625B^{2} - 0.625C^{2}$$
The results of the ANOVA for protein concentration and removal of COD and Pb(II) ions are given in Table 3. Values of Prob > F less than 0.0001 indicate that the model terms are significant for Pb(II) ion and COD sequestration. In this work the non-significant lack of fit (>0.05) confirms the adequacy of the fit and demonstrates that the quadratic model is quite satisfactory. In the experimental data, r² = 0.9706 and adjusted r² = 0.9327 for protein, r² = 0.9928 and adjusted r² = 0.9835 for COD, and r² = 0.9945 and adjusted r² = 0.9875 for Pb(II) are close to 1.0, which supports the good fit of the model to the investigated data.
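As a small illustrative sketch, the coefficient sets printed in Eqs. (4)–(6) can be evaluated at any coded factor setting. The linear mapping of pH, temperature and time onto the coded -1 to +1 scale from the ranges in Table 1 is the usual RSM convention and is an assumption of this sketch:

```python
def code(value, low, high):
    """Map an actual factor value onto the coded -1..+1 scale (assumed linear coding)."""
    return (value - (low + high) / 2) / ((high - low) / 2)

def quadratic(coeffs, A, B, C):
    a0, a1, a2, a3, a12, a13, a23, a11, a22, a33 = coeffs
    return (a0 + a1*A + a2*B + a3*C + a12*A*B + a13*A*C + a23*B*C
            + a11*A**2 + a22*B**2 + a33*C**2)

# Coefficient sets copied from Eqs. (4)-(6): (a0, A, B, C, AB, AC, BC, A^2, B^2, C^2)
protein = (0.54, 0.11, 0.08, 0.06, 0.05, 0.05, 0.03, -0.12, -0.12, -0.06)
cod     = (66.0, 14.5, 11.5, 10.5, 9.25, 13.75, 5.75, -16.125, -7.625, 0.375)
lead    = (73.0, 12.25, 7.25, 9.5, 8.25, 13.75, 2.75, -11.625, -4.625, -0.625)

# Example: pH 7, 37 degC, 7 days -> all coded factors are 0, so each prediction is the intercept.
A, B, C = code(7, 5, 9), code(37, 32, 42), code(7, 5, 9)
for name, coeffs in [("protein (mg/mL)", protein), ("COD removal (%)", cod), ("Pb(II) removal (%)", lead)]:
    print(name, round(quadratic(coeffs, A, B, C), 2))
```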
Optimization of variables for removal of Pb(II) and COD
The effect of independent variables pH (A), temperature (B) and contact time (C) on protein concentration, COD and Pb(II) removal with SRB consortia was studied using quadratic polynomial equations of response surface methodology (Eqs. (4)–(6)).
Independent variables, pH (A), temperature (B) and contact time (C), being crucial factors in bioprecipitation process, were studied intensively. As shown in Eqs. (4)–(6), it was analysed that all three independent variables have a linear positive effect on protein concentration, COD and Pb(II) elimination from aqueous solutions using SRB consortia.
First, pH (A) was an essential factor (P > 0.0001) and had a linear positive effect on protein concentration (mg/mL), biosorption of Pb(II) ions and COD removal from aqueous solution by the SRB consortium (Eqs. (4)–(6)). It can be concluded that with increase in pH the protein concentration increases, and sequestration of COD and Pb(II) is also enhanced with rise in pH. Both chemical and biological factors affect Pb(II) sequestration: when sulphate-reducing bacteria (SRB) convert sulphate into sulphide, the process generates bicarbonate ions and increases the alkalinity of the medium, which in turn provides favourable conditions for SRB to develop; the sulphide then combines with the heavy metal and forms metal sulphide precipitates.
Second, temperature (B) (P > 0.0001) significantly influences COD and Pb(II) ion removal. The independent variable temperature had a positive effect on protein concentration (mg/mL) and on COD and Pb(II) elimination (Eqs. (4)–(6)). The increase in bioprecipitation of Pb(II) ions with increasing temperature is due to an increase in sulphate removal efficiency, as the concentration of protein also increased with the rise in temperature.
Third, incubation time (C) (P > 0.0001) also plays a fundamental role in the removal of Pb(II) and had a positive effect on protein production, COD removal and Pb(II) precipitation by the sulphate-reducing bacterial consortium (Eqs. (4)–(6)). A shorter HRT may not allow adequate time for SRB activity to neutralize acidity and precipitate metals, whereas a longer HRT may imply depletion of either the available organic matter source or the sulphate source for the SRB (Dvorak et al. 1992; Singh et al. 2011).
The interactive effects of two independent variables, with the third variable at a fixed level, on protein concentration, biosorption of Pb(II) ions and COD removal with the SRB consortium are shown in the 3D surface plots (Figs. 3, 4, 5a–c).
Fig. 3 3D-surface plot showing the interactive effect of pH and temperature (°C) on: a protein, b % removal of Pb(II), c % removal of COD by the consortium of sulphate-reducing bacteria
Fig. 4 3D-surface plot showing the interactive effect of pH and time (days) on: a protein, b % removal of Pb(II), c % removal of COD by the consortium of sulphate-reducing bacteria
Fig. 5 3D-surface plot showing the interactive effect of time (days) and temperature (°C) on: a protein, b % removal of Pb(II), c % removal of COD by the consortium of sulphate-reducing bacteria
Figure 3a–c shows the interactive effect of two variables pH (A) 5.0–9.0 and temperature (B) 32–42 °C on protein concentration, biosorption of Pb(II) ions and COD removal. In Fig. 3a, protein concentration initially increased with increase of pH i.e. up to 7 and then decreased with further increase of pH. Similar pattern was observed in case of temperature also. Maximum protein concentration was found to be 0.62 mg/mL at pH 7 and temperature 37 °C. SRBs mostly grow in neutral conditions of pH 6–8, but inhibition is detected at pH values below 6 or higher than 9 (Widdel 1988; Johnson et al. 2009; Zhao et al. 2011; Moon et al. 2013). In Fig. 3b, the removal of Pb(II) was first increased with increase in pH, i.e. up to 7 and then decreased with further increase of pH but slight variation was observed in precipitation of Pb(II) with change in temperature, i.e. from 32 to 42 °C. Hoa et al. (2007), revealed similar results as optimum pH was in range of 7.5–8.5 for lead sulphide precipitation through biological sulphate reduction process. Alvarez et al. (2007) reported similar findings as pH 7.5–8.0 was found to be the optimum pH for lead sulphide precipitation. In Fig. 3c, removal of COD was increased with increase of pH up to 7 and later it follows a decreasing trend with further increase of pH. Maximum removal of COD and Pb(II) was observed to be 91 and 98 %, respectively, at pH 7.0 and temperature 37 °C. It can be concluded from the results that maximum growth of SRB and maximum reduction in parameters were investigated at pH 7 and temperature 37 °C.
Figure 4a–c presents interactive effect of pH (A) and time (C) on concentration of protein, % removal of Pb(II) ions and COD with SRC. In Fig. 4a, protein concentration initially increased with increase in pH i.e. up to 7 and then decreased with further increase in pH. But in case of time, initially there was slight increase in protein concentration up to 7th day and further decreased with increase in time period. Maximum protein concentration was found to be 0.62 at pH 5 and time 7 days. Figure 4b shows that removal of Pb(II) ions was first increased and then decreased with increase of pH from 5.0 to 9.0 and no significant affect was perceived with increase of time, i.e. up to 5th to 9th day, In Fig. 4c, no significant effect was noticed in removal of COD with time 5–9 days while with pH, initially slight increase was observed in COD removal but later decreased with further increase in pH. Maximum removal of COD and Pb(II) was observed at pH 7.0 and incubation time 7 days. Wang et al. 2008 reported similar results as reducing rate is high when pH values are between 6 and 8 and lead removal rate was found to be >88.2 % when the pH value is 8.
Figure 5a–c shows interactive effect of two variables: time (C) 5.0–9.0 days and temperature (B) 32–42 °C on quantity of protein, sequestration of Pb(II) and removal of COD. In Fig. 5a concentration of protein initially increased with increase of temperature, i.e. up to 38 °C, and then decreased with further increase of temperature. Maximum protein concentration was found to be 0.62 mg/mL at temperature 38 °C and time 7 days. In Fig. 5b it is observed that no significant change was found in sequestration of Pb(II) with increase in temperature, i.e. 32–42 °C. While gradual increase was noticed in % removal of Pb(II) with increase in incubation time i.e. 5–9 days. Maximum Pb(II) ions removal was found to be 98 % at temperature 37 °C and time 7 days. In Fig. 5c, removal of COD was slightly increased with increase of temperature up to 37 °C and later it was almost stable with further increase of temperature. But with time COD removal followed an increasing trend with increase in incubation time, i.e. from 5 to 9 days. Maximum removal of COD was observed to be 91 % at pH 7.0 and time 7 days.
Characterization of Pb(II) sulphide precipitates
SEM was performed to further characterize the lead sulphide (PbS) precipitates from metal-free and metal-loaded biomass of the SRC, as shown in Fig. 6. The SEM images revealed that the metal-loaded biomass had a very disordered surface morphology with no definite pattern, as shown in Fig. 6b. It can thus be inferred that the Pb(II) ions had a profound effect on the surface of the SRC biomass.
Fig. 6 Typical SEM micrograph of the SRB consortium: a Pb(II)-unloaded biomass, b Pb(II)-loaded biomass
A Fourier transform infrared spectroscopy (FTIR) study was carried out to identify the functional groups present in the SRC in the range 4000–400 cm⁻¹. The biosorbent capacity of the SRC depends upon the chemical reactivity of the functional groups at the biomass surface. Figure 7 shows the shift in the wavelength of the dominant peaks obtained by comparing the control, i.e. lead-free biomass, with the lead-loaded biomass. The metal-binding process took place by precipitation of the metal and also at the surface of the SRC, as shown by the shifts in wavelength. Two peaks at 604.19 and 564.25 cm⁻¹ disappeared and the peak at 872.28 cm⁻¹ was flattened, showing that the sulphate group was strongly involved in the adsorption of lead by the adsorbent. The 1000–1400 cm⁻¹ absorption band corresponds to –CH3, –CH2–, and C–F groups, whereas the 750–1000 cm⁻¹ band corresponds to S=O, –C–C–, and C–Cl functional groups.
Fig. 7 Fourier transform infrared absorption spectrum of Pb(II) precipitation: a Pb(II)-loaded biomass and b Pb(II)-free biomass
In this study, the MIC of Pb(II) ions was found to be 80 ppm. Using the Box–Behnken design, it was concluded that the combination of pH, temperature and incubation time had a significant effect on Pb(II) precipitation and COD removal. The maximum responses were found to be 0.62 mg/mL protein concentration, 98 % Pb(II) removal and 91 % COD removal at the optimum settings of the independent variables, namely pH 7, temperature 37 °C and incubation time 7 days. From the significant model and the mathematical evaluation, this study concluded that the RSM approach is an effective and efficient way to optimize the biosorption process.
The author Anamika Verma is grateful to University Grant Commission, New Delhi, for awarding Basic Scientific Research (BSR) fellowship as Senior Research Fellowship (SRF) for this investigation.
Alvarez MT, Crespo C, Mattiasson B (2007) Precipitation of Zn(II), Cu(II) and Pb(II) at bench-scale using biogenic hydrogen sulfide from the utilization of volatile fatty acids. Chemosphere 66:1677–1683CrossRefGoogle Scholar
Anayurt RA, Sari A, Tuzen M (2009) Equilibrium, thermodynamic and kinetic studies on biosorption of Pb(II) and Cd(II) from aqueous solution by macrofungus (Lactarius scrobiculatus) biomass. Chem Eng J 151:255–261CrossRefGoogle Scholar
Barka N, Abdennouri M, Makhfouk ME, Qourzal S (2013) Biosorption characteristics of cadmium and lead onto eco-friendly dried cactus (Opuntia ficus indica) cladodes. J Environ Chem Eng 1:144–149CrossRefGoogle Scholar
Barrera EL, Spanjers H, Romero O, Rosa E, Dewulf J (2014) Characterization of the sulfate reduction process in the anaerobic digestion of a very high strength and sulfate rich vinasse. Chem Eng J 248:383–393CrossRefGoogle Scholar
Bezerra MA, Santelli RE, Oliveira EP, Villar LS, Escaleira LA (2008) Response surface methodology (RSM) as a tool for optimization in analytical chemistry. Talanta 76:965–977CrossRefGoogle Scholar
Bhatia D, Kumar R, Singh R, Chadetrik R, Bishnoi NR (2011) Statistical modelling and optimization of substrate composition for bacterial growth and cadmium removal using response surface methodology. Ecol Eng 37:2076–2081CrossRefGoogle Scholar
Bratkova S, Koumanova B, Beschkov V (2013) Biological treatment of mining wastewaters by fixed-bed bioreactors at high organic loading. Bioresour Technol 137:409–413CrossRefGoogle Scholar
Bridge TAM, White C, Gadd GM (1999) Extracellular metal-binding activity of the sulphate-reducing bacterium Desulfococcus multivorans. Microbiology 145:2987–2995CrossRefGoogle Scholar
Cibati A, Cheng KY, Morris C, Ginige MP, Sahinkaya E, Pagnanelli F, Kaksonen AH (2013) Selective precipitation of metals from synthetic spent refinery catalyst leach liquor with biogenic H2S produced in a lactate-fed anaerobic baffled reactor. Hydrometallurgy 139:154–161CrossRefGoogle Scholar
Cirik K, Dursun N, Sahinkaya E, Cinar O (2013) Effect of electron donor source on the treatment of Cr(VI) containing textile wastewater using sulfate-reducing fluidized bed reactors (FBRs). Bioresour Technol 133:414–420CrossRefGoogle Scholar
Dauvin JC (2008) Effects of heavy metal contamination on the macrobenthic fauna in estuaries: the case of the Seine estuary. Mar Pollut Bull 57:160–167CrossRefGoogle Scholar
Demir A, Arisoy M (2007) Biological and chemical removal of Cr(VI) from waste water: cost and benefit analysis. J Hazard Mater 147:275–280CrossRefGoogle Scholar
Dvorak DH, Hedin RS, Edenborn HM, McIntire PE (1992) Treatment of metal contaminated water using bacterial sulfate reduction: results from pilot-scale reactors. Biotechnol Bioeng 40:609–616CrossRefGoogle Scholar
Flora SJS, Mittal M, Mehta A (2008) Heavy metal induced oxidative stress and its possible reversal by chelation therapy. Indian J Med Res 128:501–523Google Scholar
Ghazy SE, Gad AHM (2014) Lead separation by sorption onto powdered marble waste. Arabian J Chem 7:277–286CrossRefGoogle Scholar
Hao TW, Xiang PY, Mackey HR, Chi K, Lu H, Chui HK, Loosdrecht MCMV, Chen GH (2014) A review of biological sulfate conversions in wastewater treatment. Water Resour 65:1–21CrossRefGoogle Scholar
Hashim MA, Mukhopadhyay S, Sahu JN, Sengupta B (2011) Remediation technologies for heavy metal contaminated groundwater. J Environ Manage 92:2355–2388CrossRefGoogle Scholar
Hoa TTH, Liamleam W, Annachhatre AP (2007) Lead removal through biological sulfate reduction process. Bioresour Technol 98:2538–2548CrossRefGoogle Scholar
Javanbakht V, Zilouei H, Karimi K (2011) Lead biosorption by different morphologies of fungus Mucor indicus. Int Biodeterior Biodegrad 65:294–300CrossRefGoogle Scholar
Johnson DB, Jameson E, Rowe OF, Wakeman K, Hallberg KB (2009) Sulfidogenesis at low pH by acidophilic bacteria and its potential for the selective recovery of transition metals from mine water. Adv Mater Res 71–73:693–696CrossRefGoogle Scholar
Kumar N, Omoregie EO, Rose J, Masion A, Lloyd JR, Diels L, Bastiaens L (2014) Inhibition of sulfate reducing bacteria in aquifer sediment by iron nanoparticles. Water Res 51:64–72CrossRefGoogle Scholar
Lee DJ, Liu X, Weng HL (2014) Sulfate and organic carbon removal by microbial fuel cell with sulfate-reducing bacteria and sulfide-oxidising bacteria anodic biofilm. Bioresour Technol 156:14–19CrossRefGoogle Scholar
Lombardi PE, Peri SI, Verrengia NR (2010) ALA-D and ALA-D reactivated as biomarkers of lead contamination in the fish Prochilodus lineatus. Ecotoxicol Environ Saf 73:1704–1711CrossRefGoogle Scholar
Lowry OH, Rosebrough NJ, Farr AL, Randall RJ (1951) Protein measurement with the folin phenol reagent. J Biol Chem 193:265–275Google Scholar
Moon C, Singh R, Chaganti SR, Lalman JA (2013) Modeling sulfate removal by inhibited mesophilic mixed anaerobic communities using a statistical approach. Water Res 47:2341–2351CrossRefGoogle Scholar
Naik MM, Dubey SK (2013) Lead resistant bacteria: lead resistance mechanisms, their applications in lead bioremediation and biomonitoring. Ecotoxicol Environ Saf 98:1–7CrossRefGoogle Scholar
Nair AT, Ahammed MM (2015) The reuse of water treatment sludge as a coagulant for post-treatment of UASB reactor treating urban wastewater. J Clean Prod 96:272–281CrossRefGoogle Scholar
Pagnanelli F, Viggi CC, Toro L (2010) Isolation and quantification of cadmium removal mechanisms in batch reactors inoculated by sulphate reducing bacteria: biosorption versus bioprecipitation. Bioresour Technol 101:2981–2987CrossRefGoogle Scholar
Rasool K, Woo SH, Lee DS (2013) Simultaneous removal of COD and Direct Red 80 in a mixed anaerobic sulfate-reducing bacteria culture. Chem Eng J 223:611–616CrossRefGoogle Scholar
Sahinkaya E, Gunes FM, Ucar D, Kaksonen AH (2011) Sulfidogenic fluidized bed treatment of real acid mine drainage water. Bioresour Technol 102:683–689CrossRefGoogle Scholar
Sahu MK, Mandal S, Dash SS, Badhai P, Patel RK (2013) Removal of Pb(II) from aqueous solution by acid activated red mud. J Environ Chem Eng 1:1315–1324CrossRefGoogle Scholar
Sanchez-Andrea I, Sanz JL, Bijmans MFM, Stams AJM (2014) Sulfate reduction at low pH to remediate acid mine drainage. J Hazard Mater 269:98–109CrossRefGoogle Scholar
Satyawali Y, Seuntjens P, Van Roy S, Joris I, Vangeel S, Dejonghe W, Vanbroekhoven K (2011) The addition of organic carbon and nitrate affects reactive transport of heavy metals in sandy aquifers. J Contam Hydrol 123:83–93CrossRefGoogle Scholar
Shahid M, Pinelli E, Dumat C (2012) Review of Pb availability and toxicity to plants in relation with metal speciation; role of synthetic and natural organic ligands. J Hazard Mater 219:1–12CrossRefGoogle Scholar
Singh R, Kumar A, Kirrolia A, Kumar R, Yadav N, Bishnoi NR, Lohchab RK (2011) Removal of sulphate, COD and Cr(VI) in simulated and real wastewater by sulphate reducing bacteria enrichment in small bioreactor and FTIR study. Bioresour Technol 102:677–682CrossRefGoogle Scholar
Verma A, Shalu Singh A, Bishnoi NR, Gupta A (2013) Biosorption of Cu (II) using free and immobilized biomass of Penicillium citrinum. Ecol Eng 61:486–490CrossRefGoogle Scholar
Verma A, Dua R, Singh A, Bishnoi NR (2015) Biogenic sulfides for sequestration of Cr(VI), COD and sulfate from synthetic wastewater. Water Sci 29:19–25CrossRefGoogle Scholar
Wang QL, Ding DX, Hue M, Yuran L, Qiu GZ (2008) Removal of SO4 2−, uranium and other heavy metal ions from simulated solution by sulfate reducing bacteria. Trans Nonferrous Met Soc China 18:1529–1532CrossRefGoogle Scholar
Wang J, Li Q, Li MM, Chen TH, Zhou YF, Yue ZB (2014) Competitive adsorption of heavy metal by extracellular polymeric substances (EPS) extracted from sulfate reducing bacteria. Bioresour Technol 163:374–376CrossRefGoogle Scholar
Watt GCM, Britton A, Gilmour HG, Moore MR, Murray GD, Robertson SJ (2000) Public health implications of new guidelines for lead in drinking water: a case study in an area with historically high water lead levels. Food Chem Toxicol 38:73–79CrossRefGoogle Scholar
Widdel F (1988) Microbiology and ecology of sulfate-and sulfur-reducing bacteria. In: Zehnder AJB (ed) Biology of Anaerobic Microorganisms. Wiley Interscience, New York, pp 469–585Google Scholar
Yurtsever M, Sengil IA (2008) Biosorption of Pb(II) ions by modified quebracho tannin resin. J Hazard Mater 163:58–64CrossRefGoogle Scholar
Zewaila TM, Yousef NS (2015) Kinetic study of heavy metal ions removal by ion exchange in batch conical air spouted bed. Alex Eng J 54:83–90CrossRefGoogle Scholar
Zhao CQ, Yang QH, Chen WY, Li H, Zhang H (2011) Isolation of a sulfate reducing bacterium and its application in sulfate removal from tannery wastewater. Afr J Biotechnol 10:11966–11971CrossRefGoogle Scholar
1. Department of Environmental Science and Engineering, Guru Jambheshwar University of Science and Technology, Hisar, India
Verma, A., Bishnoi, N.R. & Gupta, A. Appl Water Sci (2017) 7: 2309. https://doi.org/10.1007/s13201-016-0402-7
Received 25 August 2015
Accepted 10 March 2016
First Online 06 April 2016
The Law of Cosines (Independent of the Pythagorean Theorem)
The Law of Cosines (interchangeably known as the Cosine Rule or Cosine Law) can be shown to be a consequence of the Pythagorean theorem, of which it is a generalization. Euclid proved his variant of the Law of Cosines in two propositions: II.12 for obtuse angles and II.13 for acute ones. He used the Pythagorean theorem in both cases. For this reason, I expressed interest in whether it is possible to prove the Law of Cosines independently of the latter.
My inquiry was rejected on the grounds that the two - the Pythagorean theorem and the Law of Cosines - either both hold (in Euclidean geometry) or both fail (in spherical or hyperbolic geometries), implying their dependence on each other. However, John Molokach came up with a proof of the Law of Cosines (see below) which does not appear to rely on the Pythagorean theorem. How can this be explained?
There is no paradox here. The two statements - the Pythagorean theorem and the Law of Cosines - are indeed equivalent. In any given geometry, they are either both true or both false. However, both may stem (quite independently) from another - perhaps, but not necessarily, more fundamental - proposition.
For a triangle with sides $a$, $b$, and $c$ and the angle $\gamma$ opposite the side $c$, one has
$c^{2} = a^{2} + b^{2} - 2ab\cdot\cos\gamma.$
John Molokach
In any triangle with sides $a,$ $b,$ $c$ and opposite angles $\alpha ,$ $\beta ,$ $\gamma ,$ we have three identities:
$\begin{align} a &= b\cdot\cos\gamma + c\cdot\cos\beta\\ b &= c\cdot\cos\alpha + a\cdot\cos\gamma \\ c &= a\cdot\cos\beta + b\cdot\cos\alpha . \end{align}$
The identities are obtained by drawing the altitudes - one at a time - and applying the definition of the cosine in the two right triangles so obtained. For each side, Euclid would consider two cases, as he did in II.12 and II.13. With the extension of the cosine function beyond its original definition for acute angles, we may combine the acute and obtuse cases in one identity.
Let's multiply the first identity by $a,$ the second by $b,$ and the third by $c,$ and subtract the first two from the third:
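Written out, the three scaled identities read

$\begin{align} a^{2} &= ab\cdot\cos\gamma + ac\cdot\cos\beta\\ b^{2} &= bc\cdot\cos\alpha + ab\cdot\cos\gamma\\ c^{2} &= ac\cdot\cos\beta + bc\cdot\cos\alpha . \end{align}$

Subtracting the first two from the third, the terms $ac\cdot\cos\beta$ and $bc\cdot\cos\alpha$ cancel in pairs, leaving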
$c^{2} - a^{2} - b^{2} = -2ab\cdot\cos\gamma ,$
which is exactly the Law of Cosines.
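A quick numerical sanity check of the argument (a sketch added here, not part of the original note): take an arbitrary triangle from its vertex coordinates, verify the three projection identities, and confirm that their combination gives the Law of Cosines.

```python
import math

# An arbitrary triangle given by its vertices (any non-degenerate choice works)
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

def angle(P, Q, R):
    """Angle at vertex P of triangle PQR, from the dot product of the edge vectors."""
    u = (Q[0] - P[0], Q[1] - P[1])
    v = (R[0] - P[0], R[1] - P[1])
    return math.acos((u[0]*v[0] + u[1]*v[1]) / (dist(P, Q) * dist(P, R)))

a, b, c = dist(B, C), dist(C, A), dist(A, B)                          # sides opposite A, B, C
alpha, beta, gamma = angle(A, B, C), angle(B, C, A), angle(C, A, B)   # angles at A, B, C

# The three projection identities used in the proof
assert math.isclose(a, b*math.cos(gamma) + c*math.cos(beta))
assert math.isclose(b, c*math.cos(alpha) + a*math.cos(gamma))
assert math.isclose(c, a*math.cos(beta) + b*math.cos(alpha))

# ...and the Law of Cosines that they combine into
assert math.isclose(c*c, a*a + b*b - 2*a*b*math.cos(gamma))
print("all identities hold for this triangle")
```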
The Law of Cosines (Cosine Rule)
The Illustrated Law of Cosines
The Law of Sines and Cosines
The Law of Cosines: Plane Tessellation
The Law of Cosines: after Thâbit ibn Qurra
The Law of Cosines: Unfolded Version
The Cosine Law by Similarity
The Law of Cosines by Larry Hoehn
The Law of Cosines - Another PWW
The Law of Cosines - Yet Another PWW
Law of Cosines by Ancient Sliding
The Cosine Law: PWW by S. Kung
The popular puzzle Rubik's cube invented in 1974 by Ernő Rubik has been used as an illustration of permutation groups.
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such as crystals and the hydrogen atom, can be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
One of the most important mathematical achievements of the 20th century[1] was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 1980, that culminated in a complete classification of finite simple groups.
Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry. The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term "group" and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program proclaimed group theory to be the organizing principle of geometry.
Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations. In an attempt to come to grips with possible geometries (such as euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were, at first implicitly and later explicitly, used in algebraic number theory.
The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups.
Main classes of groups
The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations.
Permutation groups
The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself (X = G) by means of the left regular representation.
In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals.
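As a small illustration (a sketch in Python, not from the article), the permutations of a three-element set form a group under composition; the snippet spells out composition, inversion and closure, the ingredients behind the symmetric group Sn and Cayley's left regular representation.

```python
from itertools import permutations

n = 3
X = tuple(range(n))
S_n = list(permutations(X))          # all n! permutations of X, as tuples p with p[i] = image of i

def compose(p, q):
    """(p o q)(i) = p(q(i)) -- apply q first, then p."""
    return tuple(p[q[i]] for i in X)

def inverse(p):
    inv = [0] * n
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

identity = X
# Closure and inverses: the defining properties of a permutation group
assert all(compose(p, q) in S_n for p in S_n for q in S_n)
assert all(compose(p, inverse(p)) == identity for p in S_n)

# Cayley's observation: each g also acts on the group itself by left translation h -> g*h,
# which realizes the abstract group as a permutation group on its own elements.
print(len(S_n), "elements in S_3")
```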
Matrix groups
The next important class of groups is given by matrix groups, or linear groups. Here G is a set consisting of invertible matrices of given order n over a field K that is closed under the products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G.
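For a concrete feel (an assumed example, not from the article), rotation matrices form a matrix group inside GL(2, R): they are closed under products and inverses and act on the vector space R² by linear transformations.

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

R1, R2 = rotation(0.3), rotation(1.1)

# Closure: the product of two rotations is the rotation by the summed angle
assert np.allclose(R1 @ R2, rotation(0.3 + 1.1))
# Inverses: rotating back by -theta undoes the transformation
assert np.allclose(R1 @ rotation(-0.3), np.eye(2))

# Group action on the vector space R^2
v = np.array([2.0, 0.0])
print(R1 @ v)   # the image of v under the rotation by 0.3 rad
```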
Transformation groups
Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related with the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure.
The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous.
Abstract groups
Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group as a set with operations satisfying a certain system of axioms began to take hold. A typical way of specifying an abstract group is through a presentation by generators and relations,
G = ⟨S | R⟩.
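As a hedged illustration (not from the article), the relations in a presentation can be checked against a concrete realization: the dihedral group of the square, commonly presented as ⟨r, s | r⁴ = s² = (rs)² = 1⟩, is realized below by two integer matrices and the defining relations are verified numerically.

```python
import numpy as np

r = np.array([[0, -1], [1, 0]])   # rotation by 90 degrees
s = np.array([[1, 0], [0, -1]])   # reflection across the x-axis
I = np.eye(2, dtype=int)

# The defining relations of the presentation <r, s | r^4 = s^2 = (rs)^2 = 1>
assert np.array_equal(np.linalg.matrix_power(r, 4), I)
assert np.array_equal(s @ s, I)
assert np.array_equal(np.linalg.matrix_power(r @ s, 2), I)
print("relations hold: these two matrices generate a copy of the dihedral group of order 8")
```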
A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy.
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of group with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school.
Topological and algebraic groups
An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the group operations m (multiplication) and i (inversion),
m : G × G → G, (g, h) ↦ gh, and i : G → G, g ↦ g⁻¹,
are compatible with this structure, i.e. are continuous, smooth or regular (in the sense of algebraic geometry) maps then G becomes a topological group, a Lie group, or an algebraic group.[2]
The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus, compact connected Lie groups have been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients.
Branches of group theory
Finite group theory
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups. As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields. Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry.
Representation of groups
Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism:
ρ : G → GL(V),
where GL(V) consists of the invertible linear transformations of V. In other words, to every group element g is assigned an automorphism ρ(g) such that ρ(g) ∘ ρ(h) = ρ(gh) for any h in G.
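For example, the cyclic group of order 4 generated by an element g has a two-dimensional real representation sending g^k to the rotation of the plane by 90k degrees,
$$\rho(g^k) = \begin{pmatrix} \cos(k\pi/2) & -\sin(k\pi/2) \\ \sin(k\pi/2) & \cos(k\pi/2) \end{pmatrix}, \qquad k = 0, 1, 2, 3,$$
and composing two such rotations adds their angles, so ρ is indeed a homomorphism.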
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics.[3] On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit.[4] On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts. These parts in turn are much more easily manageable than the whole V (via Schur's lemma).
Given a group G, representation theory then asks what representations of G exist. There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions.
Lie theory
A Lie group is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse, page 3.[5]
Lie groups represent the best-developed theory of continuous symmetry of mathematical objects and structures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for modern theoretical physics. They provide a natural framework for analysing the continuous symmetries of differential equations (differential Galois theory), in much the same way as permutation groups are used in Galois theory for analysing the discrete symmetries of algebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations.
Combinatorial and geometric group theory
Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible multiplications g • h. A more compact way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators {gi}i ∈ I, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by 〈F | D 〉. For example, the group Z = 〈a | 〉 can be generated by one element a (equal to +1 or −1) and no relations, because n · 1 never equals 0 unless n is zero. A string consisting of generator symbols and their inverses is called a word.
Combinatorial group theory studies groups from the perspective of generators and relations.[6] It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection of graphs via their fundamental groups. For example, one can show that every subgroup of a free group is free.
There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element. By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task. Another, generally harder, algorithmically insoluble problem is the group isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example the additive group Z of integers can also be presented by
〈x, y | xyxyx = e〉;
it may not be obvious that these groups are isomorphic.[7]
The Cayley graph of ⟨ x, y ∣ ⟩, the free group of rank 2.
Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on.[8] The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of the minimal path between the elements. A theorem of Milnor and Svarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, then G is quasi-isometric (i.e., looks similar from afar) to the space X.
Connection of groups and symmetry
Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example
If X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups.
If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry). The corresponding group is called the isometry group of X.
If instead angles are preserved, one speaks of conformal maps. Conformal maps give rise to Kleinian groups, for example.
Symmetries are not restricted to geometrical objects, but include algebraic objects as well. For instance, the equation
$$x^2 - 3 = 0$$
has the two solutions $+\sqrt{3}$ and $-\sqrt{3}$. In this case, the group that exchanges the two roots is the Galois group belonging to the equation. Every polynomial equation in one variable has a Galois group, that is a certain permutation group on its roots.
The axioms of a group formalize the essential aspects of symmetry. Symmetries form a group: they are closed because if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry, and associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object.
The notion of "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question.
Applications of group theory
Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore group theoretic arguments underlie large parts of the theory of those entities.
Galois theory
Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group on 5 elements, is not solvable, which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory.
Algebraic topology
Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces, which are spaces with prescribed homotopy groups. Similarly, algebraic K-theory relies in a crucial way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory.
A torus. Its abelian group structure is induced from the map C → C/(Z + τZ), where τ is a parameter living in the upper half plane.
The cyclic group Z26 underlies Caesar's cipher.
Algebraic geometry and cryptography
Algebraic geometry and cryptography likewise use group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures.[9] The one-dimensional case, namely elliptic curves, is studied in particular detail. They are both theoretically and practically intriguing.[10] Very large groups of prime order constructed in elliptic-curve cryptography serve for public-key cryptography. Cryptographic methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which makes the discrete logarithm very hard to calculate. One of the earliest encryption protocols, Caesar's cipher, may also be interpreted as a (very simple) group operation. In another direction, toric varieties are algebraic varieties acted on by a torus. Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities.[11]
Algebraic number theory
Algebraic number theory makes essential use of groups. For example, Euler's product formula
$$\sum_{n \ge 1} \frac{1}{n^s} = \prod_{p\ \text{prime}} \frac{1}{1 - p^{-s}}$$
captures the fact that any integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem.
Analysis on Lie groups and certain other groups is called harmonic analysis. Haar measures, that is, integrals invariant under the translation in a Lie group, are used for pattern recognition and other image processing techniques.[12]
In combinatorics, the notion of permutation group and the concept of group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma.
The circle of fifths may be endowed with a cyclic group structure
The presence of the 12-periodicity in the circle of fifths yields applications of elementary group theory in musical set theory.
In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. According to Noether's theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group.
Chemistry and materials science
In chemistry and materials science, groups are used to classify crystal structures, regular polyhedra, and the symmetries of molecules. The assigned point groups can then be used to determine physical properties (such as polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy and infrared spectroscopy), and to construct molecular orbitals.
Molecular symmetry is responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group to any given molecule, it is necessary to find the set of symmetry operations present in it. A symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane, that moves the molecule into an orientation indistinguishable from the original one; in other words, the operation leaves the molecule apparently unchanged. The rotation axes and mirror planes involved are called symmetry elements in group theory; these elements can be a point, a line, or a plane with respect to which the symmetry operation is carried out. When all the symmetry operations of a molecule are known, one can determine the specific point group of that molecule.
For chemistry purposes, there are five important symmetry operations. The identity (E) consists of doing nothing to the molecule; equivalently, it can be viewed as a rotation of 360 degrees around any axis. All molecules have this operation, and some molecules have only the identity as a symmetry operation. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle. For example, the water molecule has a rotation axis that passes through the oxygen atom; if the molecule is rotated by 180 degrees about this axis, it remains unchanged. In this case the operation is denoted C2, because the order of the rotation is n = 360/180 = 2. Other symmetry operations are reflection, inversion, and improper rotation (rotation followed by reflection).[13]
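For example, the water molecule has exactly four symmetry operations: the identity E, the C2 rotation described above, and two reflections σv(xz) and σv′(yz) through mirror planes containing the C2 axis; it therefore belongs to the point group C2v.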
Group (mathematics)
Glossary of group theory
List of group theory topics
↑ * Elwes, Richard, "An enormous theorem: the classification of finite simple groups," Plus Magazine, Issue 41, December 2006.
↑ This process of imposing extra structure has been formalized through the notion of a group object in a suitable category. Thus Lie groups are group objects in the category of differentiable manifolds and affine algebraic groups are group objects in the category of affine algebraic varieties.
↑ Such as group cohomology or equivariant K-theory.
↑ In particular, if the representation is faithful.
↑ {{#invoke:Citation/CS1|citation |CitationClass=journal }}
↑ Template:Harvnb
↑ Writing z = xy, one has G = 〈z, y | z3 = y〉 = 〈z〉.
↑ For example the Hodge conjecture (in certain cases).
↑ See the Birch-Swinnerton-Dyer conjecture, one of the millennium problems
↑ {{#invoke:citation/CS1|citation |CitationClass=citation }}
↑ Shriver, D.F.; Atkins, P.W. Química Inorgânica, 3ª ed., Porto Alegre, Bookman, 2003.
Ronan M., 2006. Symmetry and the Monster. Oxford University Press. ISBN 0-19-280722-6. For lay readers. Describes the quest to find the basic building blocks for finite groups.
History of the abstract group concept
Higher dimensional group theory This presents a view of group theory as level one of a theory which extends in all dimensions, and has applications in homotopy theory and to higher dimensional nonabelian methods for local-to-global problems.
Plus teacher and student package: Group Theory This package brings together all the articles on group theory from Plus, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge, exploring applications and recent breakthroughs, and giving explicit definitions and examples of groups.
US Naval Academy group theory guide A general introduction to group theory with exercises written by Tony Gaglione.
Weak law of large numbers - redundant?
I might be missing something basic - but it appears that the strong law of large numbers covers the weak law. If that's the case, why is the weak law needed?
probability law-of-large-numbers
amoeba says Reinstate Monica
$\begingroup$ The strong law indeed implies the weak law, but the weak law is easier to prove. See here: terrytao.wordpress.com/2008/06/18/… $\endgroup$ – S. Kolassa - Reinstate Monica Jan 9 '13 at 13:20
$\begingroup$ math.stackexchange.com/questions/13421/… $\endgroup$ – Scortchi - Reinstate Monica♦ Jan 9 '13 at 13:28
$\begingroup$ stats.stackexchange.com/questions/72859/… $\endgroup$ – kjetil b halvorsen Nov 28 '19 at 2:03
The most general case of the Weak Law of Large Numbers does not even require the existence of first moments. Therefore, it holds under conditions/assumptions more general than the conditions/assumptions required for the Strong Law of Large Numbers (existence of first moments).
Allow me to quote for you the relevant results from Durrett, Probability: Theory and Examples (4th edition), so you can see the truth of the above statement for yourself.
(p.60) Theorem 2.2.7 Weak Law of Large Numbers Let $X_1, X_2, \dots$ be i.i.d. with $$x \mathbb{P}(|X_i|>x) \to 0 \quad \text{as} \quad x \to \infty \quad (\text{for all } i=1,2,\dots)$$ Let $S_n = X_1 + \dots + X_n$ and $\mu_n = \mathbb{E}[X_1 1_{\{|X_1| \le n\}}]$. Then $S_n/n - \mu_n \to 0$ in probability.
The condition, for each $X_i$ in the sequence of random variables, that $x \mathbb{P}(|X_i| > x) \to 0$ as $x \to \infty$ is strictly weaker than the existence of first moments -- i.e., there exist i.i.d. sequences of random variables which satisfy this condition but which do not have finite first moments. For an example, see the previous answer above.
(p.73) Theorem 2.4.1. Strong Law of Large Numbers Let $X_1, X_2, \dots$ be pairwise independent identically distributed random variables with $\mathbb{E}|X_i| < \infty$ (for all $i = 1, 2, \dots$). Let $\mathbb{E}X_i = \mu$ and $S_n = X_1 + \dots + X_n$. Then $S_n/n \to \mu$ almost surely as $n \to \infty$.
Theorem 2.4.5. on p.75 is the Strong Law for the case that the first moment exists but is not finite.
Both results (the Weak Law of Large Numbers and the Strong Law of Large Numbers) are a lot easier to prove if/when we assume that the random variables have finite variance (second moments), but such an assumption is unnecessary for both results.
So, in conclusion, the Weak Law of Large Numbers is not redundant, because although its conclusion is weaker than that of the Strong Law of Large Numbers, it is true "more often" (i.e. under more general conditions) than the Strong Law of Large Numbers. So even when the Strong Law doesn't hold, the Weak Law may still hold.
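As a quick illustration of why the moment condition matters (a simulation sketch in Python/NumPy; the sample sizes and checkpoints are arbitrary choices, not from Durrett): running sample means of a finite-mean distribution settle down near the mean, whereas running sample means of a Cauchy distribution, which has no mean, never settle.

import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Exponential(1): E|X| < infinity, so both laws apply and S_n / n -> 1.
x_exp = rng.exponential(1.0, size=n)
# Standard Cauchy: the first moment does not exist, so neither law applies.
x_cauchy = rng.standard_cauchy(size=n)

for label, x in [("exponential", x_exp), ("cauchy", x_cauchy)]:
    running_mean = np.cumsum(x) / np.arange(1, n + 1)
    checkpoints = [10**3, 10**4, 10**5, 10**6]
    # Print the running sample mean at a few checkpoints.
    print(label, [round(float(running_mean[k - 1]), 3) for k in checkpoints])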
Chill2Macht
$\begingroup$ we know that the condition you state for the weak law is necessary, but are the conditions that the RVs are pairwise iid and the mean exists necessary for the strong law (in the sense of the conclusion of thm 2.2.7 with "in probability" replaced by "almost surely")? $\endgroup$ – user795305 Jun 30 '17 at 17:12
$\begingroup$ No, pairwise iid is not, you can substitute other conditions (the Kolmogorov criterion, for example, for independent variates with different variances (therefore not iid): $\sum \sigma_k^2/k^2$ converges). There are also conditions for correlated variates. It also doesn't require a mean: ams.org/journals/tran/1973-185-00/S0002-9947-1973-0336806-5/…, but again, other conditions need to hold. $\endgroup$ – jbowman Jun 30 '17 at 17:21
$\begingroup$ @Ben The mean has to exist, because the mean doesn't exist for the Cauchy distribution, and neither the SLLN nor the WLLN holds for the Cauchy distribution. I don't know about pairwise independence. The proof given in Durrett is due to Etemadi (1981). In 1997 Etemadi published a paper in which he claimed that in the 1981 paper it was shown that the SLLN holds if and only if $\mathbb{E}(|X_i|) < \infty$, i.e., this condition is necessary and sufficient. sciencedirect.com/science/article/pii/S016771529600123X Note that the "SLLN" in Durrett for mean existing but not finite isn't the same conclusion as SLLN $\endgroup$ – Chill2Macht Jun 30 '17 at 17:22
$\begingroup$ The version of the WLLN above is called the Feller WLLN or the Kolmogorov-Feller WLLN, see: www2.stat.duke.edu/courses/Fall09/sta205/lec/lln.pdf or: link.springer.com/article/10.1023/B:JOTP.0000040299.15416.0c The conditions for the Kolmogorov-Feller WLLN are not only sufficient but also necessary; see: stat.umn.edu/geyer/8112/notes/weaklaw.pdf I am not sure if weaker assumptions are possible for convergence in probability to infinity (i.e. for the WLLN) in case the mean exists but is infinite. $\endgroup$ – Chill2Macht Jun 30 '17 at 17:46
$\begingroup$ The conditions of the Feller WLLN imply $\mathbb{E}|X_1|^{1-\epsilon} < \infty$ for some $\epsilon >0$, but I am not sure if it is possible for the mean to exist and be infinite with $\mathbb{E}|X_1|^{1-\epsilon} \not< \infty$ for all $\epsilon > 0 $. So perhaps the infinite mean version of the SLLN might hold for some cases where the Feller WLLN does not; I don't know; at the very least generally though one does not have the infinite mean case in mind when talking about the SLLN, and the finite mean version of the SLLN definitely holds under less general assumptions than the Feller WLLN. $\endgroup$ – Chill2Macht Jun 30 '17 at 17:50
The mathematical formulations of the "Strong" and "Weak" Laws of Large Numbers look somewhat similar. Yet, the two Laws are quite different in nature :
The Weak Law never considers infinite sequences of realizations of a random variable. It only states that imbalanced sequences are less likely to occur as one considers longer sequences.
On the other hand, the Strong Law considers only infinite sequences of realizations of a random variable, and more precisely, the set of these infinite sequences. It states that the set of imbalanced sequences has probability 0 in a sense that generalizes the concept of "set of measure 0".
It can be shown that the Strong Law implies the Weak Law, which can therefore be regarded as a consequence of the Strong Law.
The converse is, however, wrong: it is possible to exhibit sequences of r.v.s following the Weak Law, but not the Strong one. So the terms "Weak" and "Strong" are indeed justified. For example, let your sequence be i.i.d. with density
$f_X(x)=x^{-2}I(x>1)$
You can obtain a WLLN but not a SLLN, due to the Borel-Cantelli lemma.
Blain Waan
$\begingroup$ Can you please clarify what you are trying to say in the final two sentences of the post (For example...)? $\endgroup$ – cardinal Jan 30 '13 at 22:48
$\begingroup$ I believe that $I(x >1)$ is the indicator function for the event $x > 1$, i.e. $0$ for $x \le 1$ and $1$ for $x > 1$. Thus the function $f_X(x) = x^{-2}I(x > 1)$ equals $0$ for $x \le 1$ and $x^{-2}$ for $x > 1$. All of the variables in the sequence are independent of one another, and are distributed such that they have the function $f_X$ as their probability density function. Then because of the Borel-Cantelli Lemma, this sequence satisfies the conclusion of the WLLN, but it does not satisfy the conclusion of the SLLN. $\endgroup$ – Chill2Macht Jun 30 '17 at 15:53
Is there a statistical application that requires strong consistency?
(Nomenclature) Are there two different Weak Laws of Large Numbers?
Central limit theorem versus law of large numbers
Conditions in law of large numbers
Intuition behind strong vs weak laws of large numbers (with an R simulation)
How does Stigler derive this result from Bernoulli's weak law of large numbers?
Expressing Law of Large numbers in terms of binomial probabilities | CommonCrawl |
Sang-Yong Choi*, Chang Gyoon Lim**, and Yong-Min Kim***
Automated Link Tracing for Classification of Malicious Websites in Malware Distribution Networks
Abstract: Malicious code distribution on the Internet is one of the most critical Internet-based threats and distribution technology has evolved to bypass detection systems. As a new defense against the detection bypass technology of malicious attackers, this study proposes the automated tracing of malicious websites in a malware distribution network (MDN). The proposed technology extracts automated links and classifies websites into malicious and normal websites based on link structure. Even if attackers use a new distribution technology, website classification is possible as long as the connections are established through automated links. The use of a real web-browser and proxy server enables an adequate response to attackers' perception of analysis environments and evasion technology and prevents analysis environments from being infected by malicious code. The validity and accuracy of the proposed method for classification are verified using 20,000 links, 10,000 each from normal and malicious websites.
Keywords: Auto Link Tracer , Drive-by Download , Malicious Website , MDN , Real Browser and Forward Proxy
The recent growth in online services has not only offered convenience to everyday life but also increased threats to Internet users. Services such as online banking, shopping, and social networking, which depend on personal or financial information, are particularly susceptible to threats. One of the most common threats is the drive-by download attack. Drive-by download attacks entice users to sites that distribute malicious code that is designed to infect the user PCs. Vulnerable PCs become infected with malicious code simply by accessing such sites, which is a reason this type of attack is considered one of the most critical online threats [1-3].
Research on methods of countering drive-by downloads can be classified into three general analytical approaches: webpage static [4-10], execution-based dynamic [11-13], and binary [14-17] analysis. However, such studies possess limitations because they rely on signatures such as anti-virus engines, similarity comparison with past (previously detected) data, and behavior analysis. Static analysis is constrained by issues such as distribution script obfuscation and high false positives; dynamic analysis uses analysis environments that can be easily identified by attackers, which increase their chances of bypassing the behavioral monitoring process. Binary analysis has the same limitations.
To overcome these limitations, this study proposes a method of analysis that does not require website content analysis, extraction of website links, or behavior analysis emulation. This study conducts a comprehensive analysis of the total cost involved in visiting websites through automated links and classifies websites as malicious and normal. The reliability and accuracy of the proposed method in identifying malicious websites are verified through normal and malicious website links collected from the Internet.
2.1 Malware Distribution Network
To infect PCs with malicious code, attackers create a network that connects landing sites with malicious-code distribution sites. This network is known as a malware distribution network (MDN) [4]. JavaScript and iframe tags are used to enable automatic connections without any action on the part of the users. The inserted link information is obfuscated [18] to interfere with analysis. Vulnerable PCs become infected with malicious code simply by accessing such sites. To attract users to the distribution sites, normal websites having a high number of user connections are injected with code that automatically connects users to the distribution sites [19].
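A minimal sketch (Python, standard library only; the HTML snippet and class name are illustrative, not taken from the paper's implementation) of how such automated links can be pulled from a fetched page without rendering it:

from html.parser import HTMLParser

class AutoLinkParser(HTMLParser):
    """Collect src attributes of iframe/script tags, i.e., links a browser follows automatically."""
    def __init__(self):
        super().__init__()
        self.auto_links = []

    def handle_starttag(self, tag, attrs):
        if tag in ("iframe", "script"):
            src = dict(attrs).get("src")
            if src:
                self.auto_links.append(src)

page = '<html><body><iframe src="http://example.test/ad.html" width="0" height="0"></iframe></body></html>'
parser = AutoLinkParser()
parser.feed(page)
print(parser.auto_links)  # ['http://example.test/ad.html']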
2.2 MDN Analysis Methods
With malicious code distribution emerging as a critical online threat, an extensive analysis of the distribution sites has been conducted. The major research areas are static [4-10] and dynamic analysis [11-13,20]. In some cases, the latter includes binary behavior analysis.
Common methods of static analysis are the signature-based method, which analyzes abnormal content in websites, and meta-information comparison, which involves a statistical analysis of meta-information for comparison with websites commonly used for malicious code distribution. Static analysis decodes content that has been obfuscated in websites and the rate of analysis is faster than dynamic analysis because the content is directly analyzed. However, it is limited in its ability to counter the diverse methods of obfuscation. Static analysis is thus less effective in combating new attacks using evolved obfuscation methods.
Dynamic analysis uses virtual machines or emulators to examine changes in PCs after direct visits to websites. Using analysis environments similar to those in actual computing, this method does not have to consider obfuscation. It can serve as an effective solution as long as a well-defined malicious profile exists. This is because it analyzes changes in the actual computer, such as to the registry, file system, network, and process. However, a well-defined profile is difficult to acquire and evasive codes are not easy to combat [16]. An analysis environment also faces the risk of being infected by malicious code. Binary behavior analysis, as an expansion of static analysis, analyzes the binaries downloaded during website visits.
The four existing detection technologies capable of bypassing analysis environments are hardware, execution environment, external application, and action detection [16]. Hardware detection is a method of detecting virtual machine devices and can be used to detect network interface defined under VMware such as pcnet32. Execution environment detection is used to determine whether a binary execution environment is capable of monitoring the debugger status and other processes. External application detection detects whether known analytical tools such as the process monitor are running. Action detection monitors user actions such as mouse clicks and keyboard input to distinguish between malicious code environments and user environments and delays the time involved in process execution. Static analysis may not be effective in responding to malicious code built on intelligent bypass technology.
Data collection for the analysis can be categorized into two methods. The first method mirrors all web traffic of the target environment [21]. However, it is less applicable to encrypted traffic. The second method collects user traffic using a forward proxy server [13]. Although the second method is effective in decoding encrypted traffic, it is relatively slow because all user traffic must be processed by the proxy server.
2.3 Characteristics of MDNs
As previously mentioned, an MDN can contain obfuscated pages or scripts that detect analysis environments. These characteristics are insufficient as a standard for classifying sites into malicious and normal webpages. This is because normal code is also obfuscated for protection and the obfuscation method may be similar to that of malicious code. That is, website classification cannot be based simply on the properties of singular webpages that constitute links. A more reliable method of classification is required to distinguish normal from malicious webpages.
This study analyzes the automated link structure of normal websites and MDNs to classify normal and malicious websites. Our analysis revealed the differences between connected links in five major areas. Clearly, analyzing an MDN configuration and normal link structure is difficult when the five properties are individually examined. By focusing on the necessity of certain properties and whether they can be easily modified by attackers, this study applied relative weights and performed a correlation analysis between the links.
1) Host Properties between Root URI and Referrer: The URIs of normal links are typically connected to resources within a host and make the host of the root URI the same as that of the referrer. Conversely, an MDN contains links connecting normal websites to distribution sites created by attackers. To attract as many users as possible to the distribution sites, attackers operate distribution sites that are unlike normal websites. Thus, the MDN is likely to have different host values for the root URI and referrer [4,12,19].
2) Domain Properties between Root URI and Referrer: Similar to host properties, the root URI and referrer have different domain values. In general web hosting, a single domain is used to accommodate several hosts. Table 1 displays several URIs and their referrers for http://www.msn.com. As indicated in Table 1, the URIs have the same domain despite having different hosts. Similar to host properties, an MDN can have different domains for the root URI and referrer. Table 2 illustrates a typical MDN where four websites are connected and users are led to download portable executable (PE) files. Different domains exist for each node within the automated links, including crows.co.kr, filehon.com, ytmall.co.kr, and coocera.com [12,21].
3) Form of URI is IP Address: Normal websites use systematic domains instead of IP addresses and assign URIs for various services. As indicated in Table 3, an MDN typically consists of IP addresses. This can be traced to the attackers' intention of enhancing the mobility of the distribution sites [10,22]. IP addresses are used when inserting links that connect hacked websites to distribution sites to avoid the cost of retaining domains or the unavailability of domains previously identified by detection systems. The use of IP addresses is an effective option for attackers running distribution sites, which are usually changed after a short period.
Domain and host configuration of normal websites
URI Host Domain
http://c.msn.com/c.gif?udc=... omitted >... c.msn.com msn.com
http://otf.msn.com/c.gif?evt=... omitted >… otf.msn.com msn.com
http://rad.msn.com/ADSAdClient31.dll?GetSAd=... omitted >… rad.msn.com msn.com
http://otf.msn.com/c.gif otf.msn.com msn.com
http://c.msn.com/c.gif?udc=truerid=... omitted >… c.msn.com msn.com
http://g.msn.com/view/30000000000223966?EVT=... omitted >… g.msn.com msn.com
Domain and host configuration of an MDN
http://www.crows.co.kr www.crows.co.kr crows.co.kr
http://filehon.com/?p_id=dream1434category_use=1layout =01category=pop=y filehon.com filehon.com
http://ytmall.co.kr/vars/11/a.html ytmall.co.kr ytmall.co.kr
http://coocera.com/new/bbs_sun/files/s.exe coocera.com coocera.com
An MDN composed of IP addresses
http://www.19x.co.kr/ → http://223.255.222.85:8080/index.html → http://223.255.222.85:8080/ww.html → http://223.255.222.85:8080/ww.swf
Country codes in an MDN
http://jmdc.onmam.com/[KR] → http://hompy.onmam.com/portal/bgmPlay.aspx?hpno=69504[KR] → http://cus.flower25.com/img/pop/rc.html[KR] → http://eco-health.org/upload/ad/index.html[KR] → http://count34.51yes.com/sa.htm?id=344119155refe=location= http%3A//ecohealth.org/upload/ad/index.htmlcolor=32xresolution=1024x768returning=0language=undefined ua=Mozilla/5.0%20%28compatible%3B%20MSIE%2010.0%3B%20Windows%20NT%206.1%29[CN]
4) Country Properties of URI: Normal websites have the same country code as that of the domain of their related websites. Global services such as Google Analytics, Facebook, and YouTube may have different country information within automated links; however, the domains of the majority of the web services have the same country code. Moreover, attackers include malicious code distribution sites in website links that constitute an MDN. To avoid tracking distribution sites, the attackers may insert country information that is different from that of the hacked sites. As indicated in Table 4, the root URI and intermediate site in an MDN can have different country codes [10].
5) File Properties of URI: The ultimate purpose of drive-by download attacks is to infect user PCs with malicious code. In the majority of cases, the malicious code is downloaded in the form of an executable file in the MDN. If the user PC is not vulnerable, the executable file may not be downloaded, even when the user connects to the distribution site. The presence of an MDN cannot be determined solely by the properties of the downloaded file. This is even truer given the frequent changes to malicious code in the distribution sites. As illustrated in Table 2, if connecting to a certain website triggers the downloading of a PE or executable file, it is highly likely to be a constituent of an MDN [21].
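Taken together, the five properties reduce to simple per-node checks. The sketch below (Python; the registrable-domain heuristic and the country lookup are simplifications, since a real implementation would use the Public Suffix List and a GeoIP database, and the file check inspects only the PE magic bytes) computes the per-node measurements that feed the connection-strength calculation of Section 3.

import ipaddress
from urllib.parse import urlparse

def registered_domain(host):
    # Crude heuristic (last two labels); a real implementation would consult the Public Suffix List.
    return ".".join(host.split(".")[-2:])

def country_of(host):
    # Stub: the real system would use a GeoIP/WHOIS lookup here.
    lookup = {"www.crows.co.kr": "KR", "coocera.com": "KR"}
    return lookup.get(host, "UNKNOWN")

def node_features(root_uri, node_uri, body_head=b""):
    root, node = urlparse(root_uri), urlparse(node_uri)
    try:
        ipaddress.ip_address(node.hostname)
        uri_is_ip = 1
    except ValueError:
        uri_is_ip = 0
    return {
        "diff_host": int(root.hostname != node.hostname),
        "diff_domain": int(registered_domain(root.hostname) != registered_domain(node.hostname)),
        "diff_country": int(country_of(root.hostname) != country_of(node.hostname)),
        "uri_is_ip": uri_is_ip,
        "executable": int(body_head.startswith(b"MZ")),  # PE files begin with the bytes 'MZ'
    }

print(node_features("http://www.crows.co.kr/",
                    "http://coocera.com/new/bbs_sun/files/s.exe",
                    body_head=b"MZ\x90\x00"))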
2.4 Considerations in Analysis
In addition to analyzing previous research, this study proposes three considerations for effective analysis of an MDN. First, the proposed system must be capable of effectively combating new distribution methods including obfuscation. Rather than analyzing meta-information, signatures, and other content that can be easily evaded by attackers, analysis should focus on elements essential to the MDN, such as website link structure and webpage type. Second, the system must be capable of responding to bypass technology. The proposed system relies on real browsers and does not require that any detection programs be installed in the analysis environments. Finally, the system must be able to process encrypted traffic. To achieve this goal, forward proxy servers are employed in our study. Another advantage of using proxy servers is that the filtering of inbound traffic prevents analysis environments from being infected with malicious code. If an executable file is present in the response data, it is logged and deleted by the proxy server.
3. AutoLink-Tracer for Classification of Malicious Website
3.1 Definition of Automated Webpage Links (AutoLink)
Links constituting web services can be classified into <a href> tags, which enable access through mouse clicks and other user actions, and <iframe> or JavaScript links, which enable automatic connections without clicks. In general, the latter are used in an MDN. Other than iframe and JavaScript, links can use meta-tags such as location. A group of automatically linked websites can be expressed in the form of nodes and relations as illustrated in Fig. 1.
In this case, the nodes represent a webpage linked to the src properties of iframe or JavaScript. The relations indicate that the node is connected to other nodes.
3.2 Automated Link Analyzer
The website first visited by a user is the Root_Node, and the node that is automatically linked is the Hopping_Node. The final node that possesses no further automatic linking is labeled as the Last_Node. All automated links can be expressed in graphical form. The definition of the node is given by Eq. (1).
[TeX:] $$Node_{(RN)} = \text{Root\_Node of Automated Link Graph}, \\ Node_{(HN)} = \text{Hopping\_Node of Automated Link Graph}, \\ Node_{(LN)} = \text{Last\_Node of Automated Link Graph}$$
In Fig. 1, the strength of the connection between the nodes is the relative strength of the logical connections between Node(RN) and the remaining nodes. For instance, when a specific node is linked to the same site as Node(RN), the connection strength is relatively greater than that of non-Node(RN) connections. The greater the connection strength, the lower the cost involved in connecting Node(RN) to the corresponding node, and vice versa. Connection strength is thus inversely proportional to the cost of visit. The automated link analyzer calculates the cost of visit by considering connection weights and the cost of visit for each node. It then performs the classification of MDNs and non-MDNs.
Configuration of automated linked pages.
3.3 Connection Strength of Automated Link
To calculate the cost of visit to the node of an automated link, the simple connection strength (SCS) of each node must be measured. As indicated in Table 5, SCS is represented by zero for a strong connection and one for a weak connection. That is, a strong connection results in a lower cost of visit. The five characteristics analyzed in Section 2.3 serve as the criteria for measuring SCS.
An MDN can contain Node(HN) or Node(LN) with different hosts, domains, or country codes from Node(RN). Further, in an MDN, the downloaded file is usually in the form of a PE or other executable. If the URI of the node is an IP address, it is likely to be the Node(HN) or Node(LN) of the MDN. Table 6 indicates the connection strength of each node. We allocate a small weight to characteristics that can be easily evaded by attackers and a high weight to all others. A high weight is assigned to characteristics essential for the MDN configuration; otherwise a low weight is assigned. For example, an IP address can be easily modified by attackers to a non-IP address. An IP address is also not a requirement for an MDN. However, the downloading of executable files is essential for the MDN configuration, even though the file type can be modified. Because normal sites are hacked to become malicious code distribution sites, host and domain names should be different. These are difficult to bypass and a URI is necessary for the MDN. Connection weights (CW) are assigned to the five characteristics, as presented in Table 6.
Measurement of SCS in automated links
Node attribute | Measured value
Relation between current node and Node(RN) (eq / non-eq)
Hostname | 0 / 1
Domain | 0 / 1
Hosted country | 0 / 1
Current node
Type of URI | IP 1 / Non-IP 0
File attribute | Executable 1 / Non-executable 0
Connection weight of SCS characteristics
Possibility of analysis avoidance | High need: feature (weight) | Low need: feature (weight)
High | File attribute (2) | Type of URI (1)
Low | Hostname/Domain (4) | Hosted country (3)
Eq. (2) is the connection strength (CS) measurement for a single node in an automated link considering SCS and CW.
[TeX:] $$CS_{(Node)} = \sum_{k=1}^{5} \left\{ SCS_{(k)} \times CW_{(k)} \right\}, \quad Node \in \left\{ Node_{(HN)}, Node_{(LN)} \right\}$$
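A small sketch of Eq. (2) in Python (reusing the feature dictionary from the sketch in Section 2.3; the key names are the same illustrative ones used there): each per-node measurement is multiplied by its connection weight from Table 6 and the products are summed.

# Connection weights from Table 6, keyed by the illustrative feature names used earlier.
CW = {
    "diff_host": 4,      # hostname
    "diff_domain": 4,    # domain
    "diff_country": 3,   # hosted country
    "uri_is_ip": 1,      # type of URI
    "executable": 2,     # file attribute
}

def connection_strength(features):
    # Eq. (2): CS(node) = sum over k of SCS(k) * CW(k)
    return sum(features[name] * weight for name, weight in CW.items())

# Example: a node with a different host/domain that serves an executable -> CS = 4 + 4 + 0 + 0 + 2 = 10
print(connection_strength({"diff_host": 1, "diff_domain": 1, "diff_country": 0, "uri_is_ip": 0, "executable": 1}))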
3.4 Cost of Automated Link Visit
CS measures the connection between Node(RN) and the present node; because each SCS value is 1 for a weak connection property, a higher CS corresponds to a weaker connection and therefore a higher cost of visit. Thus, the cost of visit to Node(LN) of an automated link (CAL) is obtained by multiplying the CS of each node constituting the automated link by the absolute distance (AD) between the nodes and summing the results (see Fig. 2).
Total cost to visit all nodes of automated link.
However, because automated links consist of different numbers of nodes, simply summing the CS values may be insufficient for classifying MDNs and non-MDNs. For a relative comparison, the CS of automated links with different numbers of nodes must be normalized. Assuming N is the maximum number of nodes, the relative distance (RD) can be obtained by dividing N by the number of nodes. CAL, which is the sum of all CS and RD values multiplied together, can be derived by Eq. (3).
[TeX:] $$CAL_{(Link(i))} = \sum_{k=1}^{NC_{(Link(i))}} \left\{ CS_{Node(k)} \times RD \right\}, \quad NC_{(Link(i))} = \text{Node Count of } Link(i), \quad RD = \frac{N}{NC_{(Link(i))}}$$
Because an MDN consists of normal nodes together with nodes inserted by the attackers, a high number of nodes means that the link is likely to be an MDN. As indicated in Eq. (4), the final decision cost (DC) is derived by scaling CAL up in proportion to the node count rate (NCR).
[TeX:] $$DC_{(Link(i))} = CAL_{(Link(i))} + \left( CAL_{(Link(i))} \times NCR \right), \quad NCR = \begin{cases} NC_{(Link(i))}/10 & \text{if } NC_{(Link(i))} \le 10 \\ 1 & \text{if } NC_{(Link(i))} > 10 \end{cases}$$
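A sketch of Eqs. (3) and (4) in Python (N = 10 is taken as the normalising node count, matching the factor 10/Node_count used in the pseudocode of Table 7):

def decision_cost(cs_per_node, n_max=10):
    # Eq. (3): CAL = sum of CS * RD over the nodes of the link, where RD = N / node count
    nc = len(cs_per_node)
    rd = n_max / nc
    cal = sum(cs * rd for cs in cs_per_node)
    # Eq. (4): NCR = node_count / 10, capped at 1, and DC = CAL + CAL * NCR
    ncr = min(nc / 10.0, 1.0)
    return cal + cal * ncr

# Example: a three-node link whose nodes have CS values 0, 6, and 10.
print(decision_cost([0, 6, 10]))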
Table 7 presents the operating processes of the system.
Operation of AutoLink-Tracer
input URL: one of URLs in database
output MaliciousLink[]: array of automated links (node sequences) classified as malicious
declare NURL: Now selected URL in scheduler
SCS : Simple Connection Strength of current node
CS : Connection Strength of current link
CAL : Cost of visit to Automated link
DC : Decision Cost to visit current link
CW : Connection Weight of each Feature
NCR : Node Count Rate
algorithm Auto-Tracer
STEP 1: // initialization
MaliciousLink[] = NULL
STEP 2:// Visit Website
if exist not accessed URL in scheduler then
URL = select not accessed URL in scheduler
Capture URI, Referrer, Hostname, DomainName, Country, Filetype by Proxy
Insert Capture data to database
STEP 3: // Calculate Connection Strength
select URI, Referer, Hostname, DomainName, Country, Filetype from Database
SCS = Calculate Simple Connection Strength of each node
CS = sum(SCS * CW) of all node in current link
CAL = sum(CS * 10/Node_count) for all nodes in current link
DC = CAL + (CAL * NCR)
if DC falls within the MDN boundary
append current link to MaliciousLink[]
return MaliciousLink[]
end Auto-Tracer
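Taken together with the connection_strength and decision_cost sketches above, the pseudocode of Table 7 can be condensed into a few lines of Python (a sketch only: the crawl, proxy capture, and database layers are reduced to plain arguments, and the boundary callable stands in for the classification boundary derived in Section 4):

def trace_and_classify(links, boundary):
    # links: list of automated links, each a list of per-node feature dicts (as produced by node_features)
    # boundary: callable (node_count, dc) -> bool implementing the decision boundary of Section 4
    malicious = []
    for link in links:
        cs_per_node = [connection_strength(features) for features in link]
        dc = decision_cost(cs_per_node)
        if boundary(len(link), dc):
            malicious.append(link)
    return malicious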
4. Experimental Evaluation and Analysis
4.1 An Architecture of the Proposed Method
The overall structure of the prototype, called AutoLink-Tracer, is presented in Fig. 3. AutoLink-Tracer is a method that automatically traces links constituting web services and classifies MDNs based on the connection characteristics between the links. It consists of a link-tracing and a link-analysis component. The link-tracing module is composed of real browsers and a forward proxy. The forward proxy is implemented with mitmproxy [23], an open-source tool that can analyze both HTTP and HTTPS traffic. The automated link analyzer performs link analysis based on logs recorded by link tracing.
AutoLink-Tracer: a prototype architecture for the proposed method.
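For the link-tracing side, the forward proxy can be instrumented with a small mitmproxy addon along the following lines (a sketch assuming mitmproxy's Python addon API; the fields of the log record are illustrative). It records the URI, referrer, and content type of every response and strips executable payloads, so the analysis browser itself is never handed a PE file:

# autolink_capture.py -- load with: mitmdump -s autolink_capture.py
from mitmproxy import http

class AutoLinkCapture:
    def response(self, flow: http.HTTPFlow) -> None:
        record = {
            "uri": flow.request.pretty_url,
            "referer": flow.request.headers.get("Referer", ""),
            "content_type": flow.response.headers.get("Content-Type", ""),
            "executable": False,
        }
        body = flow.response.content or b""
        if body.startswith(b"MZ"):          # PE payload: log it, but do not deliver it
            record["executable"] = True
            flow.response.content = b""     # keeps the analysis environment uninfected
        print(record)                        # the real system would insert this into the database

addons = [AutoLinkCapture()]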
4.2 Experimental Goals and Procedures
The first goal of our study was to determine whether the proposed method can be a standard for classifying malicious and normal links. The second goal was to apply the classification method to malicious and normal links and then assess the performance of the AutoLink-Tracer. Table 8 presents the experimental data and procedures. These links were used as training and experimental data at a ratio of 7:3.
Experimental goals and procedures
Goal | Procedures | Dataset
Validity test | Analysis of the distribution of malicious/benign links using the proposed method; classification equation induction; ROC analysis | Malicious links: 7,000; benign links: 7,000
Classification performance | Optimal cross error rate (CER) derivation; false positive and false negative analysis at the CER | Malicious links: 3,000; benign links: 3,000
Ten thousand MDNs were collected from the KAIST Cyber Security Research Center [24] and ten thousand normal links were collected from normal websites. Normal links were collected by browsing with a real web browser; thus, all collected links are automated links leading from a webpage on the root website to a webpage on the last-hop website, just as in drive-by download attacks. The configuration of the real web browser used to collect normal links is shown in Table 9.
Configuration of real browsing
Host OS | Windows 7 SP1 32-bit
Web browser version and options | Internet Explorer 10.0; Internet Option Security Setting: Minimum
Plug-in and application | SDK 1.6; MS Office 2007
4.3 Validity of Experiments
To validate the proposed method, Eq. (4) was applied to extract the DC per link for 7,000 MDNs and 7,000 normal links. Further, the links were classified based on the number of nodes. Fig. 4 presents the automated link distribution with the x and y axes representing the number of nodes and DC, respectively. The size of the circle is indicative of the number of nodes at a corresponding point. As indicated in Fig. 4, malicious and normal links classified under the proposed method were distributed to different locations. This demonstrates that the proposed method is effective in classifying malicious and normal links.
Distribution of automated links.
To assess the performance of the proposed method for classifying malicious and normal links, ROC curves were compared based on linear and circle equations.
First, as Fig. 5(a) reveals, a straight line with a negative gradient was used as the boundary between the malicious and normal links. Among the malicious links, the gradient was measured using the (x, y) and (x′, y′) coordinates of the points having the lowest y-value and the lowest x-value, respectively, and a straight line passing through the two points was drawn. As the line is shifted along the x-axis, the true positive and false positive rates are measured, yielding the ROC curve of Fig. 5(c).
From the ROC curve and its relation to changes in X (from a″ to a′ of Fig. 5(a)) presented in Fig. 5(c), the method of classifying malicious and normal links based on the linear equation can be considered highly reliable. Second, the equation of the circle was used as the boundary between malicious and normal links. As illustrated in Fig. 5(b), the center of the circle was (a, b), which had the highest distribution of malicious links. Similar to the linear equation, the ROC curve was derived as the radius increased.
The ROC curve in relation to the radius is indicated in Fig. 5(d). The circle equation demonstrates that the proposed method of classification is highly reliable.
Results of validity test. (a) Derivation of linear equation, (b) derivation of circle equation, (c) ROC of linear equation, and (d) ROC of circle equation.
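For reference, the linear boundary of Fig. 5(a) can be turned into a classifier in a few lines (a Python sketch; the two anchor points and the orientation "above the line is malicious" are placeholder assumptions, not the values fitted in the experiment):

def linear_boundary(p1, p2):
    # Straight line through p1 and p2 in the (node count, decision cost) plane.
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    def is_malicious(node_count, dc):
        return dc > y1 + slope * (node_count - x1)
    return is_malicious

classify = linear_boundary((3, 40.0), (12, 10.0))   # placeholder anchor points giving a negative gradient
print(classify(5, 60.0), classify(4, 5.0))          # True False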
4.4 Evaluation of Classification Performance
Fig. 6 displays our analysis of Type 1 (false positive rate) and Type 2 (false negative rate) errors, which was conducted to derive the optimal values for the two classification methods. As indicated in Fig. 6, the cross error rate (CER) is found at a radius between 2.2 and 2.4 for the circle equation, and at an x-value ranging from 0.5 to 1.0 for the linear equation.
The CER values derived for the two equations were used to classify the 3,000 malicious and normal links previously excluded in the validity test. Table 10 illustrates that when the radius is 2.3, the false positive (FP) and false negative (FN) are 7.7% and 2.83%, respectively, with an accuracy of 94.73%. Table 11 indicates that when the x-coordinate is x+0.8 of the linear equation, the FP is 1.6% and the FN is 2.16% with an accuracy of 98.08%. This indicates that the linear equation is capable of classifying malicious and normal links with a high accuracy of 98.08%.
We can conclude from the two results that the linear equation is superior in terms of accuracy and thus more effective at classifying MDNs and normal links than the circle equation. With a high accuracy of 98.08%, the linear equation fulfills the five measurement criteria defined for classification by AutoLink- Tracer.
Cross error rate (CER) for each equation: (a) circle equation and (b) linear equation.
Table 10. Results of circle equation
Radius TN (%) TP (%) FP (%) FN (%) Accuracy (%)
2.20 96.30 92.33 7.67 3.97 94.18
Results of linear equation
X TN (%) TP (%) FP (%) FN (%) Accuracy (%)
1.0 96.00 98.60 1.40 4.00 97.30
Analysis of the experimental data revealed that the majority of normal links classified as FP were websites that provide global services. These links included YouTube (9EA), advertisement (14EA), and CDN (27EA) links. Websites that provide global services may have domains, host information, and country codes that are different from those of the root node. These services may exhibit a similar link structure to an MDN. However, the greatest difference between MDNs and the global service links is that the latter contain a considerably greater number of child nodes. In developing an actual system, white list filters can be used to reduce the number of false positives.
MDNs, which distribute malicious code on the Internet, have surfaced as a critical online threat. Although static, dynamic, and binary analyses have been employed to protect user PCs from malicious code, the development of malicious technology has prevented previous studies from providing effective countermeasures.
This study proposed a new method known as AutoLink-Tracer to classify malicious and normal websites. Forward proxy and real browsers were used to collect website information automatically and the connections of the collected websites were classified according to five characteristics: domain, host, IP server, country code, and webpage type. With AutoLink-Tracer, signature or profile management is not necessary because classification is based on common MDN characteristics rather than on content analysis, so the method remains applicable even if attacks become more sophisticated.
The effectiveness of the proposed method for classifying normal and malicious websites was verified using a test of normal website links and MDNs. Because traffic that enters an analysis environment can be controlled in the proxy server, the method offers the added advantage of protecting the analysis environment from malicious code. However, the use of a proxy server impairs the collection speed. Further research is required to mitigate this slowdown and to extract properties from a more diverse collection of links.
Sang-Yong Choi
He received his B.S. degree in Mathematics and M.S. degree in Computer Science from Hannam University in 2000 and 2003, respectively, and his Ph.D. degree in Interdisciplinary Information Security from Chonnam National University, Korea, in 2014. He is a research associate professor at the Cyber Security Research Center of the Korea Advanced Institute of Science and Technology (KAIST). His research interests are in web security, network security, and privacy.
Chang Gyoon Lim
He received his Ph.D. from the Department of Computer Engineering at Wayne State University, USA, in 1997. Since September 1997, he has been a professor in the Major in Computer Engineering, Chonnam National University, Yeosu, Korea. His current research interests include machine learning, soft computing, intelligent robots, IoT, cloud computing, and embedded software.
Yong-Min Kim
He received his Ph.D. in Computer Science and Statistics from Chonnam National University, Korea. He is a professor in the Department of Electronic Commerce, Chonnam National University, Yeosu, Korea. His research interests are in security and privacy, system and network security, and application security for electronic commerce.
Received: July 26 2016
Revision received: November 29 2016
Accepted: December 6 2016
Corresponding Author: Yong-Min Kim*** ([email protected])
Sang-Yong Choi*, Cyber Security Research Center, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, [email protected]
Chang Gyoon Lim**, Major in Computer Engineering, Chonnam National University, Yeosu, Korea, [email protected]
Yong-Min Kim***, Dept. of Electronic Commerce, Chonnam National University, Yeosu, Korea, [email protected] | CommonCrawl |
Methodology article
PDXGEM: patient-derived tumor xenograft-based gene expression model for predicting clinical response to anticancer therapy in cancer patients
Youngchul Kim1 (ORCID: orcid.org/0000-0002-2307-0330),
Daewon Kim2,
Biwei Cao3,
Rodrigo Carvajal3 &
Minjung Kim4
Cancer is a highly heterogeneous disease with varying responses to anti-cancer drugs. Although several attempts have been made to predict anti-cancer therapeutic responses, there remains a great need for highly accurate prediction models of response to anti-cancer drugs for clinical application toward personalized medicine. Patient-derived xenografts (PDXs) are preclinical cancer models in which tissue or cells from a patient's tumor are implanted into an immunodeficient or humanized mouse. In the present study, we develop a bioinformatics analysis pipeline to build a predictive gene expression model (GEM) for cancer patients' drug responses based on gene expression and drug activity data from PDX models.
Drug sensitivity biomarkers were identified by performing an association analysis between gene expression levels and post-treatment tumor volume changes in PDX models. We built a drug response prediction model (called PDXGEM) with a random-forest algorithm by using a subset of the drug sensitivity biomarkers that showed concordant co-expression patterns between the PDXs and pretreatment cancer patient tumors. We applied the PDXGEM to several cytotoxic chemotherapies as well as targeted therapy agents that are used to treat breast cancer, pancreatic cancer, colorectal cancer, or non-small cell lung cancer. Significantly accurate predictions of PDXGEM for pathological response or survival outcomes were observed in extensive independent validations on multiple cancer patient datasets obtained from retrospective observational studies and prospective clinical trials.
Our results demonstrated the strong potential of using molecular profiles and drug activity data of PDX tumors to develop clinically translatable predictive cancer biomarkers for cancer patients. The PDXGEM web application is publicly available at http://pdxgem.moffitt.org.
Cytotoxic chemotherapy and targeted therapy play important roles in the treatment of cancer, alongside surgery, radiotherapy, and the recent breakthrough of immunotherapy. Responses of cancer patients to these anticancer therapies vary widely because of the substantial heterogeneity in the molecular characteristics of their tumors, even within a histologically identical subtype of cancer [1]. Although a considerable number of novel anticancer drugs have been introduced during the past few decades, the overall survival (OS) and quality of life of cancer patients have not improved much, mainly because of the unselective use of these drugs in the presence of heterogeneous tumor characteristics and drug responses [2]. Hence, it is necessary to develop a personalized anticancer therapy that can help guide individual patients with heterogeneous tumors to the anticancer drugs with the most therapeutic benefit. Successful personalized anticancer therapy will then greatly depend on the identification of predictive cancer biomarkers that can be used to accurately select patients who will benefit from treatment with the anticancer drugs [3].
For predictive cancer biomarker discovery, it is considered most desirable to analyze molecular profiling data and clinical outcome data of cancer patients that were obtained before and/or after treatment with anticancer drugs of interest in a prospective randomized clinical trial [4]. However, developing cancer biomarkers in this manner is not straightforward because of the enormous cost and time required for such clinical trials. Because of these limitations, many cancer biomarker studies rely on testing anticancer drugs in preclinical cancer models, including immortalized cancer cell lines and animal models [5].
Cancer cell lines cultured in vitro are cancer cells that keep dividing and growing over time, under certain conditions in a laboratory. Human cancer-derived cell lines have been widely used to understand molecular characteristics and drug activity mechanism of tumor cells. For instance, two large cancer cell line panels, Genomics of Drug Sensitivity in Cancer and Cancer Cell Line Encyclopedia, were established to develop new anticancer drugs and to identify new molecular drug targets and predictive biomarkers by interrogating pharmacogenomic mechanisms in more than 1000 cancer cell lines [6, 7]. We and many other research teams have been developing techniques to translate cancer cell line-driven biomarkers into prediction models of cancer patients' anticancer drug responses [8,9,10,11,12,13]. Despite these efforts, there still remains a lack of well-validated biomarkers and methods for further biomarker discoveries.
A patient-derived xenograft (PDX) is a promising preclinical model of cancer in which tissue or cells from a patient's tumor are implanted into an immunodeficient or humanized mouse. It creates an environment that allows for the natural growth of the cancer, its monitoring, and the evaluation of treatments corresponding to the original patient. Recently, large PDX-based studies, such as the National Cancer Institute MicroXeno project, the Novartis PDX panel, and the EuroPDX consortium study, have interrogated the molecular characteristics of PDX tumors. These studies, which were based on multiplex molecular platforms including gene expression and genetic mutation, reported that PDXs can retain the distinct characteristics of different tumors from different patients and therefore can effectively recapitulate the intra- and inter-tumor heterogeneity that represents human cancer [14,15,16,17]. These novel and unprecedented PDX resources have the potential to provide an opportunity to discover highly predictive cancer biomarkers that can be used to help guide cancer patients to highly beneficial anticancer therapeutics and to accelerate the process of new drug development. However, very few attempts have been made, and no analytic tool for developing a PDX-based predictive gene expression model (GEM) is yet available. To address this, we have developed a new pharmacogenomics pipeline, called PDXGEM, that can be used to construct a highly predictive GEM of cancer patients' clinical responses to anti-cancer drugs on the basis of pretreatment gene expression profiles and posttreatment drug screening data of preclinical PDX tumors.
In the present study, we provide a full description of the PDXGEM pipeline and demonstrate its predictive utility by applying it to several cytotoxic and targeted therapeutic agents and validating the prediction performance of resultant multi-gene expression models on independent external cancer patient cohorts with well-annotated clinical outcomes. We have also created a publicly available web-based application with an initial inventory of the data of the Novartis PDX panel and cancer patient cohorts that were used to develop and validate our PDXGEM.
The PDXGEM pipeline consists of four sequential steps: 1) drug sensitivity biomarker discovery, 2) concordant co-expression analysis (CCEA), 3) multi-gene expression model training for drug response prediction, and 4) model validation (Fig. 1; see Methods). To demonstrate the utility of the PDXGEM, we applied it to building predictive GEMs of cancer patients' responses to each of three chemotherapy agents and three targeted therapy drugs: paclitaxel and trastuzumab for breast cancer, 5-fluorouracil (5FU) and cetuximab for colorectal cancer (CRC), gemcitabine for pancreatic cancer, and erlotinib for non-small cell lung cancer (NSCLC). External validations of the resultant GEMs were conducted using publicly available gene expression data and clinical outcome data of independent cancer patient cohorts from prospective clinical trials or observational studies.
Schema of the patient-derived xenograft based gene expression model (PDXGEM). a In the drug sensitivity gene discovery step, correlation analysis and differential expression analysis of gene expression data and drug-activity data in patient-derived xenograft (PDX) tumors are conducted. b Concordant co-expression analysis identifies a drug sensitivity gene (g1) that is concordantly co-expressed with 3 other genes (g2, g3, and g4) between PDX tumors and pretreatment cancer patients' tumors. c A multi-gene expression model of drug response is trained on PDX data using the random-forest algorithm. d The performance of the multi-gene expression model is validated by contrasting prediction scores between patients who were responsive (R) and non-responsive (NR) to a drug in a cancer patient cohort
PDXGEM for predicting paclitaxel response in breast cancer patients
Paclitaxel, combined with FAC (fluorouracil, doxorubicin, and cyclophosphamide), is a cornerstone of the current standard chemotherapy used for treating breast cancer patients. We applied PDXGEM to build a multi-gene expression model to predict which patients may achieve a pathological complete response (pCR) to paclitaxel. Six hundred probesets were first identified as initial drug sensitivity biomarkers that exhibited differential expression between three breast cancer PDXs with shrunken tumor volumes and ten breast cancer PDXs with increased tumor volumes after receiving paclitaxel (t-test nominal P < 0.05, Fig. 2a). The pattern of co-expression among the drug sensitivity genes in the breast cancer PDXs, as measured by gene-gene correlation coefficients, was quite distinct from that in breast cancer patients (Fig. 2b). This finding is in line with that of a previous study, which showed an inherent biological gap between PDX tumors and their origin cancer patient tumors because of the different growth environments surrounding the tumors.
Development of PDXGEM for paclitaxel response prediction in breast cancer patient. a Volcano plot with log2 fold change of differential gene expressions (x-axis) in paclitaxel-sensitive and paclitaxel-resistant patient-derived xenograft (PDX) models and –log10P-value (y-axis). Black dots display the initial drug sensitivity probesets and red circles further indicate concordantly co-expressed biomarkers between the PDX models and breast cancer patients. b Clustering heatmap depicts correlation matrices of drug sensitivity genes in PDX models (left panel) and pretreatment cancer patients (right panel) before (top panel) and after (bottom panel) concordant co-expression. c The Pearson's correlation coefficient between observed percent change in PDX tumor volumes (x-axis) and PDXGEM prediction scores for breast cancer PDX models (y-axis) was 0.982. d Receiver-operating characteristics curves of paclitaxel PDXGEM on seven different breast cancer data sets
The CCEA showed that concordance co-expression coefficients (CCECs) ranged from − 0.191 to 0.464 for all drug sensitivity biomarkers. Supplementary Figure 1 shows the distribution of all CCECs and scatter plots of gene-gene correlation coefficients for drug sensitivity biomarkers with varying CCEC values. Of the drug sensitivity biomarkers, 147 (24.5%) showed significantly positive CCECs, ranging from 0.204 to 0.464, between the breast cancer PDXs and a cohort of 251 breast cancer patients (GSE3494 [18]); we hereafter refer to these as concordant co-expression (CCE) biomarkers. The CCE biomarkers showed more concordant co-expression patterns, with two common clusters of genes shared between the breast cancer PDXs and patients, and had an increased median CCEC of 0.272 (Fig. 2b; bottom), compared with the full set of drug sensitivity biomarkers, which did not have common clusters and yielded a median CCEC of 0.09 (Fig. 2b; top).
A random forest (RF) predictor was then trained using the gene expression data of the breast cancer PDXs for all the CCE biomarkers as a model training set. The resultant RF predictor consisted of 145 CCE biomarkers with a positive variable importance value (Supplementary Fig. 2). The prediction score of the RF predictor, hereafter referred to as the PDXGEM score, was tightly correlated with the observed tumor volume changes in the PDX training dataset (r = 0.982, n = 13; P < 0.01; Fig. 2c).
To assess the predictive performance of the RF predictor, we validated it on seven independent gene expression datasets of breast cancer patients that were collected through four randomized clinical trials (GSE20271 [19], GSE22226 [20], GSE41998 [21], GSE42822 [10]), two prospective observational studies (GSE25065 [22], GSE32646 [23]), and one retrospective study cohort (GSE20194 [24]). Notably, there were significant differences in prediction scores between patients with pCR and those with residual disease (RD) after paclitaxel-based chemotherapy in all the breast cancer cohorts (P < 0.05; Supplementary Fig. 3A-G). In addition, the area under the receiver-operating characteristic (ROC) curve (AUC), a measure of overall classification accuracy, ranged from 0.653 to 0.789 (Fig. 2d). To further determine whether the RF predictor is predictive of a paclitaxel-specific response, we tested it in 87 breast cancer patients in the GSE20271 clinical trial cohort who did not receive paclitaxel but only FAC combination chemotherapy. There was no significant difference in prediction scores, suggesting that our predictor is predictive of response specifically to paclitaxel (AUC = 0.589, P = 0.44; Supplementary Fig. 3H).
To examine the utility of CCEA, we trained an RF predictor using all 600 initial drug sensitivity biomarkers that did not undergo CCEA. Although this predictor was approximately three times as complex as the final RF predictor above, there was no significant difference in its prediction scores between the pCR and RD groups in four breast cancer cohorts (Supplementary Fig. 4). Furthermore, decreased AUCs were observed in the remaining validation sets, suggesting that the CCEA led to a parsimonious gene expression signature with a more accurate prediction performance.
Lastly, gene ontology (GO) analyses, which were performed to understand the biological functions of the 145 biomarkers in our final paclitaxel response predictor, showed that COL1A1, RPH3AL, and THSD4 were the most significantly associated with breast neoplasm function (false discovery rate (FDR) P < 0.001). In addition, DNA replication proteins and mismatch repair were the top two representative pathways (Supplementary Table 1).
PDXGEM for trastuzumab-specific response in breast cancer patients
Trastuzumab is a monoclonal antibody used to treat human epidermal growth factor receptor 2- (HER2-) positive breast cancer, either by itself or in combination with other anti-cancer therapeutics [25]. To construct a gene signature predictive of response to trastuzumab in breast cancer patients, we applied the PDXGEM to data on pretreatment gene expression and post-treatment tumor volume changes in 13 breast cancer PDXs that underwent trastuzumab monotherapy. We identified 1333 drug sensitivity biomarkers with significant Spearman rank correlations (nominal P-value < 0.05) between gene expression levels and the tumor volume changes. We then further screened 515 CCE biomarkers with significant CCECs ranging from 0.201 to 0.509. Finally, an optimal predictor was constructed with 480 CCE biomarkers possessing positive variable importance in the RF model training analysis, and the predictor yielded a strong correlation coefficient of 0.977 (p < 0.01, n = 13) between predicted and observed tumor volume changes in the breast cancer PDX models. We then performed an independent validation of this RF predictor using data from the US Oncology 02–103 breast cancer trial (GSE42822 [10]), in which 25 patients with stage II-III HER2-positive breast cancer received trastuzumab. We observed a borderline significant difference in prediction scores between 12 patients with pCR and 13 patients with RD after treatment with trastuzumab (AUC = 0.712, P = 0.074). Considering the large number of biomarkers involved in the predictor and the encouraging AUC value, we set a more stringent threshold of 0.3 for the CCEC at the CCEA step of the PDXGEM pipeline to yield a less complex GEM with more concordantly co-expressed biomarkers between the breast cancer PDXs and patients. As expected, a new RF predictor was constructed with 193 CCE biomarkers and yielded a more significant difference in prediction scores between the pCR and RD response groups in the breast cancer trial cohort (AUC = 0.737; P = 0.025) (Fig. 3a). To assess the specificity of the RF predictor for trastuzumab, we validated it on 34 HER2-positive and 54 HER2-negative breast cancer patients who did not receive trastuzumab in the same clinical trial. In both HER2 strata, we observed no difference in prediction scores between the pCR and RD response groups (AUC = 0.533 and P = 0.877 for HER2-positive breast cancer; AUC = 0.493 and P = 0.696 for HER2-negative breast cancer; Fig. 3b). When the predictor was further tested using other available breast cancer patient cohorts treated with paclitaxel-based (not trastuzumab-based) chemotherapy, none of the breast cancer cohorts showed any significant difference in prediction scores, strongly suggesting that the RF predictor is predictive of trastuzumab-specific response in breast cancer patients (Supplementary Fig. 5).
PDXGEM prediction scores for trastuzumab in breast cancer patients by HER2 status. Distribution of PDXGEM prediction scores between patients with pathological complete response (pCR) and patients with residual disease (RD): (a) HER2-positive breast cancer patients who received trastuzumab, (b) HER2-positive breast cancer patients who did not receive trastuzumab but received other chemotherapy, and (c) HER2-negative breast cancer patients who did not receive trastuzumab. Red center lines represent the mean prediction scores
Finally, GO analysis of the 193 biomarkers in the final predictor identified the most significant pathways, including miRNA targets in the extracellular matrix and membrane receptors, the focal adhesion-PI3K-Akt-mTOR signaling pathway, the inflammatory response pathway, and the apoptosis-related network due to altered Notch3 (FDR P < 0.05; Supplementary Table 1). In particular, the PI3K-Akt-mTOR signaling pathway is a downstream pathway of HER2 and is well known to be responsible for promoting cell proliferation and angiogenesis [26]. In addition, the COLTA1 gene had the second highest variable importance in the RF model training analysis and was reported, in a genomic study of a phase 3 clinical trial of trastuzumab, to be a key gene in the integrin signaling pathway, which was linked to decreased recurrence-free survival after adjuvant trastuzumab therapy [27].
PDXGEM for predicting response to gemcitabine in pancreatic cancer patients
Gemcitabine is currently used as a backbone of first-line or second-line treatments for pancreatic ductal adenocarcinoma (PDA), which carries a dismal prognosis with a typical overall survival (OS) of 6 months from diagnosis [28]. Although only six pancreatic cancer PDXs in the Novartis PDX panel had tumor volume change data available after gemcitabine treatment, we used PDXGEM to develop a gene signature predictive of response to gemcitabine.
We screened 965 drug sensitivity biomarkers using a t-test to contrast the expression levels of each probeset between two PDXs with shrunken tumor volumes and four PDXs with increased tumor volumes after receiving gemcitabine (nominal P < 0.05). We further selected 404 CCE biomarkers from the CCEA using pretreatment gene expression data of 39 patients with PDA (GSE15471 [29]). In an RF model training analysis of the PDX dataset, the final prediction model consisted of 298 CCE biomarkers. A high correlation coefficient of 0.959 was observed between predicted scores and observed percent changes in PDX tumor volumes.
As an external validation of the prediction performance of the final model, we collected gene expression data and survival outcome data from a retrospective study cohort of 63 patients with stage I/II PDA who received gemcitabine (GSE57495 [30]). For a comparative analysis of the survival outcomes, we defined two patient subgroups according to whether patients' prediction scores were higher or lower than the median prediction score. The low-score group showed a significantly better OS (median OS = 31.7 months, 95% CI = 19.5 to not reached) than the high-score group (median OS = 7.7 months, 95% CI = 13.5–28.3; log-rank P = 0.023) (Fig. 4a). To assess the prediction ability of the final model for gemcitabine-specific response, we analyzed in a similar manner survival outcome data from a prospective observational study cohort of 30 patients with PDA who did not receive adjuvant chemotherapy (M-MEXP-2780 [31], ArrayExpress). No significant difference was observed in OS, but the low-score group had a more promising OS than the high-score group (Fig. 4b; median OS = 22.9 months for the low-score group and 10.9 months for the high-score group; log-rank P = 0.18), implying that our PDXGEM signature was predictive of gemcitabine response and partly prognostic. To further confirm the prognostic value of the predictor, we analyzed two additional cohorts of patients with PDA (GSE17891 [32]; n = 29) and the International Cancer Genome Consortium [33] (ICGC; n = 82), even though their chemotherapeutic treatment records were not available. In the GSE17891 cohort, we observed a slightly better OS in the low-score group, but the difference was not significant (P = 0.6, Fig. 4c). In addition, a multivariable Cox regression analysis showed that a higher prediction score was significantly associated with a higher risk of death (hazard ratio (HR) = 1.087, 95% confidence interval (CI) = 1.01–1.161, p = 0.01), independent of known demographic and clinical prognostic factors of PDA, including age at surgery, tumor stage, and molecular subtypes of PDA. For the ICGC cohort, there was a better OS in the low-score group than in the high-score group (log-rank test P = 0.06; median OS = 25.6 months in the low-score group and 13.7 months in the high-score group; Fig. 4d), and the raw prediction score was again significantly associated with OS (HR = 1.026, 95% CI = 1.001–1.051), independent of age and tumor stage. Although further validation analysis incorporating patients' drug treatment data is needed, our observations suggested that the PDXGEM predictor is predictive of response to gemcitabine but may also have prognostic value for the long-term outcome of OS in patients with PDA.
PDXGEM for gemcitabine in pancreatic cancer patients. a-d Kaplan-Meier curves of overall survival between pancreatic cancer patients with a higher (gray) and lower (black) PDXGEM score than the median prediction score in (a) GSE57495, (b) M-MEXP-2780, (c) GSE17891, and (d) ICGC cohort. P-value was calculated using log-rank test
PDXGEM for predicting response to 5FU in colorectal cancer patients
5-fluorouracil (5FU) is widely used to treat solid tumors, including colorectal, breast, and head and neck cancer. Using PDXGEM, we built a gene signature to predict response to 5FU among patients with colorectal cancer (CRC) by analyzing data of 16 colorectal cancer PDXs on gene expression and percent of change in tumor volumes after treatment with 5FU. At the drug sensitivity biomarker discovery step, expression levels of 848 probesets were significantly correlated with the percent of change in tumor volumes (nominal P < 0.05). We next identified 332 CCE biomarkers from the CCEA of the PDXs and a cohort of metastatic CRC (mCRC) patients (GSE14095 [34]; n = 189). In the following RF prediction training step, all the CCE biomarkers displayed positive variable importance, and the resultant RF predictor yielded an almost perfect correlation coefficient of 0.978 between PDXGEM scores and observed tumor volume changes in all 16 PDX models. According to a gene ontology analysis of the biomarkers, the most significantly enriched function was the amino acid catabolic process, which is in agreement with the fact that the 5FU drug pathway is regulated via a complex network of anabolic and catabolic genes [35] (Supplementary Table 1).
As an external validation of the prediction performance of the RF predictor, we tested it using two gene expression datasets of CRC patients. The first dataset (GSE62322 [36]) was obtained from a phase 2 clinical trial in which the percent of change in lesion size was assessed among 20 patients with liver metastatic CRC after receiving FOLFIRI (leucovorin calcium, 5FU, and irinotecan). Our RF predictor produced prediction scores with a significantly large difference between 9 responders and 11 non-responders (Fig. 5a; AUC = 0.788, 95% CI = 0.56–0.99, P = 0.035). The other validation dataset was collected from a retrospective study (GSE39582 [37]) and consisted of two CRC patient cohorts: 1) 75 primary CRC patients treated with 5FU monotherapy, and 2) 69 primary and 20 mCRC patients who received 5FU as either FOLFIRI or FOLFOX (leucovorin calcium, 5FU, and oxaliplatin) combination therapy [37]. We divided patients into three balanced groups (low-, intermediate-, and high-score groups) by separating their PDXGEM scores into tertiles and examined survival trends across the three groups. In the 5FU monotherapy cohort, there was a trend toward longer OS in primary CRC patients with lower PDXGEM scores; however, this trend was not statistically significant, which might be due to low event rates (trend test P = 0.319) (Fig. 5b). In the combination therapy cohort, we observed a significant trend of lower scores toward enhanced survival (Tarone's trend test P = 0.03; median OS = 41, 22, and 20 months for the low-, intermediate-, and high-score strata, respectively; see Fig. 5c). In a pairwise comparison of survival between the three groups, we observed a significant difference between the low-score group and the intermediate-score group (log-rank test P = 0.033) and a borderline significant difference between the low-score group and the high-score group (P = 0.063). No significant difference was observed between the intermediate- and high-score groups (P = 0.56). However, a completely reversed survival trend was observed among the 69 patients with primary CRC, reflecting the known fact that adjuvant FOLFIRI is ineffective in treating resected primary cancer but effective in treating metastatic disease [38, 39] (Fig. 5d).
PDXGEM for 5FU response prediction in colorectal cancer patients. a Distribution of PDXGEM scores (y-axis) between responsive and non-responsive patients after treatment with 5FU-based chemotherapy. b-d Kaplan-Meier curves of overall survival for the high (dotted gray), intermediate (gray), and low (black) score groups in (b) primary colorectal cancer (CRC) patients receiving 5FU monotherapy, (c) metastatic CRC patients receiving FOLFIRI, and (d) primary CRC patients receiving FOLFIRI, all in GSE39582. Prediction scores were broken down at their tertiles. The P value was calculated using a survival trend test. e Distribution of PDXGEM scores (y-axis) of CRC patients who did not receive 5FU. The P value was calculated using Tarone's trend test
Finally, we examined the prediction performance of the RF predictor for 5FU-specific response using data obtained from a cohort of mCRC patients in a prospective clinical trial of cetuximab monotherapy (GSE5851 [40]). No significant difference in prediction scores was found (AUC = 0.59; P = 0.51), suggesting that the PDXGEM predictor is predictive of a 5FU-specific response (Fig. 5e).
PDXGEM for predicting cetuximab response in colorectal cancer patients
Cetuximab is a monoclonal antibody that targets the epidermal growth factor receptor (EGFR). It was approved for treating patients with EGFR-expressing mCRC without KRAS mutations. Given that around 40% of patients with KRAS wild-type tumors are unresponsive to this targeted therapy, there is an unmet need to identify additional relevant predictive biomarkers beyond KRAS mutation status [41]. To address this need, we used the PDXGEM to construct a predictive multi-gene signature of cetuximab response in patients with mCRC.
We selected 997 differentially expressed probesets via unpaired t-test analyses of nine sensitive and seven resistant PDXs after receiving cetuximab therapy (nominal P < 0.05). We then screened 670 biomarkers that were concordantly co-expressed across the PDXs and a cohort of mCRC patients (GSE14095 [34]). We constructed an optimal RF predictor based on 585 CCE biomarkers and observed a strong correlation coefficient of 0.98 (P < 0.01, n = 16) between prediction scores and observed percent of change in tumor volumes in the PDX training dataset.
We proceeded to conduct an external validation study using data from 68 mCRC patients who received cetuximab monotherapy in a phase 2 clinical trial (GSE5851 [40]). We observed a significant difference in prediction scores between 6 responders and 62 non-responders (AUC = 0.699, P = 0.041; Fig. 6a). When patients' survival outcomes were analyzed as described in the prior 5FU PDXGEM study, the high-score group showed worse progression-free survival, with a 6-month PFS rate of 3.7%, compared with the low- and intermediate-score groups, which had 6-month PFS rates of 18.5% and 19.2%, respectively (Supplementary Fig. 6A; log-rank P = 0.085). Moreover, in a subgroup analysis restricted to patients with wild-type KRAS, a significant difference in PDXGEM scores was observed between responders and non-responders (p = 0.038; Fig. 6b).
PDXGEM prediction for response to Cetuximab in metastatic colorectal cancer patient. a Distribution of PDXGEM scores (y-axis) is compared between metastatic colorectal cancer patients with complete response (CR) or partial response (PR) and those with stable of disease (SD) or progressive disease (PD) after treatment with cetuximab. Blue and red dots are subjects with or without positive epidermal growth factor receptor (EGFR) expression, respectively. b PDXGEM scores stratified by KRAS mutation status. c Kaplan-Meier curves of overall survival in metastatic colorectal cancer patients who did not receive cetuximab
Because cetuximab is indicated for EGFR-expressing mCRC patients with wild-type KRAS, we examined whether the PDXGEM score was associated with either the EGFR expression level or the mutation status of the KRAS gene in the GSE5851 cohort. There was no significant correlation between the PDXGEM score and EGFR expression level (r = − 0.103, P = 0.41, Supplementary Fig. 6b). No significant difference was observed in PDXGEM scores between patients with wild-type KRAS and those with mutant KRAS (P = 0.941, Fig. 6c).
To determine whether the predictor has cetuximab specificity, we validated it using data from an independent cohort of mCRC patients (GSE62322 [36]) who received FOLFIRI but not cetuximab. No significant difference was seen in PDXGEM scores between 9 responders and 10 non-responders (AUC = 0.444; P = 0.72; Fig. 6d), suggesting that the predictor is specifically predictive of response to cetuximab.
PDXGEM signature predictive of erlotinib response in NSCLC patients and cell lines
Erlotinib is an EGFR tyrosine kinase inhibitor that was approved for the treatment of non-small cell lung cancer (NSCLC), but its overall therapeutic efficacy is minimal [42]. We constructed a multi-gene expression signature to predict response to erlotinib by analyzing data on pretreatment gene expression profiles and percent of changes in tumor volume in 8 NSCLC PDXs following erlotinib administration.
We screened 1624 initial drug sensitivity biomarkers using an unpaired t-test that compared three PDXs with tumor shrinkage to five PDXs with tumor growth. Among them, 112 biomarkers showed concordant co-expression patterns between the PDXs and a cohort of 150 NSCLC patients (GSE43580 [43]). Finally, a 106-gene RF predictor of post-erlotinib treatment tumor volume change was trained with all the PDXs. The PDXGEM score from the RF predictor was significantly correlated with the observed percent of change in tumor volume in the PDX training set (r = 0.973 and P < 0.01; n = 8).
To validate the prediction performance of the RF predictor, we generated PDXGEM scores for in vitro erlotinib-treated NSCLC cell lines (GSE31625 [44]; n = 46). There was a significantly large difference in PDXGEM scores between 18 erlotinib-sensitive cell lines and 28 erlotinib-resistant cell lines (AUC = 0.708 and P = 0.006; see Fig. 7a). We next validated the RF predictor on data from a prospective clinical trial cohort of 41 refractory NSCLC patients who received first-line treatment with erlotinib in combination with bevacizumab (GSE37138 [45]). We observed a significant difference in PDXGEM scores between 5 responders and 36 non-responders (AUC = 0.689 and P = 0.016; Fig. 7b). To examine whether the RF predictor is also predictive of treatment response in the recurrent disease setting, we further validated the predictor on data from 26 patients with relapsed or metastatic NSCLC who had EGFR mutations and received erlotinib as second-line treatment (GSE33072 [46]). Although our predictor yielded the highest prediction scores for the 2 patients with the shortest PFS, there was no significant difference in PFS between the high-score group and the low-score group (Fig. 7c and Supplementary Fig. 7A), indicating that our predictor may not be predictive in the second-line treatment setting.
PDXGEM prediction for response to erlotinib in non-small cell lung cancer (NSCLC) patients. PDXGEM scores (a) between erlotinib-sensitive and erlotinib-resistant NSCLC cell lines, and (b) between NSCLC patients who were responsive and those who were nonresponsive to erlotinib in the first-line setting. (c) Progression-free survival curves in metastatic NSCLC patients who received erlotinib as second-line treatment
To determine whether the PDXGEM predictor has erlotinib specificity, we produced PDXGEM scores for 20 patients with lung squamous cell carcinoma, an NSCLC subtype, who did not receive erlotinib or other EGFR inhibitors (GSE68793). There was no significant association between prediction scores and PFS or OS (Supplementary Fig. 7B and 7C), showing that the predictor is specifically predictive of response to erlotinib. However, additional studies are needed to further confirm its erlotinib specificity in patients with other subtypes of NSCLC.
Collectively, our validation results showed that our PDXGEM predictor was predictive of response to erlotinib in refractory NSCLC patients in the first-line treatment setting, but not in the second-line treatment setting.
Predictive cancer biomarkers are necessary for personalized cancer therapy, in which a cancer patient is likely to be treated with the most effective anti-cancer drugs available.
In this study, we developed a statistical bioinformatics pipeline, PDXGEM, to build a multi-gene expression signature as a quantitative cancer biomarker for predicting cancer patients' responses to a single anti-cancer drug on the basis of data on pretreatment gene expression profiles and posttreatment outcomes in preclinical PDX models. We demonstrated that PDXGEM can build a predictive gene expression signature for cancer patients' responses to chemotherapy and targeted therapy agents.
Because PDX tumors can alter the biological characteristics of their origin patient tumors to adapt to new growth environments, we devised the CCEC statistic to quantify the degree of concordance of co-expression patterns between preclinical PDX tumors and cancer patient tumors. Although drug sensitivity biomarkers obtained directly from a correlative or differential expression analysis of data from preclinical PDX models could serve as predictive biomarkers by themselves, we showed that a subset of them with significant CCECs induced a more translatable predictor, thereby yielding better performance in predicting therapeutic outcomes in cancer patients, as shown in our examples.
Notably, the PDXGEM does not use any patients' outcome data during the development of a prediction model, whereas various strategies for developing predictive gene signatures by analyzing data of preclinical models often use patients' outcome data for screening biomarkers or training a prediction model. The PDXGEM only uses pretreatment gene expression data of cancer patients at the CCEA step. Therefore, the PDXGEM can build gene signatures even for anti-cancer drugs that are not yet approved for patients with a certain type of cancer. Although CCEA was introduced to improve GEMs that are trained using only PDX model data, PDXGEM can still build a gene expression signature by skipping the CCEA step in the absence of available pretreatment patient data.
The results of our PDXGEM application showed that significantly predictive GEMs can be developed from a small cohort of PDX models. For example, the PDXGEM for erlotinib uses only 8 PDX models, but validation analyses of this model on NSCLC cell lines and on patients with this disease yielded statistically significant prediction performance. This level of predictive performance is a highly desirable and encouraging feature when there is a limited number of available preclinical PDX models.
The indication of targeted therapy agents highly depends on the status of their known companion biomarkers in patient tumors. Our PDXGEM predictor for cetuximab is able to differentiate responsive from nonresponsive patients even among mCRC patients with wild-type KRAS genes, demonstrating that integrative use of PDXGEM along with known companion biomarkers of a targeted therapy has the potential to improve clinical outcomes, and thereby the quality of life, of a targeted cancer patient population. Moreover, PDXGEM has the potential to be used to develop a predictor of response to the recent breakthrough of immunotherapy. Several immuno-oncology studies have begun to create and investigate PDX mouse models with a human immune system [47]. Data collected from these humanized mice will enable our PDXGEM pipeline to develop predictive cancer biomarkers of response to immunotherapy.
Many cancer drugs, including those used in our study, are multi-indication drugs that can be used for treating more than one cancer type. For instance, paclitaxel is currently a standard chemotherapy drug for treating breast cancer and ovarian cancer. There is great interest in identifying new treatment indications for existing anti-cancer therapy agents. We recently introduced a drug repositioning approach (CONCORD) for translating predictive cancer biomarkers from one cancer type to another [13]. The CONCORD framework was used to analyze the gene expression and drug sensitivity data of a large panel of cancer cell lines with different types of cancer. Similarly, given that more PDX panels that span multiple types of cancer are becoming publicly available, there will be great interest in using PDXGEM to explore a drug's potential for anti-cancer drug repositioning by testing the predictive value of its gene expression signature across multiple types of cancer.
There are clear challenges and opportunities in developing the PDXGEM pipeline. A PDXGEM predictor for only one drug may provide limited information on whether a patient is likely to be cured with that drug. However, if PDXGEM predictors are built for a multitude of FDA-approved anticancer drugs and then used simultaneously to evaluate comparative effectiveness among drugs, it may become possible to choose the most beneficial drug for a cancer patient in advance.
In the drug sensitivity gene discovery step, we performed a t-test to identify differentially expressed genes between PDXs with shrunken tumor volumes and PDXs with increased tumor volumes because we presumed that these genes would bear biologically reliable information regarding the pharmacogenomic mechanisms that inhibit tumor growth and kill tumor cells. When a group sample size was less than three, we used a correlation analysis instead of the t-test, as the t-test can lose statistical power in microarray data analysis because of small sample sizes. However, this approach may not be optimal. More sophisticated bioinformatics methods such as limma [48], or recently popular deep-learning algorithms that do not involve any feature selection step, may provide a better set of candidate predictive genes. Evaluating their performance in developing an optimal biomarker discovery method or guideline would constitute an exciting future research topic.
The results of our concordant co-expression analysis were dependent on a pretreatment gene expression dataset of cancer patients that represented the cancer type of interest. Although we used the largest gene expression dataset available, in terms of the number of patients and the coverage of histological cancer subtypes, merging multiple independent gene expression datasets would allow for a more comprehensive gene expression dataset of individual cancer subtypes. We built a multi-gene expression model using the RF modeling algorithm to handle a number of gene biomarkers larger than the sample size of the PDX model training data. However, other statistical prediction modeling and machine learning algorithms, such as penalized linear regression and support vector machine analyses, could also be used to build more accurately predictive models [49]. The majority of gene expression datasets we analyzed were profiled on microarray platforms. We validated the PDXGEM signature for gemcitabine on gene expression data that were profiled using an RNA sequencing (RNAseq) platform (ICGC cohort). However, validating the signature's cross-platform prediction performance on other next-generation sequencing datasets is warranted. Furthermore, a recent pharmacogenomics study of cancer cell lines reported that transcript-level expression data profiled on the RNAseq platform could lead to more predictive biomarkers than gene-level expression data. The application of PDXGEM to RNAseq transcriptional profiling data may also lead to a better-performing predictive cancer biomarker.
In developing a predictive biomarker, it is important to evaluate whether the biomarker is specifically predictive of the drug of interest. We thus validated the final PDXGEM of a drug on patients who were not treated with the drug. However, we were unable to perform this validation on multiple datasets because of a lack of available data; the majority of drugs analyzed in our study are core components of current standard-of-care regimens. Furthermore, although many cancer treatment regimens are combinations of multiple chemotherapy drugs, our current PDXGEM study is limited to the prediction of response to a single drug. Further research is warranted to develop a predictor of response to combination chemotherapy based on data obtained from PDXs treated with a single drug.
It will be useful to investigate whether PDXGEM can be extended to different molecular platform data such as genome-wide genetic variant data, proteomics data, and metabolomics data. The mathematical framework of PDXGEM will be broadly applicable to these different molecular platforms. However, one may need to carefully examine whether large, reliable patient data resources are available and whether predictive therapeutic biomarkers can be obtained from such molecular profile data. Another promising research focus is to predict post-treatment adverse events (AEs) in advance on the basis of gene expression data; in cancer treatment, an AE is an important post-treatment outcome alongside response and survival. Parts of our PDXGEM pipeline, such as the initial biomarker (feature) selection and the multi-gene expression model training, will be directly applicable to identifying AE-correlated genes and training a multi-gene expression model.
Lastly, we developed a web-based PDXGEM application (http://pdxgem.moffitt.org) to share the PDXGEM algorithm with the scientific community, in the hope that this tool will help researchers gain a better understanding of drug targets and support validation in prospective studies.
Molecular gene expression profiles and drug activity data from PDX tumors can be used to develop highly predictive cancer biomarkers for predicting responses to anti-cancer drugs in cancer patients. The clinical utility of PDXGEM predictions should be assessed in a prospective study.
Gene expression data and anti-cancer response data of PDX and cancer patient cohorts
Data on gene expression profiles and post-treatment percent change in tumor volume of the Novartis PDX panel were obtained from Gene Expression Omnibus (GEO) repository (https://www.ncbi.nlm.nih.gov/geo/). The gene expression data and clinical outcome data of cancer patients used for the CCEA or validation analyses were also publicly available at GEO (http://www.ncbi.nlm.nih.gov/geo) as well as ArrayExpress (https://www.ebi.ac.uk/arrayexpress/) and ICGC (https://dcc.icgc.org/repository). A descriptive summary and accession ID of all the data can be found in Supplementary Table 2.
In vivo PDX-based drug sensitivity biomarker discovery
We discovered genes whose expression levels were significantly associated with the in vivo activity of each anti-cancer drug administered to the PDX tumors of the target cancer type. Drug activity was calculated as the percent change in PDX tumor volume (= 100 × (post-treatment tumor volume – pretreatment tumor volume) / pretreatment tumor volume). A negative drug activity value for a PDX thus indicates tumor shrinkage, and a positive drug activity value represents tumor growth. Drug activity data and pretreatment gene expression profiling data of the PDX models were analyzed to screen initial drug sensitivity biomarkers. The basic unit of the biomarkers was an individual probeset on the PDX microarray. Drug sensitivity biomarkers were selected using an unpaired two-sample t-test that quantifies differential gene expression levels between PDXs with shrunken tumors and those with grown tumors. When the sample size in one of the two PDX groups was less than three and the variation in tumor volume changes was near zero, we instead used a correlation analysis of gene expression levels with the percent change in tumor volumes to screen the initial drug sensitivity biomarkers. For both the t-test and correlation analyses, all statistical tests were two-sided, and the FDR was controlled to be less than 0.05 to correct for multiple comparisons. When no significant genes were found, mainly because of the small sample size of available PDXs, we used a less conservative nominal type I error rate of 0.05 to identify initial drug sensitivity biomarkers.
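A minimal R sketch of this screening step is given below. The object names ('expr', a probeset-by-PDX expression matrix, and 'tumor_change', the percent changes in tumor volume) are placeholders, and the switch between the t-test and the correlation test is simplified relative to the criteria described above.

```r
## Illustrative sketch of the drug-sensitivity biomarker screen (not the exact
## production code). 'expr' is a probeset-by-PDX matrix of pretreatment
## expression values; 'tumor_change' is the percent change in tumor volume.
screen_biomarkers <- function(expr, tumor_change, fdr = 0.05, use_ttest = TRUE) {
  shrunk <- tumor_change < 0                       # PDXs with tumor shrinkage
  if (use_ttest) {
    ## differential expression between shrunken and grown tumors
    pvals <- apply(expr, 1, function(x) t.test(x[shrunk], x[!shrunk])$p.value)
  } else {
    ## alternative: correlate expression with percent tumor-volume change
    pvals <- apply(expr, 1, function(x) cor.test(x, tumor_change)$p.value)
  }
  qvals <- p.adjust(pvals, method = "BH")          # Benjamini-Hochberg FDR
  rownames(expr)[qvals < fdr]                      # probesets passing the FDR cut
}
```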
Concordant co-expression analysis (CCEA)
Because PDX tumors can alter the biological characteristics of their origin patient tumors to adapt to new growth environments, not all of the drug sensitivity genes screened in an analysis of PDX tumor data will necessarily be predictive of cancer patients' responses. To explicitly consider such biological differences, we selected genes with concordant co-expression patterns between the PDX tumors and cancer patient tumors. To quantify the degree of concordance of each gene's co-expression relationships, we calculated the concordance co-expression coefficient (CCEC) for each gene as follows. Using gene expression data from each of the two cancer systems separately, we first constructed two n × n correlation matrices for the n initial drug-sensitivity biomarkers. We denoted the two correlation matrices, one for the PDX tumor set and the other for the pretreatment cancer patient tumor set, as \( U = {[U_{ij}]}_{n \times n} \) and \( V = {[V_{ij}]}_{n \times n} \), where \( U_{ij} \) and \( V_{ij} \) are the correlation coefficients between genes i and j in the PDX set and the patient tumor set, respectively. The CCEC for gene g, c(g), is derived as
$$ c(g)=\frac{2\sum_{k\ne g}\left(U_{kg}-\overline{U_{.g}}\right)\left(V_{kg}-\overline{V_{.g}}\right)}{\sum_{k\ne g}\left(U_{kg}-\overline{U_{.g}}\right)^2+\sum_{k\ne g}\left(V_{kg}-\overline{V_{.g}}\right)^2+\sum_{k\ne g}\left(\overline{U_{.g}}-\overline{V_{.g}}\right)^2} $$
where \( \overline{U_{.g}}=\frac{1}{n-1}\sum_{k\ne g}U_{kg} \) and \( \overline{V_{.g}}=\frac{1}{n-1}\sum_{k\ne g}V_{kg} \).
Briefly, the computation of the CCEC c(g) starts from two vectors of gene-gene correlation coefficients. One vector consists of the correlation coefficients of gene g with the other n-1 genes in the PDX tumor set. The other vector is computed in the same manner for the patient tumor set. The CCEC then quantifies the degree of agreement between the two vectors by calculating Lin's concordance correlation coefficient [50]. Therefore, in the example of paclitaxel, c(g) reflects the degree of concordance between the breast cancer PDX panel and the GSE3494 breast cancer patient cohort in the expression relationships of probeset g with the other n-1 probesets. If c(g) took a statistically significant positive value under an FDR of 0.05, then probeset g was selected as a CCE biomarker. Because probeset g was initially selected among the n drug-sensitivity biomarkers, the probeset still retained a significant association with drug sensitivity. To compute the CCEC, we used the 'epi.ccc' function implemented in the epiR package in R. The P-value for the concordance correlation coefficient was corrected for multiple testing by using the Benjamini-Hochberg method implemented in the 'p.adjust' function.
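The sketch below illustrates how the CCEC can be computed for every drug sensitivity biomarker with the 'epi.ccc' function; 'expr_pdx' and 'expr_pt' are placeholder matrices (probesets × samples) restricted to the n drug sensitivity biomarkers, and the significance filtering is simplified to a fixed cutoff rather than the FDR-based selection described above.

```r
## Illustrative sketch of the concordant co-expression analysis (CCEA).
## 'expr_pdx' and 'expr_pt' are probeset-by-sample expression matrices for the
## PDX panel and the pretreatment patient cohort, restricted to the n initial
## drug sensitivity biomarkers.
library(epiR)

ccec <- function(expr_pdx, expr_pt) {
  U <- cor(t(expr_pdx))                  # n x n gene-gene correlations in PDXs
  V <- cor(t(expr_pt))                   # n x n gene-gene correlations in patients
  out <- sapply(seq_len(nrow(U)), function(g) {
    ## Lin's concordance correlation coefficient between the two correlation
    ## profiles of gene g (its correlations with the other n-1 genes)
    epi.ccc(U[g, -g], V[g, -g])$rho.c$est
  })
  names(out) <- rownames(expr_pdx)
  out
}

# cce_values <- ccec(expr_pdx, expr_pt)
# cce_biomarkers <- names(cce_values)[cce_values > 0.2]  # example cutoff;
#                                                        # the study uses FDR-based significance
```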
PDXGEM modeling and evaluation
A multi-gene expression model for predicting each drug's response was built using the gene expression data and drug activity data of the PDX panel that was used in the above drug sensitivity biomarker discovery. The drug activity data and the gene expression data of the PDX models for all CCE biomarkers, defined as drug sensitivity genes with a statistically significant CCEC, formed the model training data. After completing a gene-wise standardization of the model training data, we performed a random forest classification and regression analysis using the 'randomForest' function implemented in the randomForest package with its default settings in R. The prediction performance of the resultant RF predictor was first evaluated by calculating the correlation coefficient between the observed and predicted tumor volume changes in the PDX models. When there was a significant correlation relationship, the RF predictor was validated on gene expression data and post-treatment clinical outcome data of cancer patient cohorts that were independent of the biomarker discovery and the prediction model development.
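A condensed R sketch of this training step is shown below; 'expr_pdx_cce' (expression of the CCE biomarkers in the PDX panel) and 'tumor_change' are placeholder objects, and the in-sample correlation check mirrors the evaluation described above.

```r
## Illustrative sketch of the random-forest (RF) model training step.
## 'expr_pdx_cce' is a CCE-biomarker-by-PDX expression matrix and 'tumor_change'
## is the observed percent change in PDX tumor volume.
library(randomForest)

x <- scale(t(expr_pdx_cce))          # samples in rows; gene-wise standardization
set.seed(1)
rf_fit <- randomForest(x = x, y = tumor_change, importance = TRUE)

## correlation between predicted and observed tumor volume changes in the PDXs
pdxgem_score <- predict(rf_fit, newdata = x)
cor(pdxgem_score, tumor_change)
```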
PDXGEM prediction and validation
To validate the prediction performance of each drug's final RF prediction model, we produced prediction scores of the RF model (PDXGEM scores) for cancer patient cohorts that were not involved in either the drug-sensitivity biomarker discovery or the prediction model development procedures. The performance of each drug's PDXGEM prediction was then assessed in a prospective manner. For cancer patient cohorts with binary response outcome data, we compared prediction scores between responsive and non-responsive patient groups by performing a two-sample t-test. The AUC was also calculated to summarize the overall prediction accuracy of the model. For cancer patient cohorts with survival outcome data, survival distributions were compared between prediction score strata via Kaplan-Meier analysis, the log-rank test, and Tarone's trend test. Multivariable Cox proportional hazards regression analysis was also used to examine the association between raw continuous prediction scores and survival outcomes. All survival analyses were performed using the survival and survminer packages in R.
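A hedged R sketch of these validation analyses follows; cohort is a hypothetical data frame with placeholder columns score, response, time, and event, and the Tarone trend test and multivariable adjustment are not reproduced here.

```r
library(pROC)
library(survival)

# Binary response cohorts: compare scores between groups and summarize accuracy by AUC
t.test(score ~ response, data = cohort)            # two-sample t-test, responders vs non-responders
auc(roc(cohort$response, cohort$score))            # area under the ROC curve

# Survival cohorts: Kaplan-Meier / log-rank by score strata, Cox model on raw scores
cohort$score_group <- cut(cohort$score, quantile(cohort$score, c(0, 0.5, 1)),
                          include.lowest = TRUE)   # e.g. a median split into two strata
survfit(Surv(time, event) ~ score_group, data = cohort)   # Kaplan-Meier curves
survdiff(Surv(time, event) ~ score_group, data = cohort)  # log-rank test
coxph(Surv(time, event) ~ score, data = cohort)    # continuous scores; clinical covariates would be added for the multivariable model
```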
Gene ontology analysis
To assess potential functional behaviors and mechanisms by which the multi-gene expression model could predict patients' responses to an anticancer drug of interest, we selected CCE biomarkers showing a positive value of variable importance in the RF analysis. In brief, variable importance is a model-selection measure that summarizes the difference in prediction accuracy between RF predictors built with and without an individual biomarker. Finally, the web-based Enrichr tool was used to conduct GO analysis by submitting the set of CCE biomarkers with positive variable importance values [51].
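A small R sketch of the biomarker selection feeding the GO analysis is given below; note that permutation importance requires importance = TRUE, a small departure from the all-default training call described above, and the Enrichr submission itself is done through the web interface.

```r
library(randomForest)

rf_imp <- randomForest(x = x_train, y = y_train, importance = TRUE)  # permutation importance enabled
vi <- importance(rf_imp, type = 1)          # mean decrease in accuracy (%IncMSE)
go_genes <- rownames(vi)[vi[, 1] > 0]       # CCE biomarkers with positive variable importance
writeLines(go_genes, "enrichr_input.txt")   # gene list to submit to the web-based Enrichr tool
```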
The datasets analyzed during the current study are available with data accession IDs presented in the manuscript in the Gene Expression Omnibus repository, https://www.ncbi.nlm.nih.gov, and ArrayExpress repository, https://www.ebi.ac.uk/arrayexpress. The data that support the findings of this study are available at http://pdxgem.moffitt.org/publications. R scripts and functions are also available at http://github.com/ykim5g/PDXGEM.
5FU: 5-Fluorouracil
AE: Adverse Event
AUC: Area Under the Receiver Operating Characteristic Curve
CCEA: Concordance Co-Expression Analysis
CCEC: Concordance Co-Expression Coefficient
CRC: Colorectal Cancer
FAC: Fluorouracil, doxorubicin, and cyclophosphamide
GEM: Gene Expression Model
HER2: Human Epidermal Growth Factor Receptor 2
ICGC: International Cancer Genome Consortium
mCRC: Metastatic Colorectal Cancer
NSCLC: Non-Small Cell Lung Cancer
OS: Overall Survival
pCR: Pathological Complete Response
PD: Progressive Disease
PDA: Pancreatic Ductal Adenocarcinoma
PDX: Patient-Derived Xenograft
PFS: Progression-Free Survival
PR: Partial Response
RD: Residual Disease
RNAseq: RNA sequencing
ROC: Receiver Operating Characteristic Curve
SD: Stable Disease
Fisher R, Pusztai L, Swanton C. Cancer heterogeneity: implications for targeted therapeutics. Br J Cancer. 2013;108:479–85.
Rupp T, Zuckerman D. Quality of life, overall survival, and costs of Cancer drugs approved based on surrogate endpoints. JAMA Intern Med. 2017;177:276–7.
Zou J, Wang E. Cancer biomarker discovery for precision medicine: new progresses. Curr Med Chem. 2018;26:7655–71.
Goossens N, Nakagawa S, Sun X, Hoshida Y. Cancer biomarker discovery and validation. Transl Cancer Res. 2015;4:256–69.
Boyd MR. In: Teicher BA, editor. The NCI in vitro anticancer drug discovery screen. in anticancer drug development guide: preclinical screening, clinical trials, and approval. Totowa: Humana Press; 1997. p. 23–42.
Yang W, et al. Genomics of drug sensitivity in Cancer (GDSC): a resource for therapeutic biomarker discovery in cancer cells. Nucleic Acids Res. 2013;41:D955–61.
Barretina J, et al. The Cancer cell line encyclopedia enables predictive modelling of anticancer drug sensitivity. Nature. 2012;483:603–7.
Bansal M, et al. A community computational challenge to predict the activity of pairs of compounds. Nat Biotechnol. 2014;32:1213–22.
Ferriss JS, et al. Multi-gene expression predictors of single drug responses to adjuvant chemotherapy in ovarian carcinoma: predicting platinum resistance. PLoS One. 2012;7:e30550.
Shen K, et al. A systematic evaluation of multi-gene predictors for the pathological response of breast cancer patients to chemotherapy. PLoS One. 2012;7:e49529.
Lee JK, et al. Prospective comparison of clinical and genomic multivariate predictors of response to neoadjuvant chemotherapy in breast cancer. Clin Cancer Res. 2010;16:711–8.
Kim Y, et al. Retrospective analysis of survival improvement by molecular biomarker-based personalized chemotherapy for recurrent ovarian cancer. PLoS One. 2014;9:e86532.
Kim Y, Dillon PM, Park T, Lee JK. CONCORD biomarker prediction for novel drug introduction to different cancer types. Oncotarget. 2018;9:1091–106.
Byrne AT, et al. Interrogating open issues in cancer precision medicine with patient-derived xenografts. Nat Rev Cancer. 2017;17:254–68.
Hollingshead MG, et al. Gene expression profiling of 49 human tumor xenografts from in vitro culture through multiple in vivo passages--strategies for data mining in support of therapeutic studies. BMC Genomics. 2014;15:393.
Hidalgo M, et al. Patient-derived xenograft models: an emerging platform for translational cancer research. Cancer Dis. 2014;4:998–1013.
Bruna A, et al. A biobank of breast cancer explants with preserved intra-tumor heterogeneity to screen anticancer compounds. Cell. 2016;167:260–274 e222.
Miller LD, et al. An expression signature for p53 status in human breast cancer predicts mutation status, transcriptional effects, and patient survival. Proc Natl Acad Sci U S A. 2005;102:13550–5.
Tabchy A, et al. Evaluation of a 30-gene paclitaxel, fluorouracil, doxorubicin, and cyclophosphamide chemotherapy response predictor in a multicenter randomized trial in breast cancer. Clin Cancer Res. 2010;16:5351–61.
Esserman LJ, et al. Chemotherapy response and recurrence-free survival in neoadjuvant breast cancer depends on biomarker profiles: results from the I-SPY 1 TRIAL (CALGB 150007/150012; ACRIN 6657). Breast Cancer Res Treat. 2012;132:1049–62.
Horak CE, et al. Biomarker analysis of neoadjuvant doxorubicin/cyclophosphamide followed by ixabepilone or paclitaxel in early-stage breast cancer. Clin Cancer Res. 2013;19:1587–95.
Hatzis C, et al. A genomic predictor of response and survival following taxane-anthracycline chemotherapy for invasive breast cancer. JAMA. 2011;305:1873–81.
Miyake T, et al. GSTP1 expression predicts poor pathological complete response to neoadjuvant chemotherapy in ER-negative breast cancer. Cancer Sci. 2012;103:913–20.
Popovici V, et al. Effect of training-sample size and classification difficulty on the accuracy of genomic predictors. Breast Cancer Res. 2010;12:R5.
Slamon D, et al. Adjuvant trastuzumab in HER2-positive breast cancer. N Engl J Med. 2011;365:1273–83.
Menard S, Pupa SM, Campiglio M, Tagliabue E. Biologic and therapeutic role of HER2 in cancer. Oncogene. 2003;22:6570–8.
Perez EA, et al. Genomic analysis reveals that immune function genes are strongly linked to clinical outcome in the north central Cancer treatment group n9831 adjuvant Trastuzumab trial. J Clin Oncol. 2015;33:701–8.
Adamska A, Domenichini A, Falasca M. Pancreatic ductal adenocarcinoma: current and evolving therapies. Int J Mol Sci. 2017;18:1338.
Badea L, Herlea V, Dima SO, Dumitrascu T, Popescu I. Combined gene expression analysis of whole-tissue and microdissected pancreatic ductal adenocarcinoma identifies genes specifically overexpressed in tumor epithelia. Hepatogastroenterology. 2008;55:2016–27.
Chen DT, et al. Prognostic fifteen-gene signature for early stage pancreatic ductal adenocarcinoma. PLoS One. 2015;10:e0133562.
Winter C, et al. Google goes cancer: improving outcome prediction for cancer patients by network-based ranking of marker genes. PLoS Comput Biol. 2012;8(5):e1002511.
Collisson EA, et al. Subtypes of pancreatic ductal adenocarcinoma and their differing responses to therapy. Nat Med. 2011;17:500–U140.
Zhang JJ, et al. International Cancer genome consortium data portal-a one-stop shop for cancer genomics data. Database. 2011;2011:bar026.
Watanabe T, et al. Gene expression signature and response to the use of leucovorin, fluorouracil and oxaliplatin in colorectal cancer patients. Clin Transl Oncol. 2011;13:419–25.
Muhale FA, Wetmore BA, Thomas RS, McLeod HL. Systems pharmacology assessment of the 5-fluorouracil pathway. Pharmacogenomics. 2011;12:341–50.
Del Rio M, et al. Gene expression signature in advanced colorectal cancer patients select drugs and response for the use of leucovorin, fluorouracil, and irinotecan. J Clin Oncol. 2007;25:773–80.
Marisa L, et al. Gene expression classification of colon cancer into molecular subtypes: characterization, validation, and prognostic value. PLoS Med. 2013;10:e1001453.
Ychou M, et al. A phase III randomised trial of LV5FU2 + irinotecan versus LV5FU2 alone in adjuvant high-risk colon cancer (FNCLCC Accord02/FFCD9802). Ann Oncol. 2009;20:674–80.
Van Cutsem E, et al. Randomized phase III trial comparing biweekly infusional fluorouracil/leucovorin alone or with irinotecan in the adjuvant treatment of stage III colon cancer: PETACC-3. J Clin Oncol. 2009;27:3117–25.
Khambata-Ford S, et al. Expression of epiregulin and amphiregulin and K-ras mutation status predict disease control in metastatic colorectal cancer patients treated with cetuximab. J Clin Oncol. 2007;25:3230–7.
De Stefano A, Carlomagno C. Beyond KRAS: predictive factors of the efficacy of anti-EGFR monoclonal antibodies in the treatment of metastatic colorectal cancer. World J Gastroenterol. 2014;20:9732–43.
Zappa C, Mousa SA. Non-small cell lung cancer: current treatment and future advances. Transl Lung Cancer Res. 2016;5:288–300.
Tarca AL, et al. Strengths and limitations of microarray-based phenotype prediction: lessons learned from the IMPROVER diagnostic signature challenge. Bioinformatics. 2013;29:2892–9.
Balko JM, et al. Gene expression patterns that predict sensitivity to epidermal growth factor receptor tyrosine kinase inhibitors in lung cancer cell lines and human lung tumors. BMC Genomics. 2006;7:289.
Baty F, et al. EGFR exon-level biomarkers of the response to bevacizumab/erlotinib in non-small cell lung cancer. PLoS One. 2013;8:e72966.
Byers LA, et al. An epithelial-mesenchymal transition gene signature predicts resistance to EGFR and PI3K inhibitors and identifies Axl as a therapeutic target for overcoming EGFR inhibitor resistance. Clin Cancer Res. 2013;19:279–90.
Chang DK, et al. Human anti-CAIX antibodies mediate immune cell inhibition of renal cell carcinoma in vitro and in a humanized mouse model in vivo. Mol Cancer. 2015;14:119.
Wettenhall JM, Smyth GK. limmaGUI: a graphical user interface for linear modeling of microarray data. Bioinformatics. 2004;20:3705–6.
Wainberg M, Alipanahi B, Frey BJ. Are random forests truly the best classifiers? J Mach Learn Res. 2016;17:e28966.
Lin LI. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989;45:255–68.
Kuleshov MV, et al. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Res. 2016;44:W90–7.
The author would like to thank Ms. Ava Cho for proofreading the article. Editorial assistance was provided by the Moffitt Cancer Center's Scientific Editing Department by Dr. Paul Fletcher & Daley Drucker.
This study was supported by the Shared Resources at the H. Lee Moffitt Cancer Center and Research Institute, an NCI designated Comprehensive Cancer Center (P30-CA076292). The funding body did not play any roles in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
Department of Biostatistics and Bioinformatics, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa, Florida, 33612-9416, USA
Youngchul Kim
Department of Gastrointestinal Oncology, Moffitt Cancer Center, Tampa, Florida, 33612-9416, USA
Daewon Kim
Biostatistics and Bioinformatics Shared Resource, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa, Florida, 33612-9416, USA
Biwei Cao & Rodrigo Carvajal
Department of Cell Biology, Microbiology and Molecular Biology, University of South Florida, Tampa, FL, 33620, USA
Biwei Cao
Rodrigo Carvajal
Study concepts and design: YK and DK; data acquisition and assembly: YK and BC; quality control of data and algorithms: YK, BC, and DK; data analysis and interpretation: YK, BC, MK and DK; Statistical analysis: YK and BC; software development: YK and RC; manuscript preparation: YK and DK; manuscript editing: YK, BC, RC, MK, and DK; manuscript review: YK, BC, RC, MK, and DK; final approval of manuscript: All authors read and approved this manuscript to be published.
Correspondence to Youngchul Kim.
Additional file 1: Supplementary Figure 1.
File type: PDF. Distribution of pairwise gene-gene correlation coefficients at varying concordant co-expression coefficient (CCEC) values. Supplementary Figure 2. File type: PDF. Variable importance of biomarkers in paclitaxel PDXGEM. Supplementary Figure 3. File type: PDF. Prediction scores of paclitaxel PDXGEM in breast cancer patients. Supplementary Figure 4. File type: PDF. Prediction scores of paclitaxel PDXGEM built by skipping CCEC analysis. Supplementary Figure 5. File type: PDF. Prediction scores of trastuzumab PDXGEM in breast cancer patients who were not treated with trastuzumab. Supplementary Figure 6. File type: PDF. PDXGEM for cetuximab in colorectal cancer patients. Supplementary Figure 7. File type: PDF. PDXGEM for erlotinib in non-small cell lung cancer patients.
Additional file 2: Supplementary Table 1.
File type: Excel. Gene ontology and annotation analysis result of PDXGEM biomarkers.
File type: Word. The list of gene expression and anti-cancer drug response data sets
Kim, Y., Kim, D., Cao, B. et al. PDXGEM: patient-derived tumor xenograft-based gene expression model for predicting clinical response to anticancer therapy in cancer patients. BMC Bioinformatics 21, 288 (2020). https://doi.org/10.1186/s12859-020-03633-z
DOI: https://doi.org/10.1186/s12859-020-03633-z
Patient-derived xenograft model
Predictive cancer biomarker
Drug response prediction
Novel computational methods for the analysis of biological systems
November 2013, 7(4): 1235-1250. doi: 10.3934/ipi.2013.7.1235
Multi-wave imaging in attenuating media
Andrew Homan 1,
Department of Mathematics, Purdue University, West Lafayette, IN 47906, United States
Received January 2013 Revised August 2013 Published November 2013
We consider a mathematical model of thermoacoustic tomography and other multi-wave imaging techniques with variable sound speed and attenuation. We find that a Neumann series reconstruction algorithm, previously studied under the assumption of zero attenuation, still converges if attenuation is sufficiently small. With complete boundary data, we show the inverse problem has a unique solution, and modified time reversal provides a stable reconstruction. We also consider partial boundary data, and in this case study those singularities that can be stably recovered.
Keywords: damped wave equation, time reversal, inverse problems, microlocal analysis, thermo-acoustics.
Mathematics Subject Classification: Primary: 35R30; Secondary: 35A27, 92C55.
Citation: Andrew Homan. Multi-wave imaging in attenuating media. Inverse Problems & Imaging, 2013, 7 (4) : 1235-1250. doi: 10.3934/ipi.2013.7.1235
01.12.2020 | Original Article | Issue 1/2020 | Open Access
A Stiffness Variable Passive Compliance Device with Reconfigurable Elastic Inner Skeleton and Origami Shell
Chinese Journal of Mechanical Engineering > Issue 1/2020
Zhuang Zhang, Genliang Chen, Weicheng Fan, Wei Yan, Lingyu Kong, Hao Wang
Human-Robot Interaction is one of the most challenging and popular research topics in robotics [ 1 ]. For effective interaction, human safety is of central importance. Cartesian compliance [ 2 ] can ensure the safety of both robot manipulators and humans when contact forces arise from position misalignment or hard collisions [ 3 ]. In general, there are two main approaches to protecting both the manipulator and the human from damage caused by unexpected interaction, namely active compliance and passive compliance [ 4 ].
Active compliance drives the robot with completely stiff actuators and obtains compliance by means of control [ 5 ]. This kind of compliance requires force-sensing units to provide force information to the controller [ 6 ]. However, if the sensors or controllers fail, the manipulator completely loses its compliance and can no longer ensure safety [ 7 ]. Another way to attain compliance is to integrate flexible structures into the manipulator, which is referred to as passive compliance [ 8 ]. This kind of compliance provides more reliable protection that is independent of the control algorithm [ 9 ], thanks to its inherently flexible structure. Passive compliance can be obtained by placing a compliant device between the manipulator and the environment [ 10 ‒ 14 ] or by integrating compliant joints inside the manipulator [ 15 ‒ 18 ]. However, the integration of compliant elements degrades position accuracy. Moreover, structural compliance also makes it difficult to resist disturbances or maintain shape under strong inertial loads, which limits the performance of the manipulator under high acceleration [ 19 ].
To this end, various designs of passive variable stiffness joints/devices have been proposed. One of them is structure controlled stiffness (SCS) that means achieving variations through changing the effective structure of the spring. The core advantage of SCS is that completely stiff setting is possible to realize. As a consequence, SCS is more appropriate than other designs when precise positioning or high acceleration is needed. Changing the moment of inertia is one of the feasible methods to vary the effective structure. Kawamura et al. [ 20 ] used vacuum to press some layered sheets together. The moment of inertia changes along with the variation of the effective cross-sectional area. Such method of stiffness variation has been widely used in soft grippers and manipulators due to its simple structure and control strategy [ 19 , 21 – 23 ]. Another way to change the effective structure is controlling the effective length of the spring. Choi et al. [ 18 ] designed a variable stiffness joint which consisted of four-bar linkages, leaf springs and two identical actuators. Controlling the four-bar linkages on the two sides identically, the effective lengths of the springs could be changed. Tao et al. [ 24 ] proposed a variable stiffness joint with similar principle. Only one leaf spring was used and the effective length was changed by rollers and a screw. Bi et al. [ 25 ] presented a concept of parallel-assembled-folded serial leaf springs. The lengths of the springs were also controlled by rollers and a ball screw.
Most existing SCS designs that provide tunable passive compliance are compliant joints, in which the stiffness varies with the tuning of the length or thickness of the spring. Elements to control the effective structure of the springs are therefore indispensable, which makes the joint structure complicated and hard to integrate inside robots in some specific applications. Moreover, most of these designs use electrical motors to control stiffness. The motors generally need to apply torque to maintain a constant stiffness, which is energy inefficient. In addition, the motors add considerable weight and are not suitable for extreme environments.
To alleviate the above shortcomings, in this paper, a novel design of a stiffness variable passive compliance device with combined structure of reconfigurable elastic inner skeleton and origami shell is proposed. The proposed device can be used as an end-of-arm tool with two different modes generated from changing the arrangement of the elastic links and the passive joints. Apart from providing passive compliance, the device can switch to the stiff status for applications with high acceleration/deceleration or precise positioning. With a Si-Mo (single input multiple output) pneumatic actuation system, the four limbs of the inner skeleton can switch their modes simultaneously and the stiffness of the device can be changed in a fast, simple and straightforward manner. No electrical elements needs to be mounted on the device, which means the device has potential applications in some extreme environments. The kinetostatics and the compliance of the device are analyzed based on an efficient approach to large deflection problems [ 26 , 27 ]. A prototype was built, on which experimental assessments of the stiffness changing capabilities were conducted.
The paper is arranged as follows: Section 2 introduces the concept of the stiffness variation of the device. Section 3 provides a detailed description of the mechanical design and the fabrication method. The theoretical model and the compliance of the device are analyzed in Section 4, followed in Section 5 by experimental assessments on the fabricated prototype. Finally, conclusions are drawn in Section 6.
2 Concept of the Stiffness Variation
The structure of the proposed passive compliance device with variable stiffness can be divided into three parts: the reconfigurable elastic inner skeleton, the origami shell, and the Si-Mo pneumatic actuation system, as shown in Figure 1. Among these parts, the reconfigurable elastic inner skeleton is the core component from which the passive compliance and the variable stiffness generate.
Structure of the proposed variable stiffness device
2.1 Reconfigurable Elastic Inner Skeleton
The main concept of the reconfigurable [ 28 ‒ 30 ] elastic inner skeleton is to have two trapezoid four-bar linkages arranged in orthogonal. As shown in Figure 1, each trapezoid consists of two rigid limbs (AD/ad, BC/bc) and two elastic limbs (AB/ab, DC/dc). The rigid limbs are mounting side and tool side of the device, respectively. There are four elastic limbs in the device while each elastic limb consists of two universal joints and a leaf spring. The motion of each trapezoid four-bar linkage is totally passive and the only active motion inside the device is the self-rotation of the leaf springs. Under the actuation of the Si-Mo pneumatic actuation system, the leaf springs of the four elastic limbs are able to rotate 90° simultaneously. Figure 2 illustrates the statuses of the device before and after 90° rotation, respectively.
Two different statuses of the inner skeleton
The leaf spring in each elastic limb plays an important role in the stiffness variation. Considering the small deflection beam equation
$$M = \left( \frac{EI}{L} \right) \times \theta ,$$
where M is the bending moment, E is the material modulus, I is the moment of inertia, L is the effective beam length, and θ is the angle of bending or slope.
In this representation of bending, the term EI/L relates to the bending stiffness of the leaf spring. In order to change the stiffness of the beam, one of the most effective way is to change the parameter I that can be calculated as
$$I = \frac{{L_{b} \times L_{h}^{3} }}{12},$$
where the parameter L b denotes the width of the surface perpendicular to the bending direction, and L h denotes the other.
It can be seen that if L b and L h are exchanged, the moment of inertia changes. For a leaf spring, the width and the thickness differ greatly, so the moment of inertia, and hence the stiffness, differs greatly between the two bending directions. Exploiting this feature, the leaf springs in the proposed device are designed with the capability of 90° self-rotation. Through the resulting change of spring stiffness in specific directions, the constraints on the tool side can be altered by appropriately combining the elastic springs with the passive rigid joints.
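To make the effect of the 90° self-rotation concrete, the following R sketch evaluates Eqs. (1) and (2) for the two orientations of a leaf spring; all dimensions and material values are illustrative assumptions, not the prototype's identified parameters.

```r
E   <- 40e9      # elastic modulus (Pa); assumed, not the prototype's identified value
L   <- 0.15      # effective spring length (m); assumed
L_b <- 15e-3     # width of the leaf spring (m); assumed
L_h <- 0.5e-3    # thickness of the leaf spring (m); assumed

I_soft  <- L_b * L_h^3 / 12   # bending about the easy axis (wide face faces the load)
I_stiff <- L_h * L_b^3 / 12   # after the 90 deg self-rotation, L_b and L_h swap roles

k_soft  <- E * I_soft  / L    # bending stiffness term EI/L in the compliant orientation
k_stiff <- E * I_stiff / L    # and in the stiff orientation
k_stiff / k_soft              # ratio (L_b/L_h)^2 = 900 for this 30:1 aspect ratio
```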
2.2 Stiff Status of the Device
Compared with the width, the thickness of the leaf spring is much smaller. Thus, it's appropriate to regard the spring as totally stiff in the direction perpendicular to the narrow edge. As shown in Figure 2(a), when arranging each pair of the leaf springs 'facing each other', the bending direction of the leaf springs is same with the rotation of the corresponding passive joints in the same elastic limbs. Thus, each four-bar linkage only has the capability of planar movement. However, once there is a motion trend in one of the trapezoid four-bar linkages, the leaf springs and the corresponding joints in the other trapezoid will act totally stiff in that direction due to the orthogonal arrangement of two trapezoids. For example, as shown in Figure 2(a), the trapezoid four-bar linkage consisting of elastic limb 1, limb 2 and two rigid plates (namely trapezoid ABCD in Figure 1) has only one Degree-of-Freedom. However, elastic limb 3 and limb 4 act stiff on such moving direction because the direction is perpendicular to the narrow edges of the leaf springs and also the passive joints. As a result, no movement could be realized except that buckling happens. In such status, the device is suitable for precise positioning and motion with high acceleration/deceleration.
2.3 Compliant Status of the Device
As shown in Figure 2(b), the bending direction of each leaf spring is perpendicular to the rotation of the corresponding passive joints in the same elastic limb after 90° self-rotation from the stiff status shown in Figure 2(a). In such situation, no movement could happen when regarding all the springs as rigid body. However, the bending of the leaf spring is easy to realize, so that the tool side of the device will be able to move. The movement of the device can be regarded as a combination of the planar motion of one four-bar linkage and the bending of the other one. As shown in Figure 3, when one of the four-bar linkages starts moving, the corresponding leaf springs are rigid in the moving direction. The one Degree-of-Freedom planar motion is as thus generated. At the same time, the leaf springs in the other four-bar linkage are easy to be deformed on the same direction. Then, deflections happen in these two springs to adapt the movement. As the compliance generates from the structural deflection, the device can provide passive compliance which is irrelevant to the control algorithm.
Schematic diagram of the deformed device under the compliant status
3 Design and Fabrication
The design of the inner skeleton is illustrated in Section 2.1. As to the fabrication of the elastic inner skeleton, the main principles of the material selection are lightweight and easy to obtain. All of the non-standard rigid parts are three-dimensional printed. Slender Ni-Ti alloy strips are employed as the elastic leaf springs.
3.2 Origami Shell
Apart from the structural passive compliance, the elastic limbs with slender structures also bring shortcomings. Under the situation shown in Figure 2(b), the tool side is easy to be twisted relative to the mounting side due to the low stiffness of the leaf springs. Further, the bare slender metal sheets are not safe enough in manipulation.
To alleviate above shortcomings, a tubular origami shell with Yoshimura pattern [ 31 ] is integrated to form an enclosed structure and prevent the device from twist, as shown in Figure 1. As an ancient Chinese and Japanese art of paper folding, origami is drawing more and more attention these years in the robotics field and origami robots can be defined as autonomous machines whose morphology and function are created using folding [ 32 ‒ 35 ]. The Yoshimura pattern is a cylindrical folding origami that supports bending and axial folding [ 36 ‒ 38 ]. Another merit of such kind of origami shell is its torsion resistance [ 39 ]. In this way, the tendency of twist under the status in Figure 2(b) is avoided without influencing the passive motion of the elastic skeleton. Besides, the slender metal sheets are isolated from the working environment by the compliant origami shell. Hence, the interaction safety is ensured. In addition, the feature of uncontinuously foldable [ 40 ] makes the Yoshimura pattern resistant to be axially compressed. Thus, the integration of a pre-compressed origami shell with Yoshimura pattern will provide extension force and prevent the device from buckling.
The base crease pattern of the origami shell as well as the actual images of the machined pattern and the manually folded shell is shown in Figure 4. The 0.15 mm polyethylene terephthalate (PET) films are chosen due to their high strength-to-weight ratio, transparency and easy to obtain. The crease pattern is planar designed and machined by a carbon dioxide laser-cutter. It is worth noting that the black solid lines represent mountain creases and the blue dash lines represent valley creases in Figure 4(a). The 2D laser-cut PET film (Figure 4(b)) can be manually folded into a 3D structure following the crease pattern and finally forms a rectangular tube (Figure 4(c)). The rectangular structure is attained from designing four sections in the proposed 2D pattern, which aims to attain similar bending capacities with the reconfigurable elastic inner skeleton under the compliant status.
Origami shell: base crease pattern, 2D laser-cut film and folded 3D structure
3.3 Si-Mo Pneumatic Actuation System
As discussed above, the variable stiffness of the proposed device generates from the mode switching of the reconfigurable elastic inner skeleton. Hence, in order to actively control the mode of the skeleton, a Si-Mo pneumatic actuation system is designed. The system consists of a pneumatic central line and four sub-actuators, as shown in Figure 5. Each sub-actuator consists of a rigid rotor, a rigid stator and two soft pneumatic chambers. The rigid parts are three-dimensional printed and the pneumatic chambers are made of inelastic air-tight fabric. The two chambers that connect the shank of the rotor and the arc groove of the stator together on the opposite sides will significantly expand and fill the groove when pressurized. In this way, the two chambers antagonistically actuate the rotor to swing inside the stator. The four elastic leaf springs are connected with the four rotors through bolts and nuts, respectively. As a result, when the rotors rotate under the pneumatic actuation, the leaf springs will correspondingly rotate. In this way, the reconfiguration of the structure is achieved.
Si-Mo pneumatic actuation system
The pneumatic central line provides air source to the four sub-actuators that integrated inside the four elastic limbs. A 2-4 way solenoid valve (VQD1121, SMC) is used to switch the inflation between the two chambers of each sub-actuator. Benefiting from the pneumatic actuation, a single pressure input can control four actuators simultaneously, as shown in Figure 5. Moreover, the motion of the limbs has no influence on the transmission of the soft pneumatic pipes when the device is imposed to deform under the compliant status illustrated in Figure 3. Such soft transmission is hard to realize by other transmission methods, such as cable transmission, gear transmission, etc.
Another advantage of such pneumatic actuation is its remote transmission. The control system can be placed far away from the device and the mode switching will not be significantly influenced. As no electrical elements needs to be mounted on the device and the only actuation elements connecting to the device are two soft pipes, the device has potential applications in some special environments, such as underwater, radioactive, low/high temperature, etc.
4 Modeling and Compliance Analysis
As discussed in the previous sections, the motion of the tool side can only be produced by external force under the compliant status. The leaf springs will not deform under the stiff status. Hence, the modeling and analysis in this section are based on the compliant status. Due to the orthogonal arrangement, the compliance can be regarded as generating from the combination of two deformed trapezoid four-bar linkages. Each leaf spring acts stiff in the moving direction of the corresponding four-bar linkage and flexible in the moving direction of the other one. Hence, to prove the compliance of the device, it's appropriate to analyze the deflection of the leaf springs and the force they generate in one of the trapezoid linkages. The key issue for the compliance analysis is to analyze the large deflection of the leaf springs. Based on our prior work on the general approach to the large deflection problems of spatial flexible rods [ 26 , 27 ], the kinetostatics of the leaf springs inside the device can be analyzed. In this way, the compliance of the device under the compliant status can be predicted and the geometric parameter designs for different applications can benefit from the analysis accordingly.
4.1 Kinematics
For kinematics modeling, two reference frames, namely the spatial one {S} and the tool one {T}, are constructed at the mounting side and the tool side of the device, respectively. The origin of {S}, termed O shown in Figure 3, locates at the center of the mounting side. The corresponding x and y axes are along the horizontal and the vertical directions, respectively. The origin of {T}, P is referred to as the center of the tool side of the device.
As indicated above, the four-bar linkage that parallel to the moving direction will keep rigid while the other one deflects to adapt the movement. Hence, the configuration of {T} is uniquely determined according to the motion of the rigid four-bar linkage. Then, the configuration of the tool side can be obtained by solving the following equations:
$$\left\{ \begin{gathered} x = - \frac{{r_{m} }}{2} + \frac{{r_{t} }}{2}\cos \phi + l\cos (\theta_{0} - \theta ), \hfill \\ y = \frac{{r_{t} }}{2}\sin \phi - l\sin (\theta_{0} - \theta ), \hfill \\ ( - r_{m} + r_{t} \cos \phi + l\cos (\theta_{0} - \theta ))^{2} + (r_{t} \sin \phi - l\sin (\theta_{0} - \theta ))^{2} = l^{2} , \hfill \\ \end{gathered} \right.$$
where x, y, \(\phi\) correspond to the horizontal position, the vertical position and the rotation of the tool side relative to {S}; θ and θ 0 denote the variable and the initial angle of the revolute joint A, respectively; r m and r t are the effective diameters of the joints on the mounting side and the tool side; and l denotes the length of the leaf spring.
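As an illustration of how Eq. (3) determines the tool-side pose, the following R sketch solves the loop-closure constraint numerically for φ at a prescribed joint angle θ. The geometric values are assumed for illustration only, and θ0 is simply chosen so that the constraint is satisfied in the undeformed rest configuration.

```r
r_m <- 0.06                                  # joint spacing on the mounting side (m); assumed
r_t <- 0.05                                  # joint spacing on the tool side (m); assumed
l   <- 0.15                                  # leaf spring length (m); assumed
theta0 <- acos((r_m - r_t) / (2 * l))        # rest angle consistent with the undeformed trapezoid

loop_residual <- function(phi, theta) {
  (-r_m + r_t * cos(phi) + l * cos(theta0 - theta))^2 +
    (r_t * sin(phi) - l * sin(theta0 - theta))^2 - l^2
}

theta <- 0.1                                 # imposed rotation of joint A (rad)
phi <- uniroot(loop_residual, c(-0.5, 0.5), theta = theta)$root

x <- -r_m / 2 + r_t / 2 * cos(phi) + l * cos(theta0 - theta)
y <-  r_t / 2 * sin(phi) - l * sin(theta0 - theta)
c(x = x, y = y, phi = phi)
```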
As to the motion of the single leaf spring, the pose of the spring's tip frame, with respect to the local frame on the distal end of the spring {L}, can be derived as
$${\varvec{g}}_{lt} = {\varvec{g}}_{sl}^{ - 1} {\varvec{g}}_{{{\text{s}}t}} ,$$
where \({\varvec{g}}_{sl} \in SE(3)\) denotes the pose of the local frame on the distal end of the spring with respect to {S} and \({\varvec{g}}_{st} \in SE(3)\) relates to the pose of the spring's tip frame, with respect to {S}, which can be calculated from x, y, \(\phi\) and the known parameters.
4.2 Kinetostatics
On the basis of our prior work [ 26 , 27 ], the leaf springs in the elastic limbs are discretized into a number of small segments, as illustrated in Figure 6. For each segment, a six-DOF linkage consisting of rigid bodies and elastic joints can be attained based on the principal axes decomposition of the structural compliance matrix [ 41 ]. Then, the force-deflection behavior of the discretized elastic segments can be approximated by the kinetostatics of the equivalent linkages. Connecting all the segments one after another, a hyper-redundant rigid-body mechanism with elastic joints can be constructed to represent the large deflection problems of the leaf springs.
Mechanism approximation of the leaf springs inside the proposed device
Due to its thin-walled structure, the thickness of the leaf spring is much smaller than its width. It's appropriate to regard the leaf spring as totally stiff in the direction perpendicular to the narrow edge. On the other hand, the effects of shearing and compression are neglectable compared with the bending one [ 42 ]. Thus, only the revolute joints associated with the segments' bending and torsion effects will be taken into account in the hyper-redundant mechanism. Using product-of-exponential formula for forward kinematics [ 43 ], the configuration of the approximated hyper-redundant mechanism's tip frame g lt and the balance of the elastic deflection can then be defined as
$$\left\{ \begin{aligned} &\varvec{g}_{lt} = \prod\limits_{i = 1}^{2n} \exp (\hat{\xi }_{i} \theta_{i} )\,\varvec{g}_{lt,0} , \\ &\varvec{\tau} = \varvec{K}_{\theta } \varvec{\theta} - \varvec{J}_{\theta }^{\text{T}} \varvec{F} \to \varvec{0}, \end{aligned} \right.$$
where \(\varvec{\theta} = [\theta_{1} , \ldots ,\theta_{2n} ]^{\text{T}} \in \mathbb{R}^{2n \times 1}\) denotes the joint variables of the whole spring. \(\xi_{i} = {\text{Ad}}(\varvec{g}_{0,0} \cdots \varvec{g}_{i - 1,0} )\varvec{t}_{i}\) are the joint twists, transformed from their local frames to {L}; t i denote the joint twists of the segments in their local frames. \({\varvec{g}}_{lt,0} = {\varvec{g}}_{0,0} {\varvec{g}}_{1,0} \cdots {\varvec{g}}_{2n,0}\) relates to the initial pose of the leaf spring with respect to {L}. 2 n is the total number of joints. \({\varvec{K}}_{\theta } = {\text{diag}}(1/c_{1,1} ,1/c_{1,2} ,1/c_{2,1} ,1/c_{2,2} , \cdots ,1/c_{n,1} ,1/c_{n,2} )\) denotes the overall joint stiffness matrix, where \(c_{i,1} = \delta /EI_{xx}\) and \(c_{i,2} = \delta /GI_{zz}\) denote the compliance of the approximated joints for bending and torsion. δ, E, G, I xx and I zz are the length of each small segment, the elastic modulus, the shear modulus, and the moments of inertia in the bending and torsion directions, respectively. F corresponds to the external wrench exerted at the tip. Here, the Jacobian matrix J θ can be derived as
$${\varvec{J}}_{\theta } = \left( {\frac{{\partial \dot{\varvec{g}}_{st} }}{{\partial {\varvec{\theta}}}}{\varvec{g}}_{st}^{ - 1} } \right)^{ \vee } = [{\xi^{\prime}}_{1},{\xi^{\prime}}_{2} , \cdots,\;{\xi^{\prime}}_{2n}] \in {\mathbb{R}}^{6 \times 2n},$$
where \({\xi^{\prime}}_{k} = {\text{Ad(}}\exp (\hat{{\xi }}_{1} \theta_{1} ) \cdots \exp (\hat{{\xi }}_{k - 1} \theta_{k - 1} )){\xi }_{k}\) relate to the joint twists in the current configuration and are represented with respect to {L}.
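The product-of-exponentials map in Eq. (5) can be sketched numerically as below; the twists, joint angles and segment spacing are placeholder values, and the matrix exponential comes from the expm package rather than from any code associated with the paper.

```r
library(expm)

hat <- function(xi) {                         # se(3) hat map: 6-vector (v, w) -> 4x4 twist matrix
  v <- xi[1:3]; w <- xi[4:6]
  skew <- matrix(c(0, w[3], -w[2],
                   -w[3], 0, w[1],
                   w[2], -w[1], 0), nrow = 3) # column-major fill gives the usual skew matrix [w]x
  rbind(cbind(skew, v), c(0, 0, 0, 0))
}

poe_fk <- function(xis, thetas, g0) {         # g_lt = prod_i exp(hat(xi_i) * theta_i) %*% g_lt,0
  g <- diag(4)
  for (i in seq_along(thetas)) g <- g %*% expm(hat(xis[[i]]) * thetas[i])
  g %*% g0
}

# Two bending joints about the local x-axis, 5 mm apart along z (placeholder values)
xi1 <- c(0, 0, 0, 1, 0, 0)
xi2 <- c(0, 0.005, 0, 1, 0, 0)                # v = -w x q for a joint axis through q = (0, 0, 0.005)
g0  <- diag(4); g0[3, 4] <- 0.010             # undeformed tip frame 10 mm along z
poe_fk(list(xi1, xi2), c(0.1, 0.1), g0)
```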
Then, an unconstrained optimization model is formulated to accomplish the force-deflection problems of the corresponding hyper-redundant mechanism, which can be defined as
$$\min {\varvec{c}}({\varvec{x}}) = \left[ \begin{aligned} \log ({\varvec{g}}_{t}^{ - 1} {\varvec{g}}_{lt} ({\varvec{\theta } }))^{ \vee } \\ {\varvec{K}}_{\theta } {\varvec{\theta } } - {\varvec{J}}_{\theta }^{{\text{T}}} {\varvec{F}} \\ \end{aligned} \right],$$
where \({\varvec{x}} = {[}{\varvec{\theta}}{,}\;{\varvec{F}}]^{{\text{T}}} \in {\mathbb{R}}^{(2n + 7) \times 1}\) denotes the variables of the optimization problem. \({\varvec{g}}_{t} \in SE(3)\) denotes the target pose for the tip-frame of each leaf spring. \(\log ({\varvec{g}}_{t}^{ - 1} {\varvec{g}}_{lt} (\varvec{\theta }))^{ \vee } \in {\mathbb{R}}^{6 \times 1}\) corresponds to the twist deviation of current pose from the target one.
Thus, the gradient of the objective function Eq. ( 7) can be written as
$$\nabla = \left[ {\frac{{\partial {\varvec{c}}}}{{\partial {\varvec{\theta}}}},\;\frac{{\partial {\varvec{c}}}}{{\partial {\varvec{F}}}}} \right] = \begin{bmatrix} {{\varvec{J}}_{{\theta}} } & {\varvec{0}} \\{{\varvec{K}}_{{\theta}} + {\varvec{K}}_{{\varvec{J}}}} & { - {\varvec{J}}_{{\theta }}^{{\text{T}}}} \end{bmatrix}$$
where \({\varvec{K}}_{{\varvec{J}}} \in {\mathbb{R}}^{2n \times 2n}\) is a configuration-dependent stiffness item. Please refer to Ref. [ 26 ] for more details.
Then, the update scheme for the variables in this hybrid equilibrium problem can be represented as
$${\varvec{x}}^{j + 1} = {\varvec{x}}^{j} + \nabla^{ - 1} {\varvec{c}}^{j} ,$$
which will be iteratively repeated until the objective function c approaches zero and the variable x converges stably. As a result, the rotations of all the approximated joints and the generated force in the equilibrium configuration can be simultaneously obtained in terms of the resultant θ and F.
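A generic sketch of the iteration in Eq. (9) is given below, written with the conventional minus-sign Newton step and a scalar toy residual standing in for the full objective c(x) of Eq. (7); res and jac are placeholders for the residual and gradient assembled from Eqs. (7) and (8).

```r
newton_solve <- function(res, jac, x0, tol = 1e-10, max_iter = 100) {
  x <- x0
  for (j in seq_len(max_iter)) {
    c_j <- res(x)
    if (sqrt(sum(c_j^2)) < tol) break      # stop once the objective function approaches zero
    x <- x - drop(solve(jac(x), c_j))      # Newton step using the full gradient of c
  }
  x
}

# Toy scalar equilibrium k*theta - Fext*L*cos(theta) = 0 standing in for the full problem
k <- 2; Fext <- 5; L <- 0.1
newton_solve(res = function(t) k * t - Fext * L * cos(t),
             jac = function(t) matrix(k + Fext * L * sin(t)),
             x0  = 0)
```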
4.3 Compliance Analysis
In this section, the compliance of the proposed device under the compliant status is analyzed. The geometric parameters of the analyzed device and the mechanical properties of the leaf spring are given in Table 1. It is worth noting that the device is centrally symmetric and the lengths of all the leaf springs are the same. h W and h T denote the width and thickness of the leaf spring, respectively. μ is Poisson's ratio of the springs.
Table 1 Lengths and mechanical properties of the leaf springs (entries: l (m), r t (m), r m (m), h W (m), h T (m), E (GPa), I xx (m 4), I zz (m 4))
Based on the above analysis, the horizontal displacement of the device's mounting side is set from 0 to 60 mm. The deflection of the leaf spring is then calculated every 3 mm and is illustrated in Figure 7. Figure 8 depicts the variation of the generated force during the passive motion. According to the figures, it is apparent that the leaf spring is easily deformed and cannot generate a large force. The generated force has an upper bound below 10.5 N, and the force decreases as the tool side keeps moving after passing the boundary points.
Deformations of the single leaf spring from 0 to 60 mm
Generated forces of the single leaf spring from 0 to 60 mm
Considering the relatively low generated force, the device is shown to bring sufficient compliance to the manipulator or the operator when misalignment happens in applications such as assembly automation or human-robot interaction. Benefiting from the upper bound on the force, the device ensures protection regardless of the positioning error. Moreover, the passive compliance is attained from the inherent structure, and no sensors or complicated control algorithms are needed during manipulation. At the same time, the device acts compliantly immediately, however fast the misalignment or collision occurs, which is hard to realize with active compliance because of its limited bandwidth [ 44 ].
5 Experiments
To validate the capability of the stiffness changing of the proposed device, several experiments were conducted on the prototype with the geometric parameters shown in Table 1. The prototype was fixed upon the end effector of a six-DOF industrial robot (UR 10) from which the motion generated during the experiments. The motion of the tool side of the prototype was measured by a motion capture system (OptiTrack Prime 41) with the absolute measurement accuracy around 0.3 mm.
The first experiment aimed to show the stiffness of the prototype under different statuses. As shown in Figure 9, the tool side was imposed to knock into a fixed barrier under the actuation of the industrial robot; the prototype was set at the stiff and the compliant status, respectively. A six-axis force-torque sensor (ATI Mini 45) was mounted between the barrier and a fixed platform to measure the interaction force. Then, different displacements were imposed at the mounting side of the prototype while the tool side kept motionless due to the interaction with the fixed barrier.
Setup of the stiffness tests
Figure 10 illustrates the correlations between the generated forces and the relative displacements of the prototype under two different statuses. It is apparent that the prototype showed high stiffness under the stiff status. The generated force rose up to 10 N when the relative displacement between the two sides was just around 5 mm. It is worth noting that the relative displacement in this test was mainly generated from the assembly gap and the structural deformation of the device. The stiffness should be much more enlarged through replacing the plastic parts by the metal ones. Hence, the displacement was controlled less than 5 mm to prevent the plastic three-dimensional printed elements from damage. On the contrary, the displacement could be much larger in the test under the compliant status. The reconfigurable elastic inner skeleton as well as the origami chamber was easily deformed to adapt the relative movement between the mounting side and the tool side. The compliance under such status was noticeable and the generated force was around 10 N when the displacement reached 20 mm. It should be noted that the measured force was generated from two deflected leaf springs, so that the measured force was almost twice the predicted one.
Correlations between the generated forces and the displacements under two statuses
In the second experiment, the response of the stiffness variation was tested. Similar to the first experiment, the tool side was first driven to knock into the barrier fixed upon the six-axis force-torque sensor under the stiff status. Then, the status was switched under the actuation of the Si-Mo pneumatic actuation system. The interaction force during this process was recorded by the force sensor, and Figure 11 illustrates the response of the prototype. It is apparent that the stiffness of the prototype can be changed rapidly, with a reaction time of around 80 ms from the stiff status to the compliant one. As a consequence, the proposed device was proved to have the capability of switching its stiffness in a fast, simple and straightforward manner.
Response of the stiffness variation
The last experiment was designed to prove the torsional strength augment from the origami shell, which has an apparent influence on the behavior of the device under the compliant status. As shown in Figure 12(a), the torsion was manually imposed on the tool side with a six-axis force-torque sensor mounted between the tool side and the rotation bar. Both the device with and without the origami shell were tested. As shown in Figure 12(b), the tool side rotated more than 16° under the torsional force around 0.3 N·m without the origami shell. However, almost no rotation generated on the prototype with the origami shell under the same torsional forces. It was apparent that the device without the origami shell was easy to be twisted while the device with the origami shell seemed stable. Hence, the torsional strength of the proposed device was proved to be significantly enhanced by the origami shell.
Experiment and results of the torsional strength tests with/without the origami shell
6 Conclusions
In this paper, a novel stiffness-variable passive compliance device that consists of a reconfigurable elastic inner skeleton, an origami shell and a Si-Mo pneumatic actuation system is proposed. By controlling the self-rotation of the leaf springs, the arrangement of the elastic links and the passive joints can be changed and the stiffness variation of the device is thus realized. The device can be used for precise positioning or applications with high acceleration/deceleration under the stiff status, and it provides passive compliance and protection under the compliant status. The Si-Mo pneumatic actuation system can switch the stiffness of the device in a fast, simple and straightforward way, and the device has potential applications in some special environments since no electrical elements need to be mounted on it.
The kinetostatics and the compliance of the device are analyzed based on an efficient approach to large-deflection problems. A prototype has been built to assess the proposed concept. The experimental results show that the device possesses relatively low stiffness under the compliant status and high stiffness under the stiff status, with a status-switching time of around 80 ms. The device generates a force of only 10 N when the relative displacement between the two sides is around 20 mm under the compliant status. The laser-cut origami shell significantly enhances the torsional strength of the device, and interaction safety benefits from its inherently soft structure.
The authors declare no competing financial interests.
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Zhuang Zhang
Genliang Chen
Weicheng Fan
Wei Yan
Lingyu Kong
https://doi.org/10.1186/s10033-020-00490-y
Springer Singapore
Chinese Journal of Mechanical Engineering
Journal of Fluid Mechanics
Volume 751 - 25 July 2014
Two-dimensional numerical study of vortex shedding regimes of oscillatory flow past two circular cylinders in side-by-side and tandem arrangements at low Reynolds numbers
Ming Zhao, Liang Cheng
Oscillatory flow past two circular cylinders in side-by-side and tandem arrangements at low Reynolds numbers is simulated numerically by solving the two-dimensional Navier–Stokes (NS) equations using a finite-element method (FEM). The aim of this study is to identify the flow regimes of the two-cylinder system at different gap arrangements and Keulegan–Carpenter numbers (KC). Simulations are conducted at seven gap ratios $G$ ($G=L/D$, where $L$ is the cylinder-to-cylinder gap and $D$ the diameter of a cylinder) of 0.5, 1, 1.5, 2, 3, 4 and 5 and KC ranging from 1 to 12 with an interval of 0.25. The flow regimes that have been identified for oscillatory flow around a single cylinder are also observed in the two-cylinder system but with different flow patterns due to the interactions between the two cylinders. In the side-by-side arrangement, the vortex shedding from the gap between the two cylinders dominates when the gap ratio is small, resulting in the gap vortex shedding (GVS) regime, which is different from any of the flow regimes identified for a single cylinder. For intermediate gap ratios of 1.5 and 2 in the side-by-side arrangement, the vortex shedding mode from one side of each cylinder is not necessarily the same as that from the other side, forming a so-called combined flow regime. When the gap ratio between the two cylinders is sufficiently large, the vortex shedding from each cylinder is similar to that of a single cylinder. In the tandem arrangement, when the gap between the two cylinders is very small, the flow regimes are similar to that of a single cylinder. For large gap ratios in the tandem arrangement, the vortex shedding flows from the gap side of the two cylinders interact and those from the outer sides of the cylinders are less affected by the existence of the other cylinder and similar to that of a single cylinder. Strong interaction between the vortex shedding flows from the two cylinders makes the flow very irregular at large KC values for both side-by-side and tandem arrangements.
On the structure and origin of pressure fluctuations in wall turbulence: predictions based on the resolvent analysis
M. Luhar, A. S. Sharma, B. J. McKeon
Published online by Cambridge University Press: 16 June 2014, pp. 38-70
We generate predictions for the fluctuating pressure field in turbulent pipe flow by reformulating the resolvent analysis of McKeon and Sharma (J. Fluid Mech., vol. 658, 2010, pp. 336–382) in terms of the so-called primitive variables. Under this analysis, the nonlinear convective terms in the Fourier-transformed Navier–Stokes equations (NSE) are treated as a forcing that is mapped to a velocity and pressure response by the resolvent of the linearized Navier–Stokes operator. At each wavenumber–frequency combination, the turbulent velocity and pressure field are represented by the most-amplified (rank-1) response modes, identified via a singular value decomposition of the resolvent. We show that these rank-1 response modes reconcile many of the key relationships among the velocity field, coherent structure (i.e. hairpin vortices), and the high-amplitude wall-pressure events observed in previous experiments and direct numerical simulations (DNS). A Green's function representation shows that the pressure fields obtained under this analysis correspond primarily to the fast pressure contribution arising from the linear interaction between the mean shear and the turbulent wall-normal velocity. Recovering the slow pressure requires an explicit treatment of the nonlinear interactions between the Fourier response modes. By considering the velocity and pressure fields associated with the triadically consistent mode combination studied by Sharma and McKeon (J. Fluid Mech., vol. 728, 2013, pp. 196–238), we identify the possibility of an apparent amplitude modulation effect in the pressure field, similar to that observed for the streamwise velocity field. However, unlike the streamwise velocity, for which the large scales of the flow are in phase with the envelope of the small-scale activity close to the wall, we expect there to be a $\pi /2$ phase difference between the large-scale wall-pressure and the envelope of the small-scale activity. Finally, we generate spectral predictions based on a rank-1 model assuming broadband forcing across all wavenumber–frequency combinations. Despite the significant simplifying assumptions, this approach reproduces trends observed in previous DNS for the wavenumber spectra of velocity and pressure, and for the scale-dependence of wall-pressure propagation speed.
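The rank-1 step described above (forming the resolvent at a given wavenumber–frequency combination and keeping only its leading singular pair) can be illustrated schematically. In the sketch below the operator is a small random stable matrix standing in for the linearized Navier–Stokes operator, so only the structure of the computation, not the physics, is meaningful.

```python
import numpy as np

# Schematic rank-1 resolvent analysis: form the resolvent of a stable linear
# operator at one frequency and keep its leading singular pair. A is a random
# stable placeholder, NOT the linearized Navier-Stokes operator of the paper.

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)   # shift spectrum into the left half-plane

omega = 2.0                                    # frequency of interest
H = np.linalg.inv(1j * omega * np.eye(n) - A)  # resolvent operator at this frequency

U, s, Vh = np.linalg.svd(H)
response_mode = U[:, 0]        # most-amplified (rank-1) response mode
forcing_mode = Vh[0].conj()    # corresponding optimal forcing mode
print(f"rank-1 gain at omega = {omega}: {s[0]:.3f}")
```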
The role of advance ratio and aspect ratio in determining leading-edge vortex stability for flapping flight
R. R. Harbig, J. Sheridan, M. C. Thompson
Published online by Cambridge University Press: 16 June 2014, pp. 71-105
The effects of advance ratio and the wing's aspect ratio on the structure of the leading-edge vortex (LEV) that forms on flapping and rotating wings under insect-like flight conditions are not well understood. However, recent studies have indicated that they could play a role in determining the stable attachment of the LEV. In this study, a numerical model of a flapping wing at insect Reynolds numbers is used to explore the effects of these parameters on the characteristics and stability of the LEV. The word 'stability' is used here to describe whether the LEV was attached throughout the stroke or if it was shed. It is demonstrated that increasing the advance ratio enhances vorticity production at the leading edge during the downstroke, and this results in more rapid growth of the LEV for non-zero advance ratios. Increasing the wing aspect ratio was found to have the effect of shortening the wing's chord length relative to the LEV's size. These two effects combined determine the stability of the LEV. For high advance ratios and large aspect ratios, the LEV was observed to quickly grow to envelop the entire wing during the early stages of the downstroke. Continued rotation of the wing resulted in the LEV being eventually shed as part of a vortex loop that peels away from the wing's tip. The shedding of the LEV for high-aspect-ratio wings at non-zero advance ratios leads to reduced aerodynamic performance of these wings, which helps to explain why a number of insect species have evolved to have low-aspect-ratio wings.
Electrohydrodynamics of particle-covered drops
Malika Ouriemi, Petia M. Vlahovska
We experimentally investigate the effect of surface-adsorbed colloidal particles on the dynamics of a leaky dielectric drop in a uniform DC electric field. Depending on the particle polarizability, coverage and the electric field intensity, particles assemble into various patterns such as an equatorial belt, pole-to-pole chains or a band of dynamic vortices. The particle structuring changes droplet electrohydrodynamics: under the same conditions where a particle-free drop would be a steady oblate spheroid, the belt can give rise to unsteady behaviours such as sustained drop wobbling or tumbling. Moreover, particle chaining can be accompanied by prolate drop deformation and tip-streaming.
The Burnett equations in cylindrical coordinates and their solution for flow in a microtube
Narendra Singh, Amit Agrawal
The Burnett equations constitute a set of higher-order continuum equations. These equations are obtained from the Chapman–Enskog series solution of the Boltzmann equation while retaining second-order-accurate terms in the Knudsen number $\mathit{Kn}$. The set of higher-order continuum models is expected to be applicable to flows in the slip and transition regimes where the Navier–Stokes equations perform poorly. However, obtaining analytical or numerical solutions of these equations has been noted to be particularly difficult. In the first part of this work, we present the full set of Burnett equations in cylindrical coordinates in three-dimensional form. The equations are reported in a generalized way for gas molecules that are assumed to be Maxwellian molecules or hard spheres. In the second part, a closed-form solution of these equations for isothermal Poiseuille flow in a microtube is derived. The solution of the equations is shown to satisfy the full Burnett equations up to $\mathit{Kn} \leq 1.3$ within an error norm of ${\pm }1.0\,\%$. The mass flow rate obtained analytically is shown to compare well with available experimental and numerical results. Comparison of the stress terms in the Burnett and Navier–Stokes equations is presented. The significance of the Burnett normal stress and its role in diffusion of momentum is brought out by the analysis. An order-of-magnitude analysis of various terms in the equations is presented, based on which a reduced model of the Burnett equations is provided for flow in a microtube. The Burnett equations in full three-dimensional form in cylindrical coordinates and their solution are not previously available.
Effect of large bulk viscosity on large-Reynolds-number flows
M. S. Cramer, F. Bahmani
We examine the inviscid and boundary-layer approximations in fluids having bulk viscosities which are large compared with their shear viscosities for three-dimensional steady flows over rigid bodies. We examine the first-order corrections to the classical lowest-order inviscid and laminar boundary-layer flows using the method of matched asymptotic expansions. It is shown that the effects of large bulk viscosity are non-negligible when the ratio of bulk to shear viscosity is of the order of the square root of the Reynolds number. The first-order outer flow is seen to be rotational, non-isentropic and viscous but nevertheless slips at the inner boundary. First-order corrections to the boundary-layer flow include a variation of the thermodynamic pressure across the boundary layer and terms interpreted as heat sources in the energy equation. The latter results are a generalization and verification of the predictions of Emanuel (Phys. Fluids A, vol. 4, 1992, pp. 491–495).
How flexibility affects the wake symmetry properties of a self-propelled plunging foil
Xiaojue Zhu, Guowei He, Xing Zhang
The wake symmetry properties of a flapping-foil system are closely associated with its propulsive performance. In the present work, the effect of the foil flexibility on the wake symmetry properties of a self-propelled plunging foil is studied numerically. We compare the wakes of a flexible foil and a rigid foil at a low flapping Reynolds number of 200. The two foils are of the same dimensions, flapping frequency, leading-edge amplitude and cruising velocity but different bending rigidities. The results indicate that flexibility can either inhibit or trigger the symmetry breaking of the wake. We find that there exists a threshold value of vortex circulation above which symmetry breaking occurs. The modification of vortex circulation is found to be the pivotal factor in the influence of the foil flexibility on the wake symmetry properties. An increase in flexibility can result in a reduction in the vorticity production at the leading edge because of the decrease in the effective angle of attack, but it also enhances vorticity production at the trailing edge because of the increase in the trailing-edge flapping velocity. The competition between these two opposing effects eventually determines the strength of vortex circulation, which, in turn, governs the wake symmetry properties. Further investigation indicates that the former effect is related to the streamlined shape of the deformed foil while the latter effect is associated with structural resonance. The results of this work provide new insights into the functional role of passive flexibility in flapping-based biolocomotion.
Drops of power-law fluids falling on a coated vertical fibre
Liyan Yu, John Hinch
We study the solitary wave solutions in a thin film of a power-law fluid coating a vertical fibre. Different behaviours are observed for shear-thickening and shear-thinning fluids. For shear-thickening fluids, the solitary waves are larger and faster when the reduced Bond number is smaller. For shear-thinning fluids, two branches of solutions exist for a certain range of the Bond number, where the solitary waves are larger and faster on one and smaller and slower on the other as the Bond number decreases. We carry out an asymptotic analysis for the large and fast-travelling solitary waves to explain how their speeds and amplitudes change with the Bond number. The analysis is then extended to examine the stability of the two branches of solutions for the shear-thinning fluids.
Quasi-geostrophic approximation of anelastic convection
Friedrich H. Busse, Radostin D. Simitev
The onset of convection in a rotating cylindrical annulus with parallel ends filled with a compressible fluid is studied in the anelastic approximation. Thermal Rossby waves propagating in the azimuthal direction are found as solutions. The analogy to the case of Boussinesq convection in the presence of conical end surfaces of the annular region is emphasised. As in the latter case, the results can be applied as an approximation for the description of the onset of anelastic convection in rotating spherical fluid shells. Reasonable agreement with three-dimensional numerical results published by Jones, Kuzanyan & Mitchell (J. Fluid Mech., vol. 634, 2009, pp. 291–319) for the latter problem is found. As in those results, the location of the onset of convection shifts outwards from the tangent cylinder with increasing number $N_{\rho }$ of density scale heights until it reaches the equatorial boundary. A new result is that at a much higher number $N_{\rho }$ the onset location returns to the interior of the fluid shell.
The quiescent core of turbulent channel flow
Y. S. Kwon, J. Philip, C. M. de Silva, N. Hutchins, J. P. Monty
The identification of uniform momentum zones in wall-turbulence, introduced by Adrian, Meinhart & Tomkins (J. Fluid Mech., vol. 422, 2000, pp. 1–54) has been applied to turbulent channel flow, revealing a large 'core' region having high and uniform velocity magnitude. Examination of the core reveals that it is a region of relatively weak turbulence levels. For channel flow in the range $Re_{\tau } = 1000\text{--}4000$, it was found that the 'core' is identifiable by regions bounded by the continuous isocontour lines of the streamwise velocity at $0.95U_{CL}$ (95 % of the centreline velocity). A detailed investigation into the properties of the core has revealed it has a large-scale oscillation which is predominantly anti-symmetric with respect to the channel centreline as it moves through the channel, and there is a distinct jump in turbulence statistics as the core boundary is crossed. It is concluded that the edge of the core demarcates a shear layer of relatively intense vorticity such that the interior of the core contains weakly varying, very low-level turbulence (relative to the flow closer to the wall). Although channel flows are generally referred to as 'fully turbulent', these findings suggest there exists a relatively large and 'quiescent' core region with a boundary qualitatively similar to the turbulent/non-turbulent interface of boundary layers, jets and wakes.
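A minimal sketch of the core-identification step described above: threshold the streamwise velocity at 0.95 of the centreline value and keep the connected region containing the centreline. The synthetic velocity field and the use of connected-component labelling are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy import ndimage

# Mark points where the streamwise velocity exceeds 0.95 of the centreline
# value and keep the connected region containing the centreline. The field
# below is a noisy parabolic profile standing in for real PIV/DNS data.

rng = np.random.default_rng(1)
ny, nx = 101, 400
y = np.linspace(-1.0, 1.0, ny)                              # wall-normal coordinate, walls at y = +/-1
u = (1.0 - y[:, None] ** 2) + 0.02 * rng.standard_normal((ny, nx))

u_cl = u[ny // 2].mean()                                    # centreline velocity
mask = u >= 0.95 * u_cl                                     # candidate uniform-momentum points

labels, _ = ndimage.label(mask)                             # connected regions of the mask
centre = labels[ny // 2]
core_label = np.bincount(centre[centre > 0]).argmax()       # region most often seen on the centreline
core = labels == core_label
print(f"'core' occupies {core.mean():.0%} of the measurement plane")
```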
Inertial wave excitation and focusing in a liquid bounded by a frustum and a cylinder
Marten Klein, Torsten Seelig, Michael V. Kurgansky, Abouzar Ghasemi V., Ion Dan Borcia, Andreas Will, Eberhard Schaller, Christoph Egbers, Uwe Harlander
The mechanism of localized inertial wave excitation and its efficiency is investigated for an annular cavity rotating with $\Omega _0$. Meridional symmetry is broken by replacing the inner cylinder with a truncated cone (frustum). Waves are excited by individual longitudinal libration of the walls. The geometry is non-separable and exhibits wave focusing and wave attractors. We investigated laboratory and numerical results for the Ekman number $E\approx 10^{-6}$, inclination $\alpha =5.71^\circ$ and libration amplitudes $\varepsilon \leq 0.2$ within the inertial wave band $0 < \omega < 2\Omega _0$. Under the assumption that the inertial waves do not essentially affect the boundary-layer structure, we use classical boundary-layer analysis to study oscillating Ekman layers over a librating wall that is at an angle $\alpha \neq 0$ to the axis of rotation. The Ekman layer erupts at frequency $\omega =f_{*}$, where $f_{*}\equiv 2 \Omega _0 \sin \alpha$ is the effective Coriolis parameter in a plane tangential to the wall. For the selected inclination this eruption occurs for the forcing frequency $\omega /\Omega _0=0.2$. For the librating lids eruption occurs at $\omega /\Omega _0=2$. The study reveals that the frequency dependence of the total kinetic energy $K_{\omega }$ of the excited wave field is strongly connected to the square of the Ekman pumping velocity $w_{{E}}(\omega )$ that, in the linear limit, becomes singular when the boundary layer erupts. This explains the frequency dependence of non-resonantly excited waves. By the localization of the forcing, the two configurations investigated, (i) frustum libration and (ii) lids together with outer cylinder in libration, can be clearly distinguished by their response spectra. Good agreement was found for the spatial structure of low-order wave attractors and periodic orbits (both characterized by a small number of reflections) in the frequency windows predicted by geometric ray tracing. For 'resonant' frequencies a significantly increased total bulk energy was found, while the energy in the boundary layer remained nearly constant. Inertial wave energy enters the bulk flow via corner beams, which are parallel to the characteristics of the underlying Poincaré problem. Numerical simulations revealed a mismatch between the wall-parallel mass fluxes near the corners. This leads to boundary-layer eruption and the generation of inertial waves in the corners.
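The quoted eruption frequency follows directly from the effective Coriolis parameter defined above: $f_{*}/\Omega _0 = 2\sin \alpha = 2\sin 5.71^\circ \approx 2 \times 0.0995 \approx 0.2$, consistent with the forcing frequency $\omega /\Omega _0=0.2$ at which the Ekman layer over the frustum is reported to erupt; for the lids, which are perpendicular to the rotation axis, the same reasoning gives $\omega /\Omega _0 = 2$.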
Scaling of the turbulent/non-turbulent interface in boundary layers
Kapil Chauhan, Jimmy Philip, Ivan Marusic
Scaling of the interface that demarcates a turbulent boundary layer from the non-turbulent free stream is sought using theoretical reasoning and experimental evidence in a zero-pressure-gradient boundary layer. The data-analysis, utilising particle image velocimetry (PIV) measurements at four different Reynolds numbers ($\delta u_{\tau }/\nu =1200\mbox{--}14\,500$), indicates the presence of a viscosity dominated interface at all Reynolds numbers. It is found that the mean normal velocity across the interface and the tangential velocity jump scale with the skin-friction velocity $u_{\tau }$ and are approximately $u_{\tau }/10$ and $u_{\tau }$, respectively. The width of the superlayer is characterised by the local vorticity thickness $\delta _{\omega }$ and scales with the viscous length scale $\nu /u_{\tau }$. An order of magnitude analysis of the tangential momentum balance within the superlayer suggests that the turbulent motions also scale with inner velocity and length scales $u_{\tau }$ and $\nu /u_{\tau }$, respectively. The influence of the wall on the dynamics in the superlayer is considered via Townsend's similarity hypothesis, which can be extended to account for the viscous influence at the turbulent/non-turbulent interface. Similar to a turbulent far-wake the turbulent motions in the superlayer are of the same order as the mean velocity deficit, which lends to a physical explanation for the emergence of the wake profile in the outer part of the boundary layer.
Reconnection of skewed vortices
Y. Kimura, H. K. Moffatt
Based on experimental evidence that vortex reconnection commences with the approach of nearly antiparallel segments of vorticity, a linearised model is developed in which two Burgers-type vortices are driven together and stretched by an ambient irrotational strain field induced by more remote vorticity. When these Burgers vortices are exactly antiparallel, they are annihilated on the strain time-scale, independent of kinematic viscosity $\nu$ in the limit $\nu \rightarrow 0$. When the vortices are skew to each other, they are annihilated under this action over a local extent that increases exponentially in the stretching direction, with clear evidence of reconnection on the same strain time-scale. The initial helicity associated with the skewed geometry is eliminated during the process of reconnection. The model applies equally to the reconnection of weak magnetic flux tubes under the action of a strain field, when Lorentz forces are negligible.
Analysis of a model for foam improved oil recovery
P. Grassia, E. Mas-Hernández, N. Shokri, S. J. Cox, G. Mishuris, W. R. Rossen
During improved oil recovery (IOR), gas may be introduced into a porous reservoir filled with surfactant solution in order to form foam. A model for the evolution of the resulting foam front known as 'pressure-driven growth' is analysed. An asymptotic solution of this model for long times is derived that shows that foam can propagate indefinitely into the reservoir without gravity override. Moreover, 'pressure-driven growth' is shown to correspond to a special case of the more general 'viscous froth' model. In particular, it is a singular limit of the viscous froth, corresponding to the elimination of a surface tension term, permitting sharp corners and kinks in the predicted shape of the front. Sharp corners tend to develop from concave regions of the front. The principal solution of interest has a convex front, however, so that although this solution itself has no sharp corners (except for some kinks that develop spuriously owing to errors in a numerical scheme), it is found nevertheless to exhibit milder singularities in front curvature, as the long-time asymptotic analytical solution makes clear. Numerical schemes for the evolving front shape which perform robustly (avoiding the development of spurious kinks) are also developed. Generalisations of this solution to geologically heterogeneous reservoirs should exhibit concavities and/or sharp corner singularities as an inherent part of their evolution: propagation of fronts containing such 'inherent' singularities can be readily incorporated into these numerical schemes.
Long-wave dynamics of an inextensible planar membrane in an electric field
Y.-N. Young, Shravan Veerapaneni, Michael J. Miksis
In this paper the dynamics of an inextensible capacitive elastic membrane under an electric field is investigated in the long-wave (lubrication) leaky dielectric framework, where a sixth-order nonlinear differential equation with an integral constraint is derived. Steady equilibrium profiles for a non-conducting membrane in a direct current (DC) field are found to depend only on the membrane excess area and the volume under the membrane. Linear stability analysis on a tensionless flat membrane in a DC field gives the growth rate in terms of membrane conductance and electric properties in the bulk. Numerical simulations of a capacitive conducting membrane under an alternating current (AC) field elucidate how variation of the membrane tension correlates with the nonlinear membrane dynamics. Different membrane dynamics, such as undulation and flip-flop, are found at different electric field strength and membrane area. In particular a travelling wave on the membrane is found as a response to a periodic AC field in the perpendicular direction.
Perturbation theory and numerical modelling of weakly and moderately nonlinear dynamics of the incompressible Richtmyer–Meshkov instability
A. L. Velikovich, M. Herrmann, S. I. Abarzhi
A study of incompressible two-dimensional (2D) Richtmyer–Meshkov instability (RMI) by means of high-order perturbation theory and numerical simulations is reported. Nonlinear corrections to Richtmyer's impulsive formula for the RMI bubble and spike growth rates have been calculated for arbitrary Atwood number and an explicit formula has been obtained for it in the Boussinesq limit. Conditions for early-time acceleration and deceleration of the bubble and the spike have been elucidated. Theoretical time histories of the interface curvature at the bubble and spike tip and the profiles of vertical and horizontal velocities have been calculated and favourably compared to simulation results. In our simulations we have solved 2D unsteady Navier–Stokes equations for immiscible incompressible fluids using the finite volume fractional step flow solver NGA developed by Desjardins et al. (J. Comput. Phys., vol. 227, 2008, pp. 7125–7159) coupled to the level set based interface solver LIT (Herrmann, J. Comput. Phys., vol. 227, 2008, pp. 2674–2706). We study the impact of small amounts of viscosity on the flow dynamics and compare simulation results to theory to discuss the influence of the theory's ideal inviscid flow assumption.
The coalescence of liquid drops in a viscous fluid: interface formation model
James E. Sprittles, Yulii D. Shikhmurzaev
The interface formation model is applied to describe the initial stages of the coalescence of two liquid drops in the presence of a viscous ambient fluid whose dynamics is fully accounted for. Our focus is on understanding (a) how this model's predictions differ from those of the conventionally used one, (b) what influence the ambient fluid has on the evolution of the shape of the coalescing drops and (c) the coupling of the intrinsic dynamics of coalescence and that of the ambient fluid. The key feature of the interface formation model in its application to the coalescence phenomenon is that it removes the singularity inherent in the conventional model at the onset of coalescence and describes the part of the free surface 'trapped' between the coalescing volumes as they are pressed against each other as a rapidly disappearing 'internal interface'. Considering the simplest possible formulation of this model, we find experimentally verifiable differences with the predictions of the conventional model showing, in particular, the effect of drop size on the coalescence process. According to the new model, for small drops a non-monotonic time dependence of the bridge expansion speed is a feature that could be looked for in further experimental studies. Finally, the results of both models are compared to recently available experimental data on the evolution of the liquid bridge connecting coalescing drops, and the interface formation model is seen to give a better agreement with the data.
Discrete-vortex method with novel shedding criterion for unsteady aerofoil flows with intermittent leading-edge vortex shedding
Kiran Ramesh, Ashok Gopalarathnam, Kenneth Granlund, Michael V. Ol, Jack R. Edwards
Unsteady aerofoil flows are often characterized by leading-edge vortex (LEV) shedding. While experiments and high-order computations have contributed to our understanding of these flows, fast low-order methods are needed for engineering tasks. Classical unsteady aerofoil theories are limited to small amplitudes and attached leading-edge flows. Discrete-vortex methods that model vortex shedding from leading edges assume continuous shedding, valid only for sharp leading edges, or shedding governed by ad-hoc criteria such as a critical angle of attack, valid only for a restricted set of kinematics. We present a criterion for intermittent vortex shedding from rounded leading edges that is governed by a maximum allowable leading-edge suction. We show that, when using unsteady thin aerofoil theory, this leading-edge suction parameter (LESP) is related to the $A_0$ term in the Fourier series representing the chordwise variation of bound vorticity. Furthermore, for any aerofoil and Reynolds number, there is a critical value of the LESP, which is independent of the motion kinematics. When the instantaneous LESP value exceeds the critical value, vortex shedding occurs at the leading edge. We have augmented a discrete-time, arbitrary-motion, unsteady thin aerofoil theory with discrete-vortex shedding from the leading edge governed by the instantaneous LESP. Thus, the use of a single empirical parameter, the critical-LESP value, allows us to determine the onset, growth, and termination of LEVs. We show, by comparison with experimental and computational results for several aerofoils, motions and Reynolds numbers, that this computationally inexpensive method is successful in predicting the complex flows and forces resulting from intermittent LEV shedding, thus validating the LESP concept.
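A schematic of the shedding decision described above, with illustrative numbers; the critical value, the toy $A_0$ history and the "clip back to the critical level" update are placeholders for the authors' actual implementation.

```python
# Schematic of the LESP shedding criterion: shed a leading-edge vortex only
# while the instantaneous |A0| exceeds the critical LESP, with the new vortex
# strength chosen so that A0 is pulled back to the critical level.

LESP_CRIT = 0.11   # example critical LESP; in practice an empirical constant
                   # fixed per aerofoil and Reynolds number

def apply_lesp_criterion(A0, lesp_crit=LESP_CRIT):
    """Return (shed, target_A0) for one time step."""
    if abs(A0) <= lesp_crit:
        return False, A0
    return True, lesp_crit if A0 > 0 else -lesp_crit

# Toy time history of A0 during a rapid pitch-up (made-up numbers).
for step, A0 in enumerate([0.02, 0.08, 0.13, 0.19, 0.10, 0.04]):
    shed, target = apply_lesp_criterion(A0)
    action = f"shed LEV, clip A0 to {target:+.2f}" if shed else "no shedding"
    print(f"step {step}: A0 = {A0:+.2f} -> {action}")
```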
Critical layer and radiative instabilities in shallow-water shear flows
Xavier Riedinger, Andrew D. Gilbert
In this study a linear stability analysis of shallow-water flows is undertaken for a representative Froude number $F=3.5$. The focus is on monotonic base flow profiles $U$ without an inflection point, in order to study critical layer instability (CLI) and its interaction with radiative instability (RI). First the dispersion relation is presented for the piecewise linear profile studied numerically by Satomura (J. Meterol. Soc. Japan, vol. 59, 1981, pp. 148–167) and using WKBJ analysis an interpretation given of mode branches, resonances and radiative instability. In particular surface gravity (SG) waves can resonate with a limit mode (LM) (or Rayleigh wave), localised near the discontinuity in shear in the flow; in this piecewise profile there is no critical layer. The piecewise linear profile is then continuously modified in a family of nonlinear profiles, to show the effect of the vorticity gradient $Q^{\prime } = - U^{\prime \prime }$ on the nature of the modes. Some modes remain as modes and others turn into quasi-modes (QM), linked to Landau damping of disturbances to the flow, depending on the sign of the vorticity gradient at the critical point. Thus an interpretation of critical layer instability for continuous profiles is given, as the remnant of the resonance with the LM. Numerical results and WKBJ analysis of critical layer instability and radiative instability for more general smooth profiles are provided. A link is made between growth rate formulae obtained by considering wave momentum and those found via the WKBJ approximation. Finally the competition between the stabilising effect of vorticity gradients in a critical layer and the destabilising effect of radiation (radiative instability) is studied.
Low-Reynolds-number wakes of elliptical cylinders: from the circular cylinder to the normal flat plate
Mark C. Thompson, Alexander Radi, Anirudh Rao, John Sheridan, Kerry Hourigan
While the wake of a circular cylinder and, to a lesser extent, the normal flat plate have been studied in considerable detail, the wakes of elliptic cylinders have not received similar attention. However, the wakes from the first two bodies have considerably different characteristics, in terms of three-dimensional transition modes, and near- and far-wake structure. This paper focuses on elliptic cylinders, which span these two disparate cases. The Strouhal number and drag coefficient variations with Reynolds number are documented for the two-dimensional shedding regime. There are considerable differences from the standard circular cylinder curve. The different three-dimensional transition modes are also examined using Floquet stability analysis based on computed two-dimensional periodic base flows. As the cylinder aspect ratio (major to minor axis) is decreased, mode A is no longer unstable for aspect ratios below 0.25, as the wake deviates further from the standard Bénard–von Kármán state. For still smaller aspect ratios, another three-dimensional quasi-periodic mode becomes unstable, leading to a different transition scenario. Interestingly, for the 0.25 aspect ratio case, mode A restabilises above a Reynolds number of approximately 125, allowing the wake to return to a two-dimensional state, at least in the near wake. For the flat plate, three-dimensional simulations show that the shift in the Strouhal number from the two-dimensional value is gradual with Reynolds number, unlike the situation for the circular cylinder wake once mode A shedding develops. Dynamic mode decomposition is used to characterise the spatially evolving character of the wake as it undergoes transition from the primary Bénard–von Kármán-like near wake into a two-layered wake, through to a secondary Bénard–von Kármán-like wake further downstream, which in turn develops an even longer wavelength unsteadiness. It is also used to examine the differences in the two- and three-dimensional near-wake state, showing the increasing distortion of the two-dimensional rollers as the Reynolds number is increased. | CommonCrawl |
Advances in Mathematics of Communications
2013, Volume 7, Issue 3: 319-334. Doi: 10.3934/amc.2013.7.319
A 3-cycle construction of complete arcs sharing $(q+3)/2$ points with a conic
Daniele Bartoli (1), Alexander A. Davydov (2), Stefano Marcugini (1) and Fernanda Pambianco (1)
Department of Mathematics and Informatics, Perugia University, Perugia, 06123
Institute for Information Transmission Problems (Kharkevich institute), Russian Academy of Sciences, GSP-4, Moscow, 127994
Received: July 31, 2012
Revised: May 31, 2013
In the projective plane $PG(2,q)$, $q\equiv 2 \pmod 3$ an odd prime power, $q\geq 11$, an explicit construction of $\frac{1}{2}(q+7)$-arcs sharing $\frac{1}{2}(q+3)$ points with an irreducible conic is considered. The construction is based on 3-orbits of some projectivity, called 3-cycles. For every $q$, variants of the construction give non-equivalent arcs. It allows us to obtain complete $\frac{1}{2}(q+7)$-arcs for $q\leq 4523$. Moreover, for $q=17,59$ there exist variants that are incomplete arcs. Completing these variants we obtained complete $(\frac{1}{2}(q+3)+\delta)$-arcs with $\delta =4$, $q=17$, and $\delta =3$, $q=59$; a description of them as a union of some symmetrical objects is given.
Projective planes, complete arcs, irreducible conics, symmetric arcs.
Mathematics Subject Classification: Primary: 51E21, 51E22; Secondary: 94B05.
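For concreteness, the sizes involved in the construction can be tabulated for a few admissible $q$ (odd prime powers with $q\equiv 2 \pmod 3$, $q\geq 11$). The snippet below only illustrates the arithmetic in the abstract and, for simplicity, lists prime values of $q$ rather than all prime powers.

```python
# Arc sizes from the construction described above, for a few admissible q.
def arc_sizes(q):
    assert q % 2 == 1 and q % 3 == 2 and q >= 11
    points_shared_with_conic = (q + 3) // 2
    arc_size = (q + 7) // 2
    return points_shared_with_conic, arc_size

for q in (11, 17, 23, 29, 41, 47, 53, 59):
    shared, size = arc_sizes(q)
    print(f"q = {q:2d}: {size}-arc sharing {shared} points with an irreducible conic")
```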
Inadmissible theorems in research
One of my engineering friends told me how he once had to take a make-up calculus I exam due to being hospitalised and so self-studied a lot of the missed topics. For the make-up exam, he used L'Hôpital's rule, although we weren't taught that until 1 or 2 exams later. My friend told me that the professor wrote
'You are not yet allowed to use L'Hôpital's rule.'
So, I like to say that L'Hôpital's rule was inadmissible in that exam.
Now, it absolutely makes sense that if you're the student you're not allowed to use propositions, theorems, etc. from future topics, all the more so for future classes and especially for something as basic as calculus I. It also makes sense to adjust for majors: certainly maths majors shouldn't be allowed to use topics from discrete mathematics or linear algebra to gain an edge in calculus I or II over their business, environmental science or engineering classmates (who take linear algebra later than maths majors at my university).
But after your bachelor's, master's and maths PhD coursework, you're the researcher and not merely the student: say, you're writing your maths PhD dissertation, or you've even finished the PhD.
Does maths research have anything inadmissible?
I can't imagine that you have something to prove, you find some paper that helps you prove it, and then your advisor tells you, 'You are not yet allowed to use the Poincaré theorem', or, for something proven true more than 12 years ago, 'You are not yet allowed to use Cauchy's differentiation formula'.
Actually what about outside maths, say, physics or computer science?
research-process mathematics computer-science physics supervision
Volker Siegel
BCLC
I would have said by virtue of being hospitalized, L'Hopital's rule should be fair game. – Azor Ahai Aug 29 '18 at 21:02
Comments are not for extended discussion; this conversation has been moved to chat. Please do not post answers in the comments. If you want to debate the practice of banning L'Hôpital's rule in an exam situation, please take it to chat. Please read this FAQ before posting another comment. – Wrzlprmft♦ Aug 31 '18 at 7:06
The error, such as it is, your friend made was not the use of l'Hôpital, but the lack of proof that it is correct. If he had stated l'Hôpital as a lemma and provided a sufficiently elementary proof, then presumably the lecturer would not have had an issue with the solution.
An analogous phenomenon happens in research mathematics. There are plenty of folklore results, where researchers are pretty sure the result is true, and the techniques for proving the result are known, but nobody happens to have written the proof down or at least published it. These can be found, for example, in the classical regularity theory for partial differential equations.
Should one provide a proof of such a result when using it as a tool? Sometimes people simply refer to the result without being explicit about it. Sometimes they prove it "because we cannot find a proof in the literature", even if the proof is simple or not to the point of a given article. There is no absolutely right solution in these cases.
I think that folklore results are as close to "inadmissible" as one gets in research mathematics; one should be careful about them, sometimes prove them, but sometimes they are also used without proof.
Tommi
@Buffy The first paragraph is an introduction to the answer that is folklore. Right, Tommi Brander? – BCLC Aug 29 '18 at 18:18
@BCLC: It is more common than you think. For just one example phrasing, see "it is folklore that" on Google Scholar. – user21820 Sep 10 '18 at 4:10
Tommi Brander, in @user21820 's link, is the first paper, which is by Terry Tao, related to 'classical regularity theory for partial differential equations' ? – BCLC Sep 25 '18 at 15:37
@BCLC Uhhh... I guess? This is not a precise classification schema. Why do you ask? – Tommi Sep 25 '18 at 19:08
@TommiBrander well the paper looks like a good example of your example – BCLC Sep 25 '18 at 23:32
No, but trying to prove X without using Y is still a very useful concept even in research, because it can lead to interesting generalizations, or new proof techniques that can be applied to a larger set of problems.
For instance, in some sense the Lebesgue integral is "just" trying to prove the properties of integrals without using the continuity of f, or the theory of matroids is "just" trying to prove the properties of linearly independent vectors without using a lot of properties from the vector space structure.
So this is far from being a pointless exercise, if that's what you had in mind.
Konrad Rudolph
Federico Poloni
This is an excellent answer. There is a very broad phenomenon that can be paraphrased as "constraint breeds creativity." E.g. there is a reason that people have been writing haikus for more than eight hundred years. But one of the essences of "creative constraints" is that they are largely self-imposed. – Pete L. Clark Aug 29 '18 at 22:00
@FedericoPoloni I'm not familiar with that use of punctuation, and I don't think it's commonly understood. I think you probably mean to write "the Lebesgue integral is 'just' trying to prove …", which uses more conventional punctuation and grammar to express what I think you're trying to express. – Konrad Rudolph Aug 30 '18 at 9:52
@KonradRudolph FWIW, I think the original was fine, although I don't have a strong preference. (Native English speaker) – Yemon Choi Aug 30 '18 at 12:07
An important note, though: I consider there to be a very significant difference between proving results using fewer hypotheses or axioms, and "pretending" not to know theorems which are consequences of the hypotheses you do assume. Banning l'Hopital, while assuming stronger results like the mean value and squeeze theorem, is both ill-defined (the first lemma of my solution can just be a proof of l'Hopital) and of dubious benefit. – user168715 Aug 31 '18 at 6:53
@PeteL.Clark There's even a relevant XKCD about that. – Fund Monica's Lawsuit Sep 1 '18 at 1:49
In the sense that you are asking, I cannot imagine there ever being a method that is ruled inadmissible because the researcher is "not ready for it." Every intellectual approach is potentially fair game.
If the specific goal of a work is to find an alternate approach to establishing something, however, it could well be the case that one or more prior methods are ruled out of scope, as it would assume the result that you want to establish by another independent path. For example, the constant e has been derived in multiple ways.
Finally, once you step outside of pure theory and into experimental work, one must also consider the ethics of an experimental method. Many potential approaches are considered inadmissible due to the objectionable nature of the experiment. In extreme cases, such as the Nazi medical experiments, even referencing the prior work may be considered inadmissible.
jakebeal
Ah, you mean like if you want to, say, prove Fourier inversion formula probabilistically, you would want to avoid anything that sounds like what you already know to be the proof/s of the Fourier inversion formula because that would defeat coming up with a different proof? Or something like my question here? Thanks jakebeal! – BCLC Aug 29 '18 at 14:50
Re outside of pure: Okay now that seems pretty obvious in hindsight (i.e. dumb question for outside of pure). I think it's far less obvious for pure – BCLC Aug 29 '18 at 15:24
It is worth pointing out that theorems are usually inadmissible if they lead to circular theorem proving. If you study math you learn how mathematical theories are built lemma by lemma and theorem by theorem. These theorems and their dependencies form a directed acyclic graph (DAG).
If you are asked to reproduce the proof of a certain theorem and you use a "later" result, this results usually depends on the theorem you are supposed to prove, so using it is not just inadmissible for educational reasons, it actually would lead to an incorrect proof in the context of the DAG.
In that sense there cannot be any inadmissible theorems in research, because research usually consists of proving the "latest" theorems. However, if you publish a shorter, more elegant or more beautiful proof of a known result, you might have to look out for inadmissible theorems again.
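To make the dependency picture concrete (an illustrative sketch with a made-up toy dependency map, not part of the original answer): circularity is exactly a cycle in this graph, and a depth-first search can detect whether a proposed proof would create one.

    # Toy theorem-dependency map: each result lists what its proof uses.
    # All names here are purely illustrative.
    deps = {
        "squeeze_theorem": [],
        "lim_sinx_over_x": ["squeeze_theorem"],
        "derivative_of_sin": ["lim_sinx_over_x"],
        "lhopital": ["derivative_of_sin"],
    }

    def creates_cycle(theorem, used, deps):
        """Would proving `theorem` via `used` make the dependency graph cyclic?"""
        trial = {k: list(v) for k, v in deps.items()}
        trial[theorem] = trial.get(theorem, []) + [used]
        visited, on_path = set(), set()

        def dfs(node):
            if node in on_path:
                return True           # back edge found: the proof would be circular
            if node in visited:
                return False
            visited.add(node)
            on_path.add(node)
            if any(dfs(d) for d in trial.get(node, [])):
                return True
            on_path.discard(node)
            return False

        return dfs(theorem)

    # Proving lim sin(x)/x = 1 via l'Hôpital is circular in this toy graph:
    print(creates_cycle("lim_sinx_over_x", "lhopital", deps))  # True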
J W
BlindKungFuMaster
+1 for bringing up explicitly what seems to have only been implicit, or mentioned in comments to other answers. I have a hazy memory of marking someone's comprehensive graduate exam in Canada where the simplicity of the algebra of n-by-n matrices (which carried non-negligible marks) was proved by appealing to Wedderburn's structure theorem... – Yemon Choi Aug 30 '18 at 12:09
This the right answer to my mind. It would be strengthened by explaining what this has to do with l'Hopital as in Nate Eldridge's comment. But what does DAG stand for? – Noah Snyder Aug 30 '18 at 12:20
@NoahSnyder: DAG doubtless stands for directed acyclic graph. – J W Aug 30 '18 at 13:13
@JW: Thanks! I was expecting it was a technical term in pedagogy or philosophy of science, not math. – Noah Snyder Aug 30 '18 at 13:21
The acyclical bit of DAG's is probably worded a bit carelessly. It's common enough to have theorems A and B that are essential equivalent, such that A can be proven from B and vice versa. This creates an obvious cycle, but it doesn't matter. There are then at least two acyclical subgraphs that connect the theorem to prove and its axioms - axioms being the graph roots. IOW, while any particular proof is acyclical, the union of them is not. – MSalters Aug 30 '18 at 14:53
While there are indeed no inadmissible theorems in research, there are certain things that one sometimes tries to avoid.
Two examples come to mind:
The first is the classification of finite simple groups. The classification itself is not particularly complicated, but the proof is absurdly so. This makes mathematicians working in group theory prefer to avoid using it when possible. It is in fact quite often explicitly pointed out in a paper if a key result relies on it.
The reason for this preference was probably, to some extent, originally that the proof was too complicated for people to have full confidence in, but my impression is that this is no longer the case, and the preference is now due to the fact that relying on the classification makes the "real reason" for the truth of a result more opaque and thus less likely to lead to further insights.
The other example is the huge effort that has gone into trying to prove the so-called Kazhdan-Lusztig conjecture using purely algebraic methods.
The result itself is algebraic in nature, but the original proof uses a lot of very deep results from geometry, which made it impossible to use it as a stepping stone to settings not allowing for this geometric structure.
Such an algebraic proof was achieved in 2012 by Elias and Williamson, when they proved Soergel's conjecture, which has the Kazhdan-Lusztig conjecture as one of several consequences.
The techniques used in this proof allowed just the sort of generalizations hoped for, leading first to a disproof of Lusztig's conjecture in 2013 (a characteristic $p$ analogue of the Kazhdan-Lusztig conjecture), and then to a proof of a replacement for Lusztig's conjecture in 2015 (for type $A$) and 2017 (in general), at least under some mild assumptions on the characteristic.
Tobias Kildetoft
Didn't Elias and Williamson put the KL conjecture on an algebraic footing, or am I misremembering things? – darij grinberg Aug 29 '18 at 15:08
@darijgrinberg They did indeed. I actually meant to add that, but forgot it again while typing. I have added some details about it. – Tobias Kildetoft Aug 29 '18 at 17:04
There are cases where the researcher restricts himself to not using certain theorems. Example:
Atle Selberg,"An elementary proof of the prime-number theorem". Ann. of Math. (2) 50 (1949), 305--313.
The author restricts himself to using only "elementary" (in a technical sense) methods.
Other cases may be proofs in geometry using only straightedge and compasses. Gauss showed that the regular 257-gon may be constructed with straightedge and compasses. I would not consider that to be "a new proof of a known result".
GEdgar
So same as jakebeal? – BCLC Aug 29 '18 at 17:03
That case is different because the researchers are just showing a new proof for a known theorem, but one that is simpler (or more elegant) than the known proofs. In math, there is a kind of consensus that simpler proofs are better (for many reasons; for instance, they are easier to check and usually depend on weaker results), so an elementary proof is an original research result even if it is a proof of the "same type" as the existing ones (e.g., a simpler algebraic proof when another algebraic proof is already known). – Hilder Vitor Lima Pereira Aug 30 '18 at 13:21
@HilderVitorLimaPereira if I may nitpick a bit, the elementary proof of the prime number theorem is regarded by most people who have studied it as neither simpler nor more elegant than the analytic family of proofs. It is however more "elementary" (specifically, does not use complex or Fourier analysis), which is also a very important and interesting feature. Certainly its discovery was a major research result, so in that sense you make a good and valid point. – Dan Romik Aug 30 '18 at 15:46
@DanRomik I see. Yes, when I said "weaker results" I was actually thinking about more elementary results, in the sense that they use theories that do not depend on a deep sequence of constructions and other theorems, or that are considered basic knowledge in the math community. Thank you for that comment. – Hilder Vitor Lima Pereira Aug 30 '18 at 16:03
@HilderVitorLimaPereira maybe that thought could be called "weaker claims"? – elliot svensson Aug 31 '18 at 14:20
It is perhaps worth noting that some results are in a sense inadmissible because they aren't actually theorems. Some conjectures/axioms are so central that they are widely used, even though they haven't yet been established. Proofs relying on these should make that clear in the hypotheses. However, it wouldn't be that hard to have a bad day and forget that something you use frequently hasn't actually been proved yet, or that it is needed for a later result you want to use.
Jessica B
Perhaps Poincare was a bad example because it was a conjecture with a high bounty for quite sometime, but let's pretend I used something that had been proven for decades old. Your answer is now...? – BCLC Aug 29 '18 at 15:13
There is (unfortunately...) a whole spectrum between "unequivocal theorem" and "conjecture" in combinatorics and geometry, due to the rigorous methods lagging behind the sort of arguments researchers actually use. – darij grinberg Aug 29 '18 at 15:14
@BCLC Actually, the Poincare Conjecture was widely 'used' before its proof. The resulting theorems include a hypothesis of 'no fake 3-balls'. But I also know of a paper proving a topological result using the generalised continuum hypothesis. – Jessica B Aug 29 '18 at 15:17
@darijgrinberg I disagree with your assertion. If something is believed true, no matter with what level of confidence, but is not an "unequivocal" theorem (i.e., a "theorem"), then it is a conjecture, not "somewhere on the spectrum between unequivocal theorem and conjecture". I challenge you to show me a pure math paper, published in a credible journal, that uses different terminology. I'm pretty sure I do understand what you're getting at, but others likely won't, and your use of an adjective like "unequivocal" next to "theorem" is likely to sow confusion and lead some people to think ... – Dan Romik Aug 29 '18 at 20:42
@DanRomik: I guess I was ambiguous. Of course these things are stated as theorems in the papers they're published in. But when you start asking people about them, you start hearing eehms and uuhms. I don't think the problem is concentrated with certain authors -- rather it's specific to certain kinds of combinatorics, and the same people that write very clearly about (say) algebra become vague and murky when they need properties of RSK or Hillman-Grassl... – darij grinberg Aug 29 '18 at 20:45
In intuitionistic logic and constructive mathematics we try to prove stuff without the law of excluded middle, which excludes many of the normal tools used in math. And in logic in general we often try to prove stuff using only a defined set of axioms, which often means that we are not allowed to follow our 'normal' intuitions. Especially when proving something in multiple axiomatic systems of different strength, you can find that some tools only become available towards the end (in the more powerful systems), and are as such inadmissible in the weaker systems.
epa095
That is a great thing to do, but not the same as having parts of math closed off from you by an advisor unless you are both working in that space. The axiom of choice is another example that explores proof in a reduced space. I once worked in systems with a small set of axioms in which more could be true, but less could be proved to be true. Fun. – Buffy Aug 29 '18 at 20:43
In the same vein, working in reverse mathematics usually requires one's arguments to be provable from rather weak systems of axioms, which leads to all sorts of complications that would not be present using standard sets of assumptions. – Andrés E. Caicedo Aug 30 '18 at 20:29
To answer your main question, no. Nothing is disallowed. Any advisor would (or at least should) allow any valid mathematics. There is nothing in mathematics that is disallowed, especially in doctoral research. Of course this implies acceptance of the (now settled) Poincaré theorem. Prior to an accepted proof you couldn't depend on it.
In fact, you can even write a dissertation based on a hypothetical (If Prof Buffy's Large Theorem is true, then it follows that...). You can explore the consequences of things not proven. Sometimes it helps connect them to known results, leading to a proof of the "large theorem" and sometimes it helps to lead to a contradiction showing it false.
However, I have an issue with the background you have given on what is appropriate in teaching and examining students. I question the wisdom of the first professor disallowing anything that the student knows. That seems shortsighted and turns the professor into a gate that allows only some things to trickle through.
Of course, if the professor wants to test the student on a particular technique he can try to find questions that do so, but this also points up the basic stupidity of exams in general. There are other ways to assure that the student learns essential techniques.
A university education isn't about competition with other students and the (horrors) problem of an unfair advantage. It is about learning. If the professor or the system grades students competitively, they are doing a poor job.
If you have the 20 absolutely best students in the world and grade purely competitively, then half of them will be below average.
I feel like you have misunderstood the question. – Jessica B Aug 29 '18 at 15:05
@Buffy: The question wasn't actually about the class. The question was about whether "inadmissible" stuff exists at the graduate level. – cHao Aug 29 '18 at 15:54
One reason to "disallow" results not yet studied is that it helps to avoid circular logic. A standard example: student is asked to show that lim_{x -> 0} sin(x)/x = 1. Student applies L'Hôpital's rule, taking advantage of the fact that the derivative of sin(x) is cos(x). However, the usual way of proving that the derivative of sin(x) is cos(x) requires knowing the value of lim_{x -> 0} sin(x)/x. If you "forbid" L'Hôpital's rule in solving the original problem, you prevent this issue from arising. – Nate Eldredge Aug 29 '18 at 16:48
Well, you can have a standing course policy not to assume results not yet proved. This is sufficiently common that the instructor may have assumed it went without saying. Or, the downgrade may have actually been for circular logic, but the reasoning was explained poorly or misunderstood. – Nate Eldredge Aug 29 '18 at 16:56
I think L'Hopital's rule is uniquely pernicious and results in students failing to learn about limits and immediately forgetting everything about limits, in a way that has essentially no good parallels elsewhere in the elementary math curriculum. So I don't think you can substitute in something else and make it the same question. Someone who uses L'Hopital to say compute \lim_{x\rightarrow 0} \frac{x^2}{x} isn't showing a more advanced understanding of the material, they're showing they don't understand the material! – Noah Snyder Aug 30 '18 at 12:52
I don't think there are inadmissible theorems in research, although obviously one has to take care not to rely on assumptions that have yet to be proven for a particular problem.
However, in terms of PhD or postdoc work, I feel that some approaches may be rather "off-topic" because of not-really-academic reasons. For example, if you secure a PhD funding to study topic X, you should not normally use it to study Y. Similarly, if you secure a postdoc in a team which develops method A, and you want to study your competitor's method B, your PI may want to keep the time you spend on B limited, so it does not exceed the time you spend to develop A. Some PIs are quite notorious in a sense that they won't tolerate you even touching some method C, because of their important reasons, so even though you have full academic freedom to go and explore method C if you like it, it may be "inadmissible" to do so within your current work arrangements.
Dmitry Savostyanov
Thanks Dmitry Savostyanov! This sounds like something I had in mind, but this is for applied research? Or also for theoretical research? – BCLC Aug 29 '18 at 15:10
Even in pure maths, people can be very protective sometimes. And people in applied maths can be very open-minded. It's more about personal approaches to science, perhaps. – Dmitry Savostyanov Aug 29 '18 at 15:11
I'm going to give a related point of view from outside of academia, namely a commercial/government research organisation.
I have come across researchers and managers who are hindered by what I call an exam mentality, whereby they assume that a research question can only be answered with a data set provided, and cannot make reference to other data, results, studies etc.
I've found this exam mentality to be extremely limiting and comes about because the researcher or manager has a misconception about research that has been indoctrinated from their (mostly exam-based) education.
The fact of the matter is that avoiding data/techniques/studies on arbitrary grounds stifles research. It leads to missed opportunities for commercial organisations to make profit, or missed consequences when governments introduce new policy, or missed side-effects of new drugs etc.
Bad_Bishop
I will add a small example from Theoretical Computer Science and algorithm design.
It is a very important open problem to find a combinatorial (or even LP based) algorithm that achieves the Goemans-Williamson bound (0.878) for approximating the MaxCut problem in polynomial time.
We know that using Semidefinite Programming techniques, a bound on the approximation factor of alpha = 0.878 can be achieved in poly time. But can we achieve this bound using other techniques? Slightly less ambitiously, but probably equally important: can we find a combinatorial algorithm with an approximation guarantee strictly better than 1/2?
Luca Trevisan has made important progress in that direction using spectral techniques.
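For readers unfamiliar with the combinatorial baseline mentioned above, here is a minimal sketch (not part of the original answer) of the classical local-search algorithm: at a local optimum every vertex has at least half of its incident edges crossing the cut, so the cut contains at least half of all edges, which is exactly the 1/2 guarantee that the open problem asks to beat.

    # Local-search 1/2-approximation for MaxCut (illustrative sketch).
    def local_search_maxcut(vertices, edges):
        side = {v: 0 for v in vertices}

        def cut_size():
            return sum(1 for u, v in edges if side[u] != side[v])

        improved = True
        while improved:
            improved = False
            for v in vertices:
                before = cut_size()
                side[v] ^= 1            # try moving v to the other side
                if cut_size() <= before:
                    side[v] ^= 1        # revert: no strict improvement
                else:
                    improved = True
        return side, cut_size()

    # Example: on a 5-cycle (5 edges) the algorithm returns a cut with 4 edges.
    verts = list(range(5))
    es = [(i, (i + 1) % 5) for i in range(5)]
    print(local_search_maxcut(verts, es))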
PsySp
In research you would use the most applicable method (that you know) to demonstrate a solution, and would possibly also be in situations where you are asked about or offered alternative approaches to your solution (and then you learn a new method).
In the example where L'Hôpital's rule was "not permitted", the question could perhaps have been worded better: it reads like a "solve this" question, which assumes that only the methods taught in the course are known to students and therefore only those methods will be used in the exam.
Mick
There was no ambiguity in the question. L'Hôpital's rule wasn't introduced to us until our third or fourth exam. My engineering friend was taking a make-up for either our second exam or our midterm or both (I forgot). It would've been like using the sequence definition of continuity in the first exam of an elementary analysis class, if such a class teaches sequences last (like mine did) – BCLC Aug 29 '18 at 14:46
I understand that, but when it was introduced has no bearing on whether students may already know how to use it. It would be the same as asking, "Show that the first derivative of x^2 is 2x," and then telling students who solved it using implicit differentiation that that is not allowed and they should have used explicit differentiation. – Mick Aug 29 '18 at 14:51
Mick, but it was a make-up exam. It would be unfair to students who took the exam on time because we didn't know L'Hôpital's rule at the time? – BCLC Aug 29 '18 at 14:56
It's not about being fair. It's about math building on itself. Often you're expected to solve things a certain way in order to ensure you understand what the later stuff allows you to simplify or ignore. If there was an intended method, it should have been in the instructions. But it's a common assumption that if you haven't been taught it, you don't know it yet. – cHao Aug 29 '18 at 15:51
Without denying the other suggestions on why it might be disallowed, fairness to other students is irrelevant. The purpose of an exam is to assess or verify what you have learned, not to decide who wins a competition. – WGroleau Aug 30 '18 at 12:15
Well, in pure maths research I am sure brute-force approximations by computer are disallowed except as a way to introduce interest in the topic, possibly as a way to narrow the area to be explored, or perhaps even to suggest an approach to a solution.
Math research requires equations that describe an exact answer and a proof that the answer is correct by derivation from established mathematical facts and theorems. Computer approximations may use ever smaller intervals to narrow the range of an answer, but they never actually reach the infinitely small limits of an argument in the style of l'Hôpital.
The separate area of computerized derivations basically just automates what is already known. I am sure many places leave researchers free to use such computerization to speed the documentation of work, as far as such software goes. I am sure that plenty of human guidance is still needed to formulate the problem, introduce postulates and choose which available solution steps to try. But the key thing is that all such software derivation would have to be verified by hand before any outside review, both for software error and to check that the techniques stay within allowed boundaries (the IF portion of theorems etc).
And after such hand checks...how many mathematical researchers would credit computer software for assistance?
Well, I saw applied mathematicians cite software as a quick-check method for colleagues to check the reasonableness of the work way back in the 1980s. Given that applied mathematics sometimes takes an almost engineering view of practical results, I suppose they still give computer approximations as a quick demonstration AFTER the formal derivations. And I hear that applied maths sometimes solves the nearest approximation to the problem possible when a solution to the exact problem still evades them, so again there is more room for assistance by computer software derivation. I am not sure that such operations-research-type topics fit everyone's definition of mathematical research, though.
Observation
Please try to avoid leaving two separate answers; you should edit your first one – Yemon Choi Sep 2 '18 at 3:36
I find this answer slightly misses the point of the original question, since it seems more about the use of computers than anything else, and doesn't really address the OP's question about whether there are situations when one should not make use of certain theorems while doing research – Yemon Choi Sep 2 '18 at 3:37
In shorter terms: yes, computer approximation techniques are often used in a shotgun manner to look for areas of potential convergence on solutions, as in "give me a hint", especially in applied-maths topics where real-world boundaries can be described.
Again, there is the question of whether real-world problems other than fundamental physics are true math research or the much looser applied math or even operations research.
But in the actual derivation of theorems, from underlying proven theorems to new theorems, computers are more limited to documentation tools, similar to word processors for prose. Still, they are tools becoming more and more important for checking the equations of documented work, as word processors check spelling and grammar for prose, and there remain many areas where the human must override or redirect.
I find this answer slightly misses the point of the original question, since it seems more about the use of computers than anything else – Yemon Choi Sep 2 '18 at 3:35
Also, don't create two new user identities. Register one which can be used consistently – Yemon Choi Sep 2 '18 at 3:37
The axiom of choice (and its corollaries) are pretty well-accepted these days in the mathematical community, but you might occasionally run across a few old-school mathematicians who think that it's "wrong", and therefore that any corollary that you use the axiom of choice to prove is also "wrong". (Of course, what it even means for the axiom of choice to be "wrong" is a largely philosophical question.)
tparker
12.E: Introduction to Calculus (Exercises)
[ "article:topic", "license:ccby", "showtoc:no", "authorname:openstaxjabramson" ]
Precalculus & Trigonometry
Book: Precalculus (OpenStax)
12: Introduction to Calculus
Contributed by Jay Abramson
Principal Lecturer (School of Mathematical and Statistical Sciences) at Arizona State University
Publisher: OpenStax CNX
12.1: Finding Limits - Numerical and Graphical Approaches
12.2: Finding Limits - Properties of Limits
12.3: Continuity
12.4: Derivatives
In this section, we will examine numerical and graphical approaches to identifying limits.
1) Explain the difference between a value at \(x=a\) and the limit as \(x\) approaches \(a\).
The value of the function, the output, at \(x=a\) is \(f(a)\). When the \(\lim \limits_{x \to a}f(x)\) is taken, the values of \(x\) get infinitely close to \(a\) but never equal \(a\). As the values of \(x\) approach \(a\) from the left and right, the limit is the value that the function is approaching.
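For instance (an illustrative example, not one of the numbered exercises), \(f(x)=\dfrac{x^2-1}{x-1}\) has no value at \(x=1\), yet \[\lim \limits_{x \to 1} f(x)=\lim \limits_{x \to 1}(x+1)=2. \nonumber \]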
2) Explain why we say a function does not have a limit as \(x\) approaches \(a\) if, as \(x\) approaches \(a\), the left-hand limit is not equal to the right-hand limit.
For the exercises 3-14, estimate the functional values and the limits from the graph of the function \(f\) provided in the Figure below.
3) \(\lim \limits_{x \to −2^−} f(x)\)
4) \(\lim \limits_{x \to −2^+ }f(x)\)
5) \(\lim \limits_{x \to −2} f(x)\)
6) \(f(−2)\)
7) \(\lim \limits_{x \to −1^−} f(x)\)
8) \(\lim \limits_{x \to 1^+} f(x)\)
9) \(\lim \limits_{x \to 1} f(x)\)
10) \(f(1)\)
11) \(\lim \limits_{x \to 4^−} f(x)\)
12) \(\lim \limits_{x \to 4^+} f(x)\)
13) \(\lim \limits_{x \to 4} f(x)\)
For the exercises 15-21, draw the graph of a function from the functional values and limits provided.
15) \(\lim \limits_{x \to 0^−} f(x)=2, \lim \limits_{x \to 0^+} f(x)=–3, \lim \limits_{x \to 2} f(x)=2, f(0)=4, f(2)=–1, f(–3) \text{ does not exist.}\)
Answers will vary.
16) \(\lim \limits_{x \to 2^−} f(x)=0,\lim \limits_{x \to 2^+} f(x)=–2,\lim \limits_{x \to 0} f(x)=3, f(2)=5, f(0)\)
17) \(\lim \limits_{ x \to 2^−} f(x)=2, \lim \limits_{ x \to 2^+} f(x)=−3, \lim \limits_{x \to 0} f(x)=5, f(0)=1, f(1)=0\)
18) \(\lim \limits_{x \to 3^−} f(x)=0, \lim \limits_{x \to 3^+} f(x)=5, \lim \limits_{x \to 5} f(x)=0, f(5)=4, f(3) \text{ does not exist.}\)
19) \( \lim \limits_{ x \to 4} f(x)=6, \lim \limits_{ x \to 6^+} f(x)=−1, \lim \limits_{ x \to 0} f(x)=5, f(4)=6, f(2)=6\)
20) \( \lim \limits_{ x \to −3} f(x)=2, \lim \limits_{ x \to 1^+} f(x)=−2, \lim \limits_{ x \to 3} f(x)=–4, f(–3)=0, f(0)=0\)
21) \( \lim \limits_{ x \to π} f(x)=π^2, \lim \limits_{ x \to –π} f(x)=\dfrac{π}{2}, \lim \limits_{ x \to 1^-} f(x)=0, f(π)=\sqrt{2}, f(0) \text{ does not exist}.\)
For the exercises 22-26, use a graphing calculator to determine the limit to \(5\) decimal places as \(x\) approaches \(0\).
22) \(f(x)=(1+x)^{\frac{1}{x}}\)
23) \(g(x)=(1+x)^{\frac{2}{x}}\)
\(7.38906\)
24) \(h(x)=(1+x)^{\frac{3}{x}}\)
25) \(i(x)=(1+x)^{\frac{4}{x}}\)
\(54.59815\)
26) \(j(x)=(1+x)^{\frac{5}{x}}\)
27) Based on the pattern you observed in the exercises above, make a conjecture as to the limit of \(f(x)=(1+x)^{\frac{6}{x}}, g(x)=(1+x)^{\frac{7}{x}},\) and \(h(x)=(1+x)^{\frac{n}{x}}.\)
\(e^6≈403.428794,e^7≈1096.633158, e^n\)
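A quick numerical check of this pattern (an illustrative sketch; the exercises above ask for a graphing calculator instead) tabulates \((1+x)^{\frac{1}{x}}\) as \(x\) approaches \(0\) and watches the values settle near \(e≈2.71828\):

    # f(x) = (1 + x)**(1/x) approaches e = 2.71828... as x -> 0 from the right.
    for x in [0.1, 0.01, 0.001, 0.0001, 0.00001]:
        print(x, round((1 + x) ** (1 / x), 5))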
For the exercises 28-29, use a graphing utility to find graphical evidence to determine the left- and right-hand limits of the function given as \(x\) approaches \(a\). If the function has a limit as \(x\) approaches \(a\),state it. If not, discuss why there is no limit.
28) \(f(x)= \begin{cases} |x|−1, && \text{if }x≠1 \\ x^3, && \text{if }x=1 \end{cases} a=1 \)
29) \(f(x)= \begin{cases} \frac{1}{x+1}, && \text{if } x=−2 \\ (x+1)^2, && \text{if } x≠−2 \end{cases} a=−2 \)
\(\lim \limits_{x \to −2} f(x)=1\)
For the exercises 30-38, use numerical evidence to determine whether the limit exists at \(x=a\). If not, describe the behavior of the graph of the function near \(x=a\). Round answers to two decimal places.
30) \(f(x)=\dfrac{x^2−4x}{16−x^2};a=4\)
31) \(f(x)=\dfrac{x^2−x−6}{x^2−9};a=3\)
\(\lim \limits_{x \to 3} \left (\dfrac{x^2−x−6}{x^2−9} \right )=\dfrac{5}{6}≈0.83\)
32) \(f(x)=\dfrac{x^2−6x−7}{x^2– 7x};a=7\)
33) \(f(x)=\dfrac{x^2–1}{x^2–3x+2};a=1\)
\(\lim \limits_{x \to 1} \left (\dfrac{x^2−1}{x^2−3x+2} \right )=−2.00\)
34) \(f(x)=\dfrac{1−x^2}{x^2−3x+2};a=1\)
35) \(f(x)=\dfrac{10−10x^2}{x^2−3x+2};a=1\)
\(\lim \limits_{x \to 1} \left (\dfrac{10−10x^2}{x^2−3x+2} \right )=20.00\)
36) \(f(x)=\dfrac{x}{6x^2−5x−6};a=\dfrac{3}{2}\)
37) \(f(x)=\dfrac{x}{4x^2+4x+1};a=−\dfrac{1}{2}\)
\(\lim \limits_{x \to \frac{−1}{2}} \left (\dfrac{x}{4x^2+4x+1} \right )\) does not exist. Function values decrease without bound as \(x\) approaches \(-0.5\) from either left or right.
38) \(f(x)=\frac{2}{x−4}; a=4\)
For the exercises 39-41, use a calculator to estimate the limit by preparing a table of values. If there is no limit, describe the behavior of the function as \(x\) approaches the given value.
39) \(\lim \limits_{x \to 0} \dfrac{7 \tan x}{3x}\)
\(\lim \limits_{x \to 0} \dfrac{7 \tan x}{3x}=\dfrac{7}{3}\)
40) \(\lim \limits_{x \to 4} \dfrac{x^2}{x−4}\)
41) \(\lim \limits_{x \to 0}\dfrac{2 \sin x}{4 \tan x}\)
\(\lim \limits_{x \to 0} \dfrac{2 \sin x}{4 \tan x}=\dfrac{1}{2}\)
For the exercises 42-49, use a graphing utility to find numerical or graphical evidence to determine the left and right-hand limits of the function given as \(x\) approaches \(a\). If the function has a limit as \(x\) approaches \(a\), state it. If not, discuss why there is no limit.
42) \(\lim \limits_{x \to 0}e^{e^{\frac{1}{x}}}\)
43) \(\lim \limits_{x \to 0}e^{e^{− \frac{1}{x^2}}}\)
\(\lim \limits_{x \to 0}e^{e^{− \frac{1}{x^2}}}=1.0\)
44) \(\lim \limits_{x \to 0} \dfrac{|x|}{x}\)
45) \(\lim \limits_{x \to −1} \dfrac{|x+1|}{x+1}\)
\(\lim \limits_{ x→−1^−}\dfrac{| x+1 |}{x+1}=\dfrac{−(x+1)}{(x+1)}=−1\) and \(\lim \limits_{ x \to −1^+}\dfrac{| x+1 |}{x+1}=\dfrac{(x+1)}{(x+1)}=1\); since the right-hand limit does not equal the left-hand limit, \(\lim \limits_{ x \to −1}\dfrac{|x+1|}{x+1}\) does not exist.
46) \(\lim \limits_{ x \to 5} \dfrac{| x−5 |}{5−x}\)
47) \(\lim \limits_{ x \to −1}\dfrac{1}{(x+1)^2}\)
\(\lim \limits_{ x \to −1} \dfrac{1}{(x+1)^2}\) does not exist. The function increases without bound as \(x\) approaches \(−1\) from either side.
48) \(\lim \limits_{ x \to 1} \dfrac{1}{(x−1)^3}\)
49) \(\lim \limits_{ x \to 0} \dfrac{5}{1−e^{\frac{2}{x}}}\)
\(\lim \limits_{ x \to 0} \dfrac{5}{1−e^{\frac{2}{x}}}\) does not exist. Function values approach \(5\) from the left and approach \(0\) from the right.
50) Use numerical and graphical evidence to compare and contrast the limits of two functions whose formulas appear similar: \(f(x)=\left | \dfrac{1−x}{x} \right |\) and \(g(x)=\left | \dfrac{1+x}{x} \right |\) as \(x\) approaches \(0\). Use a graphing utility, if possible, to determine the left- and right-hand limits of the functions \(f(x)\) and \(g(x)\) as \(x\) approaches \(0\). If the functions have a limit as \(x\) approaches \(0\), state it. If not, discuss why there is no limit.
51) According to the Theory of Relativity, the mass \(m\) of a particle depends on its velocity \(v\). That is
\[m=\dfrac{m_o}{\sqrt{1−(v^2/c^2)}} \nonumber \]
where \(m_o\) is the mass when the particle is at rest and \(c\) is the speed of light. Find the limit of the mass, \(m\), as \(v\) approaches \(c^−.\)
Through examination of the postulates and an understanding of relativistic physics, as \(v→c\), \(m→∞\). Take this one step further to the solution, \[\lim \limits_{v \to c^−}m=\lim \limits_{v \to c^−} \dfrac{m_o}{\sqrt{1−(v^2/c^2)}}=∞ \nonumber \]
52) Allow the speed of light, \(c\), to be equal to \(1.0\). If the mass, \(m\), is \(1\), what occurs to \(m\) as \(v \to c\)? Using the values listed in the Table below, make a conjecture as to what the mass is as \(v\) approaches \(1.00\).
\(v\)
\(m\)
0.999 22.36
0.99999 223.61
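As a check of the table values (an illustrative computation, not part of the original problem statement): with \(c=1.0\) and \(m_o=1\) the formula becomes \(m=\dfrac{1}{\sqrt{1-v^2}}\), so \(m(0.999)=\dfrac{1}{\sqrt{1-0.998001}}≈22.36\) and \(m(0.99999)≈223.61\), matching the table; the mass grows without bound as \(v\) approaches \(1.00\).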
Graphing a function or exploring a table of values to determine a limit can be cumbersome and time-consuming. When possible, it is more efficient to use the properties of limits, which is a collection of theorems for finding limits. Knowing the properties of limits allows us to compute limits directly.
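For example (an illustrative computation, not one of the numbered exercises below), the limit properties combined with factoring give \[\lim \limits_{x \to 2}\dfrac{x^2-4}{x-2}=\lim \limits_{x \to 2}\dfrac{(x-2)(x+2)}{x-2}=\lim \limits_{x \to 2}(x+2)=4, \nonumber \] even though direct substitution alone produces the indeterminate form \(\dfrac{0}{0}\).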
1) Give an example of a type of function \(f\) whose limit, as \(x\) approaches \(a,\) is \(f(a)\).
If \(f\) is a polynomial function, the limit of a polynomial function as \(x\) approaches \(a\) will always be \(f(a)\).
2) When direct substitution is used to evaluate the limit of a rational function as \(x\) approaches \(a\) and the result is \(f(a)=\dfrac{0}{0}\),does this mean that the limit of \(f\) does not exist?
3) What does it mean to say the limit of \(f(x)\), as \(x\) approaches \(c\), is undefined?
It could mean either (1) the values of the function increase or decrease without bound as \(x\) approaches \(c,\) or (2) the left and right-hand limits are not equal.
For the exercises 4-30, evaluate the limits algebraically.
4) \(\lim \limits_{x \to 0} (3)\)
5) \(\lim \limits_{x \to 2} \left (\dfrac{−5x}{x^2−1} \right )\)
\(\dfrac{−10}{3}\)
6) \(\lim \limits_{x \to 2} \left (\dfrac{x^2−5x+6}{x+2} \right )\)
7) \(\lim \limits_{x \to 3} \left (\dfrac{x^2−9}{x−3} \right )\)
8) \(\lim \limits_{x \to −1} \left (\dfrac{x^2−2x−3}{x+1} \right )\)
9) \(\lim \limits_{x \to \frac{3}{2}} \left (\dfrac{6x^2−17x+12}{2x−3} \right )\)
\(\dfrac{1}{2}\)
10) \(\lim \limits_{ x \to −\frac{7}{2}} \left (\dfrac{8x^2+18x−35}{2x+7} \right )\)
11) \(\lim \limits_{ x \to 3} \left (\dfrac{x^2−9}{x^2−5x+6} \right )\)
12) \(\lim \limits_{ x \to −3} \left (\dfrac{−7x^4−21x^3}{−12x^4+108x^2} \right )\)
13) \(\lim \limits_{ x \to 3} \left (\dfrac{x^2+2x−3}{x−3} \right )\)
14) \(\lim \limits_{ h \to 0} \left (\dfrac{(3+h)^3−27}{h} \right )\)
15) \(\lim \limits_{ h \to 0} \left (\dfrac{(2−h)^3−8}{h} \right )\)
\(−12\)
16) \(\lim \limits_{ h \to 0} \left (\dfrac{(h+3)^2−9}{h} \right )\)
17) \(\lim \limits_{ h \to 0} \left (\dfrac{\sqrt{5−h}−\sqrt{5}}{h} \right )\)
\(−\dfrac{\sqrt{5}}{10}\)
18) \(\lim \limits_{ x \to 0} \left (\dfrac{\sqrt{3−x}−\sqrt{3}}{x} \right )\)
19) \(\lim \limits_{ x \to 9} \left (\dfrac{x^2−81}{3−\sqrt{x}} \right )\)
\(−108\)
20) \(\lim \limits_{ x \to 1} \left (\dfrac{\sqrt{x}−x^2}{1−\sqrt{x}} \right )\)
21) \(\lim \limits_{ x \to 0}\left ( \dfrac{x}{\sqrt{1+2x}-1} \right )\)
22) \(\lim \limits_{ x \to \frac{1}{2}} \left (\dfrac{x^2−\tfrac{1}{4}}{2x−1} \right )\)
23) \(\lim \limits_{ x \to 4} \left (\dfrac{x^3−64}{x^2−16} \right )\)
24) \(\lim \limits_{ x \to 2^−} \left (\dfrac{|x−2|}{x−2} \right )\)
25) \(\lim \limits_{ x \to 2^+} \left (\dfrac{| x−2 |}{x−2} \right )\)
26) \(\lim \limits_{ x \to 2} \left (\dfrac{| x−2 |}{x−2} \right )\)
27) \(\lim \limits_{ x \to 4^−} \left (\dfrac{| x−4 |}{4−x} \right )\)
28) \(\lim \limits_{ x \to 4^+} \left (\dfrac{| x−4 |}{4−x} \right )\)
29) \(\lim \limits_{ x \to 4} \left (\dfrac{| x−4 |}{4−x} \right )\)
30) \(\lim \limits_{ x \to 2} \left (\dfrac{−8+6x−x^2}{x−2} \right )\)
For the exercises 31-33, use the given information to evaluate the limits: \(\lim \limits_{x \to c}f(x)=3, \lim \limits_{x \to c} g(x)=5\)
31) \(\lim \limits_{x \to c} [ 2f(x)+\sqrt{g(x)} ]\)
\(6+\sqrt{5}\)
33) \(\lim \limits_{x \to c}\dfrac{f(x)}{g(x)}\)
For the exercises 34-43, evaluate the following limits.
34) \(\lim \limits_{x \to 2} \cos (πx)\)
35) \(\lim \limits_{x \to 2} \sin (πx)\)
36) \(\lim \limits_{x \to 2} \sin \left (\dfrac{π}{x} \right )\)
37) \(f(x)= \begin{cases} 2x^2+2x+1, && x≤0 \\ x−3, && x>0 ; \end{cases} \lim \limits_{x \to 0^+}f(x)\)
\(−3\)
38) \(f(x)= \begin{cases} 2x^2+2x+1, && x≤0 \\ x−3, && x>0 ; \end{cases} \lim \limits_{x \to 0^−} f(x)\)
39) \(f(x)= \begin{cases} 2x^2+2x+1, && x≤0 \\ x−3, && x>0 ; \end{cases} \lim \limits_{x \to 0}f(x)\)
does not exist; right-hand limit is not the same as the left-hand limit.
40) \(\lim \limits_{x \to 4} \dfrac{\sqrt{x+5}−3}{x−4}\)
41) \(\lim \limits_{x \to 2^+} (2x−〚x〛)\)
42) \(\lim \limits_{x \to 2} \dfrac{\sqrt{x+7}−3}{x^2−x−2}\)
43) \(\lim \limits_{x \to 3^+}\dfrac{x^2}{x^2−9}\)
Limit does not exist; limit approaches infinity.
For the exercises 44-53, find the average rate of change\(\dfrac{f(x+h)−f(x)}{h}\).
44) \(f(x)=x+1\)
45) \(f(x)=2x^2−1\)
\(4x+2h\)
46) \(f(x)=x^2+3x+4\)
47) \(f(x)=x^2+4x−100\)
\(2x+h+4\)
48) \(f(x)=3x^2+1\)
49) \(f(x)= \cos (x)\)
\(\dfrac{\cos (x+h)− \cos (x)}{h}\)
50) \(f(x)=2x^3−4x\)
51) \(f(x)=\dfrac{1}{x}\)
\(\dfrac{−1}{x(x+h)}\)
52) \(f(x)=\dfrac{1}{x^2}\)
53) \(f(x)=\sqrt{x}\)
\(\dfrac{1}{\sqrt{x+h}+\sqrt{x}}\)
54) Find an equation that could be represented by the Figure below.
55) Find an equation that could be represented by the Figure below.
\(f(x)=\dfrac{x^2+5x+6}{x+3}\)
For the exercises 56-57, refer to the Figure below.
56) What is the right-hand limit of the function as \(x\) approaches \(0\)?
57) What is the left-hand limit of the function as \(x\) approaches \(0\)?
58) The position function \(s(t)=−16t^2+144t\) gives the position of a projectile as a function of time. Find the average velocity (average rate of change) on the interval \([ 1,2 ]\).
59) The height of a projectile is given by \(s(t)=−64t^2+192t\) Find the average rate of change of the height from \(t=1\) second to \(t=1.5\) seconds.
60) The amount of money in an account after \(t\) years compounded continuously at \(4.25\%\) interest is given by the formula \(A=A_0e^{0.0425t}\),where \(A_0\) is the initial amount invested. Find the average rate of change of the balance of the account from \(t=1\) year to \(t=2\) years if the initial amount invested is \(\$1,000.00.\)
A function that remains level for an interval and then jumps instantaneously to a higher value is called a stepwise function. A function that has any hole or break in its graph is known as a discontinuous function. A stepwise function, such as parking-garage charges as a function of hours parked, is an example of a discontinuous function. We can check three different conditions to decide whether a function is continuous at a particular number.
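As an illustration of the three conditions (an added example, not one of the numbered exercises), consider \[f(x)=\begin{cases} x+1, && x<2 \\ 5, && x=2 \\ x^2-1, && x>2 \end{cases} \nonumber \] Here \(f(2)=5\) is defined and \(\lim \limits_{x \to 2^-}f(x)=3=\lim \limits_{x \to 2^+}f(x)\), so \(\lim \limits_{x \to 2}f(x)=3\) exists, but the limit does not equal \(f(2)\); the third condition fails, and \(f\) is discontinuous at \(x=2\).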
1) State in your own words what it means for a function \(f\) to be continuous at \(x=c\).
Informally, if a function is continuous at \(x=c\), then there is no break in the graph of the function at \(f(c)\), and \(f(c)\) is defined.
2) State in your own words what it means for a function to be continuous on the interval \((a,b)\).
For the exercises 3-22, determine why the function \(f\) is discontinuous at a given point \(a\) on the graph. State which condition fails.
3) \(f(x)=\ln | x+3 |,a=−3\)
discontinuous at \(a=−3\); \(f(−3)\) does not exist
4) \(f(x)= \ln | 5x−2 |,a=\dfrac{2}{5}\)
5) \(f(x)=\dfrac{x^2−16}{x+4},a=−4\)
removable discontinuity at \(a=−4; f(−4)\) is not defined
6) \(f(x)=\dfrac{x^2−16x}{x},a=0\)
7) \(f(x)= \begin{cases} x, && x≠3 \\ 2x, && x=3 \end{cases} a=3\)
Discontinuous at \(a=3; \lim \limits_{x \to 3} f(x)=3,\) but \(f(3)=6,\) which is not equal to the limit.
8) \(f(x) = \begin{cases} 5, &&x≠0 \\ 3, && x=0 \end{cases} a=0\)
9) \(f(x)= \begin{cases} \dfrac{1}{2−x}, && x≠2 \\ 3, &&x=2 \end{cases} a=2\)
\(\lim \limits_{x \to 2}f(x)\) does not exist.
10) \(f(x)= \begin{cases} \dfrac{1}{x+6}, && x=−6 \\ x^2, && x≠−6 \end{cases} a=−6\)
11) \(f(x)=\begin{cases} 3+x, &&x<1 \\ x, &&x=1 \\ x^2, && x>1 \end{cases} a=1\)
\(\lim \limits_{x \to 1^−}f(x)=4;\lim \limits_{x \to 1^+}f(x)=1.\) Therefore, \(\lim \limits_{x \to 1}f(x)\) does not exist.
12) \(f(x)= \begin{cases} 3−x, && x<1 \\ x, && x=1 \\ 2x^2, && x>1 \end{cases} a=1\)
13) \(f(x)= \begin{cases} 3+2x, && x<1 \\ x, && x=1 \\ −x^2, && x>1 \end{cases} a=1\)
\(\lim \limits_{x \to 1^−} f(x)=5≠ \lim \limits_{x \to 1^+}f(x)=−1\). Thus \(\lim \limits_{x \to 1}f(x)\) does not exist.
14) \(f(x)= \begin{cases} x^2, &&x<−2 \\ 2x+1, && x=−2 \\ x^3, && x>−2 \end{cases} a=−2\)
15) \(f(x)= \begin{cases} \dfrac{x^2−9}{x+3}, && x<−3 \\ x−9, && x=−3 \\ \dfrac{1}{x}, && x>−3 \end{cases} a=−3\)
\(\lim \limits_{x \to −3^−}f(x)=−6\) and \(\lim \limits_{x \to −3^+}f(x)=−\dfrac{1}{3}\)
Therefore, \(\lim \limits_{x \to −3} f(x)\) does not exist.
16) \(f(x)= \begin{cases} \dfrac{x^2−9}{x+3}, && x<−3 \\ x−9, && x=−3\\ −6, && x>−3 \end{cases} a=3\)
17) \(f(x)=\dfrac{x^2−4}{x−2}, a=2\)
\(f(2)\) is not defined.
18) \(f(x)=\dfrac{25−x^2}{x^2−10x+25}, a=5\)
19) \(f(x)=\dfrac{x^3−9x}{x^2+11x+24}, a=−3\)
\(f(−3)\) is not defined.
20) \(f(x)=\dfrac{x^3−27}{x^2−3x}, a=3\)
21) \(f(x)=\dfrac{x}{|x|}, a=0\)
22) \(f(x)=\dfrac{2|x+2|}{x+2}, a=−2\)
For the exercises 23-35, determine whether or not the given function \(f\) is continuous everywhere. If it is continuous everywhere it is defined, state for what range it is continuous. If it is discontinuous, state where it is discontinuous.
23) \(f(x)=x^3−2x−15\)
Continuous on \((−∞,∞)\)
24) \(f(x)=\dfrac{x^2−2x−15}{x−5}\)
25) \(f(x)=2⋅3^{x+4}\)
26) \(f(x)=− \sin (3x)\)
27) \(f(x)=\dfrac{|x−2|}{x^2−2x}\)
Discontinuous at \(x=0\) and\(x=2\)
28) \(f(x)= \tan (x)+2\)
29) \(f(x)=2x+\dfrac{5}{x}\)
Discontinuous at \(x=0\)
30) \(f(x)=\log _2 (x)\)
31) \(f(x)= \ln x^2 \)
Continuous on \((0,∞)\)
32) \(f(x)=e^{2x}\)
33) \(f(x)=\sqrt{x−4}\)
Continuous on \([4,∞)\)
34) \(f(x)= \sec (x)−3\)
35) \(f(x)=x^2+ \sin (x)\)
Continuous on \((−∞,∞)\).
36) Determine the values of \(b\) and \(c\) such that the following function is continuous on the entire real number line.
\[f(x)= \begin{cases}x+1, && 1<x<3 \\ x^2+bx+c, &&|x−2|≥1 \end{cases} \nonumber \]
For the exercises 37-39, refer to the Figure below. Each square represents one square unit. For each value of \(a\), determine which of the three conditions of continuity are satisfied at \(x=a\) and which are not.
37) \(x=−3\)
\(1\), but not \(2\) or \(3\)
38) \(x=2\)
\(1\) and \(2\), but not \(3\)
For the exercises 40-43, use a graphing utility to graph the function \(f(x)= \sin \left (\dfrac{12π}{x} \right )\) as in Figure. Set the \(x\)-axis a short distance before and after \(0\) to illustrate the point of discontinuity.
40) Which conditions for continuity fail at the point of discontinuity?
41) Evaluate \(f(0)\).
\(f(0)\) is undefined.
42) Solve for \(x\) if \(f(x)=0\).
43) What is the domain of \(f(x)\)?
\((−∞,0)∪(0,∞)\)
For the exercises 44-45, consider the function shown in the Figure below.
44) At what \(x\)-coordinates is the function discontinuous?
45) What condition of continuity is violated at these points?
At \(x=−1\), the limit does not exist. At \(x=1, f(1)\) does not exist.
At \(x=2\), there appears to be a vertical asymptote, and the limit does not exist.
46) Consider the function shown in the Figure below. At what \(x\)-coordinates is the function discontinuous? What condition(s) of continuity were violated?
47) Construct a function that passes through the origin with a constant slope of \(1\), with removable discontinuities at \(x=−7\) and \(x=1\).
\(\dfrac{x^3+6x^2−7x}{(x+7)(x−1)}\)
48) The function \(f(x)=\dfrac{x^3−1}{x−1}\) is graphed in the Figure below. It appears to be continuous on the interval \([−3,3]\), but there is an \(x\)-value on that interval at which the function is discontinuous. Determine the value of \(x\) at which the function is discontinuous, and explain the pitfall of utilizing technology when considering continuity of a function by examining its graph.
49) Find the limit \(\lim \limits_{ x \to 1}f(x)\) and determine if the following function is continuous at \(x=1\):
\[fx= \begin{cases} x^2+4 && x≠1 \\ 2 && x=1\end{cases} \nonumber \]
The function is discontinuous at \(x=1\) because the limit as \(x\) approaches \(1\) is \(5\) and \(f(1)=2\).
50) The graph of \(f(x)= \dfrac{\sin (2x)}{x}\) is shown in the Figure below. Is the function \(f(x)\) continuous at \(x=0?\) Why or why not?
Change divided by time is one example of a rate. The rates of change in the previous examples are each different. In other words, some changed faster than others. If we were to graph the functions, we could compare the rates by determining the slopes of the graphs.
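For example (an illustrative computation, not one of the numbered exercises), applying the limit definition of the derivative to \(f(x)=x^2\) gives \[f'(x)=\lim \limits_{h \to 0}\dfrac{(x+h)^2-x^2}{h}=\lim \limits_{h \to 0}\dfrac{2xh+h^2}{h}=\lim \limits_{h \to 0}(2x+h)=2x. \nonumber \]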
1) How is the slope of a linear function similar to the derivative?
The slope of a linear function stays the same. The derivative of a general function varies according to \(x\). Both the slope of a line and the derivative at a point measure the rate of change of the function.
2) What is the difference between the average rate of change of a function on the interval \([x,x+h]\) and the derivative of the function at \(x\)?
3) A car traveled \(110\) miles during the time period from 2:00 P.M. to 4:00 P.M. What was the car's average velocity? At exactly 2:30 P.M., the speed of the car registered exactly \(62\) miles per hour. What is another name for the speed of the car at 2:30 P.M.? Why does this speed differ from the average velocity?
Average velocity is \(55\) miles per hour. The instantaneous velocity at 2:30 p.m. is \(62\) miles per hour. The instantaneous velocity measures the velocity of the car at an instant of time whereas the average velocity gives the velocity of the car over an interval.
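The arithmetic behind the first number (shown here for clarity) is \[\text{average velocity}=\dfrac{110 \text{ miles}}{2 \text{ hours}}=55 \text{ miles per hour}. \nonumber \]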
4) Explain the concept of the slope of a curve at point \(x\).
5) Suppose water is flowing into a tank at an average rate of \(45\) gallons per minute. Translate this statement into the language of mathematics.
The average rate of change of the amount of water in the tank is \(45\) gallons per minute. If \(f(x)\) is the function giving the amount of water in the tank at any time \(t\), then the average rate of change of \(f(x)\) between \(t=a\) and \(t=b\) is \(\dfrac{f(b)−f(a)}{b−a}=45\); equivalently, \(f(b)=f(a)+45(b−a)\).
For the exercises 6-17, use the definition of derivative \(\lim \limits_{ h \to 0}\dfrac{f(x+h)-f(x)}{h}\) to calculate the derivative of each function.
6) \(f(x)=3x-4\)
7) \(f(x)=-2x+1\)
\(f'(x)=-2\)
8) \(f(x)=x^2-2x+1\)
9) \(f(x)=2x^2+x-3\)
\(f'(x)=4x+1\)
11) \(f(x)=\dfrac{-1}{x-2}\)
\(f'(x)=\dfrac{1}{(x-2)^2}\)
12) \(f(x)=\dfrac{2+x}{1-x}\)
13) \(f(x)=\dfrac{5-2x}{3+2x}\)
\(\dfrac{-16}{(3+2x)^2}\)
14) \(f(x)=\sqrt{1+3x}\)
15) \(f(x)=3x^3-x^2+2x+5\)
\(f'(x)=9x^2-2x+2\)
16) \(f(x)=5\)
17) \(f(x)=5\pi\)
\(f'(x)=0\)
For the exercises 18-21, find the average rate of change between the two points.
18) \((-2,0)\) and \((-4,5)\)
19) \((4,-3)\) and \((-2,-1)\)
\(-\dfrac{1}{3}\)
20) \((0,5)\) and \((6,5)\)
21) \((7,-2)\) and \((7,10)\)
For the polynomial functions 22-25, find the derivatives.
22) \(f(x)=x^3+1\)
23) \(f(x)=-3x^2-7x+6\)
\(f'(x)=-6x-7\)
24) \(f(x)=7x^2\)
25) \(f(x)=3x^3+2x^2+x-26\)
\(f'(x)=9x^2+4x+1\)
For the functions 26-28, find the equation of the tangent line to the curve at the given point \(x\) on the curve.
26) \(f(x)=2x^2-3x\; \; x=3\)
27) \(f(x)=x^2+1\; \; x=2\)
\(y=12x-15\)
28) \(f(x)=\sqrt{x}\; \; x=9\)
29) For the following exercise, find \(k\) such that the given line is tangent to the graph of the function.
\[f(x)=x^2-kx\; \; y=4x-9 \nonumber \]
\(k=-10\) or \(k=2\)
For the exercises 30-33, consider the graph of the function \(f\) and determine where the function is continuous/discontinuous and differentiable/not differentiable.
Discontinuous at \(x=-2\) and \(x=0\). Not differentiable at \(-2, 0, 2\).
Discontinuous at \(x=5\). Not differentiable at \(-4, -2, 0, 1, 3, 4, 5\).
For the exercises 34-43, use the Figure below to estimate either the function at a given value of \(x\) or the derivative at a given value of \(x\), as indicated.
34) \(f(-1)\)
\(f(0)=-2\)
39) \(f'(-1)\)
\(f'(-1)=9\)
40) \(f'(0)\)
\(f'(1)=-3\)
\(f'(3)=9\)
44) Sketch the function based on the information below:
\[f'(x)=2x, f(2)=4 \nonumber \]
45) Numerically evaluate the derivative. Explore the behavior of the graph of \(f(x)=x^2\) around \(x=1\) by graphing the function on the following domains: \([0.9,1.1], [0.99,1.01], [0.999,1.001], [0.9999, 1.0001]\). We can use the feature on our calculator that automatically sets Ymin and Ymax to the Xmin and Xmax values we preset. (On some of the commonly used graphing calculators, this feature may be called ZOOM FIT or ZOOM AUTO). By examining the corresponding range values for this viewing window, approximate how the curve changes at \(x=1\), that is, approximate the derivative at \(x=1\).
Answers vary. The slope of the tangent line near \(x=1\) is \(2\).
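A short numerical check of the same idea, using a forward difference quotient instead of a graphing calculator (an illustrative Python sketch; the function and step sizes are simply those from the exercise):

def f(x):
    return x ** 2

for h in (0.1, 0.01, 0.001, 0.0001):
    # forward difference quotient (f(1+h) - f(1)) / h
    print(h, (f(1 + h) - f(1)) / h)   # values approach the slope 2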
For the exercises 46-50, explain the notation in words. The volume \(f(t)\) of a tank of gasoline, in gallons, \(t\) minutes after noon.
46) \(f(0)=600\)
47) \(f'(30)=-20\)
At 12:30 p.m., the rate of change of the number of gallons in the tank is \(-20\) gallons per minute. That is, the tank is losing \(20\) gallons per minute.
48) \(f(30)=0\)
49) \(f'(200)=30\)
At \(200\) minutes after noon, the volume of gasoline in the tank is changing at the rate of \(30\) gallons per minute.
50) \(f(240)=500\)
For the exercises 51-55, explain the functions in words. The height, \(s\), of a projectile after \(t\) seconds is given by \(s(t)=-16t^2+80t\).
51) \(s(2)=96\)
The height of the projectile after \(2\) seconds is \(96\) feet.
52) \(s'(2)=16\)
53) \(s(3)=96\)
The height of the projectile at \(t=3\) seconds is \(96\) feet.
54) \(s'(3)=-16\)
55) \(s(0)=0, s(5)=0\)
The height of the projectile is zero at \(t=0\) and again at \(t=5\). In other words, the projectile starts on the ground and falls to earth again after \(5\) seconds.
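For reference, the velocity values quoted in exercises 52 and 54 follow from differentiating the height function (a worked sketch):
\[ s'(t)=-32t+80,\qquad s'(2)=-32(2)+80=16 \text{ ft/s},\qquad s'(3)=-32(3)+80=-16 \text{ ft/s} \nonumber \]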
For the exercises 56-57, the volume \(V\) of a sphere with respect to its radius \(r\) is given by \(V=\dfrac{4}{3}\pi r^3\).
56) Find the average rate of change of \(V\) as \(r\) changes from \(1\) cm to \(2\) cm.
57) Find the instantaneous rate of change of \(V\) when \(r=3\) cm.
\(36\pi \)
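A worked sketch of this computation:
\[ V'(r)=\dfrac{d}{dr}\left(\dfrac{4}{3}\pi r^3\right)=4\pi r^2,\qquad V'(3)=4\pi (3)^2=36\pi \text{ cm}^3 \text{ per cm} \nonumber \]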
For the exercises 58-60, the revenue generated by selling \(x\) items is given by \(R(x)=2x^2+10x\).
58) Find the average change of the revenue function as \(x\) changes from \(x=10\) to \(x=20\).
59) Find \(R'(10)\) and interpret.
\(\$50.00\) per unit, which is the instantaneous rate of change of revenue when exactly \(10\) units are sold.
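A worked sketch behind this value:
\[ R'(x)=4x+10,\qquad R'(10)=4(10)+10=50 \nonumber \]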
60) Find \(R'(15)\) and interpret. Compare \(R'(15)\) to \(R'(10)\), and explain the difference.
For the exercises 61-63, the cost of producing \(x\) cellphones is described by the function \(C(x)=x^2-4x+1000\).
61) Find the average rate of change in the total cost as \(x\) changes from \(x=10\) to \(x=15\).
\(\$21\) per unit
62) Find the approximate marginal cost, when \(15\) cellphones have been produced, of producing the \(16^{th}\) cellphone.
63) Find the approximate marginal cost, when \(20\) cellphones have been produced, of producing the \(21^{st}\) cellphone.
\(\$36\)
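A worked sketch behind this value (the marginal cost is approximated by the derivative of the cost function):
\[ C'(x)=2x-4,\qquad C'(20)=2(20)-4=36 \nonumber \]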
For the exercises 64-67, use the definition for the derivative at a point \(x=a\), \(\lim \limits_{x \to a}\dfrac{f(x)-f(a)}{x-a}\), to find the derivative of the functions.
65) \(f(x)=5x^2-x+4\)
\(f'(a)=10a-1\)
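One way to obtain this from the point definition (a worked sketch):
\[ f'(a)=\lim \limits_{x \to a}\dfrac{(5x^2-x+4)-(5a^2-a+4)}{x-a}=\lim \limits_{x \to a}\dfrac{5(x-a)(x+a)-(x-a)}{x-a}=\lim \limits_{x \to a}\bigl(5(x+a)-1\bigr)=10a-1 \nonumber \]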
66) \(f(x)=-x^2+4x+7\)
67) \(f(x)=\dfrac{-4}{x-3}\)
\(\dfrac{4}{(3-x)^2}\)
Jay Abramson (Arizona State University) with contributing authors. Textbook content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at https://openstax.org/details/books/precalculus.
12.R: Introduction to Calculus (Review)
Jay Abramson
| CommonCrawl
Title: Expected value
Subject: Bias of an estimator, Poisson distribution, Cauchy distribution, Optimal design, Glossary of poker terms
In probability theory, the expected value of a random variable is intuitively the long-run average value of repetitions of the experiment it represents. For example, the expected value of a die roll is 3.5 because, roughly speaking, the average of an extremely large number of dice rolls is practically always nearly equal to 3.5. Less roughly, the law of large numbers guarantees that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions goes to infinity. The expected value is also known as the expectation, mathematical expectation, EV, mean, or first moment.
More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values. In other words, each possible value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed to produce the expected value. The same works for continuous random variables, except the sum is replaced by an integral and the probabilities by probability densities. The formal definition subsumes both of these and also works for distributions which are neither discrete nor continuous: the expected value of a random variable is the integral of the random variable with respect to its probability measure. [1][2]
The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution.[3] For random variables such as these, the long-tails of the distribution prevent the sum/integral from converging.
The expected value is a key aspect of how one characterizes a probability distribution; it is one type of location parameter. By contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value.
The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a "good" estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data, so the estimate it gives is itself a random variable. A formula is typically considered good in this context if it is an unbiased estimator—that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.
In decision theory, choices under uncertainty are often modelled as maximizing the expected value of a von Neumann-Morgenstern utility function.
Univariate discrete random variable, finite case
Suppose random variable X can take value x1 with probability p1, value x2 with probability p2, and so on, up to value xk with probability pk. Then the expectation of this random variable X is defined as
\operatorname{E}[X] = x_1p_1 + x_2p_2 + \dotsb + x_kp_k \;.
Since all probabilities pi add up to one (p1 + p2 + ... + pk = 1), the expected value can be viewed as the weighted average, with pi's being the weights:
\operatorname{E}[X] = \frac{x_1p_1 + x_2p_2 + \dotsb + x_kp_k}{1} = \frac{x_1p_1 + x_2p_2 + \dotsb + x_kp_k}{p_1 + p_2 + \dotsb + p_k}\;.
If all outcomes xi are equally likely (that is, p1 = p2 = ... = pk), then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes xi are not equally probable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than the others. The intuition however remains the same: the expected value of X is what one expects to happen on average.
An illustration of the convergence of sequence averages of rolls of a die to the expected value of 3.5 as the number of rolls (trials) grows.
Example 1. Let X represent the outcome of a roll of a fair six-sided die. More specifically, X will be the number of pips showing on the top face of the die after the toss. The possible values for X are 1, 2, 3, 4, 5, and 6, all equally likely (each having the probability of 1/6). The expectation of X is
\operatorname{E}[X] = 1\cdot\frac16 + 2\cdot\frac16 + 3\cdot\frac16 + 4\cdot\frac16 + 5\cdot\frac16 + 6\cdot\frac16 = 3.5.
If one rolls the die n times and computes the average (arithmetic mean) of the results, then as n grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers. One example sequence of ten rolls of the die is 2, 3, 1, 2, 5, 6, 2, 2, 2, 6, which has the average of 3.1, with the distance of 0.4 from the expected value of 3.5. The convergence is relatively slow: the probability that the average falls within the range 3.5 ± 0.1 is 21.6% for ten rolls, 46.1% for a hundred rolls and 93.7% for a thousand rolls. See the figure for an illustration of the averages of longer sequences of rolls of the die and how they converge to the expected value of 3.5. More generally, the rate of convergence can be roughly quantified by e.g. Chebyshev's inequality and the Berry-Esseen theorem.
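An illustrative simulation of this convergence (a minimal Python sketch; the seed and sample sizes below are arbitrary choices made only for the demonstration):

import random

random.seed(0)
for n in (10, 100, 1_000, 10_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    # the running average drifts toward the expected value 3.5 as n grows
    print(n, sum(rolls) / n)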
Example 2. The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable X represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be
\operatorname{E}[\,\text{gain from }$1\text{ bet}\,] = -$1 \cdot \frac{37}{38}\ +\ $35 \cdot \frac{1}{38} = -$0.0526.
Univariate discrete random variable, countable case
Let X be a discrete random variable taking values x1, x2, ... with probabilities p1, p2, ... respectively. Then the expected value of this random variable is the infinite sum
\operatorname{E}[X] = \sum_{i=1}^\infty x_i\, p_i,
provided that this series converges absolutely (that is, the sum must remain finite if we were to replace all the xi's with their absolute values). If this series does not converge absolutely, we say that the expected value of X does not exist.
For example, suppose random variable X takes values 1, −2, 3, −4, ..., with respective probabilities c/1², c/2², c/3², c/4², ..., where c = 6/π² is a normalizing constant that ensures the probabilities sum up to one. Then the infinite sum
\sum_{i=1}^\infty x_i\,p_i = c\,\bigg( 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dotsb \bigg)
converges and its sum is equal to c ln(2) = (6/π²) ln(2) ≈ 0.42138. However it would be incorrect to claim that the expected value of X is equal to this number—in fact E[X] does not exist, as this series does not converge absolutely (see harmonic series).
Univariate continuous random variable
If the probability distribution of X admits a probability density function f(x), then the expected value can be computed as
\operatorname{E}[X] = \int_{-\infty}^\infty x f(x)\, \mathrm{d}x .
In general, if X is a random variable defined on a probability space (Ω, Σ, P), then the expected value of X, denoted by E[X] or ⟨X⟩, is defined as the Lebesgue integral
\operatorname{E} [X] = \int_\Omega X \, \mathrm{d}P = \int_\Omega X(\omega) P(\mathrm{d}\omega)
When this integral exists, it is defined as the expectation of X. Note that not all random variables have a finite expected value, since the integral may not converge absolutely; furthermore, for some it is not defined at all (e.g., Cauchy distribution). Two variables with the same probability distribution will have the same expected value, if it is defined.
It follows directly from the discrete case definition that if X is a constant random variable, i.e. X = b for some fixed real number b, then the expected value of X is also b.
The expected value of a measurable function of X, g(X), given that X has a probability density function f(x), is given by the inner product of f and g:
\operatorname{E}[g(X)] = \int_{-\infty}^\infty g(x) f(x)\, \mathrm{d}x .
This is sometimes called the law of the unconscious statistician. Using representations as Riemann–Stieltjes integral and integration by parts the formula can be restated as
\operatorname{E}[g(X)] = \int_a^\infty g(x) \, \mathrm{d} \mathrm{P}(X \le x)= \begin{cases} g(a)+ \int_a^\infty g'(x)\mathrm{P}(X > x) \, \mathrm{d} x & \mathrm{if}\ \mathrm{P}(g(X) \ge g(a))=1 \\ g(b) - \int_{-\infty}^b g'(x)\mathrm{P}(X \le x) \, \mathrm{d} x & \mathrm{if}\ \mathrm{P}(g(X) \le g(b))=1. \end{cases}
As a special case let α denote a positive real number. Then
\operatorname{E}\left [\left|X \right|^\alpha \right ] = \alpha \int_{0}^{\infty} t^{\alpha -1}\mathrm{P}(\left|X \right|>t) \, \mathrm{d}t.
In particular, if α = 1 and Pr[X ≥ 0] = 1, then this reduces to
\operatorname{E}[|X|] = \operatorname{E}[X] = \int_0^\infty \lbrace 1-F(t) \rbrace \, \mathrm{d}t,
where F is the cumulative distribution function of X. This last identity is an instance of what, in a non-probabilistic setting, has been called the layer cake representation.
The law of the unconscious statistician applies also to a measurable function g of several random variables X1, ... Xn having a joint density f:[4] [5]
\operatorname{E}[g(X_1,\dots,X_n)] = \int_{-\infty}^\infty\cdots \int_{-\infty}^\infty g(x_1,\cdots,x_n)~f(x_1,\cdots,x_n)~\mathrm{d}x_1\cdots \mathrm{d}x_n .
The expected value of a constant is equal to the constant itself; i.e., if c is a constant, then E[c] = c.
If X and Y are random variables such that X ≤ Y almost surely, then E[X] ≤ E[Y].
The expected value operator (or expectation operator) E is linear in the sense that
\begin{align} \operatorname{E}[X + c] &= \operatorname{E}[X] + c \\ \operatorname{E}[X + Y] &= \operatorname{E}[X] + \operatorname{E}[Y] \\ \operatorname{E}[aX] &= a \operatorname{E}[X] \end{align}
Note that the second result is valid even if X is not statistically independent of Y. Combining the results from previous three equations, we can see that
\operatorname{E}[a X + b Y + c] = a \operatorname{E}[X] + b \operatorname{E}[Y] + c\,
for any two random variables X and Y (which need to be defined on the same probability space) and any real numbers a, b and c.
Iterated expectation
Iterated expectation for discrete random variables
For any two discrete random variables X, Y one may define the conditional expectation:[6]
\operatorname{E}[X|Y=y] = \sum\limits_x x \cdot \operatorname{P}(X=x|Y=y).
which means that E[X|Y = y] is a function of y. Let g(y) be that function of y; the notation E[X|Y] then denotes a random variable in its own right, equal to g(Y).
Lemma. Then the expectation of X satisfies:[7]
\operatorname{E}[X] = \operatorname{E}\left[ \operatorname{E}[X|Y] \right].
\begin{align} \operatorname{E}\left[ \operatorname{E}[X|Y] \right] &= \sum\limits_y \operatorname{E}[X|Y=y] \cdot \operatorname{P}(Y=y) \\ &=\sum\limits_y \left( \sum\limits_x x \cdot \operatorname{P}(X=x|Y=y) \right) \cdot \operatorname{P}(Y=y)\\ &=\sum\limits_y \sum\limits_x x \cdot \operatorname{P}(X=x|Y=y) \cdot \operatorname{P}(Y=y)\\ &=\sum\limits_y \sum\limits_x x \cdot \operatorname{P}(Y=y|X=x) \cdot \operatorname{P}(X=x) \\ &=\sum\limits_x x \cdot \operatorname{P}(X=x) \cdot \left( \sum\limits_y \operatorname{P}(Y=y|X=x) \right) \\ &=\sum\limits_x x \cdot \operatorname{P}(X=x) \\ &=\operatorname{E}[X] \end{align}
The left-hand side of this equation is referred to as the iterated expectation. The equation is sometimes called the tower rule or the tower property; it is treated under law of total expectation.
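A toy numerical check of the tower rule (a Python sketch; the joint probability mass function below is made up purely for the demonstration):

# P(X = x, Y = y) for a small made-up joint distribution
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

e_x = sum(x * p for (x, _y), p in joint.items())          # E[X]

p_y = {}                                                   # marginal P(Y = y)
for (_x, y), p in joint.items():
    p_y[y] = p_y.get(y, 0.0) + p

e_x_given_y = {y: sum(x * p for (x, yy), p in joint.items() if yy == y) / p_y[y]
               for y in p_y}                               # E[X | Y = y]
iterated = sum(e_x_given_y[y] * p_y[y] for y in p_y)       # E[E[X | Y]]

print(e_x, iterated)   # both are approximately 0.7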
Iterated expectation for continuous random variables
In the continuous case, the results are completely analogous. The definition of conditional expectation would use inequalities, density functions, and integrals to replace equalities, mass functions, and summations, respectively. However, the main result still holds:
\operatorname{E}[X] = \operatorname{E}[\operatorname{E}[X|Y]]
If a random variable X is always less than or equal to another random variable Y, the expectation of X is less than or equal to that of Y:
If X ≤ Y, then E[X] ≤ E[Y].
In particular, if we set Y to |X| we know X ≤ Y and −X ≤ Y. Therefore we know E[X] ≤ E[Y] and E[−X] ≤ E[Y]. From the linearity of expectation we know −E[X] ≤ E[Y]. Therefore the absolute value of expectation of a random variable is less than or equal to the expectation of its absolute value:
|\operatorname{E}[X]| \leq \operatorname{E}[|X|]
Non-multiplicativity
If one considers the joint probability density function of X and Y, say j(x,y), then the expectation of XY is
\operatorname{E}[XY] = \iint xy \, j(x,y)\,\mathrm{d}x\,\mathrm{d}y.
In general, the expected value operator is not multiplicative, i.e. E[XY] is not necessarily equal to E[X]·E[Y]. In fact, the amount by which multiplicativity fails is called the covariance:
\operatorname{Cov}(X,Y)=\operatorname{E}[XY]-\operatorname{E}[X]\operatorname{E}[Y].
Thus multiplicativity holds precisely when Cov(X, Y) = 0, in which case X and Y are said to be uncorrelated (independent variables are a notable case of uncorrelated variables).
Now if X and Y are independent, then by definition j(x,y) = f(x)g(y) where f and g are the marginal PDFs for X and Y. Then
\begin{align} \operatorname{E}[XY] &= \iint xy \,j(x,y)\,\mathrm{d}x\,\mathrm{d}y = \iint x y f(x) g(y)\,\mathrm{d}y\,\mathrm{d}x \\ & = \left[\int x f(x)\,\mathrm{d}x\right]\left[\int y g(y)\,\mathrm{d}y\right] = \operatorname{E}[X]\operatorname{E}[Y] \end{align}
and Cov(X, Y) = 0.
Observe that independence of X and Y is required only to write j(x, y) = f(x)g(y), and this is required to establish the second equality above. The third equality follows from a basic application of the Fubini-Tonelli theorem.
Functional non-invariance
In general, the expectation operator and functions of random variables do not commute; that is
\operatorname{E}[g(X)] = \int_{\Omega} g(X)\, \mathrm{d}\mathrm{P} \neq g(\operatorname{E}[X]),
A notable inequality concerning this topic is Jensen's inequality, involving expected values of convex (or concave) functions.
It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. \operatorname{P}({X \in \mathcal{A}}) = \operatorname{E}[I_{\mathcal{A}}(X)] where I_{\mathcal{A}}(X) is the indicator function for set \mathcal{A}, i.e. X \in \mathcal{A} \rightarrow I_{\mathcal{A}}(X)= 1, X \not \in \mathcal{A} \rightarrow I_{\mathcal{A}}(X)= 0 .
The mass of probability distribution is balanced at the expected value, here a Beta(α,β) distribution with expected value α/(α+β).
In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].
Expected values can also be used to compute the variance, by means of the computational formula for the variance
\operatorname{Var}(X)= \operatorname{E}[X^2] - (\operatorname{E}[X])^2.
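A quick numerical illustration of this formula for a fair six-sided die (a Python sketch):

values = range(1, 7)
e_x = sum(v / 6 for v in values)        # E[X]   = 3.5
e_x2 = sum(v * v / 6 for v in values)   # E[X^2] = 91/6
print(e_x2 - e_x ** 2)                  # Var(X) = 35/12 ≈ 2.9167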
A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator \hat{A} operating on a quantum state vector |\psi\rangle is written as \langle\hat{A}\rangle = \langle\psi|\hat{A}|\psi\rangle. The uncertainty in \hat{A} can be calculated using the formula (\Delta A)^2 = \langle\hat{A}^2\rangle - \langle \hat{A} \rangle^2.
Expectation of matrices
If X is an m × n matrix, then the expected value of the matrix is defined as the matrix of expected values:
\operatorname{E}[X] = \operatorname{E} \left [\begin{pmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,n} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m,1} & x_{m,2} & \cdots & x_{m,n} \end{pmatrix} \right ] = \begin{pmatrix} \operatorname{E}[x_{1,1}] & \operatorname{E}[x_{1,2}] & \cdots & \operatorname{E}[x_{1,n}] \\ \operatorname{E}[x_{2,1}] & \operatorname{E}[x_{2,2}] & \cdots & \operatorname{E}[x_{2,n}] \\ \vdots & \vdots & \ddots & \vdots \\ \operatorname{E}[x_{m,1}] & \operatorname{E}[x_{m,2}] & \cdots & \operatorname{E}[x_{m,n}] \end{pmatrix}.
This is utilized in covariance matrices.
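An illustrative sketch of the elementwise definition, estimating the expectation of a random 2×2 matrix from samples (assumes NumPy; the particular means chosen are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
means = np.array([[0.0, 1.0], [2.0, 3.0]])
# 10,000 samples of a 2x2 matrix whose entries are normal with the given means
samples = rng.normal(loc=means, scale=1.0, size=(10_000, 2, 2))
print(samples.mean(axis=0))   # elementwise sample means approach `means`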
Formulas for special cases
Discrete distribution taking only non-negative integer values
When a random variable takes only values in {0, 1, 2, 3, ...} we can use the following formula for computing its expectation (even when the expectation is infinite):
\operatorname{E}[X]=\sum\limits_{i=1}^\infty P(X\geq i).
\sum\limits_{i=1}^\infty \mathrm{P}(X\geq i) = \sum\limits_{i=1}^\infty \sum\limits_{j=i}^\infty P(X = j).
Interchanging the order of summation, we have
\begin{align} \sum\limits_{i=1}^\infty \sum\limits_{j=i}^\infty P(X = j) &=\sum\limits_{j=1}^\infty \sum\limits_{i=1}^j P(X = j)\\ &=\sum\limits_{j=1}^\infty j\, P(X = j)\\ &=\operatorname{E}[X]. \end{align}
This result can be a useful computational shortcut. For example, suppose we toss a coin where the probability of heads is p. How many tosses can we expect until the first heads (not including the heads itself)? Let X be this number. Note that we are counting only the tails and not the heads which ends the experiment; in particular, we can have X = 0. The expectation of X may be computed by \sum_{i= 1}^\infty (1-p)^i=\frac{1}{p}-1 . This is because, when the first i tosses yield tails, the number of tosses is at least i. The last equality used the formula for a geometric progression, \sum_{i=1}^\infty r^i=\frac{r}{1-r}, where r = 1−p.
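A simulation sketch of this computation (Python; the value of p and the number of trials are arbitrary choices for the demonstration):

import random

random.seed(1)
p = 0.3
trials = 100_000
total = 0
for _ in range(trials):
    tails = 0
    while random.random() >= p:   # each failure counts as one tail
        tails += 1
    total += tails
print(total / trials, 1 / p - 1)  # the sample mean is close to 1/p - 1 ≈ 2.33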
Continuous distribution taking non-negative values
Analogously with the discrete case above, when a continuous random variable X takes only non-negative values, we can use the following formula for computing its expectation (even when the expectation is infinite):
\operatorname{E}[X]=\int_0^\infty P(X \ge x)\; \mathrm{d}x
Proof: It is first assumed that X has a density fX(x). We present two techniques:
Using integration by parts (a special case of Section 1.4 above):
\operatorname{E}[X] = \int_0^\infty (-x)(-f_X(x))\;\mathrm{d}x = \left[ -x(1 - F(x)) \right]_0^\infty + \int_0^\infty (1 - F(x))\;\mathrm{d}x
and the bracket vanishes because (see Cumulative distribution function#Derived functions)
1-F(x) = o\left(\frac{1}{x}\right) as x \rightarrow \infty.
Using an interchange in order of integration:
\int_0^\infty \! \mathrm{P}(X\ge x)\;\mathrm{d}x =\int_0^\infty \int_x^\infty f_X(t)\;\mathrm{d}t\;\mathrm{d}x = \int_0^\infty \int_0^t f_X(t)\;\mathrm{d}x\;\mathrm{d}t = \int_0^\infty t f_X(t)\;\mathrm{d}t = \operatorname{E}[X]
In case no density exists, it is seen that
\operatorname{E}[X] = \int_0^\infty \int_0^x \! \mathrm{d}t \, \mathrm{d}F(x) = \int_0^\infty \int_t^\infty \! \mathrm{d}F(x)\mathrm{d}t = \int_0^\infty \! (1-F(t))\,\mathrm{d}t.
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it's properly finished. This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed in 1654 to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré. de Méré claimed that this problem couldn't be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in a now famous series of letters to Pierre de Fermat. Soon enough they both independently came up with a solution. They solved the problem in different computational ways but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution and this in turn made them absolutely convinced they had solved the problem conclusively. However, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[8]
Three years later, in 1657, a Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory. In this book he considered the problem of points and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens also extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players). In this sense this book can be seen as the first successful attempt of laying down the foundations of the theory of probability.
In the foreword to his book, Huygens wrote: "It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs." (cited by Edwards (2002)). Thus, Huygens learned about de Méré's Problem in 1655 during his visit to France; later on in 1656 from his correspondence with Carcavi he learned that his method was essentially the same as Pascal's; so that before his book went to press in 1657 he knew about Pascal's priority in this subject.
Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes: "That my Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure me in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal Chance of gaining them, my Expectation is worth (a+b)/2." More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:
… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.
The use of the letter E to denote expected value goes back to W.A. Whitworth in 1901,[9] who used a script E. The symbol has become popular since for English writers it meant "Expectation", for Germans "Erwartungswert", and for French "Espérance mathématique".[10]
Central tendency
Chebyshev's inequality (an inequality on location and scale parameters)
Conditional expectation
Expected value is also a key concept in economics, finance, and many other subjects
The general term expectation
Expectation value (quantum mechanics)
Moment (mathematics)
Nonlinear expectation a generalization of the expected value
Wald's equation for calculating the expected value of a random number of random variables
^ Sheldon M Ross (2007). "§2.4 Expectation of a random variable". Introduction to probability models (9th ed.). Academic Press. p. 38 ff.
^ Expectation Value, retrieved October 2013
^ Papoulis, A. (1984), Probability, Random Variables, and Stochastic Processes, New York: McGraw-Hill, pp. 139–152
^ Sheldon M Ross. "Chapter 3: Conditional probability and conditional expectation". cited work. p. 97 ff.
^ Sheldon M Ross. "§3.4: Computing expectations by conditioning". cited work. p. 105 ff.
^ "Ore, Pascal and the Invention of Probability Theory". The American Mathematical Monthly 67 (5): 409–419. 1960.
^ Whitworth, W.A. (1901) Choice and Chance with One Thousand Exercises. Fifth edition. Deighton Bell, Cambridge. [Reprinted by Hafner Publishing Co., New York, 1959.]
^ "Earliest uses of symbols in probability and statistics".
Edwards, A.W.F (2002). Pascal's arithmetical triangle: the story of a mathematical idea (2nd ed.). JHU Press.
Huygens, Christiaan (1657). De ratiociniis in ludo aleæ (English translation published in 1714).
| CommonCrawl
Positive solution for quasilinear Schrödinger equations with a parameter
GUANGBING LI 1,
Business School of Hunan University, Changsha, Hunan 410082, China
Received September 2014 Revised February 2015 Published June 2015
In this paper, we study quasilinear Schrödinger equations of the form \begin{eqnarray} -\Delta u+V(x)u-[\Delta(1+u^2)^{\alpha/2}]\frac{\alpha u}{2(1+u^2)^{(2-\alpha)/2}}=\mathrm{g}(x,u), \end{eqnarray} where $1 \le \alpha \le 2$, $N \ge 3$, $V\in C(R^N, R)$ and $\mathrm{g}\in C(R^N\times R, R)$. By means of a change of variables, we obtain new equations whose associated functionals are well defined in $H^1(R^N)$ and satisfy the geometric hypotheses of the mountain pass theorem. Using special techniques, the existence of positive solutions is established.
Keywords: mountain pass theorem, positive solution, quasilinear Schrödinger equation.
Mathematics Subject Classification: 35J20, 35J60, 35Q55.
Citation: GUANGBING LI. Positive solution for quasilinear Schrödinger equations with a parameter. Communications on Pure & Applied Analysis, 2015, 14 (5) : 1803-1816. doi: 10.3934/cpaa.2015.14.1803
| CommonCrawl
Template talk:Euro topics
WikiProject Numismatics
(Rated Template-class)
This template is within the scope of WikiProject Numismatics, a collaborative effort to improve the coverage of numismatics and currencies on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Template This template does not require a rating on the project's quality scale.
WikiProject European Union
European Union portal
This template is within the scope of WikiProject European Union, a collaborative effort to improve the coverage of the European Union on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
The following discussion was copied from Template talk:EU coins menu, which now redirects here.
This seems completely redundant with Template:Eurocoins--I really can't see why we need both, especially in the same articles--Cypriot euro coins, etc.. 24.17.48.241 15:58, 22 October 2005 (UTC)
I agree, and propose we remove the Eurocoins template. The menu template is less ugly. ナイトスタリオン ✉ 20:46, 22 October 2005 (UTC)
I don't know about "ugly", but I would be inclined to agree, as I certainly would say the right-column format seems less intrusive, and more readily accessible. 24.17.48.241 04:42, 24 October 2005 (UTC)
I meant "ugly" as in "messy as far as layout and design are concerned". 't seems we agree on this, then? ナイトスタリオン ✉ 10:13, 24 October 2005 (UTC)
Image
As the template is growing more and more I propose excluding the image from this template. --Dima1 (talk) 15:35, 2 May 2008 (UTC)
No any comments? --Dima1 (talk) 05:25, 12 May 2008 (UTC)
I personaly still like it, but I have high resolution displays, so it shows perfect for me. Miguel.mateo (talk) 09:17, 12 May 2008 (UTC)
Template is very big to have images inside. I insist on removing it. Any arguments? --Dima1 (talk) 13:22, 14 July 2008 (UTC)
Dima, we have talked about this, why you want it removed? It does look nice on my computer. What resolution do you have? Miguel.mateo (talk) 22:18, 14 July 2008 (UTC)
Removing the image is not going to change the fact that the template is large. If anything, the image serves to provide overflow space so that the template is less cramped and easier to read. Should we have a vote? Cheers. The € • T/C 23:45, 14 July 2008 (UTC)
Dima1, it would be helpful if we could finish our discussion before unilateral action is taken on your part. Both Miguel.mateo and myself have concerns about executing the changes you are proposing. By my count, there are three outstanding questions in this discussion that require address before action is taken. Please show some restraint in this matter. Cheers. The € • T/C 18:55, 15 July 2008 (UTC)
Do you agree that template is rather big? I understand that this picture is very beautiful, but we have to take into consideration readability and size of the articles. If you don't agree we can vote on it. --Dima1 (talk) 19:22, 15 July 2008 (UTC)
IMO the template looks better with the image, i think without it, it would look dull - however i suggest that the image should be elongated to touch both the top and bottom ends--Melitikus (talk) 19:36, 15 July 2008 (UTC)
I agree with Melitikus and Theeuro. —Nightstallion 21:53, 15 July 2008 (UTC)
Guys, let me go back to my original question: what resolution are you using? I use at home 1200x1024 and at work 1600x1200 and the image looks great in the template on both computers. If you have smaller resolution (which is not normal now a days but still possible) you may think that you need to enlarge the image or that the template is huge, but I honestly believe this is not the case, and even after I reduce my resolution, removing the image does not help to improve the template, as Theeuro mentioned. Miguel.mateo (talk) 22:35, 15 July 2008 (UTC)
I am using 1024x768, but the question is not about resolutions but about the template size; when removing image template becomes smaller. Should we use it anyway, and the second thing is if the image is so much connected with the template topic that you all want to leave it. How do you think? --Dima1 (talk) 10:30, 17 July 2008 (UTC)
Note on Sweden, UK Denmark
I'd like to include a small note at the bottom of the template on the Non-Euro EU members.
Note: EU members Denmark, Sweden and the UK currently maintain their national currencies.
Any objections?Seabhcán 12:17, 27 January 2006 (UTC)
Support, if you use UK instead. ;) —Nightstallion (?) 13:02, 27 January 2006 (UTC)
I just went ahead and added a (slightly modified) note. —Nightstallion (?) 02:16, 29 January 2006 (UTC)
Montenegro and Kosovo
In Montenegro, an independent country, euro € is used, and Kosovo, from Serbia
Indeed, but as neither Kosovo nor Montenegro will be able to mint their own coins until they join the Union and then the eurozone, it's irrelevant for now. —Nightstallion (?) 17:50, 30 October 2006 (UTC)
So I can't stop wondering, what kind (in terms of national) of coins do they use? --ChoChoPK (球球PK) (talk | contrib) 23:37, 30 October 2006 (UTC)
Good question. Before the euro, the Balkans mostly relied on the German mark, but I don't think that would make the likelihood of encountering German euro coins any higher... —Nightstallion (?) 22:30, 13 November 2006 (UTC)
If the coins don't exist, why have an article about them [1] - there was not a Kosovan mark coins article. --Rumping (talk) 08:16, 19 November 2007 (UTC)
I feel its better to leave Kosovo as 'Misc'. We should avoid disputed about the status of that place (independent nation vs serbian province).Thewikipedian (talk) 12:16, 25 February 2008 (UTC)
Template position
Currently, this template is located at the right of the screen; this has caused problems to users with 1024 pixel wide screen (or less) on denomination articles (1 cent ~ 2 euro). I suggest making this template horizontal and place at the bottom.
If I were to take one step further, I would make a "euro related topic" template, where things like "Eurozone", "Currencies related to the euro", "ERM" would all be in that box. We could also place "coins by country" and "coins by denomination" there too. --ChoChoPK (球球PK) (talk | contrib) 01:20, 26 December 2006 (UTC)
Mh. I prefer it the way it currently is, since it's rather the "related articles" type of template (confer {{Politics of Slovenia}}) than the "see also" type of template (confer {{NATO}}). What did you have in mind? —Nightstallion (?) 12:16, 30 December 2006 (UTC)
But how do you address the problem of spacing? These articles have huge space to accommodated roughly 60% of the internet users. I'm sorry to say this, but the result is ugly for 100% of the users, regardless of screen resolution. --ChoChoPK (球球PK) (talk | contrib) 10:17, 2 January 2007 (UTC)
Mh. Could you show me what kind of change you had in mind? I'm not against changing it out of principle, but I'd like to see first whether we can keep its visual pleasantness. Despite its problems, I really like the way the template currently looks... —Nightstallion (?) 17:36, 2 January 2007 (UTC)
User:Chochopk/Template sandbox 1. --ChoChoPK (球球PK) (talk | contrib) 10:13, 3 January 2007 (UTC)
Mh. Three requests:
Make it "pre-euro" instead of "pre euro".
Keep the distinction between the new member states which will adopt it earlier or later and Andorra/Denmark/Sweden/UK.
Keep the short text at the bottom on the situation in Denmark/Sweden/UK.
Apart from that, I think I'm sold. :) —Nightstallion (?) 13:47, 3 January 2007 (UTC)
Just out of curiosity, when will you make the change? —Nightstallion (?) 09:21, 13 January 2007 (UTC)
Please allow some time. Very busy at wiki and in real life. I aim to have a complete draft for you to review by the end of this weekend. --ChoChoPK (球球PK) (talk | contrib) 15:29, 13 January 2007 (UTC)
Sorry, I did not mean to bother or annoy you. Great work! —Nightstallion (?) 12:29, 15 January 2007 (UTC)
Copied talk content ends here.
Forgive me, but this template is awful, it is far too big, and with far too many topics to be a proper navigational aid. The only relevant section of it to the title is the first, the general topics related to the Euro and Eurozone. There should be seperate navigational templates for the coins (which there was before people took it upon themselves to alter), and for the various other currencies. I for one object to the article about the pound sterling (not the "British Pound" as it is incorrectly labelled) being classed as a "Euro related topic". Hammersfan 15/02/07, 18.10 GMT
Contrary to the previous comment, I find the new template pretty useful, lot more than the earlier. It contains all relevant topics related to the euro as a currency. Although Britain is not a member of the monetary union, it is supposed to be, and th efar future aim for all EU members is to join the euro anyways. Timur lenk 20:12, 15 February 2007 (UTC)
I disagree entirely. —Nightstallion (?) 20:55, 15 February 2007 (UTC)
I don't feel strongly against splitting, but allow me to explain the motive behind this template. I created this template as a replacement for Template:EU coins menu and Template:PreEuroCurrencies; the vertical EU coins menu was causing problem for users with smaller screen, so a horizontal format is better. And then I realize there was no euro topic nav box for things like eurozone and currencies related to the euro and the result was a whole bunch of links in the see also section. And often times, if a user is interested in Cyprus and its relationship with the euro, he/she probably wants to read/edit currencies related to the euro, European Exchange Rate Mechanism, Cypriot pound, Cypriot euro coins. I see that the target dates of joining the euro for the new EU members are updated frequently. Having these article links helps consistency.
By the way, just to be clear, Nightstallion is disagreeing with Hammersfan, not with Timur lenk. --ChoChoPK (球球PK) (talk | contrib) 13:40, 16 February 2007 (UTC)
Please make the template collapsed. --Dima1 (talk) 21:59, 13 July 2008 (UTC)
No any comments? --Dima1 (talk) 14:03, 27 August 2008 (UTC)
I disagree as well, why do you want it collapsable? Also, it looks easier to the eye of us editors if you ask those questions at the end of the talk section, I missed your previous question almost two months ago about this topic; just a suggestion Miguel.mateo (talk) 01:59, 28 August 2008 (UTC)
Oct07 redesign
Looks fantastic! Clear, pleasing to the eye (great image btw) and stylish. I knew it was you SSJ, good work! - J Logan t: 09:46, 3 October 2007 (UTC)
Aye! —Nightstallion 22:19, 3 October 2007 (UTC)
Thanks! - S. Solberg J. 16:35, 10 November 2007 (UTC)
ECU • ERM • EMU
I would have thought ECU • ERM • EMU would be better spelt out [2] than left as I • II • III. People looking for these (e.g. me) find it difficult to find these topics if we have to rely on mouseovers. --Rumping (talk) 08:24, 19 November 2007 (UTC)
I agree with Rumping on this issue. Using the actual abbreviations seems more informative. In addition to Rumping's original point, I might add that a researcher might not think to mouseover I • II • III to find the underlying topics. Perhaps a happy compromise might be in order for this template. I propose using ECU(I) • ERM(II) • EMU(III) in the template, if Ssolbergj is still very adamant about expressing the three step process. --Theeuro (talk) 02:42, 20 November 2007 (UTC)
Well, I tried it that way and it looks confusing. So it is back with the abbr. --Theeuro (talk) 03:25, 21 November 2007 (UTC)
My argument is that they were step 1, 2 and 3 of the EU's single currency plan (the European Monetary System). The chronological steps justify their place under 'history'. If they're just some abbreviations, mentioning EMU (the last step which still is active and therefore is under 'topics' as well) twice would be redundant for example. "ECU", "ERM" and "EMU" are three confusing abbreviations. Numbers and chronology are easier to understand. - S. Solberg J. 13:33, 21 November 2007 (UTC)
But then since the Exchange Rate Mechanism is still active it too should be a topic and the European Currency Unit should be spelt out as a former currency. I II III is just unhelpful. --Rumping (talk) 20:00, 21 November 2007 (UTC)
Yes, the ERM is still active in some countries, but ERM is and was a tool on the path towards the single currency. May I suggest you read the European Monetary System (EMS) article? The numbers make perfect sense. - S. Solberg J. 23:56, 21 November 2007 (UTC)
I don't think we disagree about what the history was or the present actually is, just the best presentation. --Rumping 18:37, 2 December 2007 (UTC)
That the history and/or present situation of these three composites of the Euro is not in dispute. Rumping makes a very good point when he says '...best presentation'. Having ECU • ERM • EMU instead of I • II • III speaks more to the researchers' ability to get to the relevant article. Should we have a vote? - The € • T/C 05:05, 4 December 2007 (UTC)
No it's not about presentation; the point is that if they aren't numbers (or pointed out as chronological steps in this template) the sense disappears. To divide the EMS into three numbered steps is an established practice, I didn't make it up. - S. Solberg J. 00:06, 5 December 2007 (UTC)
I think both arguments are valid. Why don't we simply use both? —Nightstallion 17:36, 5 December 2007 (UTC)
Thanks, that's exactly what I had in mind. :) —Nightstallion 02:39, 11 December 2007 (UTC)
This looks so much better. Way to compromise, S. Solberg J.!
-The € • T/C 09:39, 11 December 2007 (UTC)
Target countries
I was wondering if it is better to remove the years next to the target countries, since they change in the article too often, without any reason and almost all of them are based on real speculation (there is no official release date), unless it is as realistic as Slovakia (but I would remove 2009 as well). Any comments? Miguel.mateo (talk) 12:59, 1 July 2008 (UTC)
No one has comment and I think that this information (the target year) is so missleading, I will remove it. Miguel.mateo (talk) 01:46, 22 July 2008 (UTC)
I would keep the Slovakian one since it is known--Melitikus (talk) 10:06, 23 July 2008 (UTC)
SEPA
SEPA is not mentioned anywhere on this template. Shouldn't it be there somewhere? (Stefan2 (talk) 07:29, 10 May 2009 (UTC))
Euro convergence criteria
The article Euro convergence criteria is related to the euro. Can it be added somewhere? —Preceding unsigned comment added by 212.247.11.156 (talk) 21:12, 9 October 2009 (UTC)
Kosovo and Montenegro redux
"Proposed adoption by other countries" is a misleading category for Kosovo and Montenegro to sit in. Maybe they should fit under the "International status" heading as a sub-heading "Unilateral adoption by non-EU countries".Travelpleb (talk) 14:13, 22 July 2013 (UTC)
Euro topics
Economic and Monetary Union of the European Union
Linguistic issues
ECB President
European System of Central Banks
Euro Group
Euro summit
Fiscal provisions
Stability and Growth Pact
European Financial Stability Facility
European Financial Stabilisation Mechanism
European Stability Mechanism
Euro Plus Pact
European Fiscal Compact
"Snake in the tunnel"
European Monetary System
I ECU
II ERM
III EMU
European Monetary Cooperation Fund
European Monetary Institute
Black Wednesday
Economy of Europe
Economy of the European Union
Euro calculator
Euro Interbank Offered Rate (Euribor)
Single Euro Payments Area (SEPA)
International status
Proposed eurobonds
Reserve currency
Petroeuro
non-EU use
Other commemorative coins
Identifying marks
Europa coin programme
Euro mint
Coins by issuing country
EU / proposed
Non-EU
Potential adoption by
Currencies yielded
European Currency Unit
Austrian schilling
Belgian franc
Cypriot pound
Dutch guilder
Estonian kroon
Finnish markka
French franc
German mark
Greek drachma
Luxembourgish franc
Maltese lira
Monegasque franc
Portuguese escudo
Sammarinese lira
Slovenian tolar
Spanish peseta
Vatican lira
ERM II
other (EU)
British pound sterling (incl. Gibraltar pound)
Travelpleb (talk) 14:29, 22 July 2013 (UTC)
Swiss franc and Switzerland
Obviously Switzerland is not part of the EU, or the euro-zone (though there are many bilateral agreements between the EU and Switzerland on economic issues). However, the euro is accepted in Switzerland in many places, especially in the border regions, probably mostly because Switzerland is surrounded by the euro-zone. As such, would it make sense to either add Switzerland to the "Potential adoption by other countries" section or, better, to add a section for "Use by non-EU countries" and put places like Kosovo and Switzerland in that section? Thoughts? **** you, you ******* ****. (talk) 08:52, 20 December 2013 (UTC)
Kosovo uses the euro as its actual unit of currency. The acceptance of euros in areas of Switzerland is not a unique situation; it happens in many border areas around the world (the UK being another euro example). Individual companies and people are at liberty to accept whatever currency they want, but this doesn't change the fact that Switzerland's official currency is the franc. CMD (talk) 16:04, 20 December 2013 (UTC)
Currencies pegged to the Euro?
Should we also mention in this template the list of currencies pegged to the Euro: XAF, XOF, XPF, BAM, etc. 2601:602:9C01:6075:0:0:0:5BE1 (talk) 14:59, 1 September 2017 (UTC)
Retrieved from "https://en.wikipedia.org/w/index.php?title=Template_talk:Euro_topics&oldid=813139512"
Template-Class numismatic articles
NA-importance numismatic articles
WikiProject Numismatics articles
Template-Class European Union articles
NA-importance European Union articles
WikiProject European Union articles
European Union [videos]
The European Union is a political and economic union of 28 member states that are located primarily in Europe. It has an area of 4,475,757 km2 and an estimated population of about 513 million. The EU has developed an internal single market through a standardised system of …
The Congress of Vienna met in 1814–15. The objective of the Congress was to settle the many issues arising from the French Revolutionary Wars, the Napoleonic Wars, and the dissolution of the Holy Roman Empire.
In 1989, the Iron Curtain fell, enabling the Community to expand further (Berlin Wall pictured)
Danish krone [videos]
The krone is the official currency of Denmark, Greenland, and the Faroe Islands, introduced on 1 January 1875. Both the ISO code "DKK" and currency sign "kr." are in common use; the former precedes the value, the latter in some …
An aluminium bronze 10-kroner coin (2011- series)
Image: DKK 100 obverse (2009)
Image: DKK 50 obverse (2009)
Image: DKK 50 reverse (2009)
Swedish krona [videos]
The krona is the official currency of Sweden. Both the ISO code "SEK" and currency sign "kr" are in common use; the former precedes or follows the value, the latter usually follows it but, especially in the past, it sometimes preceded the …
Image: Swedish 10 crown coin front side
Image: SWE 31 Sveriges Riksbank 1000 Kronor (1909, specimen)
Image: Collage SEK
Numismatics [videos]
Numismatics is the study or collection of currency, including coins, tokens, paper money and related objects. While numismatists are often characterised as students or collectors of coins, the discipline also includes the broader study of money and other payment media used to resolve debts and the …
Alexander the Great tetradrachm from the Temnos Mint circa 188-170 BC
Two 20 kr gold coins from the Scandinavian Monetary Union.
Currency [videos]
A currency, in the most specific use of the word, refers to money in any form when in use or circulation as a medium of exchange, especially circulating banknotes and coins. A more general definition is that a currency …
Cowry shells being used as money by an Arab trader.
Song dynasty Jiaozi, the world's earliest paper money.
Eurozone [videos]
The eurozone, officially called the euro area, is a monetary union of 19 of the 28 European Union member states which have adopted the euro as their common currency and sole legal tender. The monetary authority of the eurozone is the Eurosystem. The other nine members of the European Union …
Eurogroup President Mário Centeno
The European Central Bank (seat in Frankfurt depicted) is the supranational monetary authority of the eurozone.
Cypriot euro coins [videos]
Cypriot euro coins feature three separate designs for the three series of coins. Cyprus has been a member of the European Union since 1 May 2004, and is a member of the Economic and Monetary Union of the European Union. It has completed the third stage of the EMU and adopted the euro as its …
Image: Accession of Cyprus to the euro area re
Double eagle [videos]
A double eagle is a gold coin of the United States with a denomination of $20. The coins are made from a 90% gold and 10% copper alloy and have a total weight of 1.0750 …
The 1849 liberty head design by James B. Longacre
The 1907 high relief double eagle designed by Augustus Saint-Gaudens
Side of the 1907 "high relief" double eagle showing edge lettering and surface detail
The Smithsonian specimen of the 1933 Saint Gaudens double eagle
Hank Aaron [videos]
Henry Louis Aaron, nicknamed "Hammer" or "Hammerin' Hank", is a retired American Major League Baseball right fielder who serves as the senior vice president of the Atlanta Braves. He played 21 seasons for the Milwaukee/Atlanta Braves in the National League and two …
The Braves' jersey Hank Aaron wore when he broke Babe Ruth's career home run record in 1974
The fence outside of Turner Field over which Hank Aaron hit his 715th career home run still exists.
Hank Aaron's Hall of Fame plaque at the Baseball Hall of Fame in Cooperstown, New York
Hank Aaron during his August 5, 1978 visit to the White House.
Maserati 450S [videos]
The Maserati 450S is a racing car made by Maserati of Italy, and used in FIA's endurance World Sportscar Championship racing. A total of nine were made. — Their design started in 1954 led by Vittorio Bellentani and Guido Taddeucci. Their …
1957 Maserati 450S at Palm Springs 2010.
1957 450S Costin/Zagato coupe at Scarsdale (2006).
The engine in the Maserati 450S
Carroll Shelby standing next to a Maserati 450S that he raced in 1957.
Battle of Agincourt [videos]
The Battle of Agincourt was one of the greatest English victories in the Hundred Years' War. It took place on 25 October 1415 near Azincourt in the County of Saint-Pol, in northern France. England's unexpected victory against a numerically …
The Battle of Agincourt, 15th-century miniature, Enguerrand de Monstrelet
Monumental brass of an English knight wearing armour at the time of Agincourt (Sir Maurice Russell (d. 1416), Dyrham Church, Gloucestershire)
Miniature from Vigiles du roi Charles VII. The battle of Azincourt 1415.
1915 depiction of Henry V at the Battle of Agincourt : The King wears on this surcoat the Royal Arms of England, quartered with the Fleur de Lys of France as a symbol of his claim to the throne of France.
Battle of Salamis [videos]
The Battle of Salamis was a naval battle fought between an alliance of Greek city-states under Themistocles and the Persian Empire under King Xerxes in 480 BC which resulted in a decisive victory for the outnumbered Greeks. The …
Modern view of the strait of Salamis, where the battle took place. Seen from the south.
Battle order. The Achaemenid fleet (in red) entered from the east (right) and confronted the Greek fleet (in blue) within the confines of the strait.
Greek trireme.
Fleet of triremes based on the full-sized replica Olympias
Amazon rainforest [videos]
The Amazon rainforest, also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest in the Amazon biome that covers most of …
Amazon rainforest, near Manaus, Brazil
Aerial view of the Amazon rainforest.
Members of an uncontacted tribe encountered in the Brazilian state of Acre in 2009.
Geoglyphs on deforested land in the Amazon rainforest, Acre.
Vienna [videos]
Vienna is the federal capital and largest city of Austria, and one of the nine states of Austria. Vienna is Austria's primate city, with a population of about 1.9 million …
1683 Allen (printed 1686)
Vienna from Belvedere by Bernardo Bellotto, 1758
Vienna Ringstraße and State Opera around 1870
Color photo lithograph of Vienna, 1900
History of Iran [videos]
The history of Iran, which was commonly known until the mid-20th century as Persia in the Western world, is intertwined with the history of a larger region, also to an extent known as Greater Iran, comprising the area from Anatolia, the Bosphorus, and Egypt in the west to the borders of Ancient …
Chogha Zanbil is one of the few extant ziggurats outside of Mesopotamia and is considered to be the best preserved example in the world.
A gold cup at the National Museum of Iran, dating from the first half of 1st millennium BC
A panoramic view of Persepolis.
The Seleucid Empire in 200 BC, before Antiochus was defeated by the Romans
Robert Pattinson [videos]
Robert Douglas Thomas Pattinson is an English actor, model and musician. He started his film career by playing Cedric Diggory in Harry Potter and the Goblet of Fire in 2005. He later got the leading role of vampire Edward Cullen in the film adaptations of the Twilight novels by …
Pattinson in February 2017
Pattinson at the Photocall for The Twilight Saga: New Moon at the Crillon Hotel in Paris in 2009
Pattinson at the New York premiere of Water for Elephants
Pattinson at the 2012 San Diego Comic-Con International
Marie Antoinette [videos]
Marie Antoinette was the last Queen of France before the French Revolution. She was born an Archduchess of Austria and was the penultimate child and youngest daughter of Empress Maria Theresa and …
Portrait by Élisabeth Vigée Le Brun, 1778
Archduchesses Maria Antonia in a pink dress and Maria Carolina in blue (watercolor on ivory by Antonio Pencini, 1764)
Archduchess Maria Antonia (watercolor by Jean-Étienne Liotard, 1762)
Marriage of Marie Antoinette with Louis-Auguste celebrated in the Royal Chapel of Versailles by the Archbishop-Duke of Reims on May 16, 1770
Topaz [videos]
Topaz is a silicate mineral of aluminium and fluorine with the chemical formula Al2SiO42. Topaz crystallizes in the orthorhombic system, and its crystals are mostly prismatic terminated by pyramidal and other faces. It is one of the hardest naturally occurring minerals …
Topaz crystal on white matrix
Image: Topaz Mountain By Phil Konstantin
Image: Topaz k 312b
Image: Topaz k 182a
Musée d'Orsay [videos]
The Musée d'Orsay is a museum in Paris, France, on the Left Bank of the Seine. It is housed in the former Gare d'Orsay, a Beaux-Arts railway station built between 1898 and 1900. The museum holds mainly French art dating from 1848 to 1914, including paintings …
Main Hall of the Musée d'Orsay
The Musée d'Orsay as seen from the Passerelle Léopold-Sédar-Senghor
Musée d'Orsay Clock, Victor Laloux, Main Hall
The interior of the museum.
Ancient Greek coinage [videos]
The history of ancient Greek coinage can be divided into four periods, the Archaic, the Classical, the Hellenistic and the Roman. The Archaic period extends from the introduction of coinage to the Greek world during the 7th century BC until the Persian Wars …
The earliest coinage of Athens, circa 545-525/15 BC
Archaic coin of Athens with effigy of Athena on the obverse, and olive sprig, owl and ΑΘΕ, initials of "Athens" on the reverse. Circa 510-500/490 BC
Above: Six rod-shaped obeloi (oboloi) displayed at the Numismatic Museum of Athens, discovered at Heraion of Argos. Below: grasp of six oboloi forming one drachma
A Syracusan tetradrachm (c. 415–405 BC) Obverse: head of the nymph Arethusa, surrounded by four swimming dolphins and a rudder Reverse: a racing quadriga, its charioteer crowned by the goddess Victory in flight.
Imperial crown [videos]
An Imperial Crown is a crown used for the coronation of emperors. — Design — Crowns in Europe during the medieval period varied in design: — An open crown is one which consists basically of a golden circlet elaborately worked and decorated with precious stones or enamels.... The medieval French …
The British Imperial State Crown viewed from the side with the front facing left (the Black Prince's Ruby, and the Cullinan II are just visible in profile).
Emperor Maximilian I wearing a crown with mitre
Image: Museum of Anatolian Civilizations 118
Image: Probus Coin
Will Smith [videos]
Willard Carroll Smith II is an American actor, rapper and media personality. In April 2007, Newsweek called him "the most powerful actor in Hollywood". Smith has been nominated for five Golden Globe Awards and two Academy Awards, and has won four Grammy Awards. — In the …
Smith in 2017
Smith at the Emmy Awards in 1993
Smith hosting the 2011 Walmart Shareholders Meeting
Smith performed the soccer 2018 World Cup's official song "Live It Up"
Jennifer Aniston [videos]
Jennifer Joanna Aniston is an American actress, film producer, and businesswoman. The daughter of actors John Aniston and Nancy Dow, she began working as an actress at an early age with an uncredited role in the 1987 film Mac and Me. After her career grew successfully in …
Aniston in February 2012
Aniston at the 2008 Toronto International Film Festival
Aniston at the He's Just Not That into You premiere in 2009
Aniston at the London premiere of Horrible Bosses in 2011
Crown Jewels of the United Kingdom [videos]
The Crown Jewels of the United Kingdom, originally the Crown Jewels of England, are 140 royal ceremonial objects kept in the Tower of London, which include the regalia and vestments worn by British kings and queens at their coronations.Symbols of 800 years of monarchy, the coronation regalia are …
Elizabeth II in her regalia, 1953
King Æthelstan presenting an illuminated manuscript to St Cuthbert, c. 930
First great seal of the Confessor
The Stone of Scone in the Coronation Chair, 1859
Angelina Jolie [videos]
Angelina Jolie is an American actress, filmmaker, and humanitarian. The recipient of such accolades as an Academy Award and three Golden Globe Awards, she has been named Hollywood's highest-paid actress multiple times. — Jolie made her screen …
Jon Voight at the Academy Awards in April 1988, where his children accompanied him
Jolie with her husband Brad Pitt, at the Cannes premiere of A Mighty Heart in May 2007
Jolie in character as Christine Collins on the set of Changeling in October 2007
Jolie at the 2011 Cannes Film Festival
History of Monaco [videos]
The early history of Monaco is primarily concerned with the protective and strategic value of the Rock of Monaco, the area's chief geological landmark, which served first as a shelter for ancient peoples and later as a fortress. Part of Liguria's history since the fall of the Roman Empire, from the …
The Rock in 1890
La Roche in modern times
Western Front in 1944
View of Monaco in 2016
Patrick Dempsey [videos]
Patrick Galen Dempsey is an American actor and racing driver, best known for his role as neurosurgeon Derek "McDreamy" Shepherd in Grey's Anatomy, starring with Ellen Pompeo. He saw early success as an actor, starring in a number of films in the 1980s …
Dempsey in 2016
Dempsey at the 2008 Rolex 24 Hours of Daytona.
Dempsey waves to the crowd at the 2015 Indianapolis 500 where he served as the Honorary Starter
Taraji P. Henson [videos]
Taraji Penda Henson is an American actress, singer, and author. She studied acting at Howard University and began her Hollywood career in guest-roles on several television shows before making her breakthrough in Baby Boy. She received praise for playing …
Henson at the premiere of Hidden Figures in 2016
Henson in 2011
Liam Neeson [videos]
Liam John Neeson is an actor from Northern Ireland. He has been nominated for a number of awards, including an Academy Award for Best Actor, a BAFTA Award for Best Actor in a Leading Role, and three Golden Globe Awards for Best Actor in a Motion Picture Drama. Empire magazine …
Neeson at the 2012 Deauville American Film Festival
Neeson attending the premiere of The Other Man in September 2008
Liam Neeson, Deauville Film Festival, 2012.
Kaley Cuoco [videos]
Kaley Christine Cuoco is an American actress and producer. After a series of supporting film and television roles in the late 1990s, she landed her breakthrough role as Bridget Hennessy on the ABC sitcom 8 Simple Rules, on which she starred …
Cuoco in July 2017
Cuoco at the San Diego Comic-Con in July 2009
Cuoco at PaleyFest in March 2013
Tyler Perry [videos]
Tyler Perry is an American actor, playwright, filmmaker, and comedian. In 2011, Forbes listed him as the highest paid man in entertainment, earning $130 million USD between May 2010 and May 2011.Perry created and performs the Madea character, a tough …
Perry at the 82nd Academy Awards in 2010
Perry at a book signing in 2006
Monomakh's Cap [videos]
Monomakh's Cap, also called the Golden Cap, is a chief relic of the Russian Grand Princes and Tsars. It is a symbol-crown of the Russian autocracy, and is the oldest of the crowns currently …
Monomakh's Cap in the foreground and Kazan Cap in the background
Image: Russian regalia | CommonCrawl |
March 2016, 36(3): 1583-1601. doi: 10.3934/dcds.2016.36.1583
Large-time behavior of the full compressible Euler-Poisson system without the temperature damping
Zhong Tan 1, Yong Wang 1 and Fanhui Xu 2
School of Mathematical Sciences and Fujian Provincial Key Laboratory on Mathematical Modeling and Scientific Computing, Xiamen University, Xiamen, 361005, China
Department of Mathematics, University of Southern California, Los Angeles, CA 90089, United States
Received January 2015 Revised April 2015 Published August 2015
We study the three-dimensional full compressible Euler-Poisson system without the temperature damping. Using a general energy method, we prove the optimal decay rates of the solutions and their higher-order derivatives. We show that the optimal decay rates are algebraic rather than exponential, owing to the absence of the temperature damping.
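For orientation, the display below sketches one standard formulation of the full (non-isentropic) compressible Euler-Poisson system for the density ρ, velocity u, temperature θ and electrostatic potential Φ. This is only an illustrative form: the exact pressure law p(ρ, θ), scaling, and sign conventions used by the authors may differ, and "without the temperature damping" refers to the absence of a relaxation term in the energy equation.
$$
\begin{aligned}
&\partial_t \rho + \nabla\cdot(\rho u) = 0,\\
&\partial_t(\rho u) + \nabla\cdot(\rho u\otimes u) + \nabla p(\rho,\theta) = \rho\nabla\Phi,\\
&\partial_t\Big[\rho\Big(e(\rho,\theta)+\tfrac{1}{2}|u|^2\Big)\Big]
 + \nabla\cdot\Big[\rho u\Big(e(\rho,\theta)+\tfrac{1}{2}|u|^2\Big)+p\,u\Big] = \rho u\cdot\nabla\Phi,\\
&\Delta\Phi = \rho-\bar\rho,
\end{aligned}
$$
where e denotes the internal energy and $\bar\rho$ the constant background density; the stated optimal decay is then algebraic in time for perturbations around the constant equilibrium state.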
Keywords: energy method, interpolation, decay rates, full Euler-Poisson system.
Mathematics Subject Classification: Primary: 35M10, 35Q60; Secondary: 76N10, 35Q35, 35B4.
Citation: Zhong Tan, Yong Wang, Fanhui Xu. Large-time behavior of the full compressible Euler-Poisson system without the temperature damping. Discrete & Continuous Dynamical Systems - A, 2016, 36 (3) : 1583-1601. doi: 10.3934/dcds.2016.36.1583
Sumit Kumar Debnath 1, Tanmay Choudhury 1, Pantelimon Stănică 2, Kunal Dey 1 and Nibedita Kundu 3
Department of Mathematics, National Institute of Technology Jamshedpur, Jamshedpur-831014, India
Department of Applied Mathematics, Naval Postgraduate School, Monterey, CA 93943, USA
Department of Mathematics, The LNM Institute of Information Technology, Jaipur-302031, India
* Corresponding author: [email protected]
Received October 2020 Revised February 2021 Early access June 2021
Fund Project: The first author is supported by DRDO, India (ERIP/ER/202005001/M/01/1775)
Figure(3) / Table(2)
In the context of digital signatures, the proxy signature plays a significant role by enabling an original signer to delegate its signing ability to another party (i.e., the proxy signer). It has significant practical applications, particularly in distributed systems, where delegation of authentication rights is quite common; examples include key-sharing protocols, grid computing, and mobile communications. Currently, a large portion of existing proxy signature schemes is based on the hardness of problems like integer factoring, discrete logarithms, and/or elliptic curve discrete logarithms. However, with the rise of quantum computers, the problems of prime factorization and discrete logarithms will become solvable in polynomial time due to Shor's algorithm, which undermines the security of existing ElGamal, RSA, and ECC schemes, as well as the proxy signature schemes based on these problems. As a consequence, the construction of secure and efficient post-quantum proxy signatures becomes necessary. In this work, we develop a post-quantum proxy signature scheme, Mult-proxy, relying on multivariate public key cryptography (MPKC), which is one of the most promising candidates for post-quantum cryptography. We employ a 5-pass identification protocol to design our proxy signature scheme. Our work attains the usual proxy criterion and a one-more-unforgeability criterion under the hardness of the Multivariate Quadratic polynomial (MQ) problem. It produces optimal-size proxy signatures and optimal-size proxy shares in the field of MPKC.
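To make the hardness assumption concrete, the following is a minimal, illustrative Python sketch of a random MQ instance over GF(2). It is not the authors' scheme: all function names are hypothetical, and real MPKC schemes work over larger fields with structured (not random) central maps. The point is only that evaluating the public quadratic map P is easy, while inverting it is the MQ problem.

```python
import random

def random_mq_system(m, n, seed=0):
    """Build m random quadratic polynomials in n variables over GF(2).
    Each polynomial is stored as (Q, L, c): upper-triangular quadratic
    coefficients Q, linear part L, and constant term c."""
    rng = random.Random(seed)
    system = []
    for _ in range(m):
        Q = [[rng.randint(0, 1) if j >= i else 0 for j in range(n)] for i in range(n)]
        L = [rng.randint(0, 1) for _ in range(n)]
        c = rng.randint(0, 1)
        system.append((Q, L, c))
    return system

def evaluate(system, x):
    """Evaluate the public map P: GF(2)^n -> GF(2)^m at the bit vector x."""
    out = []
    for Q, L, c in system:
        v = c
        for i, xi in enumerate(x):
            if xi:
                v ^= L[i]                      # linear term x_i
                for j in range(i, len(x)):
                    v ^= Q[i][j] & x[j]        # quadratic term x_i * x_j
        out.append(v)
    return out

# Toy usage: the forward direction is cheap ...
P = random_mq_system(m=5, n=8)
rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(8)]
y = evaluate(P, x)
print("P(x) =", y)
# ... while recovering some x' with P(x') = y from (P, y) alone is the MQ
# problem, NP-hard in general, which underpins MPKC-based signatures.
```

Identification-based constructions such as the 5-pass protocol mentioned above prove knowledge of such a preimage without revealing it.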
Keywords: Multivariate public key cryptography, post-quantum cryptography, proxy signature, provably secure proxy signature, security.
Mathematics Subject Classification: Primary: 94A60, 94A62, 68M12; Secondary: 68P30.
Citation: Sumit Kumar Debnath, Tanmay Choudhury, Pantelimon Stănică, Kunal Dey, Nibedita Kundu. Delegating signing rights in a multivariate proxy signature scheme. Advances in Mathematics of Communications, doi: 10.3934/amc.2021016
A. K. Awasthi and S. Lal, Proxy blind signature scheme, Trans. on Cryptology, 2:1 (2005), 5-11. Google Scholar
D. J. Bernstein, Introduction to Post-Quantum Cryptography, Post-Quantum Cryptography, Springer–Berlin, Heidelberg, 2009, 1–14. doi: 10.1007/978-3-540-88702-7_1. Google Scholar
A. Bogdanov, T. Eisenbarth, A. Rupp and C. Wolf, Time-area optimized public-key engines: MQ-cryptosystems as replacement for elliptic curves?, Cryptographic Hardware and Embedded Systems, 5154 (2008), 45-61. Google Scholar
A. Boldyreva, A. Palacio and B. Warinschi, Secure proxy signature schemes for delegation of signing rights, J. Cryptology, 25 (2012), 57-115. doi: 10.1007/s00145-010-9082-x. Google Scholar
A. I.-T. Chen, M.-S. Chen, T.-R. Chen, C.-M. Cheng, J. Ding, E. L.-H. Kuo, F. Y.-S. Lee and B.-Y. Yang, SSE implementation of multivariate PKCS on modern x86 CPUs, International Workshop on Cryptographic Hardware and Embedded Systems, (2009), 33–48. Google Scholar
J. Chen, J. Ling, J. Ning, E. Panaousis, G. Loukas, K. Liang and J. Chen, Post quantum proxy signature scheme based on the multivariate public key cryptographic signature, International J. Distributed Sensor Networks, 16 (2020). doi: 10.1177/1550147720914775. Google Scholar
M.-S. Chen, A. Hülsing, J. Rijneveld, S. Samardjiska and P. Schwabe, From 5-pass MQ-based identification to MQ-based signatures, Adv. Cryptology, 10032 (2016), 135-165. doi: 10.1007/978-3-662-53890-6_5. Google Scholar
J.-Z. Dai, X.-H. Yang and J.-X. Dong, Designated-receiver proxy signature scheme for electronic commerce, SMC'03 Conference Proceedings. 2003 IEEE International Conference on Systems, Man and Cybernetics. Conference Theme-System Security and Assurance (Cat. No. 03CH37483), IEEE, 1 (2003), 384-389. Google Scholar
J. Ding and D. Schmidt, Rainbow, a new multivariable polynomial signature scheme, International Conference on Applied Cryptography and Network Security, (2005), 164–175. doi: 10.1007/s40840-015-0125-1. Google Scholar
G. Fuchsbauer and D. Pointcheval, Anonymous proxy signatures, International Conference on Security and Cryptography for Networks, (2008), 201–217. Google Scholar
M. R. Garey and D. S. Johnson, Computers and Intractability: A guide to the theory of NP-completeness, Freeman San Francisco, 174 (1979). Google Scholar
A. Kipnis, J. Patarin and L. Goubin, Unbalanced oil and vinegar signature schemes, International Conference on the Theory and Applications of Cryptographic Techniques, (1999), 206–222. doi: 10.1007/3-540-48910-X_15. Google Scholar
Q. Lin, J. Li, Z. Huang, W. Chen and J. Shen, A short linearly homomorphic proxy signature scheme, IEEE Access, 6 (2018), 12966-12972. Google Scholar
M. Mambo, K. Usuda and E. Okamoto, Proxy signatures: Delegation of the power to sign messages, IEICE Trans. on Fundamentals of Electronics, Communications and Computer Sciences, 79:9 (1996), 1338-1354. Google Scholar
M. Mambo, K. Usuda and E. Okamoto, Proxy signatures for delegating signing operation, Proceedings of the 3rd ACM conference on Computer and Communications Security, (1996), 48–57. Google Scholar
T. Matsumoto and H. Imai, Public quadratic polynomial-tuples for efficient signature-verification and message-encryption, Workshop on the Theory and Application of Cryptographic Techniques, (1988), 419–453. doi: 10.1007/3-540-45961-8_39. Google Scholar
J. Patarin, Hidden fields equations (HFE) and isomorphisms of polynomials (IP): Two new families of asymmetric algorithms, International Conference on the Theory and Applications of Cryptographic Techniques, (1996), 33–48. Google Scholar
A. Petzoldt, M.-S. Chen, B.-Y. Yang, C. Tao and J. Ding, Design principles for HFEV-based multivariate signature schemes, International Conference on the Theory and Application of Cryptology and Information Security, (2015), 311–334. doi: 10.1007/978-3-662-48797-6_14. Google Scholar
E. Sakalauskas, The multivariate quadratic power problem over ZN is NP-complete, Information Technology and Control, 41:1 (2012), 33-39. Google Scholar
K. Sakumoto, T. Shirai and H. Hiwatari, Public-key identification schemes based on multivariate quadratic polynomials, Advances in Cryptology, 6841 (2011), 706-723. doi: 10.1007/978-3-642-22792-9_40. Google Scholar
P. W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM Review, 41 (1999), 303-332. doi: 10.1137/S0036144598347011. Google Scholar
S. Tang and L. Xu, Proxy signature scheme based on isomorphisms of polynomials, in International Conference on Network and System Security, (2012), 113–125. doi: 10.1007/978-3-642-34601-9_9. Google Scholar
G. Wang, F. Bao, J. Zhou and R. H Deng, Security analysis of some proxy signatures, International Conference on Information Security and Cryptology, (2003), 305–319. doi: 10.1007/978-3-540-24691-6_23. Google Scholar
F. Wu, W. Yao, X. Zhang, W. Wang and Z. Zheng, Identity-based proxy signature over NTRU lattice, International J. Communication Systems, 32 (2019), e3867. doi: 10.1002/dac.3867. Google Scholar
K. Zhang, Threshold proxy signature schemes, International Workshop on Information Security, (1997), 282–290. Google Scholar
H. Zhu, Y. Tan, X. Yu, Y. Xue, Q. Zhang, L. Zhu and Y. Li, An identity-based proxy signature on NTRU lattice, Chinese J. Electronics, 27:2 (2018), 297-303. Google Scholar
Figure 1. Communication flow in signature scheme
Figure 2. 5-pass identification protocol
Figure 3. Our proxy signature protocol
Table 1. General comparison of different key sizes of our scheme Mult-proxy, Tang and Xu's scheme [22] and Proxy Rainbow [6] with Rainbow [9] as central map
Scheme | Mult-proxy | Tang and Xu's scheme [22] | Proxy Rainbow [6]
Delegation | Partial with warrant | Partial with warrant | Partial with warrant
O.S's pub-key | $\frac{mn^2+3mn+2m}{2}\cdot p$ | $(\frac{mn^2+3mn+2m}{2}+\xi)\cdot p$ | $\frac{mn^2+3mn+2m}{2}\cdot p$
P.S's pub-key | $\frac{mn^2+3mn+2m}{2}\cdot p$ | $(\frac{mn^2+3mn+2m}{2}+\xi)\cdot p$ | $\frac{mn^2+3mn+2m}{2}\cdot p$
O.S's sec-key | $(m^2+n^2+m+n+\xi)\cdot p$ | $(m^2+n^2+m+n)\cdot p$ | $(m^2+n^2+m+n+\xi)\cdot p$
P.S's sec-key | $(m^2+n^2+m+n+\xi)\cdot p$ | $(m^2+n^2+m+n)\cdot p$ | $(m^2+n^2+m+n+\xi)\cdot p$
Proxy share | $n\cdot p$ | $\frac{mn^2+2m^2+3mn+2n^2+4m+2n}{2}\cdot p$ | $\frac{mn^2+2m^2+3mn+2n^2+4m+2n}{2}\cdot p$
Proxy sig | $2k\cdot \omega+(k(m+2n)+n)\cdot p$ | $k+k(m^2+n^2+m+n)\cdot p$ | $\frac{3mn^2+9mn+6m+6n}{2}\cdot p$
Table 2. Numeric comparison of different key sizes of our scheme Mult-proxy, Tang and Xu's scheme [22] and Proxy Rainbow [6] with Rainbow [9] as central map
Parameters | (256, 18, 12, 12) | (256, 18, 12, 12) | (256, 18, 12, 12)
O.S's public key size (kB) | 177.4 | 297.9 | 177.4
P.S's public key size (kB) | 177.4 | 297.9 | 177.4
O.S's secret key size (kB) | 139.4 | 18.8 | 139.4
P.S's secret key size (kB) | 139.4 | 18.8 | 139.4
Proxy share size (kB) | 0.33 | 196.2 | 196.2
Proxy signature size (kB) | 173.7 | 2424.9 | 533.1
O.S's public key size (kB) | 1501.9 | 2542.6 | 1501.9
P.S's public key size (kB) | 1501.9 | 2542.6 | 1501.9
O.S's secret key size (kB) | 1120.2 | 79.6 | 1120.2
P.S's secret key size (kB) | 1120.2 | 79.6 | 1120.2
Proxy share size (kB) | 0.7 | 1581.4 | 1581.4
Proxy signature size (kB) | 290.9 | 10263.7 | 4507.7
Parameters | (31, 28, 20, 20, 8) | (31, 28, 20, 20, 8) | (31, 28, 20, 20, 8)
O.S's public key size (kB) | 938.7 | 1935.5 | 938.7
P.S's public key size (kB) | 938.7 | 1935.5 | 938.7
Proxy signature size (kB) | 206 | 6414.8 | 2817.3
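As a rough, back-of-the-envelope consistency check between the symbolic sizes in Table 1 and the numbers in Table 2, the short script below evaluates the original signer's public-key formula for the first Rainbow parameter set. The interpretation is my assumption, not something stated in the tables: p taken as 8 bits per GF(256) element, with the tabulated "kB" figures measured in units of 1024 bits.

```python
# Hypothetical consistency check for the (256, 18, 12, 12) Rainbow parameters.
# Assumptions (mine, not the paper's): p = 8 bits per GF(256) element,
# and the "kB" figures in Table 2 are units of 1024 bits.
v1, o1, o2 = 18, 12, 12
n = v1 + o1 + o2                  # 42 variables
m = o1 + o2                       # 24 equations
p_bits = 8                        # bits per field element of GF(256)

pub_key_elements = (m * n**2 + 3 * m * n + 2 * m) // 2   # formula from Table 1
pub_key_kb = pub_key_elements * p_bits / 1024
print(f"O.S's public key: {pub_key_elements} field elements ~ {pub_key_kb:.1f} kB")
# -> about 177.4, matching the first entry of Table 2 under these assumptions
```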
11 A - Imperialism (Essentials)
American Exceptionalism
The idea that the United States is unique in the world, usually in the sense that the United States is better than all other nations due to our history and form of government.
City Upon a Hill
An image borrowed from the Bible by Puritan minister John Winthrop to describe the United States as a model society that the rest of the world should look up to as an example.
Social Darwinism
The idea that people, businesses and nations operate by Charles Darwin's survival of the fittest principle. That is, successful nations are successful because they are inherently better than others. At the turn of the century, White culture was seen as superior to others because Europeans and the United States were imperial nations and had defeated the people of their colonies.
White Man's Burden
The idea that White Americans and Europeans had an obligation to teach the people of the rest of the world how to be civilized.
Spheres of Influence
Nickname for the regions of China that were controlled by the various European nations. Within these zones, only one European power was permitted to carry out trade.
Banana Republic
A small nation dominated by foreign businesses. This nickname was used especially for Central American nations dominated by fruit growers based in the United States.
Pearl Harbor
Naval base on Oahu in Hawaii. The United States annexed Hawaii in part to gain control over this important coaling station.
Cuba
Island nation just south of Florida that was a Spanish colony until the United States secured its independence in the Spanish-American War.
Philippines
Island nation in Asia won by the United States from Spain in the Spanish-American War. It was granted independence in 1946.
Panama Canal
Canal connecting the Atlantic and Pacific Oceans. It was an important success of President Theodore Roosevelt.
Alfred T. Mahan
Author of the book "The Influence of Seapower upon History."
American Anti-Imperialist League
Organization of Americans opposed to imperialism.
Rough Riders
Nickname for Theodore Roosevelt's cavalry regiment in Cuba during the Spanish-American War.
Smoked Yankees
Nickname for African-American troops during the Spanish-American War.
William Howard Taft
American governor of the Philippines after the Spanish-American War and later president of the United States.
Mark Twain
American author of such books as Tom Sawyer and Huckleberry Finn and famous anti-imperialist.
John Hay
American Secretary of State who introduced the Open Door Policy.
Annexation of Hawaii
June 14, 1900 resolution by Congress that made Hawaii a territory of the United States.
Explosion of the USS Maine
Event that caused the United States to declare war on Spain in 1898.
Spanish-American War
1898 conflict with Spain in which the United States won control of Puerto Rico, Guam, and the Philippines, and also won independence for Cuba.
Philippine-American War
Conflict between the American army and Philippine independence fighters after the Spanish-American War.
Boxer Rebellion
1899-1901 conflict between Chinese nationalists and Europeans, Japanese and Americans over control of China.
Great White Fleet
American fleet of battleships that sailed around the world between 1907 and 1909 to demonstrate American military might.
The Influence of Seapower upon History
Book by Alfred T. Mahan in which he argued that great nations have colonies and navies to protect trade with those colonies. This book inspired Theodore Roosevelt and led to the acquisition of overseas colonies such as Hawaii, the Philippines, Guam and Samoa.
Teller Amendment
Amendment to the declaration of war against Spain in 1898 that stated that the United States would not annex Cuba.
Treaty of Paris of 1898
Treaty that ended the Spanish-American War and granted the United States control of Puerto Rico, Guam and the Philippines.
Platt Amendment
Law passed in 1903 in which the United States claimed the right to intervene in Cuban affairs and to maintain a naval base at Guantanamo, and which limited the freedom of Cuba to make treaties without American consent.
Open Door Policy
American policy at the turn of the century that stated that all of China would be open to trade, essentially ignoring the European spheres of influence.
Big Stick Diplomacy
Theodore Roosevelt's approach to foreign policy. He emphasized the threat of military force as a way to force other nations to accept American positions.
Roosevelt Corollary
Theodore Roosevelt's addition to the Monroe Doctrine in which he stated that the United States would act as policeman for the Americas.
Dollar Diplomacy
President Taft's approach to foreign policy. He emphasized the use of American financial power rather than the threat of military force.
Moral Diplomacy
President Wilson's approach to foreign policy. He emphasized the use of American power to promote democracy and self-rule.
2018, 12: 193-222. doi: 10.3934/jmd.2018008
Seifert manifolds admitting partially hyperbolic diffeomorphisms
Andy Hammerlindl 1, Rafael Potrie 2 and Mario Shannon 2,3
School of Mathematical Sciences, Monash University, Victoria 3800, Australia
URL: http://users.monash.edu.au/~ahammerl/
CMAT, Facultad de Ciencias, Universidad de la República, Igua 4225, Montevideo 11400, Uruguay
URL: www.cmat.edu.uy/~rpotrie
Institut de Mathématiques de Bourgogne, Dijon, France
AH: Partially supported by the Australian Research Council.
RP: Partially supported by CSIC group 618, MathAmSud-Physeco, and the Australian Research Council.
MS: Partially supported by CSIC group 618
Received May 30, 2017 Revised March 15, 2018 Published June 2018
We characterize which 3-dimensional Seifert manifolds admit transitive partially hyperbolic diffeomorphisms. In particular, a circle bundle over a higher-genus surface admits a transitive partially hyperbolic diffeomorphism if and only if it admits an Anosov flow.
Keywords: Partially hyperbolic diffeomorphisms, Seifert spaces.
Mathematics Subject Classification: Primary: 37D30, 37C15; Secondary: 57R30, 55R05.
Citation: Andy Hammerlindl, Rafael Potrie, Mario Shannon. Seifert manifolds admitting partially hyperbolic diffeomorphisms. Journal of Modern Dynamics, 2018, 12: 193-222. doi: 10.3934/jmd.2018008
Figure 1. The concatenation on the left is coherently oriented and the one on the right is not
Figure 2. Cutting a curve in the concatenation
Figure 3. Constructing a vector field in a section of the bundle
Figure 4. A map from the bundle to the unit tangent bundle
Rotated $ A_n $-lattice codes of full diversity
Agnaldo José Ferrari , and Tatiana Miguel Rodrigues de Souza
School of Sciences, Department of Mathematics, São Paulo State University - UNESP, Bauru, SP 17033-360, BR
Received August 2020 Revised September 2020 Published November 2020
Fund Project: This work was supported by FAPESP 2013/25977-7 and CNPq 429346/2018-2
Some important properties of lattices are packing density and full diversity, which are relevant for signal transmission over Gaussian and Rayleigh fading channels, respectively. Algebraic lattices are constructed through a twisted homomorphism of some modules in the ring of integers of a number field $ \mathbb{K} $. In this paper, we present a construction of some families of rotated $ A_n $-lattices, for $ n = 2^{r-2}-1 $, $ r \geq 4 $, via totally real subfields of cyclotomic fields. Furthermore, closed-form expressions for the minimum product distance of those lattices are obtained through algebraic properties.
Keywords: Algebraic number field, algebraic lattice, packing density, twisted homomorphism.
Mathematics Subject Classification: Primary: 52C07; Secondary: 11H31, 11H71.
Citation: Agnaldo José Ferrari, Tatiana Miguel Rodrigues de Souza. Rotated $ A_n $-lattice codes of full diversity. Advances in Mathematics of Communications, doi: 10.3934/amc.2020118
E. Bayer-Fluckiger, Ideal lattices, in A Panorama of Number Theory or the View from Baker's Garden, Cambridge Univ. Press, Cambridge, 2002,165-184. doi: 10.1017/CBO9780511542961.012. Google Scholar
E. Bayer-Fluckiger, Lattices and number fields, in Algebraic Geometry: Hirzebruch 70, Contemp. Math., 241, Amer. Math. Soc., Providence, RI, 1999, 69–84. doi: 10.1090/conm/241/03628. Google Scholar
E. Bayer-Fluckiger, Upper bounds for Euclidean minima of algebraic number fields, J. Number Theory, 121 (2006), 305-323. doi: 10.1016/j.jnt.2006.03.002. Google Scholar
E. Bayer-Fluckiger and G. Nebe, On the Euclidian minimum of some real number fields, J. Théor. Nombres Bordeaux, 17 (2005), 437–454. doi: 10.5802/jtnb.500. Google Scholar
E. Bayer-Fluckiger, F. Oggier and E. Viterbo, New algebraic constructions of rotated $\mathbb{Z}^n$-lattice constellations for the Rayleigh fading channel, IEEE Trans. Inform. Theory, 50 (2004), 702–714. doi: 10.1109/TIT.2004.825045. Google Scholar
E. Bayer-Fluckiger and I. Suarez, Ideal lattices over totally real number fields and Euclidean minima, Arch. Math. (Basel), 86 (2006), 217–225. doi: 10.1007/s00013-005-1469-9. Google Scholar
K. Boullé and J. C. Belfiore, Modulation scheme design for the Rayleigh fading channel, Proc. Conf. Information Science and System, (1992), 288–293. Google Scholar
J. Boutros, E. Viterbo, C. Rastello and J.-C. Belfiore, Good lattice constellations for both Rayleigh fading and Gaussian channels, IEEE Trans. Inform. Theory, 42 (1996), 502–518. doi: 10.1109/18.485720. Google Scholar
H. Cohn and A. Kumar, Optimality and uniqueness of the Leech lattice among lattices, Ann. of Math. (2), 170 (2009), 1003–1050. doi: 10.4007/annals.2009.170.1003. Google Scholar
J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups, Fundamental Principles of Mathematical Sciences, 290, Springer-Verlag, New York, 1993. doi: 10.1007/978-1-4757-2249-9. Google Scholar
J. H. Conway and N. J. A. Sloane, The optimal isodual lattice quantizer in three dimensions, Adv. Math. Commun., 1 (2007), 257–260. doi: 10.3934/amc.2007.1.257. Google Scholar
P. Elia, B. A. Sethuraman and P. V. Kumar, Perfect space-time codes for any number of antennas, IEEE Trans. Inform. Theory, 53 (2007), 3853–3868. doi: 10.1109/TIT.2007.907502. Google Scholar
X. Hou and F. Oggier, Modular lattices from a variation of Construction A over number fields, Adv. Math. Commun., 11 (2017), 719–745. doi: 10.3934/amc.2017053. Google Scholar
G. C. Jorge, A. A. de Andrade, S. I. R. Costa and J. E. Strapasson, Algebraic constructions of densest lattices, J. Algebra, 429 (2015), 218–235. doi: 10.1016/j.jalgebra.2014.12.044. Google Scholar
G. C. Jorge and S. I. R. Costa, On rotated $D_n$-lattices constructed via totally real number fields, Arch. Math. (Basel), 100 (2013), 323–332. doi: 10.1007/s00013-013-0501-8. Google Scholar
G. C. Jorge, A. J. Ferrari and S. I. R. Costa, Rotated $D_n$-lattices, J. Number Theory, 132 (2012), 2397–2406. doi: 10.1016/j.jnt.2012.05.002. Google Scholar
D. Micciancio and S. Goldwasser, Complexity of Lattice Problems. A Cryptographic Perspective, The Kluwer International Series in Engineering and Computer Science, 671, Kluwer Academic Publishers, Boston, MA, 2002. doi: 10.1007/978-1-4615-0897-7. Google Scholar
F. Oggier, Algebraic Methods for Channel Coding, Ph.D Thesis, École Polytechnique Fédérale in Lausanne, Lausanne, 2005. Google Scholar
F. Oggier and E. Bayer-Fluckiger, Best rotated cubic lattice constellations for the Rayleigh fading channel, Proceedings of IEEE International Symposium on Information Theory, Yokohama, Japan, 2003. Google Scholar
P. Samuel, Algebraic Theory of Numbers, Houghton Mifflin Co., Boston, MA, 1970,109pp. Google Scholar
L. C. Washington, Introduction to Cyclotomic Fields, Graduate Texts in Mathematics, 83, Springer-Verlag, New York, 1997. doi: 10.1007/978-1-4612-1934-7. Google Scholar
Table 1. Normalized minimum product distance versus center density (from [5,12,15,16,18,19] and the results presented here)
$r $ $n $ $\sqrt[n]{d_{p}(\mathbb{Z}^n)} $ $\sqrt[n]{d_{p}(D_n)} $ $\sqrt[n]{d_{p}(A_n)} $ $\delta(\mathbb{Z}^n) $ $\delta(D_n) $ $\delta(A_n) $
$4 $ $3 $ $0.52275 $ $0.41491 $ $0.44544 $ $0.12500 $ $0.17677 $ $0.17677 $
$5 $ $7 $ $0.30080 $ $- $ $0.27602 $ $0.00780 $ $0.04419 $ $0.03125 $
$6 $ $15 $ $0.20138 $ $0.19229 $ $0.18513 $ $0.00003 $ $0.00276 $ $0.00138 $
$7 $ $31 $ $0.06220 $ $- $ $0.12782 $ $10^{-10} $ $10^{-5} $ $10^{-6} $
$8 $ $63 $ $0.09221 $ $0.09120 $ $0.08936 $ $10^{-19} $ $10^{-10} $ $10^{-11} $
$9 $ $127 $ $0.04542 $ $- $ $0.06284 $ $10^{-39} $ $10^{-20} $ $10^{-21} $
$10 $ $255 $ $0.03172 $ $- $ $0.04431 $ $10^{-77} $ $10^{-39} $ $10^{-40} $
$11 $ $511 $ $0.01819 $ $- $ $0.03129 $ $10^{-154} $ $10^{-78} $ $10^{-79} $
$12 $ $1023 $ $0.01569 $ $- $ $0.02211 $ $10^{-308} $ $10^{-155} $ $10^{-152} $
$14 $ $4095 $ $0.01106 $ $0.01106 $ $0.01106 $ $10^{-1233} $ $10^{-617} $ $10^{-619} $
$15 $ $8191 $ $0.00163 $ $- $ $0.00781 $ $10^{-2466} $ $10^{-1234} $ $10^{-1235} $
$16 $ $16383 $ $0.00319 $ $- $ $0.00552 $ $10^{-4932} $ $10^{-2467} $ $10^{-2468} $
$18 $ $65535 $ $0.00276 $ $0.00276 $ $0.00276 $ $10^{-19729} $ $10^{-9865} $ $10^{-9867} $
$19 $ $131071 $ $0.00079 $ $- $ $0.00195 $ $10^{-39457} $ $10^{-19729} $ $10^{-19731} $
$20 $ $262143 $ $0.00138 $ $0.00138 $ $0.00138 $ $10^{-78913} $ $10^{-39457} $ $10^{-39460} $
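As a quick sanity check, the r and n columns of Table 1 follow the relation n = 2^{r−2} − 1 stated in the abstract; a short Python loop reproduces the pairs (the values of r below simply mirror those appearing in the table):

```python
# Reproduce the (r, n) pairs of Table 1 from n = 2^(r-2) - 1.
for r in [4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 16, 18, 19, 20]:
    print(r, 2 ** (r - 2) - 1)   # e.g. 4 -> 3, 5 -> 7, ..., 20 -> 262143
```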
Processing a Piezoelectric Accelerometer Output Using a Charge Amplifier
May 08, 2022 by Dr. Steve Arar
Learn about charge amplifiers which are used for charge output conversion of a piezoelectric sensor into a usable voltage signal.
In a previous article, we discussed the basics of piezoelectric accelerometers. In this article, we'll look at charge amplifiers that are commonly used to convert the charge output of a piezoelectric sensor into a usable voltage signal.
Background on Piezoelectric Accelerometers
Using a piezoelectric element, a piezoelectric accelerometer produces a charge output proportional to the applied acceleration. A charge output is a difficult type of signal to measure because it can gradually diminish over time through leakage resistance.
Moreover, with the typical sensing elements used in piezoelectric accelerometers, these sensors produce only a small amount of charge, in the range of tens or hundreds of picocoulombs per newton. As a result, signal conditioning circuitry is often required to extract the acceleration information without any charge being dissipated. This calls for amplification stages with very high input impedance, so that the produced charge does not leak away through the input impedance of the amplifier, which appears in parallel with the sensing element.
In fact, while the piezoelectric effect was discovered in 1880 by Pierre and Jacques Curie, it was of no practical use up until the 1950s due to a lack of amplifiers with sufficiently high input impedance. A charge amplifier is the technology of choice when processing the output of a piezoelectric sensor. Charge amplifiers convert the charge produced by the sensor to a usable voltage signal.
The articles "Understanding and Implementing Charge Amplifiers for Piezoelectric Sensor Systems" and "How to Design Charge Amplifiers for Piezoelectric Sensors" provide a good introduction to the basics of charge amplifiers.
Below, we'll have a brief overview of the basic concepts along with some extra details.
Piezoelectric Sensor Equivalent Electrical Circuits
To get started, Figure 1 shows two equivalent electrical circuits that can be used to model a piezoelectric sensor.
Figure 1. Two example circuit models (a) (b) for piezoelectric sensors along with their schematic symbol (c).
A piezoelectric sensing element consists of a dielectric material placed between two electrodes. When a mechanical force is applied, the sensor produces some charge. With that in mind, a piezoelectric accelerometer can be modeled as a capacitor that charges itself when subjected to acceleration. This view leads to the circuit model in Figure 1(a). In this equivalent circuit, a charge source, qp, is placed in parallel with the capacitance of the sensor Cp. The resistor Rp models the insulation resistance of the sensor, which creates a leakage path for the produced charge.
Figure 1(b), on the other hand, depicts another circuit model that uses a voltage source in series with the sensor capacitor to take the effect of the produced charge into account. The output voltage of an open-circuit piezoelectric sensing element is equal to the produced charge qp divided by the capacitance Cp. In Figure 1(b), Veq is incorporated to produce the open-circuit voltage of the sensor. Finally, Figure 1(c) shows the typical schematic symbol of a piezoelectric sensor.
Charge Amplifier Configuration—Finding the Output Voltage
The basic configuration of a charge amplifier is shown in Figure 2.
Figure 2. A schematic showing the configuration of a charge amplifier within a sensor.
In this figure, the capacitor CC + CIN models the cable capacitance plus the input capacitance of the charge amplifier. When the sensor is subjected to acceleration, the charge produced by the sensor, qp, appears across the capacitors Cp and CC + CIN.
The output voltage of the sensor attempts to change the potential of the inverting input of the op-amp. However, we know that due to the negative feedback mechanism and the high gain of the op-amp, the inverting input of the op-amp remains at virtual ground.
The op-amp actually transfers some electric charge to the inverting input to null the output voltage of the sensor and keep the inverting input at virtual ground. This charge is equal to the charge produced by the sensor and has the opposite polarity. The op-amp provides this charge through the feedback path, i.e. through the combination of RF and CF.
With an appropriately designed charge amplifier, RF is much larger than the impedance of CF in the frequency range of interest. Therefore, CF is the dominant element in the feedback path, and the charge the amplifier transfers to the inverting input is provided through the feedback capacitor. In other words, the charge amplifier compensates for the electric charge yielded by the sensor, qp, with an equal charge of opposite polarity in the feedback capacitor CF.
Thus, the output voltage, which is equal to the voltage across CF, can be found as:
$$V_{\, out} = -\frac{q_{p}}{C_{F}}$$
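As a rough numerical illustration (the values below are assumed for the example and are not taken from a datasheet), a sensor event producing 50 pC into a 1 nF feedback capacitor gives an output of −50 mV:

```python
# Illustrative charge-amplifier output: assumed 10 pC/g sensitivity, 5 g acceleration, CF = 1 nF.
q_p = 10e-12 * 5     # charge produced by the sensor, in coulombs (50 pC)
C_F = 1e-9           # feedback capacitance, in farads

V_out = -q_p / C_F   # output voltage of the charge amplifier
print(f"V_out = {V_out * 1e3:.1f} mV")   # -> -50.0 mV
```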
Main Advantages of Using a Charge Amplifier
With a charge amplifier, the voltage across the sensor is ideally zero. Therefore, no current can flow through any insulation resistance in parallel with the sensor such as the insulation resistance of the cable or the leakage resistance of the sensor Rp. Thus, the charge produced by the sensor is not dissipated. Besides, the output voltage is only a function of the feedback capacitor and hence, the sensor and cable capacitance cannot change the gain of the circuit.
Charge Amplifier Time Constant Parameter—Feedback Resistor
The feedback resistor RF provides a DC path for the inverting input of the amplifier and sets the DC voltage of this node. However, adding this resistor can limit the accuracy when measuring a DC (or very low-frequency) acceleration signal.
As we discussed above, the charge produced by the sensor is transferred to the feedback capacitor through the charge amplifier operation. This charge can gradually leak off through the feedback resistor, which is in parallel with CF.
In fact, the quasi-static behavior of the amplifier is determined by the time constant parameter:
$$\tau=R_{F}C_{F}$$
In the context of charge amplifiers, the quasi-static (or near static) behavior refers to the measurement of signals that remain constant for a relatively long time duration. For measuring very low-frequency signals, the time constant should be maximized.
To better understand the effect of the time constant parameter on our measurements, consider the waveforms shown in Figure 3.
Figure 3. Output charge amplifier (bottom) and sensor signal (top) waveforms. Image used courtesy of Kistler.
In this figure, the top waveform shows the charge produced by the sensor, while the bottom waveform shows the output of the charge amplifier. In this example, it is assumed that the charge waveform has a fixed DC value along with some high-frequency components. The high-frequency components of the input appear at the output as expected. However, the DC value of the output, which is initially close to the DC value of the input, gradually approaches zero volts. This tendency is due to the fact that a static charge stored in CF leaks away through RF.
As you can see, after a time interval of one τ, the DC value of the output is reduced to about 37% of its initial value. With some types of charge amplifiers, it is possible to switch between different feedback resistor values to adjust the time constant parameter depending on the low-frequency content of the acceleration signal.
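The droop of the quasi-static (DC) component follows the exponential decay exp(−t/τ). The sketch below uses assumed component values (RF = 10 GΩ, CF = 1 nF) purely for illustration:

```python
import math

# Assumed feedback components for illustration only.
R_F = 10e9        # feedback resistance, in ohms
C_F = 1e-9        # feedback capacitance, in farads
tau = R_F * C_F   # time constant, in seconds (10 s for these values)

for t in [0.0, tau, 2 * tau, 5 * tau]:
    remaining = math.exp(-t / tau)   # fraction of the initial DC value left
    print(f"t = {t:5.1f} s -> {100 * remaining:5.1f} % of the DC value remains")
```

After one time constant the DC component has fallen to roughly 37% of its initial value, which is what the waveforms above show.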
Charge Amplifiers With a Reset Switch
Alternatively, some charge amplifiers incorporate a reset switch instead of a feedback resistor, as illustrated in Figure 4, which gives us the maximum time constant value.
Figure 4. A schematic showing a charge amplifier, using a reset switch, in configuration with a sensor.
Before making a measurement, the switch is turned on to discharge the feedback capacitor and set the DC voltage of the inverting input of the op-amp. Then, the switch is turned off to start the measurement phase, as seen in Figure 5.
Figure 5. The circuit operation of the charge amplifier. Image used courtesy of Kistler
Again, the upper curve shows the charge produced by the sensor, and the lower curve depicts the output of the charge amplifier. Note that, when the switch is on, the output is zero. As a result, the reset switch also fixes the zero point for the subsequent measurement.
Although incorporating a reset switch maximizes the time constant, it makes the circuit susceptible to drift. Drift refers to a change in the output of the charge amplifier that occurs over a period of time and is not caused by a change in the physical parameter being measured (acceleration in our discussion). Drift is caused by several different non-ideal effects, such as the input bias current and offset voltage of the op-amp.
To further this discussion, the effect of the feedback resistor on the amplifier's low-frequency response and drift behavior should be evaluated in greater detail.
Technical advance
Modifying Hofstee standard setting for assessments that vary in difficulty, and to determine boundaries for different levels of achievement
Steven A. Burr1,
John Whittle2,
Lucy C. Fairclough3,
Lee Coombes1,4 &
Ian Todd3
Fixed mark grade boundaries for non-linear assessment scales fail to account for variations in assessment difficulty. Where assessment difficulty varies more than the ability of successive cohorts or the quality of the teaching, anchoring grade boundaries to median cohort performance should provide an effective method for setting standards.
This study investigated the use of a modified Hofstee (MH) method for setting unsatisfactory/satisfactory and satisfactory/excellent grade boundaries for multiple choice question-style assessments, adjusted using the cohort median to obviate the effect of subjective judgements and provision of grade quotas.
Outcomes for the MH method were compared with formula scoring/correction for guessing (FS/CFG) for 11 assessments, indicating that there were no significant differences between MH and FS/CFG in either the effective unsatisfactory/satisfactory grade boundary or the proportion of unsatisfactory graded candidates (p > 0.05). However the boundary for excellent performance was significantly higher for MH (p < 0.01), and the proportion of candidates returned as excellent was significantly lower (p < 0.01). MH also generated performance profiles and pass marks that were not significantly different from those given by the Ebel method of criterion-referenced standard setting.
This supports MH as an objective model for calculating variable grade boundaries, adjusted for test difficulty. Furthermore, it easily creates boundaries for unsatisfactory/satisfactory and satisfactory/excellent performance that are protected against grade inflation. It could be implemented as a stand-alone method of standard setting, or as part of the post-examination analysis of results for assessments for which pre-examination criterion-referenced standard setting is employed.
Many university assessment systems have established pre-existing passing scores for determining degree classifications after application of an appropriate correction factor to account for guessing [1]. However, it is clear that variations in test difficulty have a marked effect on pass/fail rates for different cohorts [2], and thus predetermined fixed standards are increasingly difficult to justify and defend.
There is no single gold standard for setting grade boundaries for multiple choice question (MCQ)-style assessments, and criterion-based approaches (such as Angoff or Ebel) rely on panels of judges reviewing each question item [3]. However these criterion-based methods are resource intensive and susceptible to a high degree of inter-reviewer variability [4]. The alternative, norm-referenced, approaches involve failing a fixed proportion of each cohort and thus appear unfair. However norm-referencing is only unfair if there is significant variation in performance between cohorts, which is unlikely to be a significant factor [1]. It therefore seems reasonable to consider a method that sits somewhere between these two approaches.
The Hofstee method [5] can be described as a compromise between criterion-referenced and norm-referenced methods of standard setting, and is used in the UK to set standards on undergraduate exams [6]. While it is often not a first choice for many practitioners [7], it is still considered to be a common method and is reported in standard setting guides alongside more familiar methods such as Angoff and Ebel [8]. While there is evidence that the Hofstee method produces appropriate, stable and reliable passing scores [9, 10], concerns remain about its fairness and credibility [11].
Figure 1a shows how passing scores are determined in the Hofstee method, and represents the results of three assessments of different levels of difficulty on a Hofstee plot. If a fixed passing score (criterion-referencing) was applied (represented by the vertical dashed line) then a very large difference is observed across the three assessments in the proportion of candidates who fail (indicated by the three open circles). If, on the other hand, a fixed proportion of the cohort were failed (norm-referencing, represented by the horizontal dashed line) then there is a large difference in the pass marks across the three assessments (indicated by the three open squares).
Model for Hofstee standard setting. a Performance curves for three assessments (harder, intermediate and easier); open circles indicate the percent of the cohort who fail each assessment with a criterion reference pass mark of 55 %; open squares indicate the pass marks for a norm-referenced failure rate of 10 % of the cohort; solid squares indicate the pass marks and percent of the cohort who fail by application of the Hofstee method. b Application of Hofstee criteria to determine a BEP (indicated by the solid squares) – see text for details. c Application of modified Hofstee criteria to determine BSP (solid circles) and BEP (solid squares) – see text for details. d Graphical presentation of 'cranking' the standard set marks from an assessment to moderated marks on the University scale where the pass mark of 40 % equates to the BSP% and the 70 % distinction/first-class mark equates to BEP%. An individual student's standard set mark is mapped to the new moderated mark through linear interpolation on the gradient of the relevant line (e.g. X% mapped to Y%)
The application of the Hofstee method is represented by the bold solid lines in Fig. 1a. The vertical lines represent the maximum and minimum satisfactory boundary (i.e. the highest and lowest scores required to pass). The horizontal line represents the maximum percentage fails (i.e. the highest acceptable proportion of candidates failing (set at 20 % in this example); note that the minimum failure rate is set at 0 % as all of the candidates could pass). The effective passing score, or Boundary for Satisfactory Performance (BSP), is set to the point at which the diagonal line intersects with the curve for each assessment (indicated by the solid squares). For the intermediate assessment, the effective BSP is the same by all three methods. For the harder and easier assessments, the change in BSP (and consequent change in proportion of candidates who fail) by applying Hofstee is less than given by either the criterion or norm-referenced methods alone. Nevertheless, there remain perceived inequities that could be accounted for by modifying the Hofstee method to: control for differences in test difficulty; remove the percentage fail quotas; and, include additional grade boundaries.
This study aims to develop a modified form of the Hofstee method for MCQ-style assessments that does not require time-consuming and relatively subjective assessments of the difficulty of individual questions as in the Angoff and Ebel methods, and which could be used to determine the boundary between satisfactory and excellent performance to produce a Boundary for Excellent Performance (BEP) as well as a BSP. Comparison of candidate outcomes with an established formula scoring or 'correction for guessing' (FS/CFG) method, which had been used for a number of years prior to the introduction of standard setting, should provide evidence to examine whether the modified Hofstee (MH) approach performs more appropriately. FS/CFG is not a standard setting method – it is simply a mechanism to moderate for guessing of correct responses; therefore, comparison is also made to the criterion-referenced Ebel standard setting method for determination of the BSP to establish whether the MH method performs similarly to this well-established standard setting method.
Applying the Hofstee method to determine a Boundary for Excellent Performance (BEP)
We propose that the criteria of the Hofstee method for determining a BSP (Background section, Fig. 1a) can be 'inverted' for the determination of a BEP (i.e. 'first class' or 'distinction'). This is illustrated in Fig. 1b, employing the same three performance curves as in Fig. 1a. The vertical solid lines represent the maximum and minimum boundary for excellence marks (i.e. the highest and lowest scores required to be judged 'first class'). The percentage above the horizontal solid line represents the maximum acceptable proportion of candidates judged to be 'first class' (set at 100–80 = 20 % in this example); the minimum proportion is 0 % as it is feasible that none of the candidates demonstrate 'excellence'). The diagonal line then gives the effective Boundary for Excellent Performance (BEP) where it intersects with the curve for each assessment.
Determining boundaries based on the median performance of a cohort
The essence of the classical Hofstee method is that judges decide on the minimum and maximum failure rate and acceptable pass mark. The pass mark range is based on the perceived difficulty of the assessment, with harder exams typically setting lower BSPs, though the exact range used is a subjective decision. However, if we assume that the students who take an assessment are sufficiently representative of the whole population of possible students and that the quality of the teaching is stable, then their performance can be used as a measure of the assessment difficulty. We verified this assumption by analysing response data for 31 multiple choice and extended matching questions (incorporating a total of 73 correct items and many more distractors) that were attempted by between three and six different cohorts of first year medical students in summative exams over a seven-year period (with between 237 and 263 students in each cohort). The percentage of students in the different cohorts that chose the correct items had a mean coefficient of variation of only 3.8 % (standard deviation ±3.0 %). This shows that correct response rates are stable across multiple cohorts of students over multiple years. We therefore propose that the BSP is adjusted based on the performance of the cohort as a whole, as judged by the median percentage mark (the median is used because cohort performances are frequently skewed with a tail caused by a small number of disproportionately low-scoring fails). Similarly, a BEP can be determined, as described in the previous paragraph, with its position set relative to the median performance of the cohort as a measure of assessment difficulty rather than setting an arbitrary boundary for excellence.
Boundaries for modified Hofstee
We undertook initial modelling on historical assessment data in order to determine acceptable boundaries in relation to both the BSP and the BEP. The assessments were composed of objective multiple choice questions (MCQ) – this term is used here generically to include single best answer, multiple choice, extended matching questions, etc. These assessments were delivered in the open-source Rogo e-Assessment Management System (http://rogo-oss.nottingham.ac.uk/); they represented summative assessments from a range of biomedical science disciplines delivered in semesters 1–4 of a medical course.
We decided that the maximum and minimum proportion of the cohort who can either fail or achieve excellence should be set at 100 and 0 %, respectively, since it could be deemed unfair to set a limit. Thus, all students in a cohort would be deemed 'unsatisfactory' if none of them scored any marks; equally, all students in a cohort would be deemed 'excellent' if they all scored 100 %.
Several years' experience of applying FS/CFG to MCQ assessments for a range of modules had shown that failure rates up to 10 % (but usually ≤5 %) on individual modules were acceptable in terms of level of difficulty and identifying students whose level of knowledge and understanding were unsatisfactory when applying a pass mark of 40 %. We therefore deemed that the MH protocol should be calibrated to generate a similar failure rate. We found that this was achieved by setting the maximum unsatisfactory mark at 20 percentage marks below the median percentage mark for the assessment, or setting it at 60 %, whichever is lower. Similarly, for an MSc module with a pass mark of 50 %, an acceptable failure rate (compared to FS/CFG) was achieved by setting the maximum unsatisfactory mark at 10 percentage marks below the median, or at 70 %, whichever was lower. In all cases, the minimum satisfactory mark is set at 0 %; the reason for this is that we wanted to avoid any application of FS/CFG as this automatically assumes an element of guessing, which may not always be the case even for candidates whose score is less than the FS/CFG mark. Setting the lower limit of the pass mark at zero then allows for the theoretical possibility that all the questions in an assessment were so difficult that any marks achieved should merit a pass, and that the number of distractors is so large that the likelihood of guessing a correct answer is insignificant (e.g. < 1/20).
We judged that, for most assessments, the proportion of candidates deemed to show excellent performance should be between 5 and 30 % of the cohort; we found that this was achieved when the minimum mark for excellence is set at 10 percentage marks above the median percentage mark for the assessment, or is set at 85 %, whichever is lower. The maximum mark for excellence is set at 100 %; this allows for the theoretical possibility that all the questions are sufficiently easy that demonstrating excellence requires all responses to be correct.
It should be noted that it is not appropriate to use a formula to determine proportions of performances that are deemed to be either unsatisfactory or excellent as this would tie the method to norm-referencing with fixed quotas in particular categories of performance; the MH method has been deliberately designed to avoid generating fixed quotas of unsatisfactory or excellent candidates. For example, it is possible that no student performances would be deemed unsatisfactory or excellent if the spread of marks around the median is very low – e.g. if the median mark was 75 % and the whole distribution of marks was between 55 and 85 %.
The 'whichever is lower' clause for the determination of both BSP and BEP is in the candidates' favour. In addition, all percentage marks were calculated to two decimal places to plot the Hofstee cumulative frequency curves to avoid inaccuracies which might result from the premature rounding of marks. The effective boundaries (cut scores) determined for satisfactory and excellent performance (BSP and BEP, respectively) were then rounded to the whole percentage mark below the cut score to two decimal places (e.g. 56.87 would be rounded to 56) in order also to be in the candidates' favour.
The application of the above principles for setting the boundaries (for assessments with a University pass mark of 40 %) is illustrated in Fig. 1c for the same sets of results shown in Fig. 1a and b. The median scores in the harder, intermediate and easier assessments are 60, 70 and 80 %, respectively; this generates upper limits for the BSP of 40, 50 and 60 %, respectively (median% minus 20 %). Diagonal lines are then drawn from these boundaries to the point where 100 % of the cohort would achieve 0 %, and the intersection of these diagonals with the performance curves gives the BSP. The three performance curves generate lower limits for the BEP of 70, 80 and 85 %; this represents the median% plus 10 %, except for the 'easier' assessment, where this would be greater than 85 %. Diagonal lines are then drawn to the point where 0 % of the cohort achieves 100 %, and the intersections with the performance curves give the BEP.
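A minimal sketch of these boundary-limit rules (for assessments with the 40 % University pass mark), reproducing the limits quoted above for the three example medians; the function name is purely illustrative:

```python
def mh_limits(median_mark):
    """Upper limit for the BSP and lower limit for the BEP under the MH rules above."""
    bsp_upper = min(median_mark - 20, 60)   # maximum unsatisfactory mark
    bep_lower = min(median_mark + 10, 85)   # minimum mark for excellence
    return bsp_upper, bep_lower

for median_mark in (60, 70, 80):            # harder, intermediate and easier assessments
    print(median_mark, mh_limits(median_mark))   # -> (40, 70), (50, 80) and (60, 85)
```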
Modified Hofstee (MH) standard setting was applied to the results of 8 independent summative MCQ assessments (Modules 1–8) all sat in the same academic session by year 1 or year 2 undergraduate (UG) medical student cohorts (258–266 students), and a further 2 optional modules (Modules 9–10) each sat by a different subset of the same year 2 UG cohort (25–29 students); all of these assessments had a University-scale UG BSP of 40 % and a BEP of 70 %; they represented summative assessments collectively comprising all biomedical science disciplines delivered in semesters 1–4 of a UG medical course. MH was also applied to the results of an MSc level 3 post-graduate (PG) module in basic immunology with a University-scale PG BSP of 50 % and a BEP of 70 % (Module 11); this assessment was taken by three separate cohorts (37–50 students). The study was approved by the Ethics Committee of The School of Life Sciences, Faculty of Medicine and Health Sciences, University of Nottingham, UK (Ethics Reference Number B181114IT). Participant consent was deemed not to be necessary for this anonymised assessment data and was exempted from the ethical approval process. Permission to use the assessment data (to which access is restricted) was granted by the Associate Dean for Medical Education, Faculty of Medicine and Health Sciences, University of Nottingham, on behalf of the University. The summative assessment results returned by MH were compared with the assessment results that would have been returned if FS/CFG was applied. Summary statistics were expressed as medians with interquartile ranges, and significance determined by Wilcoxon matched-pairs tests.
The random mark used to determine the FS/CFG was calculated for each question using the formula R = N²/T, where R is the random mark, N is the number of correct options, and T is the total number of options (where each correct option is worth one mark). For example, for a single best answer question with four options: R = 1²/4 = 0.25; for a multiple response question with two correct options from a choice of five: R = 2²/5 = 0.8. The overall random mark for an assessment was then calculated as the sum of the random marks for all the questions in the assessment. The FS/CFG-adjusted mark for each candidate was then calculated using the formula:
$$ \text{FS/CFG-adjusted mark} = \frac{\text{mark achieved} - \text{random mark}}{\text{total available marks} - \text{random mark}} $$
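The random-mark and FS/CFG calculations can be sketched as follows; the question list and function names are illustrative only:

```python
# Each question is (number of correct options N, total options T, marks available).
questions = [
    (1, 4, 1),   # single best answer: R = 1^2 / 4 = 0.25
    (2, 5, 2),   # two correct options from five: R = 2^2 / 5 = 0.8
]

random_mark = sum(n ** 2 / t for n, t, _ in questions)   # overall random mark
total_marks = sum(marks for _, _, marks in questions)

def fs_cfg_percent(mark_achieved):
    """FS/CFG-adjusted percentage mark for one candidate."""
    return 100 * (mark_achieved - random_mark) / (total_marks - random_mark)

print(fs_cfg_percent(3.0))    # a candidate with full marks is still awarded 100 %
print(fs_cfg_percent(1.05))   # a candidate scoring only the random mark is awarded 0 %
```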
Worked example
Table 1 shows a worked example of marks processing using the MH protocol for the module 10 assessment (taken by 29 students):
Table 1 Worked example of marks processing using the modified Hofstee protocol for the module 10 assessment
The marks expressed as percentages are shown in Table 1, column 1 to two decimal places (in this instance all the % marks are integers).
The median mark for the cohort is 78 %; applying the protocol described above, this sets the upper limit for the Boundary for Satisfactory Performance (BSP) at 58 % (20 % below the median is <60 %), and sets the lower limit for the Boundary for Excellent Performance (BEP) at 85 % (10 % above the median is >85 %).
The cumulative frequency curve of assessment marks (%) for percentage of the cohort (Y) against percentage correct score (X) is plotted (e.g. using the Survival Curve option in Graphpad Prism version 5.0d). This 'performance' curve of the cohort (% correct versus % of cohort achieving that mark or lower) is subjected to the MH protocol to determine the actual BSP and BEP (shown in Fig. 2b).
a-c Examples of applying the modified Hofstee protocol to the cumulative frequency curves of student cohort performance in MCQ-style assessments: a module 2; b module 10; c module 11. Modules 2 and 10 have a University pass mark of 40 % whereas module 11 has a University pass mark of 50 %; all three modules have a University first class/distinction mark of 70 %. d-f The frequency distributions of student performance in the same three assessments comparing the outcomes given by FS/CFG (dashed curves) and the MH protocol (solid curves) following moderation ('cranking') to the University scales – 40 %/70 % for modules 2 (d) and 10 (e), and 50 %/70 % for module 11 (f)
Where the diagonal lines cross the performance curve give: BSP = 52 % (rounded down from 52.83 % actual); BEP = 88 % (rounded down from 88.02 % actual) (Fig. 2b).
The candidates' percentage marks (X%) are converted using an equation in Microsoft Excel to normalise BSP (52 %) to a university boundary of 40 % (minimum pass mark), and BEP (88 %) to a university boundary of 70 % (minimum first class or distinction mark) to derive their mark (Y%) as in Fig. 1d. This gives the final standard set percentage marks shown in Table 1, column 2.
For comparison, the equivalent percentage marks given using FS/CFG are shown in Table 1, column 3. In columns 2 and 3, the marks below the BSP are shown in bold; the marks on or above the BEP are shown in bold italics.
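To make the worked example concrete, the following is a minimal Python sketch of the whole MH pipeline as we read it from Figs. 1c and 1d and the steps above: the performance curve is the percentage of the cohort scoring at or below a given mark; the BSP diagonal runs from (0 %, 100 % of cohort) down to (the BSP upper limit, 0 %); the BEP diagonal runs from (the BEP lower limit, 100 %) down to (100 %, 0 %); and the final 'cranking' pivots at 0, BSP, BEP and 100. The scanning approach, tie handling and function names are simplifications and are not taken from the paper.

```python
from statistics import median
import math

def cohort_at_or_below(marks, x):
    """Performance curve: percentage of the cohort scoring x or lower."""
    return 100.0 * sum(1 for m in marks if m <= x) / len(marks)

def mh_cut_scores(marks, pass_offset=20, bsp_cap=60, excel_offset=10, bep_cap=85):
    """Return the (BSP, BEP) cut scores, rounded down in the candidates' favour.

    Assumes the cohort median exceeds pass_offset, as in the assessments described here.
    """
    med = median(marks)
    bsp_max = min(med - pass_offset, bsp_cap)   # upper limit for the BSP
    bep_min = min(med + excel_offset, bep_cap)  # lower limit for the BEP

    # BSP: first mark at which the performance curve meets the falling diagonal
    # from (0, 100) to (bsp_max, 0).
    x = 0.0
    while cohort_at_or_below(marks, x) < 100.0 * (1.0 - x / bsp_max):
        x += 0.01
    bsp = math.floor(x)

    # BEP: first mark at which the performance curve meets the falling diagonal
    # from (bep_min, 100) to (100, 0).
    x = bep_min
    while cohort_at_or_below(marks, x) < 100.0 * (100.0 - x) / (100.0 - bep_min):
        x += 0.01
    bep = math.floor(x)

    return bsp, bep

def crank(mark, bsp, bep, uni_pass=40.0, uni_first=70.0):
    """Map a raw percentage mark onto the University scale (one reading of Fig. 1d)."""
    if mark <= bsp:
        return uni_pass * mark / bsp
    if mark <= bep:
        return uni_pass + (uni_first - uni_pass) * (mark - bsp) / (bep - bsp)
    return uni_first + (100.0 - uni_first) * (mark - bep) / (100.0 - bep)
```

Applied to a cohort's percentage marks, mh_cut_scores returns the rounded-down BSP and BEP (for the PG module the arguments would be pass_offset=10 and bsp_cap=70), and crank then maps each candidate's mark onto the 40 %/70 % University scale as in Table 1, column 2.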
Determining BSP and BEP using MH analysis
Figure 2(a-c) shows three examples of the cumulative frequency curves, each used to derive the Boundary for Satisfactory Performance (BSP) and Boundary for Excellent Performance (BEP) for a different individual module. Modules 2 and 10 have a University-scale UG BSP of 40 %, and module 11 has a University-scale PG BSP of 50 %; all have University-scale BEPs of 70 %.
Comparison of MH and FS/CFG for determining satisfactory performance
Figure 3a shows a comparison of MH with FS/CFG for determining satisfactory performance in the ten UG module assessments. There is no significant difference (p > 0.05) between the proportion of candidates returned as unsatisfactory: FS/CFG median = 3.25 % of candidates and MH median = 2.3 % of candidates. Thus, MH returns a proportion of candidates deemed to show unsatisfactory performance (fail) similar to that given by FS/CFG, when applying a maximum MH BSP of 60 %. It should be noted, however, that the interquartile range for the percentage of failing candidates is much lower using MH (3.850 – 1.325 = 2.525 %) than using FS/CFG (7.175 – 1.4 = 5.775 %). Given that all ten assessments were taken in the same academic session by first or second year medical students (or a subset thereof), this is consistent with the standard setting properties of MH taking account of exam difficulty, which FS/CFG does not.
Comparison of the outcomes for applying MH or FS/CFG to the assessments of ten different UG modules: a the percentage of candidates showing unsatisfactory performance; b the percentage marks defining the effective boundary for satisfactory performance; c the percentage of candidates showing excellent performance; d the percentage marks defining the effective boundary for excellent performance
The reason for the similar fail rates given by MH and FS/CFG is shown in Fig. 3b, which compares the BSP determined by MH with the 'effective' BSP using FS/CFG. The latter is the percentage of the total marks in the assessment that a candidate must achieve in order to be awarded a mark of 40 % after subtraction of the random mark from both the actual mark and the total marks; this is derived from the formula 0.4 = (effective BSP – random mark) / (total marks – random mark). There is no significant difference (p > 0.05) between the effective BSPs: FS/CFG median = 57.1 % (interquartile range = 55.88–59.03) and MH median = 56.0 % (interquartile range = 53.5–58.0).
Comparison of MH and FS/CFG for determining excellent performance
Figure 3c shows a comparison of MH with FS/CFG for determining excellent performance in the ten UG module assessments. There is a significant difference (p < 0.01) between the proportion of candidates returned as demonstrating excellent performance: FS/CFG median = 55.3 % of candidates (interquartile range = 51.7–63.63) and MH median = 24.7 % (interquartile range = 18.53–28.6). Thus, MH returns a significantly lower proportion of candidates deemed to show excellent performance (first class, distinction) when applying a maximum lower limit of BEP of 85 %.
The reason for the very different rates of performance deemed to be 'excellent' given by MH and FS/CFG is also shown in Figure 3d, which compares the BEP determined by MH with the 'effective' BEP using FS/CFG. The latter is the percentage of the total marks in the assessment that a candidate must achieve in order to be awarded a mark of 70 % after subtraction of the random mark from both the actual mark and the total marks; this is derived from the formula 0.7 = (effective BEP – random mark) / (total marks – random mark). There is a significant difference (p < 0.01) between the effective BEPs: FS/CFG median = 78.6 % (interquartile range = 77.95–79.5) and MH median = 87 % (interquartile range = 86.5–88.0).
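Rearranging these two expressions gives the effective FS/CFG boundaries directly; the total and random marks below are assumed values chosen only to illustrate the arithmetic:

```python
# Effective raw-percentage boundaries implied by FS/CFG for an assumed assessment.
total_marks = 100.0
random_mark = 30.0   # assumed overall random mark for the assessment

effective_bsp = 0.4 * (total_marks - random_mark) + random_mark   # raw % needed for a moderated 40 %
effective_bep = 0.7 * (total_marks - random_mark) + random_mark   # raw % needed for a moderated 70 %
print(effective_bsp, effective_bep)   # -> 58.0 and 79.0 for these assumed values
```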
Comparison of the marks profile generated by MH and FS/CFG
Figure 2(d-f) shows the frequency distribution of marks generated by MH (after moderation to the University scale with UG 40 %/70 % boundaries and PG 50 %/70 % boundaries) and FS/CFG, again using modules 2, 10 and 11 as examples. In both cases, FS/CFG gives marks distributions heavily skewed to the right, with the majority of candidates being awarded scores close to, or greater than, the BEP. The application of MH shifts the marks distribution to the left, generating a more symmetrical 'normalised' distribution; in most cases, the majority of candidates are then awarded scores between the BSP and BEP. In addition, lower performing candidates (with marks close to, or below, the BSP) are awarded somewhat higher scores using MH than with FS/CFG. The more favourable marks generated by MH for lower performing candidates is because FS/CFG assumes an element of guessing, so that a candidate whose actual mark is the same as the random mark is awarded 0 % if FS/CFG is applied.
Correlation between marks awarded using MH or Ebel standard setting
All UK medical schools are now required to employ standard setting methodologies in their assessments [6]. It is therefore important to assess the validity of the MH method in comparison to a widely used, criterion-referenced standard setting method, such as that proposed by Ebel [12]. In this method, judges rate each question according to difficulty (easy/medium/hard) and relevance (essential/important/nice-to-know); this generates nine categories of questions (easy/essential, medium/important, etc.) and, for each, a judgement is made on the percentage of questions in that category that a 'borderline candidate' on the cusp of failing would be expected to answer correctly. Multiplying the number of marks associated with each category by the corresponding 'borderline percentage correct', and then adding up the values for all nine categories, gives the pass mark (BSP).
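A toy example of the Ebel calculation described above might look as follows; all marks and 'borderline percentage correct' judgements are hypothetical and chosen only to show the arithmetic.

```python
# Marks available in each difficulty x relevance category (hypothetical)
marks_per_category = {
    ("easy", "essential"): 20, ("easy", "important"): 10, ("easy", "nice-to-know"): 5,
    ("medium", "essential"): 25, ("medium", "important"): 15, ("medium", "nice-to-know"): 5,
    ("hard", "essential"): 10, ("hard", "important"): 7, ("hard", "nice-to-know"): 3,
}
# Judged proportion a borderline candidate would answer correctly (hypothetical)
borderline_correct = {
    ("easy", "essential"): 0.90, ("easy", "important"): 0.80, ("easy", "nice-to-know"): 0.70,
    ("medium", "essential"): 0.70, ("medium", "important"): 0.60, ("medium", "nice-to-know"): 0.50,
    ("hard", "essential"): 0.50, ("hard", "important"): 0.40, ("hard", "nice-to-know"): 0.30,
}

total_marks = sum(marks_per_category.values())
pass_marks = sum(marks_per_category[c] * borderline_correct[c] for c in marks_per_category)
ebel_bsp_pct = 100.0 * pass_marks / total_marks   # 67.2 % for these hypothetical judgements
```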
Figure 4 shows the marks generated for the assessment of a year 1 module using Ebel or MH standard setting to determine the BSP. A Pearson product–moment correlation coefficient between marks generated by Ebel and MH standard setting is highly significant (r = 0.9998, n = 262, p < .0001). Out of the 262 candidates, one would have failed by applying the Ebel BSP, and three would have failed by applying the MH BSP.
Correlation of the marks profile generated by applying Ebel or MH standard setting to a year 1 medical student assessment. Following standard setting by either method, the BSP was converted to a university-set pass mark of 40 % (indicated by the horizontal and vertical dotted lines on the graphs). The solid circles show the marks of candidates determined by both methods of standard setting
Similar pass marks generated by MH and Ebel standard setting
As a further comparison between the MH and Ebel methods, data from interdisciplinary clinical MCQ assessments were analysed in terms of the BSP generated by Ebel standard setting panels of judges (the procedure implemented for the exams) and the MH method applied retrospectively to the assessment results. Each assessment was attempted by cohorts of >300 fifth year medical students, with two MCQ exams taken by each cohort; data over a five-year period were analysed, i.e. ten assessments in total. For these high-stakes, final year assessments, we determined that the upper limit of the MH BSP should be set at 15 percentage marks below the median percentage mark for the cohort. A paired samples t-test indicated there was no significant difference between the BSPs for each assessment generated by applying Ebel or MH standard setting (t(9) = 1.417, p = 0.1902); there was also no significant difference in the number of candidates who failed when applying the BSP generated by Ebel or MH standard setting (p = 0.7031 by Wilcoxon matched-pairs signed rank test).
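The two tests used in this comparison are standard and can be reproduced with, for example, scipy; the paired values below are hypothetical stand-ins for the ten assessments, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired values for ten assessments (illustrative only)
ebel_bsp = np.array([55.0, 57.5, 54.0, 58.0, 56.5, 53.0, 59.0, 55.5, 57.0, 54.5])
mh_bsp   = np.array([54.0, 58.0, 55.0, 57.0, 56.0, 54.0, 58.5, 55.0, 57.5, 53.5])
ebel_fails = np.array([3, 5, 2, 4, 6, 1, 7, 3, 4, 2])
mh_fails   = np.array([4, 4, 2, 5, 5, 2, 6, 3, 5, 2])

t_stat, p_t = stats.ttest_rel(ebel_bsp, mh_bsp)      # paired-samples t-test on the BSPs
w_stat, p_w = stats.wilcoxon(ebel_fails, mh_fails)   # matched-pairs signed-rank test on fail counts
```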
We have described here a modification of the Hofstee standard setting method that employs the median score of a cohort to determine cut scores rather than the time-consuming and relatively subjective decisions of a panel of judges. In addition, we have used this to determine different boundaries (e.g. unsatisfactory/satisfactory and satisfactory/excellent), although this principle could also be applied to the conventional Hofstee method or to other standard setting methods; indeed, we have previously used the Ebel method to standard set the satisfactory/excellent borderline (unpublished observations). The principles of the MH method could readily be applied to other cut-points and to other grading systems that are in use around the world, in addition to the standard UK scoring system exemplified here. Furthermore, although the MH method has been developed for MCQ-style assessments, the principles of the method could be adapted to other assessment formats.
When compared to outcomes based on a FS/CFG method, the MH method produced similar boundaries for satisfactory performance and similar proportions of unsatisfactory graded candidates. The boundary for excellent performance was significantly higher using the MH method, with a significantly lower proportion of candidates awarded excellent grades. Furthermore, the MH protocol generated marks profiles and BSPs very similar to those given by applying the Ebel method of criterion-reference standard setting.
A key feature of the MH protocol described here is that the upper limit of the BSP is set at an absolute (rather than relative) distance below the median mark of the cohort. This has several advantages: it controls for variation in difficulty between assessments; it bases the satisfactory/unsatisfactory boundary mark on the average performance of all the candidates (while accounting for a non-normal distribution); and it avoids the need to impose a quota (such as failing all those below the 95 % confidence interval). We have also demonstrated that the MH protocol allows appropriate BSPs to be established for assessments requiring demonstration of different levels of competence (e.g. a 40 or 50 % pass mark on the University scale). Furthermore, whereas others have reported that the conventional Hofstee method generates only small changes in BSP for relatively large changes in the boundaries for fail rates and pass marks [11], we have demonstrated that the MH method detailed in the present report generates much larger changes in actual BSP when the position of the upper limit of the BSP is varied relative to the median performance of the cohort [13]; this is intuitively consistent with the standard setting process.
In addition, our results demonstrate that, by establishing a standard-set BEP an absolute distance above the cohort median, MH reduced the proportion of candidates being awarded excellent grades and consequently protected against 'marks creep'. Thus, including control over the cut score for excellence mitigates grade inflation and the danger of devaluing a qualification [14]. This appears to be particularly evident when using objectively marked assessments (e.g. multiple choice questions, extended matching questions, etc.), which involve identification of correct information, rather than 'unprompted recall' of correct information (e.g. short answer questions, essay questions, etc.). Furthermore, where marks are converted back to a range correlating to university degree categories, determining cut scores for both satisfactory and excellent performance increases fairness where the categories correspond to non-linear mark brackets (e.g. fail = 0–39 %, 3rd = 40–49 %, 2.2 = 50–59 %, 2.1 = 60–69 %, 1st = 70–100 %).
With regard to the time taken to apply the MH procedure, we have found that the whole process, from accessing the 'raw' marks to generating the final marks (standard set and converted to a university scale), takes about 15 min for each assessment. This is considerably less labour-intensive than purely criterion-referenced methods (e.g. Ebel, Angoff), and MH has the further advantage that much of the process can be automated by computer. Indeed, programming of the MH method in the Rogo Assessment System means that the final, standard-set results can be generated in 1–2 min. The pragmatic need for standard setting methodologies that are 'affordable' in terms of staff time and resources has also been a key factor in other standard setting protocols reported recently [15]. Furthermore, in situations where pre-examination criterion-referenced standard setting methods, such as Ebel, are likely to be the method of choice (e.g. 'high-stakes' clinical exams), MH could be used as part of the post-examination analysis of results in the quality assurance of the whole standard setting process. If, for example, Ebel and MH generated significantly different pass marks and/or numbers of fails for the same assessment, further investigation could then be undertaken to determine whether the Ebel panel of judges set the pass mark appropriately (too high or low), or whether the cohort who sat this particular assessment could be deemed 'non-representative' based on experience of previous cohorts.
A possible problem with setting satisfactory boundaries which are referenced to the performance of the cohort (in this case, the median mark) is that, if the candidates in the cohort collectively agreed to 'try less hard' in a particular module, this might result in a lower number of fails. This possibility seems highly unlikely, particularly in a large cohort of candidates. However, the Hofstee method should counter such a possibility because the diagonal cut-off line means that the lower the effective grade boundary gets between the minimum and maximum satisfactory marks, the higher is the proportion of unsatisfactory candidates. So, applying this strategy means that candidates who might otherwise have just passed are more likely to fail.
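To make the compromise mechanism concrete, the sketch below implements one reading of the Hofstee diagonal with a modified-Hofstee-style cap on the maximum acceptable cut score. The bounds, the 15-mark offset and the simulated cohort are all illustrative assumptions; the authors' exact implementation may differ.

```python
import numpy as np

def hofstee_cut_score(marks_pct, min_cut, max_cut, min_fail, max_fail):
    """One reading of the Hofstee compromise: scan candidate cut scores and
    return the one where the cumulative fail curve crosses the diagonal from
    (min_cut, max_fail) to (max_cut, min_fail)."""
    marks = np.asarray(marks_pct, dtype=float)
    cuts = np.linspace(min_cut, max_cut, 1001)
    fail = np.array([100.0 * np.mean(marks < c) for c in cuts])   # % failing at each cut
    diag = max_fail + (cuts - min_cut) * (min_fail - max_fail) / (max_cut - min_cut)
    return cuts[np.argmin(np.abs(fail - diag))]                   # closest approach to the crossing

# Simulated cohort (illustrative); the cap sits 15 percentage marks below the median
marks = np.random.default_rng(0).normal(68, 10, 300).clip(0, 100)
max_cut = float(np.median(marks)) - 15.0
bsp = hofstee_cut_score(marks, min_cut=40.0, max_cut=max_cut, min_fail=0.0, max_fail=20.0)
```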
Where competencies that can be clearly defined need to be demonstrated, a criterion-referenced form of standard setting should be advocated for determining boundaries for satisfactory performance [16, 17]. However, for assessments not concerned with competency, particularly for scientific knowledge where the difficulty and relevance are debatable, or when boundaries other than satisfactory/unsatisfactory need to be considered, the method presented here may be more robust; it has been found to be effective for cohorts ranging from 25 to 266 candidates. Indeed, as shown in Fig. 4, the MH protocol can deliver outcomes that are very similar to those generated by criterion-referenced methods, such as Ebel. Furthermore, the boundaries adopted in the MH protocol could be adjusted to correspond with local requirements.
Application of the modified Hofstee standard setting approach to resit assessments must take account of the relatively small number of candidates usually involved. In addition, a resit cohort is usually comprised of candidates with lower levels of achievement. Thus, any standard set for a resit must not be derived from student performance as the candidates involved are not likely to be representative of a complete cohort. However, there are ways in which resit assessments can have standards set for them using data from the modified Hofstee. If the resit assessment is identical to (or closely mirrors) an assessment used previously, the data from the original sitting of the assessment can be used to provide a standard using combined data, or the original standard can be retained and reapplied. Alternatively, if the resit assessment is made up of questions from a variety of previous assessments, either a hypothetical curve could be generated from the cumulative performance of these questions on previous occasions, or a generic curve for that module could be generated if there are several sets of previous data, and these show that performance is stable from year to year for that module. This would mean that new questions should not be used in a resit assessment. Clearly, if the assessment is totally new (so there are no results from previous years for the questions used in the resit assessment), an alternative method of standard setting independent of candidate performance would have to be explored.
There is clearly much scope for further analysis and development of the modified Hofstee protocol described here. For example, the effects of applying different percentage-mark distances from the median for the upper limit of the BSP and the lower limit of the BEP could be investigated, to identify settings appropriate in different circumstances. Indeed, we would recommend that this be done whenever the method is applied in a new setting, in order to ensure appropriate and reliable outcomes. The modified Hofstee method could also be compared to other established standard setting methods (in addition to Ebel), such as the Angoff method.
Modified Hofstee provides an objective model for calculating variable grade boundaries to take account of assessment difficulty, which, if necessary, can be converted to a fixed scale to take account of local institutional requirements. Furthermore, MH produces awards comparable to FS/CFG for the majority of candidates while treating poorer performers more fairly. It also delivers outcomes very similar to those generated by the more labour-intensive Ebel method of criterion-referenced standard setting. Modelling indicates that MH can be applied to determine an excellence/satisfactory boundary as easily as for a satisfactory/unsatisfactory boundary and this can protect against grade inflation. Furthermore, MH could be implemented as a stand-alone method of standard setting, or as part of the post-examination analysis of results for assessments for which pre-examination criterion-referenced standard setting is employed.
Abbreviations
BEP: boundary for excellent performance
BSP: boundary for satisfactory performance
FS/CFG: formula scoring / correction for guessing
MH: modified Hofstee
van der Vleuten CP. Setting and maintaining standards in multiple choice examinations: guide supplement 37.1 - Viewpoint. Med Teach. 2010;32:174–6.
Muijtjens AM, Schuwirth LW, Cohen-Schotanus J, Thoben AJ, van der Vleuten CP. Benchmarking by cross-institutional comparison of student achievement in a progress test. Med Educ. 2008;42:82–8.
Bandaranayake RC. Setting and maintaining standards in multiple choice examinations: AMEE Guide No 37. Med Teach. 2008;30:836–45.
Verhoeven BH, Verwijnen GM, Muijtjens AM, Scherpbier AJ, van der Vleuten CP. Panel expertise for an Angoff standard setting procedure in progress testing: item writers compared to recently graduated students. Med Educ. 2002;36:860–7.
Hofstee WKB. The case for compromise in educational selection and grading. In: Anderson SB, Helmick JS, editors. On Educational Testing. San Francisco: Jossey-Bass; 1983. p. 109–27.
General Medical Council. Assessment in Undergraduate Medical Education Advice Supplementary to Tomorrow's Doctors (2009). London: GMC; 2011.
Cizek G, Bunch MB. Standard Setting: A Guide to Establishing and Evaluating Performance Standards on Tests. London: Sage; 2007.
Mckinley DW, Norcini JJ. How to set standards on performance based examinations: AMEE Guide No 85. Med Teach. 2014;36:97–110.
Schindler N, Corcoran J, DaRosa D. Description and impact of using a standard-setting method for determining pass/fail scores in a surgery clerkship. Am J Surg. 2007;193:252–7.
Wayne DB, Fudala MJ, Butter J, Siddall VJ, Feinglass J, Wade LD, et al. Comparison of Two standard-setting methods for advanced cardiac life support training. Acad Med. 2005;80:S63–6.
Tavakol M, Dennick R. Modelling the Hofstee method reveals problems. Med Teach. 2014;36(2):181–2.
Ebel R. Essentials of Educational Measurement. Englewood Cliffs, NJ: Prentice-Hall; 1972.
Todd I, Burr S, Whittle J, Fairclough L. Modifying the Hofstee method may overcome problems. Med Teach. 2014;36:358–9.
Paton G. First-class degrees double in a decade. http://www.telegraph.co.uk/education/universityeducation/9011098/Warning-over-grade-inflation-as-first-class-degrees-double.html
Cohen-Schotanus J, van der Vleuten CP. A standard setting method with the best performing students as point of reference: practical and affordable. Med Teach. 2010;32:154–60.
Friedman Ben-David M. An extended summary of AMEE Guide No 18. Med Teach. 2012;22:120–30.
Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet. 2001;357:945–9.
The authors are grateful to Dr R Dennick and Dr M Tavakol in the School of Medicine at the University of Nottingham for helpful comments and discussions.
Author information and affiliations
Steven A. Burr & Lee Coombes: Collaboration for the Advancement of Medical Education Research and Assessment (CAMERA), Peninsula Schools of Medicine and Dentistry, Plymouth University, Devon, PL4 8AA, UK
John Whittle: School of Medicine, University of Nottingham, Queen's Medical Centre, Nottingham, NG7 2UH, UK
Lucy C. Fairclough & Ian Todd: School of Life Sciences, University of Nottingham, Queen's Medical Centre, Nottingham, NG7 2UH, UK
Lee Coombes (current address): Institute of Medical Education, School of Medicine, University of Cardiff, Cardiff, CF14 4YS, UK
Correspondence to Ian Todd.
IT conceived the method and analysed the data. JW provided the conversion calculations. SAB drafted the manuscript. IT, JW, LF and LC helped refine the manuscript. All authors read and approved the final manuscript.
Burr, S.A., Whittle, J., Fairclough, L.C. et al. Modifying Hofstee standard setting for assessments that vary in difficulty, and to determine boundaries for different levels of achievement. BMC Med Educ 16, 34 (2016). https://doi.org/10.1186/s12909-016-0555-y
Just like a woman? New comparative evidence on the gender income gap across Eastern Europe and Central Asia
Niels-Hugo Blunch (ORCID: orcid.org/0000-0002-3211-2440)
I examine the incidence and determinants of the gender income gap in Kazakhstan, Macedonia, Moldova, Serbia, Tajikistan, and Ukraine using recent household data based on an identical survey instrument across countries. Four main results are established, using a range of estimators, including OLS, interval regression, and quantile regression: (1) the presence of a substantively large gender income gap (favoring males) in all six countries; (2) some evidence of a gender-related glass ceiling in some of these countries; (3) some evidence that endowments diminish the income gaps, while the returns to characteristics increase the gaps; and (4) while observed individual characteristics explain a part of the gaps, a substantial part of the income gap is left unexplained. In sum, these results are consistent with the presence of income discrimination towards females but at the same time also point towards the importance of continued attention towards institutions and economic policy for decreasing the gender income gap in these former formally gender neutral economies—notably through attention towards the maternity and paternity leave system, as well as public provision of child care.
JEL Classification: J16, J31, J7
Despite a decline in recent years, the gender gap in income (or earnings or wages) undoubtedly is one of the most persistent regularities in the labor market. Most of the available evidence, however, is for Western economies, especially the USA (Albrecht et al. 2003; Altonji and Blank 1999; Blau 1998; Blau and Kahn 1992, 1996, 1997, 2000, 2003; Cho and Cho 2011), though evidence for the former socialist regimes of Eastern Europe and Central Asia is starting to emerge (Brainerd 2000; Grajek 2003; Hunt 2002; Orazem and Vodopivec 2000).
This decline notwithstanding, the inequality of women in the labor market remains important for several reasons. Most notably, the lack of gender equality in the labor market is likely associated with economic dependence of women more generally, leading to a lack of influence in decision making (including investments in health and education for the household and for children) and to greater susceptibility to violence in the home. If the position of women in the labor market were instead improved, these outcomes would likely be reversed as well.
In light of these considerations, this paper provides a thorough examination of the incidence and nature of the gender income gap across six former socialist countries from Eastern Europe and Central Asia: Kazakhstan, Macedonia, Moldova, Serbia, Tajikistan, and Ukraine. Again, while evidence on the gender gap in transition countries in general is starting to emerge, it seems fair to say that it is still the case that little or no systematic data collection and reporting has been taking place, so far—thus mostly resulting in only fragmented data analysis for individual countries, at best. Contrary to this, the data examined here originate from a recent UNDP/UNICEF survey which was conducted using identical questionnaires for all six countries, thus greatly facilitating such comparative analysis as is pursued here. Indeed, examining the gender income gap for a collection of transition countries using comparable survey instruments is likely to increase our understanding of the gender income gap in transition countries in general. This includes the extent to which income-based gender discrimination seems to be present, as well as the extent to which the drivers of a possible gender income gap differs across countries—thus ultimately also serving as inputs for policy makers to help better address such gender-based discrimination by implementing appropriate gender-targeted policies.
The analysis starts out by establishing the prevalence of a substantively large gender income gap (favoring males) in all six countries, then goes on to estimate Mincer-type income regressions, and finally decomposes this gap using several alternative twofold and threefold decompositions to test the robustness of results—for both aggregate and detailed gender income gap decompositions, where the latter decomposes the origins of the gender income gap into its component part in terms of (groups of) specific explanatory variables such as education and sector of occupation.
The remainder of this paper is structured as follows. First, the next section reviews recent developments in the six countries examined here to provide a foundation for the subsequent analysis, including a context in which to both perform the analysis and interpret the subsequent results. Section 3 presents the data, discusses the construction of the dependent and explanatory variables, and estimates the raw gender income gaps. This is followed, in Section 4, by a discussion of the estimation strategy and related issues. Section 5 presents the main results while, finally, Section 6 concludes, discusses policy implications, and provides directions for further research.
Recent developments in the six countries under study with a focus on the labor market and gender-related developments
This section first gives a brief historical background and motivation for studying gender and labor market issues in the six former socialist countries of Eastern Europe and Central Asia examined here and then goes on to present recent economic trends in these countries.
Gender and the labor market in transition economies
Across the former socialist countries in Eastern Europe and Central Asia, wages were mostly assigned by central planners by establishing an occupational wage scale within each industry—and wages were then set as a multiple of the wage of the lowest-grade occupation, the base wage (Brainerd 2000). Another noteworthy feature of the wage setting scheme in most former socialist countries is the extreme compression of wage scales—so that top managers, for example, would rarely earn more than five times as much as the average manual worker, whereas the same ratio has been known to reach 20-to-1 or more in the USA (Brainerd 2000). It should be noted that while other labor market institutions included widespread membership in official unions, with the exception of Poland, these unions played little role in wage determination (Brainerd 2000). Together with government policies such as relatively high minimum wages and generous maternity leave and day care benefits in most of these countries (Brainerd 2000; Kuddo 2009), this would seem to have both encouraged women to work and also to have generated relatively low (if any) gender wage gaps in these former socialist countries back in the days of socialism. Indeed, "the socialist countries of Eastern Europe and the former Soviet Union were long committed—at least nominally—to gender equality in the labor market" (Brainerd 2000: 138). For example, employer discrimination against pregnant and nursing women was prohibited, and mothers with small children had the right to work part-time—so that female labor force rates and female educational attainment in the former Soviet Union were among the highest in the world (Abdurazakova 2010). Genuine gender equality, however, was never achieved—for example, women tended to concentrate in the state subsidized sectors of the economy such as health care, medicine, education, textile, and food industries where average wages were below the overall national average, and women were substantially underrepresented in leadership and top managerial positions (Abdurazakova 2010).
But now that more than two decades have passed since the fall of the Berlin Wall, things may have changed in terms of male and female labor market outcomes—especially since some of these former socialist countries have followed very different paths. Additionally, there has since also been an international financial crisis, with possible differential effects for women and men. So, in order to put the subsequent empirical analysis in context, it would seem interesting to first examine the extent to which these formerly socialist—and previously relatively identical, at least in terms of labor market policies, especially pertaining to the relative position of women and men—countries appear similar or different in terms of labor market liberalization and gender protection, as well as the general economic developments (the latter is presented in the next sub-section).
To be sure, most former socialist countries have gone through major liberalizations—in the labor market, as well as in the economy more generally. Whereas labor markets were previously characterized by a universal and mandatory system of job security and employment stability, this has increasingly been replaced by a more liberal institutional framework in several dimensions, including hirings and firings, as well as more flexible labor relations overall (Kuddo 2009). These developments have gone on at different speeds and with legislation made at different times. On the one hand, some countries such as Slovenia (1990), Hungary and Estonia (1992), Kyrgyzstan (1994), and Albania, Croatia, and Uzbekistan (1995) adopted new labor laws early on while other countries merely amended existing laws (Kuddo 2009). Since the emergence of the new millennium, however, a second generation of labor legislation reforms has been carried out in many former socialist countries—not least due to the membership for several of these countries of the European Union, which required explicitly transforming the national labor law, leading to an overhauling of labor laws in most other former socialist countries, as well (Kuddo 2009). Among other things, this has led not only to a more flexible labor market in most countries but also to entitlements on the worker side, which may "have the potential to adversely affect labor market participation" (Kuddo 2009). Among the side effects here are un- and underemployment (in the formal sector), as well as an increased informal sector.
The last couple of decades have also seen explicit gender-related developments in the transition economies of the former Soviet Union, including the six economies studied here. Most notably, initially, many countries went from the formal gender equality from the Soviet times to a period with trends of re-traditionalization, which then led to the deterioration of the position of women in the economy overall. For example, women's representation in political decision-making at all levels of central and local government sharply declined following abolition of the quota system widely practiced by the countries of the former Soviet system in 1989, so that in the mid-1990s, the average proportion of women in national elected bodies was less than 8% (Abdurazakova 2010). Additionally, the breakdown of the Eastern bloc led to an unprecedented growth and revival of religious and customary practices with impact on the status, choices, and opportunities for women, especially in rural areas (Abdurazakova 2010: 9).
Following this initial period of re-traditionalization vis-à-vis gender roles and the relative position of women in society, however, most if not all of the former transition countries of Eastern Europe and Central Asia have formally become part of several international initiatives to improve human rights, including women's rights. This includes joining the Beijing Declaration and Platform of Action and acceding to the principal international human rights instruments including the Convention on the Elimination of all Forms of Discrimination against Women (CEDAW), as well as regular government report to the Committee on the Elimination of Discrimination against Women, and active participation in subsequent review and appraisal processes (Abdurazakova 2010). Additionally, many former socialist countries, including the six examined here, have adopted stand-alone gender equality laws—in addition to confirming constitutional provisions for equality between the sexes (similar to what was declared under Soviet rule) (Abdurazakova 2010).
Relatedly, after initial abandonment of many of the programs supporting women in the labor market immediately following the breakdown of the Eastern bloc, in recent years, additional protective measures in the labor laws of many transition countries have been taken to counter low birth rates and low employment rates of females, as well as to support the status of women with small children—including maternity, parental, and paternity leaves (Kuddo 2009). The maternity leaves are particularly widespread and are also quite substantial in most transition economies, including the six studied here, where the financed durations are as follows: Kazakhstan 126 days, Tajikistan 140 days, Ukraine 126 days, Moldova 126 days, FYR Macedonia 9 months, and Serbia 365 days (Kuddo 2009: Table A10). In some countries, the duration even increases with the birth order—among the countries studied here, this is the case for Serbia, where the leave period for the third and each successive child is paid for 2 years (Kuddo 2009: Table A10). Notably, this is much more generous than most Western European countries—where in Germany, Ireland, and Portugal, for example, the duration of paid maternity leave is less than 100 days (Kuddo 2009: 52).
While paternity leaves are not that widespread and frequently of much lower duration,Footnote 1 parental leaves are quite widespread among this region and frequently of substantial duration. This is also the case for two of the countries studied here, namely Moldova and Tajikistan, where the durations are until the child reaches 3 years of age and 18 months of age, respectivelyFootnote 2; in both countries, the leave can be used by the mother, the father, or any relative who takes care of the child (Kuddo 2009: Table A10)—though when keeping in mind the widespread traditional mindset regarding gender roles, it seems likely that the leave will most likely be taken by the mother, thus effectively working as an enhancement of the maternity leave in these two countries.
Possibly related to these developments, female educational attainment has improved tremendously since the beginning of the economic transition across most of these countries, especially in higher education; indeed, only in Tajikistan, Uzbekistan, and Kosovo does it appear that girls are at a disadvantage relative to boys in education (Paci 2002).
In sum, progress has been made in terms of promoting gender equality and women's rights in the countries of the former Eastern bloc—including the six countries studied here. At the same time, the countries in this region remain characterized by a traditional mindset in terms of gender roles; indeed, many transition countries still have constitutional provisions that aim to protect motherhood—thus effectively working as protectionist labor legislation specifically targeted at women (Abdurazakova 2010). Additionally, "even in cases where such legislation has been abolished, discrimination against women in the labor market because of their real or potential role as wife and mother is still extremely widespread" (Abdurazakova 2010: 10). Notwithstanding the improvements in female educational attainment, females therefore still appear to be at a strong disadvantage in many of the dimensions of the former socialist countries as a whole—not least the labor market.
There therefore seems to be ample reason to explore in more detail the nature and correlates of this disadvantage, specifically in terms of the female income disadvantage in transition countries, to help inform policy makers so that these potential inequalities can be addressed. Before moving on to the actual data analysis, however, recent economic trends in the six former socialist countries examined in this paper are briefly reviewed so as to "set the stage" for the subsequent empirical analysis.
Economic trends in six former socialist economies
While the six countries examined in this paper are all former socialist economies, they differ widely among themselves in terms of key economic indicators (Table 1). First, their GNI per capita (in 2008) are very different, ranging from US$600 for Tajikistan, so that that country (along with Moldova) is below the average per capita income of even the lower-middle income countries,Footnote 3 to US$6170 for Kazakhstan, thus bringing that country well along the way towards upper-middle income status. Population growth also varies widely, ranging from Moldova, which declined at 1.4% annually over the period 2002–2008, to Kazakhstan, which grew at almost 1% annually over the same period.
Table 1 Key macro indicators for the six transition economies: 2008
The growth of the labor force also differed widely across countries over the period 2002–2008, ranging from, again, a negative growth in Moldova, at 2.4%, to almost 5% annual growth of the labor force in Tajikistan. In terms of the gender composition of the labor force, however, these countries—except for Kazakhstan (and Serbia, where data is not available)—turn out to be quite different from Western economies by having relatively low labor force participation rates, especially for women. This probably reflects a combination of the increased non-participation and unemployment, as well as the increased importance of the informal sector in post-socialist countries (Kuddo 2009). Another noteworthy feature of the labor force participation rates of these countries is that the gender gap is quite substantial for several of these countries, again reflecting the more traditional norms and traditions in these countries as far as women and the labor market are concerned. For comparison, the labor force participation rates in the USA in 2008 were 68.3% for women and 80.1% for men—both substantially higher, and with a narrower gender gap, than in several of these transition countries.
The change in the sectoral composition of the six economies (in terms of the sectoral share of GDP) in recent years roughly corresponds to the similar change in Western economies, though with some variation across countries (Table 2). In particular, the agricultural sector declined in relative terms in all six economies (with the caveat that data are not available for the first period for Serbia—and that the agricultural sector remains relatively large in Tajikistan, which therefore can be interpreted as lagging behind in the changing sectoral composition towards much less agricultural importance in the national economy witnessed in most other countries across the world, especially developed countries), whereas the service sectors mostly increase—also in line with the developments in Western economies in recent years. The evidence for industry (and manufacturing) is more mixed, especially for Moldova, where the massive decline of the agricultural sector with about two thirds leaves room for both a massive decline of the industrial sector and an impressive increase in the service sector. Given the similarities with the developments in the Western economies, it would, perhaps, therefore also not be surprising to discover the presence of a substantial gender income gap for these six countries—a finding which they would then also "share" with most (if not all) Western economies.
Table 2 Change in the sectoral composition of the six economies, 1998–2008 (percent of GDP)
Data and descriptive analysis
The UNDP Social Exclusion Survey is a comprehensive, nationally representative household survey aimed at evaluating living conditions and the level of social exclusion to help better plan future social and economic programs in a country. The survey was carried out in Kazakhstan, Macedonia, Moldova, Serbia, Tajikistan, and Ukraine using an identical survey instrument in all six countries, covering the adult population (15 years and above). The surveys used a multi-stage clustered and stratified sampling design involving multiple stages for each country including the region, rural-urban location, and cities/administrative division of an individual country, where the main respondent within the randomly selected household was selected either using the "next birthday" principleFootnote 4 or the Kish Grid,Footnote 5 both of which help ensure that the respondent is chosen randomly among all the eligible respondents in the selected household.Footnote 6 Basic household information (age, gender, educational attainment) was then recorded for all household members 15 years and older, as was additional information, including labor market information such as employment status, income, and job characteristics (if working).
Interviews were conducted November–December 2009. Two thousand seven hundred individuals were interviewed in each country (except for Serbia, where 300 Roma persons in the so-called Roma booster part of the survey—as of the time of this analysis—were not released as part of the main dataset, leading to an initial sample for Serbia of 2401 individuals).
Since the dependent variable is income, the sample was first conditioned on individuals who answered "yes" to having worked for payment in cash or kind for at least 1 day during the past month (7044 observations). The kind of employment which women answering "yes" to this question have, however, is likely to differ significantly across countries, so that the wage gap in one country is then potentially estimated using many more part-time or informal workers than the estimate in another country. So as to base the wage gap estimates on a similarly employed group of women (and men) in each country, the sample is therefore further restricted to full-time workers only (6032 observations). Some workers answer "do not have any income" when asked about their own total net monthly income later in the questionnaire and therefore must be excluded, leading to an initial sample of 5971 individuals. Some individuals are either temporarily on leave from their main job and/or have missing information on income or on one or more explanatory variables and are therefore dropped from the estimation sample, as well, leading to a final total estimation sample of 5533 individuals, distributed across the six countries (and across gender) as follows: Kazakhstan 1109 (455 females, 654 males), Macedonia 928 (438 females, 490 males), Moldova 860 (508 females, 352 males), Serbia 989 (452 females, 537 males), Tajikistan 614 (245 females, 369 males), and Ukraine 1033 (496 females, 537 males). The means and standard deviations for the final estimation samples by country and gender are reported in Table 7 in Appendix 1.
The dependent variable is the individual's total net monthly income. This is potentially an issue, since in addition to labor income, the total income also includes non-labor components such as capital and rental income and public transfers, where the issue of gender discrimination studied here appears less transparent. Additionally, this measure implicitly neglects the role of taxes.Footnote 7 Since the sample is restricted to individuals who worked during the past month, however, the major part of this income is plausibly labor earnings.Footnote 8 One issue, however, is that the responses are reported in intervalsFootnote 9 (five total)—the upper and lower bounds of each of which were determined by the respective country teamFootnote 10—rather than the actual incomes themselves, i.e., as a continuous measure. A continuous variable was therefore created using the interval midpoints to impute actual income.Footnote 11 While this is clearly less than ideal, it is a feasible way to proceed with an analysis of aggregate income data such as this—and thereby utilize this otherwise very desirable dataset. At a minimum, this clearly provides much less variation in the data than if working directly with a continuous income measure.Footnote 12 To examine this issue a bit more—or, if nothing else, at least to bring full disclosure on this issue—Table 3 presents the original intervals and percentage of workers in each interval. From the table, the intervals appear quite different across countries; for Tajikistan, especially, the choice of the bottom income bracket appears particularly problematic, capturing more than three quarters of the entire estimation sample. Again, this is the best I can do with the available data—but is certainly a weakness of this data, which should be kept in mind when evaluating the results, as well as when providing recommendations for future research (and data collection, especially!).
Table 3 Total monthly income data: original intervals and percentage of workers in each interval
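A minimal sketch of the midpoint imputation described above is shown below; the bracket bounds and responses are hypothetical, and an open-ended top bracket would need a separate rule (for example, a multiple of its lower bound).

```python
import numpy as np
import pandas as pd

# Hypothetical income brackets (local currency units) and reported bracket indices
bounds = [(0, 200), (200, 400), (400, 600), (600, 1000), (1000, 2000)]
df = pd.DataFrame({"bracket": [1, 3, 5, 2, 4]})

# Impute a continuous income as the bracket midpoint, then take logs for the Mincer regressions
midpoints = {i + 1: (lo + hi) / 2.0 for i, (lo, hi) in enumerate(bounds)}
df["income_mid"] = df["bracket"].map(midpoints)
df["log_income"] = np.log(df["income_mid"])
```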
The explanatory variables are specified based on standard human capital theory (Becker 1964; Mincer 1974; and, for a more recent exposition, Heckman et al. 2008) and include several potentially important individual and job characteristics, as well as geographical location—all of which have been shown to be important in previous studies of income (or earnings or wage) determinants: years of schooling, age, and age squaredFootnote 13 (to capture potential labor market (and other) experience), ownership/sector (created as a set of five dummy variables (public; private; mixed; cooperative, NGO, and other; and not specifiedFootnote 14)), contract status (dummy variable for no written contract/informal), social insurance coverage (dummy variable for no coverage), and geographical location (dummy for urban location).Footnote 15 Lastly, it should be noted that for several of the questions used for constructing the explanatory variables used in this analysis "Don't know" and "Refuse" were given as additional categories, rather than as simply being missing per se—which is how most other surveys treat these categories. Adding a separate dummy variable for "Don't know/Refuse" helps retain these individuals—who would otherwise be excluded—in the estimation sample, and this is therefore also the approach followed here.
Turning to the descriptive analysis, the average monthly incomes of females are far lower than those of males for all six countries—with the estimated gender gaps ranging from 14.4% in Serbia to 17.3% in Tajikistan, 17.6% in Macedonia, 20.6% in Kazakhstan, 25.5% in Moldova, and, at the top, 30.5% in Ukraine (Table 4). This supports earlier findings (Brainerd 2000; Grajek 2003; Hunt 2002; Orazem and Vodopivec 2000; and Newell and Reilly 2001) of a substantial gender earnings gap in the former socialist economies, much like what has been found in the Western economies. At the same time, there seems to have been a narrowing of the gap—also much like in Western economies. Staneva et al. (2010), which examines 2003 data of hourly wages for two of the countries examined here, namely Serbia and Kazakhstan (as well as Bulgaria and Russia), establishes a male-female gap of 16.1% for Serbia and 47.8% for Kazakhstan. Similarly, Babović (2008) finds a gender gap of 14% for monthly earnings and 17% for hourly wages using 2004 data for Serbia. Notwithstanding the difference in methodology, it nevertheless seems that the gender income gap has narrowed over the 6–7-year period between the two datasets. Again, one should keep in mind the caveats regarding the methodology regarding the collection of income information—here again especially the fact that more than three quarters of the entire estimation sample was captured in the bottom income bracket underlying the estimated income gap for the case of Tajikistan.
Table 4 Raw gender income gap in six Eastern European and Central Asian countries
While substantively large gender income gaps have now been established across all six countries, the objective of the main analysis of this paper is to try to explain these gaps in terms of, on the one hand, characteristics/endowments such as educational attainment and job characteristics and the returns to these characteristics (threefold decomposition) and, on the other hand, observable and unobservable characteristics (twofold decomposition). While the empirical strategy underlying this approach is widely used, it still seems fruitful to review the main components in some detail—which, therefore, is the objective of the next section.
Estimation strategy and related issues
The starting point of the Blinder-Oaxaca approach to decompose income (or other) differentials is an OLS regressionFootnote 16 of the outcome in question, estimated separately across the two relevant groups (Blinder 1973; Oaxaca 1973): here, male and female workers, respectively (suppressing subscripts for individual workers):
$$ {Y}_{\mathrm{M}}={\beta}_{\mathrm{M}}X+{\varepsilon}_{\mathrm{M}} $$
$$ {Y}_{\mathrm{F}}={\beta}_{\mathrm{F}}X+{\varepsilon}_{\mathrm{F}} $$
where \( Y_{\mathrm{M}} \) and \( Y_{\mathrm{F}} \) are the logarithms of monthly income of male and female workers, respectively, \( X \) is a vector of workers' characteristics (education, experience, and so on), \( \beta_{\mathrm{M}} \) and \( \beta_{\mathrm{F}} \) are the returns to the workers' characteristics, and \( \varepsilon_{\mathrm{M}} \) and \( \varepsilon_{\mathrm{F}} \) are error terms.
As such, these regressions are—at least in this context—merely inputs into calculating the decompositions. However, it is potentially fruitful to consider these auxiliary regressions in and of themselves as separate and integral parts of the overall analysis, also, not only because the results from these regressions directly indicate the different returns to characteristics across gender but also because their specification, most notably in terms of explanatory variables, will affect the subsequent decomposition results.
Human capital theory suggests that education and potential experience directly affect income through the impact on individuals' productivity in the labor market and also suggest additional factors that are potentially important determinants of income. Hence, the first part of the multivariate analysis will examine these relationships, using ordinary least squares. Due to the nature of the data, I will also estimate the Mincer regressions using interval regressions (this is basically a generalization of the tobit model, since it extends censoring beyond left-censored data or right-censored data—see Cameron and Trivedi (2010, 548–550) and Wooldridge (2016, sec. 17.4) for more details).
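As a rough illustration of what the interval regression does, the interval-censored normal log-likelihood can be maximised directly; the sketch below is a bare-bones version under assumed variable names, with no standard errors or survey weights, and is not the exact estimator used for the results reported here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def interval_negloglik(params, X, lower, upper):
    """Negative log-likelihood when the latent log income X @ beta + e,
    e ~ N(0, sigma^2), is only known to lie in [lower, upper]
    (use -np.inf / np.inf for open-ended brackets)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    p = norm.cdf((upper - xb) / sigma) - norm.cdf((lower - xb) / sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

def fit_interval_regression(X, lower, upper):
    # Crude starting values; assumes the first column of X is a constant
    mid = np.where(np.isfinite(upper), (lower + upper) / 2.0, lower + 1.0)
    start = np.zeros(X.shape[1] + 1)
    start[0] = mid.mean()
    res = minimize(interval_negloglik, start, args=(X, lower, upper), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])   # coefficient estimates, sigma
```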
Further, if additionally estimating Mincer regressions using quantile regressions instead of OLS (or interval regression)—following, for example, the approach laid out in Albrecht et al. (2003)—it is possible to test for the presence of a glass ceiling related to gender.Footnote 17 I will do that as part of the analysis here, as well. Additionally, while it is debatable whether variables such as industry and occupation which themselves reflect the impact of discrimination should be included as controls, I will examine the importance of adding industry and occupation to the Mincer regressions in a sensitivity analysis.
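A hedged sketch of this glass-ceiling check is given below, using statsmodels' quantile regression on a pooled sample with a female dummy; the variable names are placeholders, and the clustered standard errors of Parente and Santos Silva (2016) are omitted.

```python
import pandas as pd
import statsmodels.formula.api as smf

def gender_gap_by_quantile(df, quantiles=(0.10, 0.25, 0.50, 0.75, 0.90)):
    """Estimate the coefficient on a female dummy at several quantiles of log income;
    a larger (more negative) gap at high quantiles is suggestive of a glass ceiling."""
    gaps = {}
    for q in quantiles:
        fit = smf.quantreg("log_income ~ female + schooling + age + I(age ** 2)",
                           data=df).fit(q=q)
        gaps[q] = fit.params["female"]
    return pd.Series(gaps)
```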
One potentially important econometric issue here is that educational attainment may be endogenous. The main concern is possible omitted variables bias. Preferences and ability, for example, are unobserved and at the same time also, at least to some extent, determine both educational attainment and labor market income. However, as this dataset does not contain any variables that could plausibly act as instruments, it does not appear feasible to address this problem using instrumental variable methods. The effect of any omitted variables will therefore be captured by the error term, possibly causing omitted variables bias. As a result, any subsequent results must be interpreted with caution and hence not given a causal interpretation but rather as merely reflecting associations with labor market income.
Relatedly, there is the possibility that selection could be partly driving any observed gender gap. An apparently small gender income gap, for example, could be due to less qualified female workers withdrawing from the labor market to a greater extent than more qualified female workers. In that case, it would then appear as if the gender income gap is relatively small—and, thus, also that there is not strong evidence for any gender-based income discrimination being present—when in fact, this is all driven by unobservables. Due to the nature of the data—including it being a multipurpose household survey (though focused on social exclusion) rather than specifically a labor force survey—it is not possible to delve more into this issue, unfortunately. As a result, the subsequent results again must be interpreted with caution, keeping in mind this caveat.
Again, these income regressions formally are merely inputs into the decomposition analysis. Specifically, the decomposition analysis amounts to examining to which extent the observed income gaps across gender are attributable to differences in the observable characteristics, to differences in the returns to those characteristics, and to the interaction of the two ("threefold decomposition," see below for details) and, relatedly, to which extent the observed income gaps are due to observable and unobservable characteristics ("twofold decomposition," see below for details). This analysis will comprise the second part of the multivariate empirical analysis and will be pursued as an Oaxaca-Blinder-type decomposition.
Formally, following the methodology of Oaxaca (1973) and Blinder (1973), the difference in mean incomes for male and female workers, denoted R, can be decomposed into three parts (Jann 2008) using the empirical counterparts of Eqs. (1) and (2) aboveFootnote 18
$$ R={\overline{Y}}_{\mathrm{M}}-{\overline{Y}}_{\mathrm{F}}=\left({\overline{X}}_{\mathrm{M}}-{\overline{X}}_{\mathrm{F}}\right){\widehat{\beta}}_{\mathrm{M}}+{\overline{X}}_{\mathrm{M}}\left({\widehat{\beta}}_{\mathrm{M}}-{\widehat{\beta}}_{\mathrm{F}}\right)-\left({\overline{X}}_{\mathrm{M}}-{\overline{X}}_{\mathrm{F}}\right)\left({\widehat{\beta}}_{\mathrm{M}}-{\widehat{\beta}}_{\mathrm{F}}\right) $$
This is a threefold decomposition (Winsborough and Dickinson 1971), where the first term represents the "endowments effect" and explains the differences that are due to worker characteristics (such as education and sector of employment). The second term reflects the "coefficients effect," which shows the differences in the estimated returns to male and female workers' characteristics. Lastly, the third term, the "interaction effect," accounts for the fact that differences in endowments and coefficients between male and female workers exist simultaneously. If male and female workers obtain equal returns to their characteristics, the second and third parts of Eq. (3) will equal zero and income differentials between male and female workers will be explained by differences in endowments alone.
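Eq. (3) translates directly into code. The sketch below computes the three components with the male coefficient vector as reference; it is a minimal illustration that ignores the Delta-method standard errors discussed below.

```python
import numpy as np
import statsmodels.api as sm

def threefold_oaxaca(X_m, y_m, X_f, y_f):
    """Threefold decomposition of Eq. (3): endowments, coefficients, interaction,
    with male coefficients as the reference structure."""
    Xm, Xf = sm.add_constant(X_m), sm.add_constant(X_f)
    b_m = sm.OLS(y_m, Xm).fit().params
    b_f = sm.OLS(y_f, Xf).fit().params
    xbar_m, xbar_f = Xm.mean(axis=0), Xf.mean(axis=0)
    endowments = (xbar_m - xbar_f) @ b_m
    coefficients = xbar_m @ (b_m - b_f)
    interaction = -(xbar_m - xbar_f) @ (b_m - b_f)
    total_gap = y_m.mean() - y_f.mean()   # equals the sum of the three components
    return total_gap, endowments, coefficients, interaction
```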
The above decomposition is formulated based on the prevailing income structure of male workers, i.e., the differences in endowments and coefficients between male and female workers are weighted by the coefficients (returns) of male workers. This seems reasonable for the application here, since males dominate in the labor force, at least in economic terms—as also revealed by the substantial "raw" income gaps presented in Table 4. This is therefore also the approach pursued in the subsequent analysis.Footnote 19
An alternative approach, prominent in the literature on wage discrimination, is based on the assumption that wage differentials are explained by a unifying "non-discriminatory" coefficient vector, denoted β*, which is estimated in a regression that pools together both of the two groups under consideration (here, male and female workers). Then, the income gap can be expressed as
$$ R={\overline{Y}}_{\mathrm{M}}-{\overline{Y}}_{\mathrm{F}}=\left({\overline{X}}_{\mathrm{M}}-{\overline{X}}_{\mathrm{F}}\right){\widehat{\beta}}^{\ast }+{\overline{X}}_{\mathrm{M}}\left({\widehat{\beta}}_{\mathrm{M}}-{\widehat{\beta}}^{\ast}\right)+{\overline{X}}_{\mathrm{F}}\left({\widehat{\beta}}^{\ast }-{\widehat{\beta}}_{\mathrm{F}}\right) $$
The above equation represents the so-called twofoldFootnote 20 decomposition:
$$ R=Q+U $$
where \( Q=\left({\overline{X}}_{\mathrm{M}}-{\overline{X}}_{\mathrm{F}}\right){\widehat{\beta}}^{\ast } \) is the part of the income differential that is "explained" by sample differences assessed with common "returns" across the two groups, and the second term \( U={\overline{X}}_{\mathrm{M}}\left({\widehat{\beta}}_{\mathrm{M}}-{\widehat{\beta}}^{\ast}\right)+{\overline{X}}_{\mathrm{F}}\left({\widehat{\beta}}^{\ast }-{\widehat{\beta}}_{\mathrm{F}}\right) \) is the "unexplained" part not attributed to observed differences in male and female characteristics. The latter part is often treated as discrimination in the literatures on gender and racial income gaps. It is important to note, however, that the "unexplained" part also captures all potential effects of differences in unobserved variables (Jann 2008). And, to be sure, in the application here, it is indeed possible to talk about "discrimination," per se, as being a female worker is an intrinsic characteristic. Again choosing the male income structure as the reference, (4) reduces to
$$ R={\overline{Y}}_{\mathrm{M}}-{\overline{Y}}_{\mathrm{F}}=\left({\overline{X}}_{\mathrm{M}}-{\overline{X}}_{\mathrm{F}}\right){\widehat{\beta}}_{\mathrm{M}}+{\overline{X}}_{\mathrm{F}}\left({\widehat{\beta}}_{\mathrm{M}}-{\widehat{\beta}}_{\mathrm{F}}\right) $$
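The twofold decomposition of Eq. (6) can be sketched in the same way; again the male income structure serves as the reference and standard errors are omitted.

```python
import statsmodels.api as sm

def twofold_oaxaca(X_m, y_m, X_f, y_f):
    """Twofold decomposition of Eq. (6): 'explained' part weighted by the male
    coefficients, 'unexplained' part evaluated at the female means."""
    Xm, Xf = sm.add_constant(X_m), sm.add_constant(X_f)
    b_m = sm.OLS(y_m, Xm).fit().params
    b_f = sm.OLS(y_f, Xf).fit().params
    xbar_m, xbar_f = Xm.mean(axis=0), Xf.mean(axis=0)
    explained = (xbar_m - xbar_f) @ b_m
    unexplained = xbar_f @ (b_m - b_f)
    return explained, unexplained   # they sum to the mean log income gap
```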
Again, while the main analysis here takes the male income structure as the reference, several alternative choices of the baseline (also known as the "absence of discrimination") coefficient vector, i.e., \( {\widehat{\beta}}^{\ast } \) in (4), will be pursued in the sensitivity analysis as a robustness check.
The standard errors of the individual components in Eqs. (3) and (4) above are computed using the Delta method by applying the procedure detailed in Jann (2008), which extends the earlier method developed in Oaxaca and Ransom (1998) to deal with stochastic regressors.
In addition to examining the overall composition of the established income gaps, it is instructive to perform detailed decompositions as well, whereby it is possible to see which explanatory variables contribute the most to the three- and/or twofold overall decompositions. Similar to the OLS regressions, the detailed decomposition estimations all allow for arbitrary heteroskedasticity (Huber 1967; White 1980). So as to condense the wealth of results obtained here—thereby easing their interpretation—the detailed decompositions are done group-wise, rather than for each individual variable (for example, for sector as a whole, rather than separately for public, private, and so on). Here, too, the focus will be on the case where the male structure is taken as the reference, though the sensitivity analysis will again consider alternative specifications as well.
This section reviews the main results. This is done in three main parts: (i) Mincer income regressions, (ii) overall income decompositions, and (iii) detailed income decompositions. It should be noted that since some of the tables are rather large, they have been placed in the appendices (but are referred to, and discussed, in the body text below).
Mincer income regressions
Starting with the results that are most consistent across all six countries, and in line with previous research, the Mincer regressions reveal substantial returns to education (Table 8 in Appendix 2). Frequently, the return to an additional year of schooling is larger for females than for males. For Serbia, for example, the return to an additional year of education is 8% for females but only 5.5% for males—which is consistent with previous evidence (Blunch and Sulla 2010; Staneva et al. 2010).Footnote 21 The evidence on returns to ownership is mixed across countries, though frequently there is not much of an association; for Kazakhstan and Serbia, for example, there is no statistical difference across ownership status. Having no written contract (reference: written contract) is associated with an income penalty, though not always statistically significantly so. The "Don't know"/"Refuse" category again experiences a negative return in several cases—both substantively and statistically significantly so for Serbian and Moldovan males. Not being covered by social security on the main job (reference category: covered) is associated with a frequently substantively large income penalty in several cases—for Serbia, statistically significant for both females and males. Workers from urban areas tend to receive a positive income premium, which again accords well with their higher living expenses.
Is there a glass ceiling related to gender in one or more of these former socialist economies? This is a testable hypothesis and I examine this using the approach laid out in Albrecht et al. (2003) for the case of Sweden, by estimating quantile regressions for the pooled (by gender) Mincer regressions—with clustered standard errors, following Parente and Santos Silva (2016). From these results, there is some evidence of a glass ceiling for Moldova and Ukraine, where the gender gaps are stronger at the higher end of the income distribution, whereas the evidence for the other countries is more mixed (Table 11 in Appendix 3).
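The glass-ceiling check can be illustrated with off-the-shelf quantile regression, as in the sketch below. This is a simplified stand-in: it uses statsmodels' QuantReg on simulated data with a hypothetical female income penalty that grows toward the top of the conditional distribution, and it does not implement the Parente and Santos Silva (2016) clustered standard errors used in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical pooled sample: log income, schooling, and a female dummy whose
# penalty is larger in the upper part of the conditional income distribution.
n = 2000
female = rng.integers(0, 2, n)
school = rng.normal(11, 2, n)
noise = rng.normal(0, 0.4, n)
y = 1.0 + 0.07 * school - 0.05 * female - 0.10 * female * (noise > 0.5) + noise

X = sm.add_constant(np.column_stack([school, female]))
for q in (0.10, 0.50, 0.90):
    res = sm.QuantReg(y, X).fit(q=q)
    print(f"q={q:.2f}  female coefficient: {res.params[2]: .4f}")
```

A female coefficient that becomes more negative at higher quantiles would be read as glass-ceiling evidence.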
There are potential issues with the previous analysis, and so it would be prudent to examine these issues in a set of sensitivity analyses. First, estimating the Mincer regressions (which, again, feed into the subsequent decomposition analysis) by OLS, using the interval midpoints, disregards the inherent interval nature of the underlying data. To explicitly incorporate the interval nature of the underlying data—and therefore also implicitly examine the consequences of the simplifying assumptions underlying the OLS estimations, using the interval midpoints—I instead estimate the Mincer equations by interval regression on the full interval data (this is basically a generalization of the tobit model, since it extends censoring beyond left-censored data or right-censored data—see Cameron and Trivedi (2010, 548–550) and Wooldridge (2016, sec. 17.4) for more details) (Table 5 and Table 9 in Appendix 2).
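Since interval regression is maximum likelihood on bracketed outcomes, a bare-bones version can be written directly from the normal-CDF likelihood, as sketched below. The paper presumably relies on standard statistical software for this; the simulated brackets, starting values, and optimizer choice here are purely illustrative.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)

# Hypothetical data: latent log income observed only as bracket bounds.
n = 1000
school = rng.normal(11, 2, n)
latent = 1.0 + 0.07 * school + rng.normal(0, 0.3, n)
cuts = np.quantile(latent, [0.2, 0.4, 0.6, 0.8])          # bracket boundaries
idx = np.digitize(latent, cuts)
lower = np.concatenate([[-np.inf], cuts])[idx]
upper = np.concatenate([cuts, [np.inf]])[idx]
X = np.column_stack([np.ones(n), school])

def neg_loglik(theta):
    """Interval-regression negative log-likelihood with normal errors."""
    beta, log_sigma = theta[:-1], theta[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    p = stats.norm.cdf((upper - mu) / sigma) - stats.norm.cdf((lower - mu) / sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = optimize.minimize(neg_loglik, np.zeros(3), method="BFGS")
print("beta_hat:", res.x[:-1], " sigma_hat:", np.exp(res.x[-1]))
```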
Table 5 Estimated gender coefficients from gender dummy, only. Estimations: OLS and interval regression
From Table 5, the results for the raw gender gaps are virtually identical to the OLS results, except for Tajikistan. Further estimating the full Mincer regressions using interval regression (Table 9 in Appendix 2), while there are some differences, the results are fairly robust, overall. With such relatively minor differences between the OLS and interval regression results, it seems prudent to continue with OLS for the remainder of the analysis.
The Mincer regressions estimated so far are purposely sparse in terms of the amount of explanatory variables. This is both to keep the analysis simple and because the inclusion of certain explanatory variables is debatable. In particular, some explanatory variables may themselves reflect the impact of discrimination, whereby their inclusion leads to understating the "unexplained gender wage/income gap" (the presence of which is taken by many researchers to measure the amount of discrimination, though it is really only consistent with the presence of discrimination) (Altonji and Blank 1999: 3191). Hence, one view here is that such variables may better be left out when estimating Mincer regressions, especially when the focus is on possible (gender, racial or other) discrimination. This is the case especially for industry and occupation, which is why (in addition to the inclusion of these variables leading to quite thin cells for many of these groups) I have left these variables out of the analysis, so far. On the other hand, it would still seem useful to at least explore the consequences of adding industry and occupation as a robustness check—as well as to potentially gain insights into possible gender-based sorting into occupations and/or industry, so that including these may provide additional information on segregation.
From the results of these augmented Mincer (OLS) regressions, it can be seen that the estimated coefficients are indeed frequently statistically insignificant due to the frequently quite small cell sizes (Table 10 in Appendix 2). Additionally, however, the results also reveal that men receive an income premium in traditional male-dominated industries such as mining, manufacturing, and construction—whereas there do not seem to be any such patterns for women. For occupation, the dummies are largely statistically insignificant. These results thus provide some, though arguably limited, evidence on selection and sorting into industry—if not occupation—in these countries. Given the small cell sizes and therefore frequently statistically insignificant results, it again seems prudent to continue with the more parsimonious specification estimated previously for the remainder of this analysis.
Overall income decompositions
A couple of results stand out particularly strongly from the results of the threefold decompositions (Table 6, top panel). First, the endowments decrease the female income gap overall in several cases (although not always statistically significantly so), indicating that women have relatively more favorable observable characteristics—that is, they are concentrated in better paying sectors, have more education, and so on (this will be examined more closely when considering the detailed decompositions in the next sub-section). Second, the returns to characteristics increase the gaps in both substantive and statistical terms, and for all countries, indicating that males have higher returns to characteristics overall. Notably, this result is much more consistent than the result regarding endowments—with the estimated effects ranging from a minimum of 14.1% (for Tajikistan) to as much as 31% (for Ukraine). It should be noted that the low coefficient effect in Tajikistan should probably not be attributed to improving conditions in that country, but rather to the fact that (as also discussed previously in Section 3) the bottom income bracket in that country was chosen to be extremely large by the questionnaire designers, which leads to less measured income inequality overall and therefore likely also to a smaller male-female income gap.
Table 6 Overall income decompositions: three- and twofold
Moving to the twofold decompositions, females on average have better employment-related characteristics (such as educational attainment and sector of employment) as indicated by the negative sign in the explained part—which in turn serves to narrow the overall income gap—whereas the unexplained part (capturing all the factors that cannot be attributed to differences in observed worker characteristics) accounts for an even larger share of the gender income differential (Table 6, bottom panel).
Notably—as can be seen from the results from the sensitivity analysis shown in Appendices 4 and 5 (Tables 12 and 13 respectively)—these results are quite robust to whether the decomposition is performed from females' viewpoint (i.e., using male endowments and returns) or whether the decomposition is performed from males' viewpoint (i.e., using female endowments and returns) for the threefold decompositions or from any of the many different possibilities of specifying the "absence of discrimination" group in the twofold decompositions.
Overall, these results are consistent with earlier findings for the region (Newell and Reilly 2001; Reva 2010; Babović 2008; Staneva et al., 2010) —and, thus, are indicative of substantial income discrimination against females in the labor markets of all six countries. But how are the overall gaps—both two- and threefold—explained by the endowment of and returns to the separate individual characteristics (or groups of characteristics), rather than by the endowment of and returns to individual characteristics overall? This is the object of the final empirical analysis, following next.
Detailed income decompositions
The detailed income decompositions allow further decomposing the overall gaps just established into the individual explanatory variables from the Mincer income regressions, discussed earlier. To help better facilitate interpretation, however, results are reported in groups of individual variables (e.g., aggregating up the contribution from all the ownership variables).
The results from the detailed threefold decompositions (Tables 14–19 in Appendix 6) reveal that in several cases, one of the most important contributors to the narrowing of the gender income gap—both substantively and statistically—in terms of individual characteristics, is education. For both Kazakhstan and Serbia, for example, education accounts for almost all of the explained gap—and at about 3%-points, education also accounts for a substantial part of the Ukrainian income gap. For Macedonia, although substantively large (at 2.2%-points), the effect is not statistically significant. For Moldova and Tajikistan, however, the effect is practically nil—both in substantive and statistical terms. In several cases, education also works to improve the gender gaps through the part attributable to characteristics, again consistent with earlier studies (Babović 2008; Blunch and Sulla 2010; Staneva et al. 2010).Footnote 22 Other observable characteristics and returns widen the gender gap, however. For Serbia and Moldova, for example, the returns to contract status widen the gap, as does social security coverage in Ukraine. With a few exceptions, most of the remaining estimated effects are not statistically significant.
The results from the detailed twofold decompositions are mostly consistent with the results for the detailed threefold decompositions (Tables 20–25 in Appendix 7), so that education again is the most consistently important contributor to narrowing the gender income gap across all six countries, except for Moldova and Tajikistan, where the effect again is practically nil—both in substantive and statistical terms.
This paper examines the gender income gap in terms of its prevalence, magnitude, and determinants using a recent data set collected using identical survey instruments for six countries from Eastern Europe and Central Asia and thereby adds to the emerging, somewhat fragmented (partly because of using many different, not always comparable data sources) literature on the gender income gap for the former socialist economies.
Using a range of estimators, including OLS, interval regression, quantile regression, and overall and detailed income decompositions, four main results are established: (1) the presence of a substantively large gender income gap (favoring males) in all six countries; (2) some evidence of a gender-related glass ceiling in some of these countries; (3) some evidence that endowments diminish the income gaps, while the returns to characteristics increase the gaps—indicating that in some countries, women are concentrated in better paying sectors, have more education, and so on, while males have higher returns to characteristics overall; and (4) while observed individual characteristics explain a part of the gaps, a substantial part of the income gap is left unexplained.
These results have strong policy implications, consistent as they are with the presence of income discrimination towards females in the labor market. In particular, the continued presence of a gender income gap is likely to keep out of the labor force females who would otherwise be active participants and add to the economy. While increased economic activity has been important during the transition from a planned to a market economy, with the recent financial crisis, such efforts are perhaps more important than ever—thus highlighting the importance not only of employment generation but also of improvements in the regulatory environment, since the former may be severely dampened with the continued presence of a substantively large gender income gap.
But what are some of the potential mechanisms driving the gender income gap observed here—and does economic policy have a possible role to play? It was noted in the review of these countries' historical and economic background how, after initially abandoning programs specifically supporting the role of women in the labor market, most countries have gone back to instituting such programs anew.
Among such programs is paid maternity leave, where many countries provide extensive benefits—frequently of a duration even longer than in many advanced Western economies. As has also been noted elsewhere (Kuddo 2009: 78-79), these extensive programs may adversely affect women's labor market participation, as well as lead to actual or perceived erosion of skills, and, perhaps even more importantly, act to create reluctance on the part of employers to hire women of childbearing age, to avoid the associated indirect costs such as replacement workers. And the longer the leave, the greater the perceived disincentive from the employers' point of view. As also noted earlier, the parental leaves prevalent in some transition countries—since they frequently can be expected to be taken by the mother—may effectively act as an additional maternity leave. To counter this cycle, therefore, one possibility is to bring the leave durations more in line with those in Western economies—though, as also suggested by Kuddo (2009: 78), so as to continually help support women's access to the labor market, this should be combined with better access to child care facilities. Alternatively, extension of paternal leaves may be an option. In many transition countries, these are either absent or of an extremely short duration, sometimes only 1 week (Kuddo 2009: Table A10). Introducing (or extending) paternal leaves of a much longer duration would help further level the playing field for men and women in the labor market, since employers now would have to expect a potential leave of any employee of childbearing age (or for the males, with a wife of childbearing age), regardless of gender. As a possible side effect, such institutionalized gender equality in terms of childbirth-related leaves may also help bring about more tolerance and openness to childbearing as a reason for detaching from the labor market for a shorter or longer period, regardless of the gender of the worker.
In terms of future research, even with the evidence emerging in recent years, we are only beginning to grasp the prevalence and the nature of the gender income gap in the former socialist economies in Eastern Europe and Central Asia. Even more research is needed, especially if we want to go into the "black box" of what determines the gender income gap in terms of causal pathways. Crucial for these efforts, however, is the availability—and therefore collection—of more and better data.
The data examined here is a case in point. While it is certainly commendable—and very useful—to collect data using identical questionnaires for several countries simultaneously, it is a shame that such an important variable as labor income (income) is reported (if not collected) in such a way that the variation and therefore the informational content of this key variable is heavily diminished. An additional limitation of this dataset was the somewhat small survey sample sizes (certainly if conditioning on currently working adults), among other things limiting the amount of explanatory variables to relatively few individual and job characteristics, so as to avoid too small cell sizes. In turn, these comments may well serve as a warning to national and international agencies in charge of future data collection.
Among the six countries studied here, only Macedonia has paternity leave—and with a duration of only up to 7 days (Kuddo 2009: Table A10).
In Tajikistan, there is the possibility of an additional unpaid leave until the child reaches 3 years of age.
Which was US$2078 in 2008 (World Bank 2010).
Which refers to choosing, among all the eligible respondents (here, individuals 15 years and above), in a given chosen household the person with the next birthday as the respondent of the household.
See Kish (1949) for details.
See TNS (2010) for more details.
Different countries may have different taxation methods, but there is unfortunately not much to do about this in practice, apart from providing a cautionary remark.
Some might prefer instead to examine wage rates, but unfortunately, hours worked are not available in the current dataset. It may be argued, however, that if one is interested in total worker welfare per se, one should indeed be examining total labor earnings (a proxy of which is available here) rather than the wage rate.
The questionnaire refers to them as "local currency (20 quintile) UNDP intervals," "local currency (40 quintile) UNDP intervals," etc., but they are not quintiles in the usual meaning of the word since they do not each contain 20 % of the sample (neither among the total sample nor among the subsample that was working within the past month).
Based on personal correspondence with Susanne Milcher, Social Inclusion and Poverty Reduction Specialist, UNDP
Specifically, if belonging in the four lowest income brackets (see Table 3 below), I assume that peoples' income is the midpoint of the respective income bracket, and if belonging in the top bracket, I assume that peoples' income is the sum of the upper class point of the bottom bracket and the lower class point of the top bracket. The latter seems to help provide a conservative estimate of the degree of overall earnings inequality and therefore, to the extent that females probably are underrepresented in the top earnings bracket, also a conservative estimate of the female-male earnings gap.
Additionally, however, I also conduct a sensitivity analysis where I instead estimate interval regressions to examine the robustness of the OLS results using the interval mid-points.
Divided by 100, for scale consistency with the other explanatory variables
"Not specified" was specified as a separate category in the questionnaire and is therefore also treated as a separate group here.
The dataset also includes information on occupation (14 categories) and industry (18 categories), but inclusion of these as explanatory variables is debatable since if they themselves reflect the impact of discrimination, they will understate the "unexplained gender wage/income gap" (the presence of which is taken by many researchers to measure the amount of discrimination, though it is really only consistent with the presence of discrimination) (Altonji and Blank 1999: 3191). Additionally, including these variables frequently leads to some very small cell sizes and therefore also very imprecisely measured results for these variables; these variables are therefore not included in the main analysis (I do, however, include them in a sensitivity analysis).
With within-community correlation/clustering adjusted standard errors incorporated (Wooldridge 2010) (and therefore also (implicitly) robust (Huber 1967; White 1980)).
I also allow for clustered standard errors in the quantile regressions, following Parente and Santos Silva (2016).
In the following, bars on top of variables denote mean values, while \( \widehat{\beta} \) denotes estimated coefficient values from Eqs. (1) and (2) above.
Alternatively, however, this equation could also be represented based on the prevailing earnings structure of female workers; this will be explored further in the sensitivity analysis.
See Oaxaca (1973), Blinder (1973), Cotton (1988), Reimers (1983), Neumark (1988), and Jann (2008) for different approaches—basically, these differ in the relative weights they attribute to the two groups in the decomposition.
In contrast, somewhat surprisingly, Reva 2010 finds that male returns are higher than female returns for all levels of education in Serbia.
Again, detailed gender earnings decompositions are only available for very few countries in the region.
Abdurazakova D. National mechanisms for gender equality in South-East and Eastern Europe, Caucasus and Central Asia. New York: United Nations Economic Commission for Europe; 2010.
Albrecht J, Björklund A, Vroman S. Is there a glass ceiling in Sweden? J Labor Econ. 2003;21(1):145–77.
Altonji JG, Blank RM. Race and gender in the labor market. In: Ashenfelter O, Card D, editors. Handbook of labor economics, vol. 3. Amsterdam: Elsevier; 1999.
Babović M. The position of women on the labour market in Serbia. Belgrade: Gender Equality Council, Government of the Republic of Serbia and United Nations Development Programme, Serbia; 2008.
Becker GS. Human Capital. Chicago: University of Chicago Press; 1964.
Blau FD. The gender pay gap. In: Persson I, Jonung C, editors. Women's work and wages. London: Routledge; 1998.
Blau FD, Kahn LM. The gender earnings gap: learning from international comparison. Am Econ Rev Pap Proc. 1992;82(2):533–8.
Blau FD, Kahn LM. Wage structure and gender earnings differential: an international comparison. Economica Suppl. 1996;63(250):S29–62.
Blau FD, Kahn LM. Swimming upstream: trends in the gender wage differential in the 1980's. J Labor Econ. 1997;15(1):1–42.
Blau FD, Kahn LM. Gender differences in pay. J Econ Perspect. 2000;14(4):75–99.
Blau FD, Kahn LM. Understanding international differences in the gender pay gap. J Labor Econ. 2003;21(1):106–44.
Blinder AS. Wage discrimination: reduced form and structural estimates. J Hum Resour. 1973;8:436–55.
Blunch N-H, Sulla V. The financial crisis, labor market transitions and earnings growth: a gendered panel data analysis for Serbia. Background paper, Poverty Reduction and Economic Management Unit, Europe and Central Asia Region Department. Washington, DC: World Bank; 2010.
Brainerd E. Women in transition: changes in gender wage differentials in Eastern Europe and the former soviet union. Ind Labor Relat Rev. 2000;54(1):138–62.
Cameron AC, Trivedi PK. Microeconometrics using Stata. Revised ed. College Station: Stata Press; 2010.
Cho D, Cho J. How do labor unions influence the gender earnings gap? A comparative study of the US and Korea. Fem Econ. 2011;17(3):133–57.
Cotton J. On the decomposition of wage differentials. Rev Econ Stat. 1988;70:236–43.
Grajek M. Gender pay gap in Poland. Econ Plan. 2003;36:23–44.
Heckman JJ, Lochner LJ, Todd PE. Earnings functions and rates of return. J Hum Cap. 2008;2(1):1–31.
Huber PJ. The behavior of maximum likelihood estimates under nonstandard conditions. In: Le Cam LM, Neyman J, editors. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1. Berkeley: University of California Press; 1967.
Hunt J. The transition in East Germany: when is a ten point-fall in the gender gap bad news? J Labor Econ. 2002;20(1):148–69.
Jann B. The Blinder-Oaxaca decomposition for linear regression models. Stata J. 2008;8(4):453–79.
Kish L. A procedure for objective respondent selection within the household. J Am Stat Assoc. 1949;44(247):380–7.
Kuddo A. Employment services and active labor market programs in Eastern European and Central Asian countries, social protection discussion paper no. 0918. Washington, DC: World Bank; 2009.
Mincer J. Schooling, experience and earnings. New York: National Bureau of Economic Research; 1974.
Neumark D. Employers' discriminatory behavior and the estimation of wage discrimination. J Hum Resour. 1988;23:279–95.
Newell A, Reilly B. The gender pay gap in the transition from communism: some empirical evidence. Econ Syst. 2001;25(4):287–304.
Oaxaca R. Male-female wage differentials in urban labor markets. Int Econ Rev. 1973;14:693–709.
Oaxaca RL, Ransom MR. Calculation of approximate variances for wage decomposition differentials. J Econ Soc Meas. 1998;24:55–61.
Orazem PF, Vodopivec M. Male–female differences in labor market outcomes during the early transition to market: the cases of Estonia and Slovenia. J Popul Econ. 2000;13(2):283–303.
Paci P. Gender in transition, human development unit, Eastern Europe and Central Asia region. Washington, DC: World Bank; 2002.
Parente PMDC, Santos Silva JMC. Quantile regression with clustered data. J Econometric Methods. 2016;5:1–15.
Reimers CW. Labor market discrimination against Hispanic and black men. Rev Econ Stat. 1983;65:570–9.
Reva A. Gender inequality in the labor market in Serbia. Draft report, Poverty Reduction and Economic Management Unit, Europe and Central Asia Region Department. Washington, DC: World Bank; 2010.
Staneva A, Arabsheibani GR, Murphy P. Returns to Education in Four Transition Countries: Quantile Regression Approach. IZA Discussion Paper No. 5210; 2010.
TNS. Technical report: sampling description & quality control description. Technical report accompanying the UNDP/UNICEF Social Exclusion Dataset 2010. Developed by TNS\Gallup Media Asia; 2010.
White H. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica. 1980;48(4):817–30.
Winsborough HH, Dickinson P. Components of Negro-White Income Differences. Proceedings of the Social Statistics Section. 1971;6–8.
Wooldridge JM. Econometric analysis of cross-section and panel data. Second ed. Cambridge: The MIT Press; 2010.
Wooldridge JM. Introductory econometrics: a modern approach. 6th ed. Boston: Cengage; 2016.
World Bank. Country data profile—at a glance. www.worldbank.org; 2010. Accessed 18 Nov 2010.
This manuscript is based on a background paper commissioned by the World Bank's Poverty Reduction and Economic Management Unit, Europe and Central Asia Region Department. I thank Mihail Arandarenko, David Ribar, Victor Sulla, and the participants at the first GRAPE Gender Gaps Conference for their helpful comments and suggestions and Victor Sulla and Caterina Ruggeri Laderchi for their managerial support. Remaining errors and omissions are my own. The data were kindly provided by the United Nations Development Program (UNDP). Assistance from Susanne Milcher, UNDP, helped me understand the data better and is greatly appreciated. I would like to thank the anonymous referee and the editor for the useful remarks. The findings and interpretations are those of the author and should not be attributed to the World Bank, the United Nations Development Program, or any affiliated institutions.
This manuscript is based on a background paper commissioned and funded by the World Bank's Poverty Reduction and Economic Management Unit, Europe and Central Asia Region Department. The World Bank ECA Department, however, was not involved in the design of the study, nor in the collection, analysis, and interpretation of data, or in writing the manuscript.
The data examined here is proprietary to the UNDP but was obtained and should, according to a UNDP Report, still be obtainable upon request via the following Website: http://europeandcis.undp.org/poverty/socialinclusion (when I last checked, this page seemed to have been removed, however). For more details, the UNDP Report can be downloaded from (as of January 14, 2018): http://www.eurasia.undp.org/content/rbec/en/home/library/poverty/Regional_Human_Development_Report_on_social_inclusion.html?download.
Department of Economics, Washington and Lee University, Lexington, VA, 24450, USA
Niels-Hugo Blunch
IZA, Bonn, Germany
Search for Niels-Hugo Blunch in:
Correspondence to Niels-Hugo Blunch.
Authors' information
The author is an Associate Professor at the Economics Department at Washington and Lee University in Lexington, Virginia, USA.
Consent for publication
The IZA Journal of Development and Migration is committed to the IZA Guiding Principles of Research Integrity. The author declares that he has observed these principles.
Descriptive statistics for estimation samples
Table 7 Means and standard deviations of monthly income and explanatory variables by gender
Mincer income regressions across gender
Table 8 Mincer income regressions across gender (OLS)
Table 9 Mincer income regressions across gender (interval regression)
Table 10 Mincer income regressions across gender, with industry and occupation added (OLS)
Testing for a glass ceiling using quantile regressions
Table 11 Female quantile regression coefficients from pooled Mincer income regressions
Total threefold income decompositions: sensitivity analysis
Table 12 Overall income decompositions, sensitivity analysis: threefold
Total twofold income decompositions: sensitivity analysis
Table 13 Overall income decompositions, sensitivity analysis: twofold
Detailed threefold income decompositions
Table 14 Detailed threefold income decompositions: Kazakhstan
Table 15 Detailed threefold income decompositions: Macedonia
Table 16 Detailed threefold income decompositions: Moldova
Table 17 Detailed threefold income decompositions: Serbia
Table 18 Detailed threefold income decompositions: Tajikistan
Table 19 Detailed threefold income decompositions: Ukraine
Detailed twofold income decompositions
Table 20 Detailed twofold income decompositions: Kazakhstan
Table 21 Detailed twofold income decompositions: Macedonia
Table 22 Detailed twofold income decompositions: Moldova
Table 23 Detailed twofold income decompositions: Serbia
Table 24 Detailed twofold income decompositions: Tajikistan
Table 25 Detailed twofold income decompositions: Ukraine
Blunch, N. Just like a woman? New comparative evidence on the gender income gap across Eastern Europe and Central Asia. IZA J Develop Migration 8, 11 (2018) doi:10.1186/s40176-017-0119-x
DOI: https://doi.org/10.1186/s40176-017-0119-x
Income gap
Oaxaca-Blinder decomposition
Detailed decomposition
Maternity/paternity leave policies
The Lelong number, the Monge–Ampère mass, and the Schwarz symmetrization of plurisubharmonic functions
Long Li1
1Science Institute, University of Iceland, Reykjavik, Iceland
Ark. Mat. 58(2): 369-392 (October 2020). DOI: 10.4310/ARKIV.2020.v58.n2.a8
The aim of this paper is to study the Lelong number, the integrability index and the Monge–Ampère mass at the origin of an $S^1$-invariant plurisubharmonic function on a balanced domain in $\mathbb{C}^n$ under the Schwarz symmetrization. We prove that $n$ times the integrability index is exactly the Lelong number of the symmetrization, and if the function is further toric with a single pole at the origin, then the Monge–Ampère mass is always decreasing under the symmetrization.
Long Li. "The Lelong number, the Monge–Ampère mass, and the Schwarz symmetrization of plurisubharmonic functions." Ark. Mat. 58 (2) 369 - 392, October 2020. https://doi.org/10.4310/ARKIV.2020.v58.n2.a8
Received: 25 May 2019; Revised: 2 September 2019; Published: October 2020
First available in Project Euclid: 16 January 2021
Digital Object Identifier: 10.4310/ARKIV.2020.v58.n2.a8
Rights: Copyright © 2020 Institut Mittag-Leffler
Long Li "The Lelong number, the Monge–Ampère mass, and the Schwarz symmetrization of plurisubharmonic functions," Arkiv för Matematik, Ark. Mat. 58(2), 369-392, (October 2020) | CommonCrawl |
The human disease network in terms of dysfunctional regulatory mechanisms
Jing Yang1, 2, 4,
Su-Juan Wu1, 2,
Wen-Tao Dai2, 3, 5,
Yi-Xue Li1, 2, 3, 4, 5Email author and
Yuan-Yuan Li2, 3, 5Email author
Biology Direct 2015, 10:60
© Yang et al. 2015
Elucidation of human disease similarities has emerged as an active research area, which is highly relevant to etiology, disease classification, and drug repositioning. In pioneer studies, disease similarity was commonly estimated according to clinical manifestation. Subsequently, scientists started to investigate disease similarity based on gene-phenotype knowledge, which were inevitably biased to well-studied diseases. In recent years, estimating disease similarity according to transcriptomic behavior significantly enhances the probability of finding novel disease relationships, while the currently available studies usually mine expression data through differential expression analysis that has been considered to have little chance of unraveling dysfunctional regulatory relationships, the causal pathogenesis of diseases.
We developed a computational approach to measure human disease similarity based on expression data. Differential coexpression analysis, instead of differential expression analysis, was employed to calculate differential coexpression level of every gene for each disease, which was then summarized to the pathway level. Disease similarity was eventually calculated as the partial correlation coefficients of pathways' differential coexpression values between any two diseases. The significance of disease relationships were evaluated by permutation test.
Based on mRNA expression data and a differential coexpression analysis based method, we built a human disease network involving 1326 significant Disease-Disease links among 108 diseases. Compared with disease relationships captured by differential expression analysis based method, our disease links shared known disease genes and drugs more significantly. Some novel disease relationships were discovered, for example, Obesity and cancer, Obesity and Psoriasis, lung adenocarcinoma and S. pneumonia, which had been commonly regarded as unrelated to each other, but recently found to share similar molecular mechanisms. Additionally, it was found that both the type of disease and the type of affected tissue influenced the degree of disease similarity. A sub-network including Allergic asthma, Type 2 diabetes and Chronic kidney disease was extracted to demonstrate the exploration of their common pathogenesis.
The present study produces a global view of human diseasome for the first time from the viewpoint of regulation mechanisms, which therefore could provide insightful clues to etiology and pathogenesis, and help to perform drug repositioning and design novel therapeutic interventions.
This article was reviewed by Limsoon Wong, Rui Wang-Sattler, and Andrey Rzhetsky.
Human disease network
Disease similarity
Dysfunctional regulation mechanism
Differential coexpression analysis
Differential regulation analysis
It is increasingly evident that human diseases are not isolated from each other although their clinical and pathological features are diversiform. Understanding how diseases are related to each other can provide novel insights into etiology and pathogenesis [1–4], and furthermore help to prioritize disease-related genes [5–8], perform drug repositioning and drug target identification [9–11].
Early works in this field were limited to examining the overlap in clinical presentation between diseases [4, 11]. For example, Payne et al. used logistic regression to estimate the relationships between Alzheimer's disease, Vascular dementia and other types of dementia based on the standardized measures of their clinical impairments, and revealed their notable differences [4]. Four years later, Kalaria et al. particularly explored the two extremes of dementia, Alzheimer's disease and Vascular dementia [11]. These studies improved accurate diagnosis of dementias and aided in clinical decisions on the applicability of different treatments [4, 11]. In 2008, Human Phenotype Ontology (HPO) systematically shows phenotypic similarities of diseases based on clinical synopsis features extracted from OMIM [12].
In recent years, scientists have been able to investigate genetic similarity between diseases based on gene-phenotype relationships [1–3, 8]. Disease relationships were revealed by measuring common disease-related genes or pathways [1, 2], or by clustering both genetic and environmental factors [3]. Moreover, Van Driel et al. integrated anatomy information, clinical synopsis, genetic mutation information and medical information context into a feature vector to calculate disease similarities [8]. In some other reports, disease relationships were evaluated by exploring the semantic similarity of disease names or related medical vocabulary concepts according to Disease Ontology (DO) [13, 14] or by checking whether the disease-associated enzymes catalyze the same or adjacent metabolic reactions [15]. In addition, some works combined multiple types of data to identify significant disease relationships [16–18]. It should be noted that all these works rely heavily on prior knowledge, and therefore they are not applicable when little disease knowledge is available. Also due to the limitation of prior knowledge, it is hard to discover novel disease relationships, or to correct ambiguities and errors in the current knowledge repertoires.
Fortunately, the rapidly accumulated biomedical data including registry data [15, 18–21] and high-throughput data such as gene expression profiles [9, 10, 16], and the greatly improved data mining strategies offer new chances to discover disease relationships. Based on registry data, scientists can examine the process of disease development by tracing the order of disease occurrence in a large number of patients for a fairly long period of time. In this way, Jensen et al. extracted temporal disease trajectories from registry data of 6.2 million Danish patients [19]. Similarly, Blair et al. studied the relationships between Mendelian diseases and complex diseases by examining how Mendelian variations enhance the risk of complex diseases according to electronic medical records [20]. Furthermore, Davis et al. exploited disease relationships via combining co-morbid diseases in electronic medical records and co-genes diseases in genetic data [18]. These works help to elucidate the process of disease development from a novel viewpoint. However, like the other common big data analysis strategies, these studies can only discover associations, but not causal connections or mechanisms.
In contrast, the genome-scale expression data give us another angle to address this problem since simultaneous measurement of the expression of thousands of genes allows for the exploration of gene transcriptional regulation, which is believed to be crucial to biological functions. In 2009, Hu and Agarwal presented an approach which replaces the pre-existing disease-related genes with differentially expressed genes correlated to diseases, and created a disease-drug network [9]. Similarly, Suthram et al. defined the correlation of differential expression values of protein interaction modules between different diseases as the disease similarity measure, and found out 138 significant similarities between diseases [10]. DiseaseConnect, a web server, also utilized differentially expressed genes to explore disease relationships [16]. These studies adopted a common understanding that diseases are highly correlated to the rewiring of gene regulation, which would be manifested at the transcriptional level. However, these dysregulation events are actually difficult to be discovered by traditional differential expression analysis (DEA), while could be captured by differential coexpression analysis (DCEA) [22] since they tend to display as the decoupling of expression correlation. In fact, the DCEA strategy has emerged as a promising method to unveil dysfunctional regulatory mechanisms underlying diseases [22–25]. Following this sense, we propose that a disease similarity measurement based on differential coexpression (DCE), instead of differential expression (DE), may lead to a disease network more relevant to pathogenesis.
In the present work, we developed a DCE-based computational approach to estimate human disease similarity, and identified 1326 significant Disease-Disease links (DDLs for short) among 108 diseases. Benefiting from the use of DCEA, the human disease network is constructed for the first time from the viewpoint of regulation mechanisms.
Gene expression dataset
As of April 19, 2013, we selected 954 GSE datasets (GSE short for GEO series) designed for human studies using the Affymetrix U133A chip (i.e., GPL96), the most commonly used platform, from GEO (http://www.ncbi.nlm.nih.gov/geo/). We then picked out 106 GSEs which 1) included both a human disease condition and a corresponding normal condition, 2) had more than five samples in each condition, and 3) came from fresh organs (excluding cell lines). We downloaded raw data (CEL files) of each sample, performed quality control and removed low-quality samples using the affy [26] and affyQCReport [27] packages, and finally retained 86 GSE datasets involving 4403 samples for 89 diseases (Additional file 1). In order to carry out a disease-centered analysis, the 86 GSE datasets were re-organized as follows: the datasets which studied the same disease with the same tissue were merged; the datasets which involved multiple diseases or multiple tissues were split. This procedure resulted in 108 datasets, corresponding to 108 diseases (Additional file 1). The disease number was expanded from 89 to 108 because 11 out of the original 89 diseases (12 %, as shown in Fig. 1) involved two or more tissues, which were termed multi-tissue diseases. A multi-tissue disease was defined by combining its disease name and the tissue of origin, for example, Type 2 diabetes - liver and Type 2 diabetes - PBMC (short for peripheral blood mononuclear cell).
Seven characteristics of disease network. Seven characteristics of our disease network, including its pathogenic relevance (i.e., the percentage of disease pairs which significantly share disease genes and drugs), degree distribution (i.e., the distribution of the number of disease neighbours), the correlation sign, its comparison with DE-based network, its comparison with tradition disease classification, as well as the percentage of multi-tissue diseases
Pathway data
The Molecular Signatures Database (MSigDB), a collection of annotated gene sets, includes 7 major collections [28]. A total of 6176 pathways from the following two collections of MSigDB v4.0 were extracted: 1) curated gene sets collected from public pathway databases (such as BIOCARTA, REACTOME, KEGG, etc.), publications in PubMed and knowledge of domain experts, 2) GO gene sets including biological processes, cellular components and molecular functions. In order to reduce the influence of missing data, we excluded pathways whose members were not significantly detected by the GPL96 platform by using the binomial probability model. Consequently, we ended up with 5598 pathways covering a total of 21,003 unique genes.
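The exact form of the binomial filter is not spelled out above, but one plausible reading is sketched below: given the fraction of all annotated genes covered by the platform, a pathway is dropped when significantly fewer of its members than expected are detected. The coverage fraction and the counts in the example are assumptions, not values from the paper.

```python
from scipy.stats import binomtest

# Assumed fraction of annotated genes that are present on GPL96 (hypothetical).
coverage = 13000 / 21003

def keep_pathway(k_on_platform, n_members, alpha=0.05):
    """Keep a pathway unless its members are significantly under-detected
    relative to a Binomial(n_members, coverage) expectation."""
    p = binomtest(k_on_platform, n_members, coverage, alternative="less").pvalue
    return p >= alpha

print(keep_pathway(8, 30))   # poorly covered pathway -> likely dropped
print(keep_pathway(25, 30))  # well covered pathway -> kept
```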
Disease similarity algorithm
First, we normalized the gene expression data in each microarray sample using MAS5.0 (as shown in Additional file 2, step 1: normalization). Secondly, we calculated the differential coexpression value (dC) of each gene between disease and control samples for all diseases via the DCp method, which was developed in our previous work [23–25] (as shown in Additional file 2, step 2: calculating genes' dC). As described in the literature, DCp was designed for identifying differentially coexpressed genes (DCGs), and proved superior to currently popular methods in simulation studies owing to its unique exploitation of the quantitative coexpression change of each gene pair in the coexpression networks [23–25]. For a certain disease, the Pearson correlation coefficients between gene i and its n neighbors form two vectors, X = (xi1, xi2, …, xin) and Y = (yi1, yi2, …, yin) corresponding to two comparative conditions (say, disease and normal). Finally, dC of each gene for the disease can be calculated with Eq. 1 [23–25].
$$ d{C}_i=\sqrt{\frac{{\left({x}_{i1}-{y}_{i1}\right)}^2+{\left({x}_{i2}-{y}_{i2}\right)}^2+\dots +{\left({x}_{in}-{y}_{in}\right)}^2}{n}} $$
Next, similar to what Suthram et al. did [10], we assigned the dC of a pathway to be the average dC of its component genes, and thus obtained a vector of pathways' dCs for each disease (as shown in Additional file 2, step 3: calculating pathways' dC). We eventually calculated the partial Spearman correlation coefficient between two diseases as their similarity value (as shown in Additional file 2, step 4: calculating partial correlations). The reason we adopted partial Spearman correlation, instead of generic Spearman correlation, was that partial Spearman correlation has been shown to factor out the possible dependencies between different gene-expression experiments due to their underlying tissues [10]. The last step of Additional file 2 for obtaining significant partial correlations will be illustrated in the following section.
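A toy version of this pipeline (gene-level dC as in Eq. 1, pathway-level averaging, and a correlation between two diseases' pathway dC vectors) is sketched below. It simplifies the published DCp procedure: the neighbor sets are fixed arbitrarily rather than derived from a coexpression network, a plain Spearman correlation stands in for the partial correlation, and all data are simulated.

```python
import numpy as np
from scipy import stats

def gene_dC(expr_disease, expr_normal, neighbors):
    """DCp-style dC: for each gene, the root-mean-square change in its Pearson
    correlations with a fixed neighbor set between two conditions (Eq. 1)."""
    corr_d = np.corrcoef(expr_disease)        # genes x genes correlation
    corr_n = np.corrcoef(expr_normal)
    dC = np.zeros(expr_disease.shape[0])
    for i, nbrs in enumerate(neighbors):
        diff = corr_d[i, nbrs] - corr_n[i, nbrs]
        dC[i] = np.sqrt(np.mean(diff ** 2))
    return dC

def pathway_dC(dC, pathways):
    """Average the gene-level dC values over each pathway's member genes."""
    return np.array([dC[list(members)].mean() for members in pathways])

# Toy example: 50 genes, 20 samples per condition, 10 pathways of 5 genes each.
rng = np.random.default_rng(3)
expr_d = rng.normal(size=(50, 20))
expr_n = rng.normal(size=(50, 20))
neighbors = [[j for j in range(50) if j != i][:10] for i in range(50)]
pathways = [range(i, i + 5) for i in range(0, 50, 5)]

dC_A = pathway_dC(gene_dC(expr_d, expr_n, neighbors), pathways)
dC_B = pathway_dC(gene_dC(rng.normal(size=(50, 20)), expr_n, neighbors), pathways)

# Disease similarity would then be the (partial) Spearman correlation of the
# two pathway-level dC vectors; plain Spearman is shown here for brevity.
rho, p = stats.spearmanr(dC_A, dC_B)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```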
Permutation test of disease pairs
In order to evaluate the statistical significance of observed disease partial correlation coefficients, we randomly re-assigned the affiliation of genes to pathways as Suthram et al. did [10]. Pseudo pathways were obtained with the following three values unchanged: 1) the number of pathways a given gene belongs to, 2) the number of pathways' component genes and 3) the number of all pathways (as shown in Additional file 2, step 5: permutation test), and we then calculated the pathways' dCs and the partial correlation coefficients between all possible disease pairs using the permuted data. This permutation procedure was repeated 500 times, and the resulting partial correlation coefficient statistics formed an empirical null distribution. In this way, the p-value for each disease pair was estimated, and the FDR value was obtained accordingly.
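The permutation scheme can be sketched as follows: shuffle the flattened gene-pathway membership tokens and refill the pathways at their original sizes, which keeps the three quantities listed above unchanged, then recompute the pathway dCs and the disease-disease correlation on each permuted assignment. The toy gene-level dC vectors and the use of a plain (rather than partial) Spearman correlation are simplifications made for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def pathway_mean(dC, pathways):
    """Mean gene-level dC per pathway."""
    return np.array([np.mean(dC[list(m)]) for m in pathways])

def permute_pathways(pathways):
    """Pseudo pathways: shuffle the flattened gene-membership tokens and refill
    pathways with their original sizes, so pathway sizes, the number of
    pathways, and each gene's total number of memberships stay unchanged."""
    tokens = [g for members in pathways for g in members]
    rng.shuffle(tokens)
    pseudo, start = [], 0
    for members in pathways:
        pseudo.append(tokens[start:start + len(members)])
        start += len(members)
    return pseudo

# Toy data: gene-level dC vectors for two diseases, 10 pathways of 5 genes.
n_genes = 50
dC_A, dC_B = rng.random(n_genes), rng.random(n_genes)
pathways = [list(range(i, i + 5)) for i in range(0, n_genes, 5)]

observed, _ = stats.spearmanr(pathway_mean(dC_A, pathways), pathway_mean(dC_B, pathways))
null = []
for _ in range(500):
    pseudo = permute_pathways(pathways)
    rho, _ = stats.spearmanr(pathway_mean(dC_A, pseudo), pathway_mean(dC_B, pseudo))
    null.append(rho)
p_value = np.mean(np.abs(np.array(null)) >= abs(observed))
print(f"observed rho = {observed:.3f}, permutation p = {p_value:.3f}")
```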
Disease-related genes and drugs
A total of 7357 genes known to be associated with 101 diseases were collected from Genetic Association Database (GAD) [29], Online Mendelian Inheritance in Man (OMIM) [30], Human Gene Mutation Database (HGMD) [31] and human single amino acid variants (SAV) of UniProt (http://www.uniprot.org/docs/humsavar). We also obtained 342 drugs for 83 diseases from DrugBank [32].
Within-network distance (WD)
According to Li et al.'s work, the mean shortest path length among all links in a network was defined as within-network distance (WD) in order to describe the relational closeness of a network (Eq. 2) [2].
$$ WD_c=\frac{\sum d\left(i,j\right)}{k},\qquad i,j\in c $$
Where k denotes the total number of links in the network, and d (i, j) denotes the shortest path between vertex i and j.
The smaller the WD value, the greater the network compactness. Theoretically when WD = 1, the network is fully connected, displaying as a complete graph.
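A small sketch of this metric is given below. Because Eq. (2) is stated tersely, the sketch follows the remark that WD = 1 corresponds to a complete graph: it averages, over all pairs of diseases within a category, the shortest-path length measured in the full network (using networkx). The toy graph, the category memberships, and this reading of the denominator k are assumptions made for illustration.

```python
import itertools
import networkx as nx

def within_network_distance(graph, category_nodes):
    """Mean shortest-path length, computed in the full disease network, over all
    pairs of diseases belonging to one category (WD = 1 for a complete graph)."""
    pairs = itertools.combinations(category_nodes, 2)
    dists = [nx.shortest_path_length(graph, i, j) for i, j in pairs
             if nx.has_path(graph, i, j)]
    return sum(dists) / len(dists) if dists else float("nan")

# Toy disease network with two hypothetical categories.
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("D", "E")])
print(within_network_distance(G, ["A", "B", "C"]))   # compact category -> 1.0
print(within_network_distance(G, ["A", "D", "E"]))   # spread-out category -> 2.0
```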
A human disease network was built with a differential coexpression (DCE-) based computational approach
First of all, for each disease, the differential coexpression values (dCs) of all genes were calculated by using differential coexpression algorithm, DCp [23, 24], which was developed in our previous work (see Methods for details). The mean value of the differential coexpression levels of all genes in a certain pathway was calculated as the differential coexpression value of the pathway. In this way, the differential coexpression value (dC) was summarized at the level of biological pathway, which characterized transcriptomic behaviors from a more systematic viewpoint than at the gene level. The disease similarity was then estimated as the partial Spearmen correlation coefficients of pathways' differential coexpression values (dCs) between any two diseases. Finally, by applying a permutation test, a total of 1326 significant disease relationships at a p-value threshold of 0.05 (FDR = 20.91 %), termed as Disease-Disease links (DDLs for short), were identified from all possible links among 108 diseases, leading to a human disease network (see Additional file 3 for details).
According to the basic understanding that similar diseases tend to share similar pathogenesis, and thus have the potential to be treated by common drugs, we assume that the more similarity the diseases display, the more disease-related genes and drugs they share. We therefore checked if the DDLs in our disease network showed this tendency. First, we compiled a list of known disease genes and a list of drugs (see Methods). The disease genes were found to be associated with 101 out of 108 diseases and 1119 out of 1326 DDLs in our disease network; similarly, the known drugs were correlated to 83 diseases and 745 DDLs. As shown in Table 1, the hypergeometric tests for the 1119-DDL set and 745-DDL set indicated that 910 of 1119 DDLs (81 %, also shown in Fig. 1) significantly shared known disease genes, and 348 of 745 DDLs (47 %, also shown in Fig. 1) significantly shared drugs, both at a p-value threshold of 0.05. Among the non-DDL disease pairs, 3732 and 2576 pairs were correlated to known disease genes and drugs, respectively. The hypergeometric tests indicated that 2911 out of 3732 non-DDL pairs (78 %) shared known disease genes significantly, and 1095 out of 2576 (42 %) shared drugs significantly. Finally, one-sided Fisher's exact tests showed that DDLs in our disease network significantly shared both disease-related genes and drugs at a p-value threshold of 0.05 (Table 1, 0.009 for disease genes and 0.023 for disease drugs). This verified the reliability of the disease relationships in our disease network at the molecular pathological level.
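The two statistical tests used in this validation can be sketched with scipy, as below. The hypergeometric counts for the single disease pair are hypothetical; the 2 x 2 Fisher table is assembled from the counts quoted in the paragraph above, although whether the authors laid out their contingency table in exactly this way is an assumption.

```python
from scipy import stats

# Hypergeometric test for one disease pair: do the two diseases share more
# known disease genes than expected by chance? (hypothetical counts)
N = 7357    # total known disease genes in the background
K = 120     # genes linked to disease A
n = 90      # genes linked to disease B
k = 15      # genes shared by A and B
p_shared = stats.hypergeom.sf(k - 1, N, K, n)
print(f"hypergeometric p = {p_shared:.3g}")

# One-sided Fisher's exact test: are DDLs enriched for gene-sharing disease
# pairs relative to non-DDL pairs? (counts taken from the text above)
table = [[910, 1119 - 910],      # DDLs: sharing significantly vs. not
         [2911, 3732 - 2911]]    # non-DDLs: sharing significantly vs. not
odds, p = stats.fisher_exact(table, alternative="greater")
print(f"Fisher's exact p = {p:.4f}")
```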
Contingency table to validate the assumption that DDLs significantly share disease-related genes or drugs

Disease pairs correlated to known disease genes or drugs | DDLs | Non-DDLs | P-value of Fisher's Exact test
Disease pairs sharing disease genes | 910 (81 %) | 2911 (78 %) | 0.009085a
Disease pairs sharing disease drugs | 348 (47 %) | 1095 (42 %) | 0.02307a

a denotes statistical significance by one-sided Fisher's Exact test (p < 0.05)
Figure 1 summarizes the characteristics of our disease network in terms of its pathogenic relevance (i.e., the percentage of disease pairs which significantly share disease genes and drugs, as listed in Table 1), degree distribution (i.e., distribution of the number of disease neighbours), the correlation sign, its comparison with DE- based network, its comparison with traditional disease classification, as well as the percentage of multi-tissue diseases, which will be explained in details in the following subsections.
With regard to the degree distribution, 60 % diseases in our disease network have 21 ~ 30 neighbour diseases, 23 % have 10 ~ 20 neighbours and 17 % have 31 ~ 40 neighbours, basically following Poisson's distribution. It is noted that our disease network is a random graph without any hub nodes, contrary to the previous observation by Hu et al. [9]. In Hu et al.'s work [9], the disease network proved to be a scale-free graph with a few diseases acting as hubs, such as some cancers. We noticed that cancers account for almost half diseases in Hu et al.'s network, which is far more than those in our network. Since cancers involve common tumor activators (such as Ras and Myc) and tumor suppressors (such as p53 and PTEN) [33], the rich connections among cancers would make them form hubs. We therefore propose that a disease network which well resembles the heterogeneity of diseasome probably has no hub diseases.
It is interesting that among the 1326 DDLs, 529 (~40 %) links are negative (Fig. 1). When disease A and A' form a negative link, the patient with disease A tends to be protected from having disease A' and vice versa, which is probably due to the inversely regulated biological processes involved in the negatively correlated diseases [9]. In agreement with Liu et al.'s opinion, disease similarity studies based on omics data have a better chance of finding negatively correlated diseases than those based on clinical symptom information or gene-phenotype data, because text-mining techniques for clinical symptom information cannot process negative language and gene-phenotype data include disease causal information rather than preventive information [3]. The proportion of negative links in our data (~40 %) is even higher than in Hu et al.'s report (~30 %), which adopted a differential expression based method to calculate disease similarity [9]. We found that 25 % of Hu et al.'s negative links were also identified in our data. Since differential coexpression analysis (DCEA) has more potential to discover regulation mechanisms than differential expression analysis (DEA) does, we propose that the negative links which are not included in Hu et al.'s work also deserve further investigation. By tracing the differential coexpression properties of a negatively correlated disease pair, one may obtain useful hints for explaining the underlying mechanisms of the mutual exclusion of the two diseases.
The DCE-based disease network is more relevant to pathogenic mechanisms than the DE-based one
As is mentioned above, differential coexpression analysis (DCEA) is more powerful in unveiling differential regulation mechanisms of diseases than differential expression analysis (DEA) since differential regulations would display as the decoupling of expression correlation [22, 24]. Based on this opinion, we assume that the present differential coexpression (DCE-) based human disease network should be more relevant to pathogenic mechanisms than the networks based on differential expression analysis (DEA).
In order to carry out a parallel comparison, we replaced the dC value in our similarity measurement with the differential expression level, the log of Fold Change, and obtained 1583 differential expression (DE-) based DDLs. As expected, one-sided Fisher's exact tests showed that the DE-based DDLs did not significantly share either disease-related genes (p-value 0.229) or disease drugs (p-value 0.596) (Additional file 4), whose p-values were much larger than those of the DCE-based DDLs, 0.009 and 0.023.
To further understand the difference between the two strategies, DE-based and DCE-based, we compared the DDLs identified by the two methods and found that only 162 disease links (~12 %, as shown in Fig. 1) were common. The non-significant disease pairs (DE_nonsig and DCE_nonsig in Fig. 2) were then included in the analysis. According to the percentage of the disease pairs which significantly share drugs (Fig. 2, color depth of each region), it was found that DCE-based DDLs (DCE_sig, including "773" region, "162" region and "391" region in Fig. 2) share drugs much more remarkably than DE-based DDLs (DE_sig in Fig. 2), DE-based non-significant pairs (DE_nonsig in Fig. 2), and DCE-based non-significant pairs (DCE_nonsig in Fig. 2) in order. At the same time, non-significant disease pairs identified by DCE-based method (DCE_nonsig, including "2554" region, "511" region and "1387" region in Fig. 2) have the lowest percentages of the disease pairs which significantly share drugs. However, among the DE-based DDLs (DE_sig, including "910" region, "162" region and "551" region in Fig. 2), the 551 DDLs which are non-significant disease pairs according to DCE-based method have the lowest percentage of the DDLs which significantly share drugs; while, out of the DE-based non-significant pairs (DE_nonsig, including "2417" region, "391" region, "1387" region in Fig. 2), the 391 non-significant pairs which are DDLs according to DCE-based method have the highest percentage of disease pairs sharing drugs. In this way, the disease network based on DCEA proved to be more relevant to pathogenesis than that based on DEA. Figure 2 clearly captures the potential false positive and false negative disease pairs identified by DE-based strategy, and explains why the DCE-based strategy outperformed DE-based strategy.
Comparison of two types of disease networks which were identified based on DCE strategy and DE strategy. DCE_sig and DCE_nonsig denote significant and non-significant disease pairs which were identified by differential coexpression based strategy. DE_sig and DE_nonsig denote significant and non-significant disease pairs which were identified by differential expression based strategy. Meanwhile, the depth of color in every region represents the percentages of disease pairs which significantly share disease drugs
In order to compare the relevance of DCE and DE in the present work, we specifically extracted 32 cancer datasets from our 108 datasets. Since cancer progression requires the coordination of cancer genes, we simultaneously calculated dC values and the log values of Fold Change of Ras (NRAS, KRAS, HRAS and MRAS), Myc, p53 and PTEN in each cancer type, and proposed that the more relevant the measurement, the more coherent the values across the various cancer genes. It was found that gene dCs in the 32 cancer types are consistent across the seven cancer genes; in contrast, the log of Fold Change did not display any significant pattern (Fig. 3 a, b). This result further supports the rationality of our DCE-based analysis strategy.
dC values and log values of Fold Change of Ras (NRAS, KRAS, HRAS and MRAS), Myc, p53 and PTEN in 32 cancer datasets. a dC values. b Log of Fold Change values
The DCE-based human disease network is partly consistent with traditional disease classification
In order to study the consistency of our DCE-based human disease network with previous knowledge on disease classification, we carried out the following analyses. First, we clustered the network by using the average method of hierarchical clustering based on their pair-wise partial correlation coefficients in which non-significant coefficients were set to be zero. This resulted in a cluster tree involving six disease groups, comprised of 6, 6, 12, 18, 22 and 44 diseases respectively (Fig. 4). The six groups are basically consistent with the classification systems in Medical Subject Headings (MeSH), International Classification of Diseases (ICD-10) and Disease Ontology (DO). For example, Neurodegenerative disease, Parkinson's disease, Alzheimer's disease and some clinically isolated syndromes which are members of "nervous system disease" category of DO (or "Diseases of the nervous system" category of ICD-10 or "Nervous System Diseases" category of MeSH) are gathered together. Similarly, diseases in "gastrointestinal system disease" category of DO (or "Diseases of the digestive system" category of ICD or "Digestive System Diseases" category of MeSH) such as Ulcerative colitis and Crohn's Disease are connected.
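A minimal version of this clustering step is sketched below: convert the partial-correlation similarity matrix (with non-significant entries set to zero) into a distance matrix and apply average-linkage hierarchical clustering. The 1 - correlation distance transform and the random similarity matrix are assumptions made for the illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import average, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(5)

# Hypothetical disease-disease partial correlation matrix for 8 diseases
# (in practice, non-significant coefficients would already be zeroed).
n = 8
corr = rng.uniform(-1, 1, size=(n, n))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 1.0)

# Convert similarity to a distance and cluster with average linkage.
dist = 1 - corr
np.fill_diagonal(dist, 0.0)
Z = average(squareform(dist, checks=False))
groups = fcluster(Z, t=6, criterion="maxclust")   # cut the tree into <= 6 groups
print("cluster labels:", groups)
```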
Hierarchical clustering of 108 diseases. Different colors represent different groups
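The clustering step described above can be reproduced along the following lines. This is a minimal sketch that assumes the partial correlation matrix (with non-significant coefficients already zeroed) is turned into a distance as 1 − r; the paper does not spell out its exact distance transform.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_diseases(pcor, n_groups=6):
    """pcor: symmetric (n_diseases x n_diseases) partial correlation matrix with
    non-significant coefficients already set to zero. Returns one group label
    per disease after cutting the average-linkage tree into n_groups clusters."""
    dist = 1.0 - np.asarray(pcor, dtype=float)   # assumption: higher correlation = closer
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)   # condensed distance vector for linkage()
    tree = linkage(condensed, method="average")  # the "average method" of hierarchical clustering
    return fcluster(tree, t=n_groups, criterion="maxclust")
```

Cutting the tree with criterion="maxclust" and t=6 yields six groups, matching the grouping shown in Fig. 4.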
Furthermore, we marked the 108 diseases in our disease network with their category names in MeSH, ICD-10 and DO, and the disease network was thus divided into several sub-networks according to the category markers. In order to check whether diseases from the same category are inclined to form a compact sub-network in our disease network, we applied a metric, within-network distance (WD) (see Methods), to estimate the relational closeness of each sub-network [2]. When the WD value of a sub-network is smaller than that of the whole network, the diseases in the sub-network, or within the category, are proposed to lie closer to each other. Table 2 indicates that most of the within-category diseases form more compact sub-networks than the background. Among them, "Infection" and "Mental disorder" are the most compact categories. However, a small number of sub-networks/categories have larger WD scores relative to the whole network, including "disease of anatomical entity" of DO, "Diseases of the musculoskeletal system and connective tissue" of ICD and four categories of MeSH. We checked these categories one by one as follows. Since "disease of anatomical entity" of DO contains several sub-categories, we recalculated the WD scores of the sub-categories with three or more diseases. It was found that all the sub-categories actually show smaller WD scores than the whole network except one, "musculoskeletal system disease" (shown in Additional file 5). As expected, "Diseases of the musculoskeletal system and connective tissue" of ICD and "Skin and Connective Tissue Diseases" of MeSH, which are the congener disease categories of "musculoskeletal system disease" of DO, do not form compact sub-networks either. For the other three categories of MeSH with larger WD scores than the whole disease network, the scenario is much more complicated, at least partly due to the inconsistency of disease taxonomy between DO/ICD-10 and the MeSH system, since MeSH allots a disease to multiple categories.
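The WD metric itself is defined in the Methods (following [2]). As a rough sketch, assuming WD is the mean shortest-path distance between within-category disease pairs in the unweighted DDL network, it could be computed as follows; disconnected pairs are simply skipped here, which may differ from the authors' handling.

```python
import itertools
import networkx as nx

def wd_score(ddl_edges, diseases):
    """Mean shortest-path distance among `diseases` in the unweighted DDL network.
    ddl_edges: iterable of (diseaseA, diseaseB) significant links."""
    g = nx.Graph(ddl_edges)
    dists = []
    for a, b in itertools.combinations(diseases, 2):
        if a in g and b in g and nx.has_path(g, a, b):
            dists.append(nx.shortest_path_length(g, a, b))
    return sum(dists) / len(dists) if dists else float("nan")

# A category is called compact when
# wd_score(ddl_edges, category_diseases) < wd_score(ddl_edges, all_diseases)
```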
WD scores for different categories and whole network (columns: Category names, NO. of diseases, NO. of DDLs, WD scores)
disease by infectious agent
disease of anatomical entity (a)
disease of cellular proliferation
disease of mental health
Certain infectious and parasitic diseases
Endocrine, nutritional and metabolic diseases
Disease of nervous system
Disease of the circulatory system
Diseases of the respiratory system
Diseases of the musculoskeletal system and connective tissue (a)
Diseases of the genitourinary system
Musculoskeletal Diseases
Male Urogenital Diseases
Female Urogenital Diseases and Pregnancy Complications
Hemic and Lymphatic Diseases (a)
Congenital, Hereditary, and Neonatal Diseases and Abnormalities
Skin and Connective Tissue Diseases (a)
Nutritional and Metabolic Diseases
Immune System Diseases (a)
Pathological Conditions, Signs and Symptoms (a)
whole network (1326 DDLs)
(a) indicates the categories whose WD scores are larger than that of the whole network
So far, our disease network has proved to be basically compatible with traditional disease classification systems, although some categories have larger WD scores than the whole network, for example, musculoskeletal system disease, which may have more heterogeneity than previously thought and deserves more investigation on its pathogenesis and classification.
At this point, we checked the 1326 significant disease relationships (DDLs) in our disease network individually to see whether they are consistent with previous knowledge in MeSH, ICD-10 and DO. It was found that for 566 DDLs (~43 %, see Fig. 1) the disease pair shares at least one common disease category. The remaining 760 DDLs (~57 %) are therefore considered novel disease relationships, among which 82.13 % significantly share disease-related genes or drugs (Additional file 3). In fact, some of these novel DDLs have been found to share similar molecular mechanisms by individual studies. For example, in our disease network, Obesity is connected with several types of cancers including Lung squamous cell carcinoma, Lung adenocarcinoma, Colorectal cancer and Renal clear cell carcinoma, which is consistent with several population-based observations [34–36]. In our network, Obesity is also connected with Psoriasis, which is located differently from Obesity according to traditional classifications; interestingly, Psoriasis is reported to be affected by many cytokines which contribute to metabolic syndromes such as Obesity [37]. Another novel disease pair, Lung adenocarcinoma and S. pneumonia, is also supported by independent observations, such as the increased risk of lung cancer among persons with lung infections including pneumonia [38]. These novel disease relationships will allow more opportunities for drug repositioning, that is, finding new uses of existing drugs.
Both the type of disease and the type of affected tissue influence disease similarity
Inspired by what the pan-cancer project pointed out, "cancers of disparate organs have many shared features, whereas, conversely, cancers from the same organ are often quite distinct", we analyzed the similarity of the same diseases originating from different tissues and the similarity of different diseases originating from the same tissue, aiming to determine whether the disease type and the affected tissue both influence disease similarity.
As mentioned above, among the original 89 diseases, 11 diseases involved two or more tissues. For example, Type 2 diabetes was split into Type 2 diabetes - liver and Type 2 diabetes - PBMC. We found that the same diseases originating from different tissues (termed disease members hereafter) could have connections in our disease network, although they are not necessarily completely connected. For example, Parkinson's disease - brain whole substantia nigra, Parkinson's disease - whole blood, Parkinson's disease - brain prefrontal cortex and Parkinson's disease - brain putamen are connected as shown in Fig. 5. Three other multi-tissue diseases also show this character, including Chronic obstructive pulmonary disease, Systemic lupus erythematosus and Type 2 diabetes, which have two, four and two disease members respectively. Huntington's disease is similar: Huntington's disease - frontal cortex is connected with Huntington's disease - cerebellum and Huntington's disease - caudate nucleus, except that Huntington's disease - whole blood is isolated from the other three. For the remaining six multi-tissue diseases, the disease members are completely isolated from each other in our disease network; these include Chronic lymphocytic leukemia, Colorectal cancer, Allergic asthma, Rheumatoid arthritis, IgA nephropathy and Down syndrome, involving two, four, two, two, two and two members, respectively. The above observations are consistent with Hoadley's in cancer [39], that only a small proportion of diseases represent stable manifestations in different tissues. The isolation of disease members in the network indicates that the same diseases could have extremely different pathogeneses in different tissues, just like what the pan-cancer project declared, "same genetic aberrations have very different effects depending on the organ within which they arise" [40].
DDLs among four disease members of Parkinson's disease. Nodes denote Parkinson's diseases which originated from four tissues, red lines represent positively correlated diseases, and green lines represent negatively correlated diseases
Furthermore, we systematically estimated whether different diseases with the same tissue origin tend to show similarity. In the present work, the 108 diseases affect a total of 25 tissues, among which 74 diseases (~69 %, Fig. 1) affect 10 tissues, each involving no fewer than three diseases. These ten tissues and the related 74 diseases were selected for the follow-up analysis. For each tissue, the WD score of the involved diseases was calculated to examine whether the different diseases originating from the same tissue form a compact sub-network (Table 3). In order to evaluate the statistical significance of the WD score, we permuted diseases while keeping the number of diseases belonging to each tissue unchanged, counted the number of DDLs based on the permuted data and calculated the new WD scores. This permutation procedure was repeated 500 times, and the resulting pseudo WD scores formed an empirical null vector. The p-value for each tissue was then estimated by using the Wilcoxon test. It was found that the WD scores of all ten tissues are significantly smaller than that of the whole disease network except one tissue, lymphoblastic cells. That is, different diseases with the same "tissue-of-origin" tend to have similar pathogenesis.
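A simplified version of this tissue-level test is sketched below. It reuses the hypothetical wd_score helper from the WD example above, reshuffles which diseases are assigned to the tissue while keeping the observed DDL network fixed (the authors instead recount DDLs on permuted data), and treats the Wilcoxon step as a one-sample signed-rank test of the 500 pseudo scores against the observed score; the authors' exact test variant is not specified.

```python
import random
import numpy as np
from scipy.stats import wilcoxon

# wd_score() is the hypothetical helper sketched in the WD example above.

def tissue_wd_pvalue(ddl_edges, tissue_diseases, all_diseases, n_perm=500, seed=0):
    """Empirical null for the WD score of one tissue's diseases.
    all_diseases must be a list (or other sequence) of all 108 disease labels."""
    rng = random.Random(seed)
    observed = wd_score(ddl_edges, tissue_diseases)
    null_scores = [
        wd_score(ddl_edges, rng.sample(all_diseases, len(tissue_diseases)))
        for _ in range(n_perm)
    ]
    diffs = np.array(null_scores) - observed
    # one-sided signed-rank test: are pseudo WD scores systematically larger than observed?
    _, p = wilcoxon(diffs, alternative="greater")
    return observed, null_scores, p
```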
WD scores for the diseases originating from specific tissues and for the whole network
P-values
2.346e-06
6.29e-06
lymphoblastic cells
PBMC (a)
whole blood
(a) PBMC is short for Peripheral Blood Mononuclear Cells
(b) denotes a non-significant p-value (p > 0.05)
Taken together, the traditional disease taxonomy which classifies diseases based on "tissue-of-origin" is reasonable from the viewpoint of the disease network in terms of dysfunctional regulatory mechanisms. Additionally, the divergence of disease members in different tissues, the convergence of different diseases in the same tissue, and the stable manifestations in different tissues of a small proportion of multi-tissue diseases indicate that both the type of disease and the type of affected tissue influence the degree of disease similarity.
The disease network helps to explore common molecular pathogenesis shared by similar diseases
Since our disease network was inferred by evaluating the similarity of gene correlation change between diseases, it offers us the possibility to explore the common dysfunctional regulation mechanisms underlying DDLs by extracting common differential coexpression relationships shared by linked diseases.
We took a sub-network including Allergic asthma, Type 2 diabetes, IgA nephropathy and Chronic kidney disease as an example to demonstrate the exploration of the common pathogenesis underlying the disease network, since the four diseases converged in the same disease cluster in Fig. 4 and were also identified as a disease trajectory reflecting temporal disease progression in Jensen et al.'s study [19]. As shown in Fig. 6a, Allergic asthma was connected with Type 2 diabetes, Type 2 diabetes with IgA nephropathy, and IgA nephropathy with Chronic kidney disease. Considering that IgA nephropathy is linked to several chromosomal regions while the responsible genes are still unclear [41], we excluded IgA nephropathy from the following analysis and focused on the other three, relatively better characterized, complex diseases: Allergic asthma, Type 2 diabetes and Chronic kidney disease. We first sorted out 197 common DCGs shared by the three diseases. Meanwhile, we obtained disease-related pathways of the three diseases from the MalaCards database version 1.05 [42, 43], resulting in 154 disease-related pathways in total (Additional file 6). However, we noticed that there are no overlapping pathways across the three diseases, and only three pathways are shared between Allergic asthma and Chronic kidney disease and eight between Chronic kidney disease and Type 2 Diabetes (Fig. 6b). This is probably attributable to the limited prior knowledge on the disease pathogenesis. We then took the 154 union pathways as candidate pathways of the three diseases. It is interesting that 37 out of the 154 pathways (~24 %) are significantly enriched by the 197 common DCGs according to the hypergeometric test (see Table 4 for the top three pathways), and this proportion, 24 %, is significantly higher than expected at random by permutation test (p = 0.026). Hence, we propose that the 37 pathways and the 49 common DCGs they include may contribute to the common molecular pathogenesis of Allergic asthma, Type 2 diabetes and Chronic kidney disease.
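The enrichment step, asking whether a candidate pathway contains more of the 197 common DCGs than expected by chance, is a standard hypergeometric test. A minimal sketch follows, assuming the background is the set of all genes measured in the three datasets (the authors' exact background choice is not stated here).

```python
from scipy.stats import hypergeom

def pathway_enrichment_p(pathway_genes, common_dcgs, background_genes):
    """P(overlap >= observed) under the hypergeometric null.
    background_genes is assumed to be every gene measured in the three datasets."""
    bg = set(background_genes)
    pw = set(pathway_genes) & bg
    dcg = set(common_dcgs) & bg
    k = len(pw & dcg)                       # common DCGs found inside the pathway
    M, n, N = len(bg), len(dcg), len(pw)    # population size, successes, draws
    return hypergeom.sf(k - 1, M, n, N)

# pathways with p < 0.05 would be called enriched; the paper reports 37 of 154 such pathways
```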
The disease sub-network, disease-related pathways and the differential coexpression information. a The sub-network formed by four diseases including Allergic asthma, Type 2 diabetes (T2D), IgA nephropathy and Chronic kidney disease and their partial correlation coefficients. b Venn diagram of disease-related pathways of Allergic asthma, T2D and Chronic kidney disease. c Gene network centered by three common DCGs in Wnt signaling pathway FOXN1, FZD8 and TLE2. Red nodes denote FOXN1, FZD8 and TLE2. Green nodes denote genes which form differentially coexpressed links with common DCGs. Four genes with bold type, FZD8, TGFBI, CCL18 and GHR, denote that their associations with Allergic asthma, T2D or Chronic kidney disease have been reported
The three most significant disease-related pathways
Included common DCGs
WNT_SIGNALING
FOXN1, FZD8, TLE2
BIOCARTA_IGF1MTOR_PATHWAY
MTOR, IGF1R
KEGG_PROSTATE_CANCER
MTOR, IGF1R, IKBKG, E2F2
Numbers in "AA", "T2D" and "CKD" columns are the differential coexpression values (dCs) of pathways
Among the 37 pathways, the Wnt signaling pathway is the most significantly enriched by the 197 common DCGs (Table 4). There have been individual reports of associations between the Wnt pathway and Asthma [44], Type 2 diabetes [45] and Chronic kidney disease [46], although in the MalaCards database Wnt is not assigned as Asthma- or Chronic kidney disease-related. According to our data, the Wnt pathway involves three common DCGs, FZD8, FOXN1 and TLE2, among which only FZD8 has been reported to participate in the pathogenesis of Asthma [47, 48] and to display abnormal expression in Chronic kidney disease [49]. There is no literature on the roles of FOXN1 and TLE2 in Asthma, T2D and Chronic kidney disease in the public domain. We propose that the three genes, FZD8, FOXN1 and TLE2, may contribute to the pathogenesis of the three complex diseases. We then identified the differentially coexpressed links (DCLs) by using DCGL [23, 25], and built a gene differential coexpression network which is centered on FZD8, FOXN1 and TLE2 and linked by differential coexpression relationships (Fig. 6c). There are a total of 18 genes and 36 links in the network, with 23 links appearing in one disease, 12 links in two diseases, and one (the link between FZD8 and TSC22D2) in all three diseases (Additional file 7). According to the philosophy of differential coexpression analysis [24], these links potentially represent regulation relationships disturbed during disease progression, and are therefore worthy of further investigation.
For example, FZD8 and FOXN1 are commonly linked to 14 genes in Fig. 6c, among which TGFBI has been proved to contribute to Allergic asthma [47] and T2D [50], CCL18 contributes to Allergic asthma [51], and GHR is associated with T2D [52]. In our data, TGFBI and FOXN1 do not correlate with each other in normal tissue, while they present negative correlation in Allergic asthma (−0.76); meanwhile, the positive correlation of TGFBI and FZD8 in normal tissue is reversed to be negative in T2D (from 0.63 to −0.86). CCL18 is a differentially coexpressed gene (DCG) in Allergic asthma. As for GHR, its negative correlation with FZD8 in normal tissue (−0.69) disappears in T2D. These correlation changes may indicate altered protein-protein interactions, disturbed gene regulation, or other abnormal molecular events, and therefore provide clues for further investigation of signaling transduction in pathogenesis. It is interesting that none of the above-mentioned six genes, FOXN1, FZD8, TLE2, TGFBI, CCL18 and GHR, are differentially expressed between disease and normal samples, which is consistent with the opinion that crucial factors are not necessarily differentially expressed [22, 24]. Among the six genes, although FOXN1 is an immune-related transcription factor (see Additional file 8 for FOXN1-linked DCLs in the three diseases) and TLE2 is a transcriptional corepressor that inhibits Wnt signaling, none of the 36 DCLs in Fig. 6c involves a known regulatory relationship, which is probably due to the limited number of experimentally validated TFs (199) and their regulation relationships (199,950) in DCGL's TF2target library [25]. With the accumulation of experimental evidence for TFs and their corresponding targets, we believe the present analysis framework could generate more insightful testable hypotheses for pathogenesis studies.
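The correlation reversals described above (for example, TGFBI–FZD8 moving from 0.63 in normal tissue to −0.86 in T2D) can be screened for with a simple per-condition correlation contrast. The sketch below is only a rough stand-in for the DCGL criteria actually used, and the delta cut-off is hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def coexpression_change(x_normal, y_normal, x_disease, y_disease, delta=1.0):
    """Contrast the correlation of a gene pair between two conditions.
    x_*, y_*: expression vectors of the two genes across the samples of one condition.
    delta: minimum absolute change in correlation to flag the pair (hypothetical cut-off)."""
    r_normal, _ = pearsonr(np.asarray(x_normal), np.asarray(y_normal))
    r_disease, _ = pearsonr(np.asarray(x_disease), np.asarray(y_disease))
    return r_normal, r_disease, abs(r_disease - r_normal) >= delta
```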
Disease-disease relationships are of great interest because this knowledge enhances our understanding of disease etiology and pathogenesis. Previous works estimated disease similarities based on commonalities in clinical phenotypes [4, 11], gene-phenotype knowledge bases (OMIM and GAD, for example) [1–3, 8], medical vocabulary concepts/features [13, 14], electronic medical records [15, 18–21], high-throughput data (gene expression profiles, for example) [9, 10, 16] and multiple types of data [16–18]. In this way, disease etiology, pathophysiology and disease-related genes/proteins/microRNAs can be appropriated from one disease to another [5–8]; furthermore, scientists can perform drug repositioning and drug target identification from the drug clinical application of similar diseases [9–11]. However, we noticed that when gene expression data were exploited in the field of disease similarity studies [9, 10, 16], attention was paid only to differential expression. It has been widely accepted that diseases originate from the dysregulation of cell signaling transduction, which causes abnormal expression of a large number of genes. That is, differentially expressed genes are more likely to be the consequences of differential regulation mechanisms, rather than the causes of phenotypic changes. More importantly, a causal factor is not necessarily differentially expressed; for example, when a mutation disrupts the regulation function of the causal factor, the causal factor could still be normally expressed, while in this case the correlation between the causal factor and its targets will disappear. A newly emerging strategy, differential coexpression analysis (DCEA) [22–24], was recently designed to explore gene correlation changes, instead of expression level changes, and has been considered more promising in unveiling differential regulation mechanisms of diseases than differential expression analysis [22]. Therefore, in the present work, we explored the architecture of disease relationships in terms of dysfunctional regulation mechanisms by using DCEA for the first time, which proved to be a complement to the disease networks generated from symptoms, disease concepts and biomedical big data. Benefiting from the use of DCEA, our disease links shared known disease genes and drugs more significantly than disease relationships captured by the differential expression (DE) analysis based method (Table 1). By tracing the differentially coexpressed genes and links (DCGs and DCLs), the present disease similarity analysis framework provides a practical way to explore the underlying common molecular mechanisms shared by similar diseases and to generate insightful molecular evidence for etiology and pathogenesis.
It is noted that there are quite a lot of novel disease relationships in our disease network, 760 DDLs (~57 % of all DDLs), and most of them (82.13 %) significantly share disease-related genes or drugs (Additional file 3). As mentioned above, the correlations between Obesity and cancer [34–36], Obesity and Psoriasis [37], Lung adenocarcinoma and S. pneumonia [38], and so on, have been reported by pathogenic or epidemiologic studies, although they have not been adopted by traditional disease classification systems. By contrast, some diseases that are defined in the same category in traditional classification systems do not show significant similarities in our disease network. It seems that these categories may have more heterogeneity than previously thought and deserve further investigation. We hold the view that a small proportion of diseases need to be reclassified according to a new molecular taxonomy. The contradictory observations between our disease network and the traditional disease classification systems may provide insightful clues.
It has been accepted that similar diseases tend to involve similar molecular mechanisms and hence have the potential to be treated by common drugs. That is to say, if a drug has been proved to successfully treat disease A, it might be used to treat A-linked diseases, which is the basis for drug repositioning. Based on the information from DrugBank, the DDLs in our disease network showed this tendency (Table 1). Taking Ulcerative colitis and Crohn's disease as an example, they are considered similar diseases in traditional disease classification systems, and their correlation is 0.428 in our data, ranking third among the 1326 DDLs. The p-values of the hypergeometric test for their common disease genes and disease drugs are 6.82E-183 and 1.50E-05 respectively, both of which are in the top 5 % of all DDLs (Additional file 3). In 1998, Infliximab, a chimeric monoclonal antibody against tumor necrosis factor alpha (TNF-α), was invented and approved for treatment of Crohn's disease [53]. Several years later, some studies proved that Infliximab has a positive outcome when treating Ulcerative colitis [54, 55]. We noticed that even among the 760 novel DDLs, 42.7 % significantly share drugs (Additional file 3). For example, Psoriasis and T-cell prolymphocytic leukemia are different diseases in traditional classification systems, while they form a DDL in our data (correlation coefficient 0.091, in the top 8 % of all DDLs). They were found to significantly share drugs with a p value of 0.035 (top 22.7 % of all DDLs, Additional file 3). Methotrexate, an antimetabolite and antifolate drug, is recorded in the American Hospital Formulary Service (ASHP) drug information 2004 for treatment of both prolymphocytic leukemia and Psoriasis, while methotrexate for autoimmune diseases is taken in lower doses than for cancer [56]. Another interesting example is Parkinson's disease and Influenza A, which seem to be unrelated to each other; however, Amantadine hydrochloride (trade name Symmetrel, by Endo Pharmaceuticals) has been approved for treatment of both Influenza A and Parkinson's disease [57]. In our disease network, Parkinson's disease is linked to Influenza A with a correlation coefficient of 0.061, in the top 30 % of all DDLs. They significantly share drugs with a p value of 0.032 (top 21 % of all DDLs, Additional file 3).
On the other hand, since negatively correlated diseases, say disease A and A', may involve inversely regulated biological processes, we proposed that an anti-A drug may have an undesired property of inducing disease A' when the drug reverses its target processes. Still taking Crohn's disease and its therapeutic drug infliximab as an example, Crohn's disease is negatively connected with the T-cell source of chronic lymphocytic leukemia (correlation coefficient −0.15, top 5 %) and Melanoma (correlation coefficient −0.05, top 50 %) in our data; infliximab, a chimeric monoclonal antibody against tumor necrosis factor alpha (TNF-α), is usually used for treatment of inflammatory bowel disease (IBD) such as Crohn's disease [53]. In 2006, the Food and Drug Administration (FDA) issued a warning for infliximab given its potential association with the development of Hepatosplenic T-cell lymphoma, which is a subtype of the T-cell source of chronic lymphocytic leukemia [58]. This phenomenon was also observed in other independent studies [59–61]. Similarly, a case-control study showed an increased risk of melanoma with anti-TNF treatment in IBD patients [62]. We believe that the differential coexpression properties of these negatively correlated diseases could help to explore the underlying mechanisms and improve the therapeutic applications. It is interesting that we also noticed some negatively correlated diseases which share drug(s). Still taking infliximab as an example, infliximab is used for treatment of both Crohn's disease [53] and Rheumatoid arthritis [63], although the two diseases form a negative DDL in our disease network. Another example is tamoxifen, a commonly used anti-breast cancer drug, which was recently proved to remedy Duchenne muscular dystrophy (DMD) in the mdx(5Cv) mouse model [64], though Muscular dystrophy is negatively correlated with some cancers in our disease network. These intriguing observations need further investigation.
We believe that there is valuable druggability information to be discovered in our disease network, and the present work affords an effective and practical way for systematic drug repositioning. Last but not least, the negative disease pair information helps to discover drug side effects, explore the underlying mechanisms and improve therapeutic applications.
Just as Todd Golub claimed, "Large, unbiased genomic surveys are taking cancer therapeutics in directions that could never have been predicted by traditional molecular biology" [65], a data-driven disease similarity research strategy allows researchers to obtain a comprehensive, unbiased architecture of the diseasome, which includes useful hints about pathogenesis exploration and drug development.
We developed a differential coexpression based approach to measure disease similarity, and constructed a human disease network involving 1326 DDLs among 108 diseases. We discovered quite a lot of novel disease links, some of which have been found to share similar pathogenesis. Our data-driven disease similarity strategy allows researchers to obtain a comprehensive, unbiased architecture of the diseasome from the viewpoint of dysfunctional regulation mechanisms, which could include hints for pathogenesis exploration and drug development.
Reviewers: This article was reviewed by Limsoon Wong, Rui Wang-Sattler and Andrey Rzhetsky
Reviewer's report
Title: The human disease network in terms of dysfunctional regulatory mechanisms
Version: 1 Date: 16 July 2015
Reviewer: Prof Limsoon Wong. School of Computing, National University of Singapore
Report form:
Good points:
1/ This manuscript describes an interesting approach to measure the similarity of diseases based on hypothesized rewiring of gene regulation networks. The rewiring is hypothesized/predicted based on changes in the co-expression of adjacent genes in a pathway. This is an interesting idea and, in theory, is plausible.
2/ The manuscript presents a variety of analyses based on the disease-disease network/links generated by the method mentioned in 1/ above. The analyses are interesting and provide reasonable evidence of the validity of the disease-disease links the authors have uncovered. E.g., in one analysis, the enrichment of shared disease genes between adjacent diseases in their inferred disease-disease network is shown.
3/ The manuscript highlights a number of hypotheses on the relationship between diseases; although I am not in a position to judge these, I find them interesting and sufficiently described for a more knowledgeable expert to judge.
4/ I also find the point that if two diseases have a negative relationship, then the drug for one may make the other worse to be interesting and plausible. If this proves valid upon deeper investigation, it points to a very important use of the constructed disease-disease network.
Weak points:
5/ The significance of a disease-disease pair/link is tested by a permutation by random assignment of genes to pathways (albeit preserving number of genes in a pathway, number of pathways, etc.). Nevertheless, such a random assignment is valid only when one assumes as a null hypothesis that genes in a pathway are mutually independent of each other. This null hypothesis is obviously false. Thereby, it has a tendency to be rejected, and this rejection is insufficient for one to conclude the validity of the disease-disease link. The rejection of this null hypothesis (which basically says genes in a pathway are no different from random ones) can only imply an alternate hypothesis that says genes in a pathway do behave differently from a random set of genes. But this alternate hypothesis has nothing to do with the validity of the disease-disease link. I.e., there will be a lot of false positives among the significant links by this permutation test. The authors should think of a more appropriate permutation test (or other form of test) that comes with a more appropriate null hypothesis.
Response: We agree with you that our permutation step could be designed more sophisticatedly. We actually borrowed this design from a previously reported DE-based disease similarity study [10]. Following its design, we randomized the relationships between genes and pathways while preserving the number of pathways a given gene belongs to, the number of pathways' component genes, and the number of all pathways. In this way, the distributions of pseudo pathways are similar to their corresponding real pathways, and therefore we assume that the null hypothesis can be regarded as the diseases being mutually independent of each other, and the alternative hypothesis as the disease links being different from random links.
In the present work, we further validated our disease links by checking if the similar diseases in our disease network tend to share disease related genes and drugs (see "A human disease network was built with a differential coexpression (DCE-) based computational approach" section). In order to describe the permutation test more clearly, we revised 'Permutation test of disease pairs' in the current version.
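One simple way to realize the permutation described in this response, preserving each pathway's size and each gene's number of memberships, is to shuffle the gene column of the gene-pathway membership list. The sketch below illustrates this idea only; duplicate memberships that may be created by the shuffle are not handled, and the authors' implementation may differ.

```python
import random

def permute_memberships(memberships, seed=0):
    """memberships: list of (gene, pathway) pairs. Shuffling the gene column keeps
    each pathway's size and each gene's number of memberships unchanged."""
    rng = random.Random(seed)
    genes = [g for g, _ in memberships]
    rng.shuffle(genes)
    return [(g, p) for g, (_, p) in zip(genes, memberships)]
```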
6/ A database of pathways is used as the starting point. It is not clear from the manuscript whether each pathway is used as a separate network and analyzed separately. Or, these pathways are integrated into one single big network, then co-expression analysis is performed on this integrated network. The authors should clarify this in their method description.
Response: Sorry, we didn't make it clear. The coexpression analysis was actually carried out at the gene level at the very beginning of our pipeline, and had nothing to do with pathway knowledge. Since the DCEA method we adopted in the current work has been thoroughly explained in a previous publication, we only cited our original paper and didn't describe its detailed information. Following the reviewer's suggestion, we re-organized the method description of the "Disease similarity algorithm" section and added a workflow to illustrate the algorithm in Additional file 2.
7/ The manuscript mentions that muscular dystrophy is negatively correlated with cancers. I am not sure that this is consistent with current medical knowledge. For counter examples, I recall myotonic muscular dystrophy has been reported to be associated with elevated risk of cancers and DMD patients have been reported to respond well to cancer drugs like tamoxifen.
Response: Thank you for providing this information. We found the paper which reported that tamoxifen, used to treat estrogen-dependent breast cancer, caused remarkable improvements of muscle force and of diaphragm and cardiac structure in the mdx(5Cv) mouse model of Duchenne muscular dystrophy (DMD) [64]. After rechecking our data, we found another similar example, Crohn's disease [53] and Rheumatoid arthritis [63], which are negatively correlated while sharing a drug, infliximab. This is quite interesting and deserves further investigation. Since we don't see any plausible explanations about its potential mechanism, we merely added this observation to the discussion as follows, "It is interesting that we also noticed some negatively correlated diseases which share drug(s). Still taking infliximab as an example, infliximab is used for treatment of both Crohn's disease [53] and Rheumatoid arthritis [63], although the two diseases form a negative DDL in our disease network. Another example is tamoxifen, a commonly used anti-breast cancer drug, which was recently proved to remedy Duchenne muscular dystrophy (DMD) in the mdx(5Cv) mouse model [64], though Muscular dystrophy is negatively correlated with some cancers in our disease network. These intriguing observations need further investigation."
Reviewer: Dr Rui Wang-Sattler. Helmholtz Zentrum München, Munich
Report form: see the attached comments
Quality of written English: Acceptable
The manuscript "The human disease network in terms of dysfunctional regulatory mechanisms" presents a human disease network derived from mRNA expression data, pathway data, and information of disease-related genes and drugs. A differential coexpression analysis method, previously developed by the same group, was used to explore the larger data. The authors identified 760 novel disease-disease links and several disease relationships including obesity and cancer. Furthermore, both the types of diseases and of affected tissues were found to influence the degree of disease similarities.
Overall, the design of the study is of high interest. The methods employed are adequate and sound. The analysis is very well done. The results are very promising and provide good insight into etiology and pathogenesis. The weaknesses of the paper are the presentation of the complicated results and of the methods used. The organization of the paper can be improved; for example, a new figure of the workflow may help the readers of Biology Direct to better follow and understand the design and results of the study. The clarity and/or coherence of the paper need to be improved as specified in the following:
Specific comments:
Please show words in the 'Keywords', e.g., human disease network instead of network.
Response: Thanks. "Human disease network" has been shown in the "Keywords".
Please remove the description of results from the background section, fourth paragraph, starting with 'we identified 1326 significant…'.
Please remove the paragraph starting with 'In conclusion, we construct…' from the background section, fourth paragraph, as this is also shown in the Abstract.
Response: Following your suggestion, we deleted the summary of results and conclusion from the "Background" section. In order to keep the manuscript more complete, we briefly summarized the results at the end of the "Background" section in two sentences, "In the present work, we developed a DCE-based computational approach to estimate human disease similarity, and identified 1326 significant Disease-Disease links (DDLs for short) among 108 diseases. Benefiting from the use of DCEA, the human disease network is constructed for the first time from the viewpoint of regulation mechanisms."
The organization of the publication can be improved. The current Results section is partly mixed with methods, introduction and discussion. Please either remove the repeated method description from the results part or move the methods into the Methods section.
1) In the results, the first paragraph, in general, it's a description of method starting with 'As mentioned in the Methods section, a total of 96 ……file 2 for details.' These should be moved to the method part;
Response: Following the reviewer's suggestion, we removed the repeated method description from the "Results" section, and re-organized the "Methods" section to include all information on data processing. The part "As mentioned in the Methods section, a total of 96 ……file 2 for details." was rewritten and integrated into the "Gene expression data" section of "Methods".
2) In the results, the third paragraph, starting with 'In Hu et al.'s work…' should be introduced in the background;
Response: Yes, Hu et al.'s work was introduced in the "Background" section. In the third paragraph of the "Results" section, we focused on the comparison between Hu et al.'s work and ours.
3) In the results, fifth paragraph, please move 'In 2006, the Food and Drug Administration ….' to the discussion;
Response: Modified. Thanks.
4) In the discussion, the first three paragraphs till 'A new emerging strategy…' can be removed from the manuscript;
5) In the discussion, the fifth paragraph, some results are first described in the Discussion section, e.g., '…Fig. 6A, B.', which should be moved to the Results section.
The tables are nicely presented. However, some figures can be improved: Fig. 1A can be removed as 1326 links cannot be seen clearly.
Response: We agree with your comment. Have removed the overview diagram of 1326 links from the Figure 1 in the revised version.
The coloring of Fig. 3 should be different: similar diseases and the same tissue should each be shown in the same colors. Additionally, the information should be limited to tissues with more than one available disease and disease groups with more than one tissue measured.
Response: We tried to revise Figure 3 according to your suggestion (see the following figure). However, the large number of disease types and tissue types made the graph hard to read. We therefore maintained the original Figure 3.
For Fig. 4, please explain what is shown and add a legend.
Response: Following your suggestion, we added a legend to present the connections among the multi-tissue diseases.
Please correct these typos:
P10: Please exchange Fig. 1A with 1B, as Fig. 1B was mentioned first.
P13: Several abbreviations were defined several times in the methods, results and discussion. For example, the differential coexpression analysis (DCEA) appeared in numerous places.
Please avoid citing the same reference twice, e.g., Reference no.16 = no. 21.
Reviewer: Andrey Rzhetsky. Institute for Genomics and Systems Biology, University of Chicago
The authors' main assumption is that gene expression in disease is different from "healthy" gene expression in the same tissue type in a partially predictable way. A further assumption is that diseases that share features of expression abnormality for the same tissue type should have partially shared etiology. These assumptions are reasonable and intuitive.
However, there is a disconnect at the point when molecular networks are divided into pathways and one computes the disease similarity statistic over these pathways ("Disease similarity algorithm"): There are numerous ways to split a graph into pathways and the currently-used split was produced using a sequence of somewhat arbitrary decisions. For instance, why is the differential co-expression value best defined by a Euclidean distance between two expression vectors (normalized by the square root of the number of vector dimensions, equation 1)? Are there alternatives? Are there desirable statistical properties or an intuitive physical meaning of a so-defined quantity? In other words, it would be nice if the approach did not depend on the arbitrary decisions of uncoordinated experts.
Response: We guess that the order of our descriptions in the "Methods" section seems misleading, where the DCp method was explained almost at the end (together with another measure, WD). DCp is actually performed at the very beginning of the pipeline, and it operates at the gene level. By using DCp, we obtained the differential coexpression values (dCs) of all genes for every disease. As pathways are accountable for most processes in the cell, we then calculated the changes in the coexpression levels of various functional pathways, i.e., the dCs of pathways, by averaging the dCs of each pathway's component genes. The disease similarity was finally estimated as the partial Spearman correlation coefficient of pathways' dCs between any two diseases. In order to describe the pipeline more clearly, we re-organized the "Disease similarity algorithm" section and added a workflow to illustrate each step of the algorithm in Additional file 2.
As for the design of dC measure, since the method was reported in our previous work, we did not explain its details in the current manuscript. As described in the original paper [24], in order to estimate the degree of correlation change of a gene in two contrastive conditions, say disease and normal, the differential coexpression measure, dC, was defined as the Euclidean distance of two contrastive coexpression profile of the gene under two conditions [24]. DCp proved to be superior to currently popular designs, including LRC, ASC and WGCNA [Choi, J.K., Yu, U., Yoo, O.J. and Kim, S. (2005) Differential coexpression analysis using microarray data and its application to human cancer. Bioinformatics, 21, 4348–4355. Reverter, A., Ingham, A., Lehnert, S.A., Tan, S.H., Wang, Y., Ratnakumar, A. and Dalrymple, B.P. (2006) Simultaneous identification of differential gene expression and connectivity in inflammation, adipogenesis and cancer. Bioinformatics, 22, 2396–2404. Mason, M.J., Fan, G., Plath, K., Zhou, Q. and Horvath, S. (2009) Signed weighted gene co-expression network analysis of transcriptional regulation in murine embryonic stem cells. BMC Genomics, 10, 327. Fuller, T.F., Ghazalpour, A., Aten, J.E., Drake, T.A., Lusis, A.J. and Horvath, S. (2007) Weighted gene coexpression network analysis strategies applied to mouse weight. Mamm Genome, 18, 463–472. van Nas, A., Guhathakurta, D., Wang, S.S., Yehya, N., Horvath, S., Zhang, B., Ingram-Drake, L., Chaudhuri, G., Schadt, E.E., Drake, T.A. et al. (2009) Elucidating the role of gonadal hormones in sexually dimorphic gene coexpression networks. Endocrinology, 150, 1235–1249.], in simulation studies in retrieving predefined differentially regulated genes and gene pairs, which was attributed to their uniqueness of exploiting the quantitative coexpression change of each gene pair in the coexpression networks.
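For readers of this exchange, the dC measure discussed above can be written out explicitly. The following is a reconstruction from the verbal description (the Euclidean distance between a gene's two coexpression profiles, normalized by the square root of the profile length), not a verbatim copy of equation 1 of the paper.

```latex
% Reconstructed form of the differential coexpression value (dC) of gene i.
% c_{ij}^{D} and c_{ij}^{N} are the coexpression (correlation) values between
% gene i and its j-th coexpression partner in the disease and normal data,
% and n_i is the number of partners considered.
\[
  dC_i \;=\; \sqrt{\frac{\sum_{j=1}^{n_i}\left(c_{ij}^{D}-c_{ij}^{N}\right)^{2}}{n_i}}
\]
```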
I understand that the authors are trying to convince readers that the differential co-expression is biologically more relevant than differential expression. The logic of the comparison of these two types of networks can be made clearer. If I understand correctly, the authors cite their own paper and a commentary piece to claim that it "has been accepted that differential coexpression analysis (DCEA) is more powerful in unveiling regulation mechanisms of disease than differential expression analysis (DEA)." This is, in my view, an overstatement.
Response: Due to the space limitation, we did not explain the background of differental coexpression analysis (DCEA) sufficiently in our manuscript. Although DCEA is far from a commonly used method in the field of transcriptomics, differential coexpression and differential regulation have been discussed for more than one decade. Briefly speaking, Differential expression analysis (DEA) looks at absolute changes in gene expression levels, and treats each gene individually. While, gene coexpression analysis explores gene interconnection at the expression level from a systems perspective, and differential coexpression analysis (DCEA) was designed to investigate molecular mechanisms of phenotypic changes through identifying subtle changes in gene expression coordination [Choi, J.K., Yu, U., Yoo, O.J. and Kim, S. (2005) Differential coexpression analysis using microarray data and its application to human cancer. Bioinformatics, 21 , 4348–4355. Reverter, A., Ingham, A., Lehnert, S.A., Tan, S.H., Wang, Y., Ratnakumar, A. and Dalrymple, B.P. (2006) Simultaneous identification of differential gene expression and connectivity in inflammation, adipogenesis and cancer. Bioinformatics, 22 , 2396–2404. Watson, M. (2006) CoXpress: differential co-expression in gene expression data. BMC Bioinformatics, 7 , 509. Fuller, T.F., Ghazalpour, A., Aten, J.E., Drake, T.A., Lusis, A.J. and Horvath, S. (2007) Weighted gene coexpression network analysis strategies applied to mouse weight. Mamm Genome, 18 , 463–472. Mason, M.J., Fan, G., Plath, K., Zhou, Q. and Horvath, S. (2009) Signed weighted gene co-expression network analysis of transcriptional regulation in murine embryonic stem cells. BMC Genomics, 10 , 327. van Nas, A., Guhathakurta, D., Wang, S.S., Yehya, N., Horvath, S., Zhang, B., Ingram-Drake, L., Chaudhuri, G., Schadt, E.E., Drake, T.A. et al. (2009) Elucidating the role of gonadal hormones in sexually dimorphic gene coexpression networks. Endocrinology, 150 , 1235–1249.]. In 2010, a review, entitled "From 'differential expression' to 'differential networking' – identification of dysfunctional regulatory networks in diseases", systematically explicated the development from differential expression to differential coexpression for the first time as we know [22]. It summarized the purpose and features of differential expression analysis and differential coexpression analysis, and proposed that differential coexpression analysis has more chance to unveil regulation mechanisms of disease than differential expression analysis. Our first paper in this field was published immediately after this review [24]. As mentioned above, DCp displays a better performance than its contemporary methods. In very recent years, more and more scientists started to analyze their transcriptomic data from the angle of differential coexpression and differential regulation in order to generate testable hypotheses about the disrupted regulatory relationships or abnormal regulations specific to the phenotype of interest [Diao H, Li X, Hu S, Liu Y (2012) Gene expression profiling combined with bioinformatics analysis identify biomarkers for Parkinson disease. PLoS One 7: e52319. Araki R, Seno S, Takenaka Y, Matsuda H An estimation method for a cellular-state-specific gene regulatory network along tree-structured gene expression profiles. Gene 2013; 518: 17–25. Liu M, Hou X, Zhang P, Hao Y, Yang Y, et al. (2013) Microarray gene expression profiling analysis combined with bioinformatics in multiple sclerosis. Mol Biol Rep 40: 3731–3737. 
Li G, Han N, Li Z, Lu Q (2013) Identification of transcription regulatory relationships in rheumatoid arthritis and osteoarthritis. Clin Rheumatol. Qu Z, Miao W, Zhang Q, Wang Z, Fu C, et al. (2013) Analysis of crucial molecules involved in herniated discs and degenerative disc disease. Clinics (Sao Paulo) 68: 225–230.]. Following this sense, we proposed that a disease similarity measurement based on differential coexpression (DCE), instead of differential expression (DE), may lead to a disease network more relevant to pathogenesis.
In the present work, the disease links in the DCE-based disease network did prove to share known disease genes and drugs more significantly than DE-based disease relationships, supporting that the disease network based on DCEA is more relevant to pathogenesis than that based on DEA. Figure 2 captures the potential false positive and false negative disease pairs identified by DE-based strategy, and explains why the DCE-based strategy outperformed DE-based strategy.
The whole section comparing DCEA to DEA could be made much clearer by separating assumptions (such as "a good method would have similar diseases share more common drugs") and results.
Response: Thanks. We re-organized the related description, trying to make the assumption and results more readable.
The partial consistency of the disease classification network with traditional classification is, in my view, not very informative and convincing.
Response: Considering that traditional disease classification systems are descriptive conceptual systems, we designed the following analyses to make the comparison. First, we clustered the network by using the average method of hierarchical clustering based on the pair-wise partial correlation coefficients, resulting in a cluster tree including six disease groups (Figure 4). These six groups are basically consistent with the classification systems in Medical Subject Headings (MeSH), International Classification of Diseases (ICD-10) and Disease Ontology (DO). This comparison is similar to previous reports [9, 10]. We realized that this so-called consistency is not that informative and convincing, so we then applied a metric, WD, to evaluate the consistency as follows: "we marked the 108 diseases in our disease network with their category names in MeSH, ICD-10 and DO, and the disease network was thus divided into several sub-networks according to category markers. In order to check whether diseases from the same category are inclined to form a compact sub-network in our disease network, we applied a metric, within-network distance (WD) (see Methods), to estimate the relational closeness of each sub-network [2]. When the WD value of a sub-network is smaller than that of the whole network, the diseases in the sub-network, or within the category, are proposed to lie closer to each other. Table 2 indicates that most of the within-category diseases form more compact sub-networks than the background." In this way, our disease network proved to be basically compatible with traditional disease classification systems, although some categories have larger WD scores than the whole network.
Version: 2 Date: 26 August 2015
The authors did not sufficiently address my earlier comment (#5) that the way the significance of a disease-disease pair/link was tested is invalid, as the null hypothesis is obviously false. The authors cited an earlier work that used the same strategy as a justification. But would you repeat a mistake when you know it is a mistake just because someone else also made that mistake? I think the authors should make a better effort here. E.g., instead of considering all the randomized pathways they have generated, they should perhaps consider only a subset of their randomized pathways whose genes exhibit a sufficient amount of correlation in their expression level (comparable to correlation levels found among genes in actual pathways of comparable sizes).
Response: Following the reviewer's suggestion, we made a further effort by checking whether our pseudo pathways' genes exhibit a sufficient amount of correlation. First, we calculated the genes' correlation values of each real pathway in the 108 disease expression profiles. That formed a 5598*108 table (please see "/real_pathway_genes_correlation/pathway_MoreThanTen_c2c5_0_pathway_multi_exprs.txt" of Additional file 9), which is shortened as the following Table I.
Table I. Correlation values of every real pathway in the 108 disease expression profiles.
Real pathways
Summation of genes' correlations in Disease 1
Summation of genes' correlations in Disease 108
Pathway 1
Pathway 5598
Then, we calculated the counterpart values for pseudo pathways. For example, based on the 5598 permuted pathways in the first simulation process, we obtained the genes' correlation values of the pseudo pathways in the 108 expression profiles (shortened as Table II). In all, there were a total of 500 tables of size 5598*108, since we permuted the affiliations between genes and pathways 500 times (due to the limitation of additional file size, please see the result of the 1st permutation at "/pseudo_pathway_genes_correlation/pathway_MoreThanTen_c2c5_1_pathway_multi_exprs.txt" of Additional file 9).
Table II. Correlations of 1st permuted pathways in 108 disease expression profiles.
pseudo pathways
Also taking the permutation result in Table II as an example, if the correlation value of pseudo pathway n (P_n') in a certain expression profile is within the interval of the real pathway n (P_n)'s correlation values over the 108 expression profiles, the genes of the pseudo pathway (P_n') are considered to exhibit a sufficient amount of correlation in that expression profile. Thus, for each pseudo pathway, we can obtain the proportion of the 108 expression profiles in which it presents a sufficient amount of correlation. This proportion is termed the sufficient proportion. As shown in Table III, in the 1st permutation process, 80 % (0.80) of the correlation values of pathway 1 in the 108 expression profiles were within the interval of real pathway 1 over the 108 expression profiles.
Table III. The sufficient proportions of 108 expression profiles for 5598 pathways in 500 permuted processes.
1st time
2nd time
500th time
Distribution of sufficient proportion
84 % of 5598 values >0.80
Finally, we found that almost all sufficient proportions of the pseudo pathways in the 1st permutation are greater than 0.5, and the percentage of pseudo pathways in the 1st permutation whose sufficient proportion is greater than 0.8 is 84 %. The other 499 permutations perform similarly to the 1st one (please see "/percentage_of_sufficient_proportion/percent_sufficient_proportion_of_500_permutation_times.xls" of Additional file 9). That means most pseudo pathways in our permutation design met the requirement of gene expression correlation, even though we did not constrain the correlation values of genes when generating pseudo pathways.
We really appreciate your careful review and helpful suggestion. All the original calculation results are provided together with this revision.
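The check described in this response can be sketched as follows, assuming each cell of Tables I and II is a per-pathway correlation summary in one expression profile (the summation mentioned in the table captions is taken as given); this is an illustration, not the authors' script.

```python
import numpy as np

def sufficient_proportions(real_corr, pseudo_corr):
    """real_corr, pseudo_corr: arrays of shape (n_pathways, n_profiles), e.g. 5598 x 108,
    holding the per-pathway correlation summaries described in Tables I and II.
    For each pseudo pathway, returns the fraction of profiles in which its value lies
    within the [min, max] interval of the matching real pathway across all profiles."""
    real = np.asarray(real_corr, dtype=float)
    pseudo = np.asarray(pseudo_corr, dtype=float)
    lo = real.min(axis=1, keepdims=True)
    hi = real.max(axis=1, keepdims=True)
    inside = (pseudo >= lo) & (pseudo <= hi)
    return inside.mean(axis=1)

# fraction of pseudo pathways whose sufficient proportion exceeds 0.8
# (reported as 84 % for the 1st permutation):
# (sufficient_proportions(real, pseudo) > 0.8).mean()
```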
The section on exploring common molecular pathogenesis shared by similar diseases is mostly descriptive in nature. It is not clear to me what the real insight is.
Response: Since our disease network was inferred by evaluating the similarity of gene correlation changes between diseases, it offers us the possibility to explore the common dysfunctional regulation mechanisms underlying DDLs. The section "The disease network helps to explore common molecular pathogenesis shared by similar diseases" aims to demonstrate how to explore common molecular pathogenesis shared by similar diseases by extracting common differential coexpression relationships shared by linked diseases, for example, Allergic asthma, Type 2 diabetes, and Chronic kidney disease. In this example, we first sorted out 197 common DCGs shared by the three diseases, and then integrated disease-related pathways with the common DCGs to explore the potential common molecular pathogenesis of the three diseases. The Wnt signaling pathway was then extracted, which has been reported to be associated with all three diseases by individual literatures but has not been recorded in the MalaCards database. We therefore highlighted Wnt-related DCGs and DCLs, providing clues for those who are interested in the pathogenesis of Allergic asthma, Type 2 diabetes, and Chronic kidney disease. Also, this example demonstrated how to appropriate pathogenesis from one disease to its similar ones in a practical way.
Also, in the present study, edges are kept if they are significant at p <0.05. What if the threshold is changed to p < 0.01? Do you observe even stronger evidence for the disease-disease links (e.g., increased in proportion of shared disease genes and drugs)?
Response: Following this suggestion, we changed our threshold from p < 0.05 to p < 0.01 and obtained 724 disease pairs (1326 disease pairs when the threshold is p < 0.05).
According to the basic understanding that similar diseases tend to share similar pathogenesis, and thus have the potential to be treated by common drugs, we assume that the more similarity the diseases display, the more disease-related genes and drugs they share.
As described in the manuscript, when the threshold is p < 0.05, a total of 1119 out of 1326 DDLs in the disease network could be associated with known disease genes; similarly, 745 out of 1326 DDLs could be correlated with known drugs (Table 1, Table IV). The hypergeometric tests for the 1119-DDL set and the 745-DDL set indicated that 910 of the 1119 DDLs (81 %) significantly shared known disease genes, and 348 of the 745 DDLs (47 %) significantly shared drugs, both at a p-value threshold of 0.05 (Table 1, Table IV).
When the threshold is p < 0.01, 599 and 309 out of the 724 disease pairs were associated with known disease-related genes and drugs, respectively. Then we applied the same method to evaluate these disease pairs. We found that 486 out of 599 (81 %) and 197 out of 309 (64 %) significantly shared disease genes and drugs, respectively (Table IV). As the reviewer expected, stronger evidence was observed for the disease links when the p value threshold is changed to 0.01.
Table IV. Comparison of disease pairs in different thresholds.
DDLs when p < 0.05
910 ( 81 % )
DCEA: Differential coexpression analysis
DEA: Differential expression analysis
DCE-based: Differential coexpression based
DE-based: Differential expression based
dC: Differential coexpression value
DCG: Differentially coexpressed gene
DCL: Differentially coexpressed link
DDL: Disease-Disease link
MeSH: Medical Subject Headings
ICD-10: International Classification of Diseases 10th revision
DO: Disease Ontology
WD: Within-network distance
PBMC: Peripheral blood mononuclear cell
This work was supported by the grants from the National "973" Key Basic Research Development Program (2012CB316501 and 2013CB910801), the National Natural Science Foundation of China (31171268 and 81272279), the Program of International S&T Cooperation (2014DFB30020), and the Fundamental Research Program of Shanghai Municipal Commission of Science and Technology (14DZ1951300 and 14DZ2252000). We thank Mr. Bo Hu from Zhongshan Hospital, Fudan University for helpful discussions.
Additional file 1: Series information of 108 diseases from GEO. (XLS 41 kb)
Additional file 2: Workflow for identifying significant Disease-Disease links. (TIFF 3715 kb)
Additional file 3: 1326 significant Disease-Disease links. (XLS 358 kb)
Additional file 4: Contingency table to validate the assumption that DE-based disease relationships significantly share disease-related genes or drugs. (XLS 21 kb)
Additional file 5: WD scores for sub-categories of "disease of anatomical entity". (XLS 21 kb)
Additional file 6: 154 related pathways in Allergic asthma, Type 2 diabetes and Chronic kidney disease. (XLS 49 kb)
Additional file 7: FZD8, FOXN1 and TLE2 -centered differentially coexpressed links of Allergic asthma, Type 2 diabetes and Chronic kidney disease. (XLS 34 kb)
Additional file 8: FOXN1-centered differentially coexpressed links in Allergic asthma, Type 2 diabetes and Chronic kidney disease. (PDF 747 kb)
Additional file 9: Results of checking genes' correlation of pseudo pathways. (ZIP 9816 kb)
JY performed the experiments, analyzed the data, and drafted and revised the manuscript. SJW analyzed the data and helped to write the manuscript and plot the figures. WTD helped to perform the statistical analysis and revise the manuscript. YXL conceived the study and revised the manuscript. YYL conceived the study, designed the experiments, analyzed the data, and wrote and revised the manuscript. All authors read and approved the final manuscript.
Authors' information
School of Biotechnology, East China University of Science and Technology, Shanghai, 200237, P.R. China
Shanghai Center for Bioinformation Technology, 1278 Keyuan Road, Shanghai, 201203, P.R. China
Shanghai Industrial Technology Institute, 1278 Keyuan Road, Shanghai, 201203, P.R. China
Key Laboratory of Systems Biology, Institute of Biochemistry and Cell Biology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, 200031, P.R. China
Shanghai Engineering Research Center of Pharmaceutical Translation, 1278 Keyuan Road, Shanghai, 201203, P.R. China
Range of a Function
For a function $f: A \rightarrow B$
Set A is called the domain of the function f
Set B is called the codomain of the function
The set of images of all elements in Set A is called the range, i.e., it is the set of values f(x) which we get for each and every x in the domain
The range may be a proper subset of the codomain, as there may be elements in the codomain which are not images of any element in the domain
It is denoted by R(f) or Range(f)
For a real function, A and B are subsets of the real numbers.
1. Find the domain and Range of the function given by $y=x^2$
Here it is clear that y assumes real values for all $x \in R$
So, D(y) =R
Now we can see that the value of y is always positive or zero, i.e., $y \geq 0$
So Range of function will be
Range (f) = $[0,\infty)$
2. Let A = {2,4,5,6}. A function f is defined from A to N by
$f=\{(x,y): y=x^3 + 2, x \in A, y \in N \}$
Find the Range of the function?
D(f) = {2,4,5,6}
Let us calculate the value of the function for each value of x in the domain
for $x=2 , y=x^3 + 2 = 2^3 +2 = 10$
for $x=4 , y=x^3 + 2 = 4^3 +2 = 66$
for $x=5 , y=x^3 + 2 = 5^3 +2 = 127$
for $x=6 , y=x^3 + 2 = 6^3 +2 = 218$
So, the Range of the function will be given by
R(f) = {10, 66, 127, 218}
How to find the Range of a function
There are several methods to find the range of a function.
A. The range may be found using the algorithm below; this is the inverse function technique:
put y=f(x)
Solve the equation y=f(x) for x in terms of y, say x = g(y)
Find the range of values of y for which the values of x obtained are real and lie in the domain of f
The range of values obtained for y is the Range of the function
B. Another way is to evaluate the function for different values of x, draw the graph, and then read off the range (a short numerical sketch of this approach is given after this list).
C. A third method is to look for the minimum and maximum values of the function and deduce the range from them.
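As a minimal numerical sketch of method B, one can sample the function over a dense grid of x values and inspect the smallest and largest outputs. This only approximates the range (a finite grid cannot prove whether an endpoint is attained or excluded), so it complements rather than replaces the algebraic methods above; the grid limits below are arbitrary choices.

```python
import numpy as np

def approximate_range(f, x_min=-100.0, x_max=100.0, n=200001):
    """Sample f on a dense grid and return the observed min and max of f(x)."""
    x = np.linspace(x_min, x_max, n)
    y = f(x)
    y = y[np.isfinite(y)]          # ignore points where f is undefined
    return y.min(), y.max()

# Solved Example 3 below: f(x) = x/(x^2 + 3); the algebra gives |y| <= 1/(2*sqrt(3))
print(approximate_range(lambda x: x / (x**2 + 3)))   # ~(-0.2887, 0.2887)

# y = x^2: the sampled minimum is 0, while the maximum grows with the grid,
# hinting at the range [0, infinity)
print(approximate_range(lambda x: x**2))
```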
How to write the Range in interval form
\((a,b)\): the open interval between a and b; all points strictly between a and b belong to it, but a and b themselves do not: \(\{ x:a < x < b\} \)
\([a,b]\): the closed interval between a and b; all points between a and b belong to it, including both a and b: \(\{ x:a \le x \le b\} \)
\([a,b)\): the half-open interval that includes a but not b: \(\{ x:a \le x < b\} \)
\((a,b]\): the half-open interval that includes b but not a: \(\{ x:a < x \le b\} \)
Solved Examples
Find the domain and Range of the functions below.
1. $y =f(x) = \frac {2}{x-1}$
Here we see that the function is defined for all values of $x \in R$ except x = 1 (the denominator becomes zero at x = 1).
So the domain is given by
D(f) = R -{1}
For the Range, let y = f(x), i.e.
$ y=\frac {2}{x-1}$
$y(x-1)=2$
$yx -y=2$
$yx=2+y$
$x =\frac {2+y}{y}$
Obviously x assumes real values for all y except y=0
Hence Range is R(f) = R -{0}
2. $f(x) = \frac {1}{ \sqrt {x -3}}$
The function is defined only when
x - 3 > 0, i.e., x > 3
Domain will be given as
D(f) =$(3,\infty )$
$y = \frac {1}{ \sqrt {x -3}}$
$y^2 = \frac {1}{x-3}$
$xy^2 -3y^2 =1$
$x= \frac {1 +3y^2}{y^2}$
Also $y=\frac {1}{ \sqrt {x -3}} > 0$
Therefore, the Range of the function is R(f) = $(0,\infty)$
3. $f(x) = \frac {x}{x^2 + 3}$
Since $x^2 + 3 > 0$ for $x \in R$, this function is defined for all values of $x \in R$
D(f) =$(- \infty,\infty )$
$y = \frac {x}{x^2 + 3}$
$yx^2 + 3y -x =0$
clearly for y=0,x=0
Now for $y \neq 0$
$x = \frac {1 \pm \sqrt {1 -12y^2}}{2y}$
Obviously, x will assume real values if
$1 -12y^2 \geq 0$
$y^2 - \frac {1}{12} \leq 0$
or $ -\frac {1}{2\sqrt 3} \leq y \leq \frac {1}{2\sqrt 3}$
Hence Range of y is
R(f) = $[-\frac {1}{2\sqrt 3},\frac {1}{2\sqrt 3}]$
4.$ f(x) = \frac {(x-3)}{(3-x)}$
$f(x) = \frac {(x-3)}{(3-x)}$
$f(x) = -\frac {(x-3)}{(x-3)}$
We can observe that this assumes real values for all values of $x \in R$ except x = 3 (where it takes the undefined form 0/0). So the Domain of the function is R - {3}
Now for values of x in domain, this function can be written as
$f(x) =-1$
Clearly Range of the function is {-1}
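For cross-checking worked examples symbolically, sympy's function_range utility can be used. A small sketch follows; the exact formatting of the returned intervals may vary between sympy versions.

```python
from sympy import Symbol, S
from sympy.calculus.util import function_range

x = Symbol('x', real=True)

# y = x^2 over the reals: expect the interval [0, oo)
print(function_range(x**2, x, S.Reals))

# Solved Example 3: f(x) = x/(x^2 + 3); expect [-1/(2*sqrt(3)), 1/(2*sqrt(3))],
# i.e. [-sqrt(3)/6, sqrt(3)/6]
print(function_range(x / (x**2 + 3), x, S.Reals))
```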
Find the range of the function $f(x) = \frac {x-5}{x-3}$
A. R -{5/3}
B. R -{1}
C. R -{-1}
Find the range of the function $f(x) = \sqrt {x-1}$ ?
A. $(1,\infty )$
B. $(0,\infty )$
C. $[0,\infty )$
D. $[1,\infty )$
if function f : X -> R, $f (x) = x^3 $, where X = {-1, 0, 3, 9, 7}, The Range of the function will be
A. {-1,0,3,9,7}
B. {-1,1,9,49,81}
C. {-1,0,27,343}
D. {-1,0,27,343,729}
Question 4
Let $f(x) = x^2$, find the value of $\frac {f(2.1) -f(2)}{2.1 -2}$
B. .41
C. 4.1
D. .14
Find the range of the function defined as $f(x)=\sqrt {4-x^2}$
A. [-2,2]
B. [0,2]
C.(0,2)
D.[-2,0)
Find the Range and domain of the function $f(x) =\frac {x -11}{22-2x} $
A. Domain = R, Range = {-1/2, 1/2}
B. Domain = R - {1}, Range = R
C. Domain = R - {11}, Range = {-1/2}
D. Domain = R - {- 11}, Range = {-1/2, 1/2}
[Submitted on 16 Aug 2017 (v1), last revised 30 Apr 2018 (this version, v3)]
Title: On the global "two-sided" characteristic Cauchy problem for linear wave equations on manifolds
Authors: Umberto Lupo
Abstract: The global characteristic initial value problem for linear wave equations on globally hyperbolic Lorentzian manifolds is examined, for a class of smooth initial value hypersurfaces satisfying favourable global properties. First it is shown that, if geometrically well-motivated restrictions are placed on the supports of the (smooth) initial datum and of the (smooth) inhomogeneous term, then there exists a continuous global solution which is smooth "on each side" of the initial value hypersurface. A uniqueness result in Sobolev regularity $H^{1/2+\varepsilon}_\mathrm{loc}$ is proved among solutions supported in the union of the causal past and future of the initial value hypersurface, and whose product with the indicator function of the causal future (resp. past) of the hypersurface is past compact (resp. future compact). An explicit representation formula for solutions is obtained, which prominently features an invariantly defined, densitised version of the null expansion of the hypersurface. Finally, applications to quantum field theory on curved spacetimes are briefly discussed.
Comments: 41 pages, 1 figure. Some typos fixed, close to published version
Subjects: Mathematical Physics (math-ph); General Relativity and Quantum Cosmology (gr-qc); Analysis of PDEs (math.AP)
MSC classes: 58J45 (Primary) 58J47, 35L15, 58Z05, 53C50, 81T20 (Secondary)
Related DOI: https://doi.org/10.1007/s11005-018-1088-6
From: Umberto Lupo
[v1] Wed, 16 Aug 2017 15:54:19 UTC (72 KB)
[v2] Thu, 19 Apr 2018 08:28:40 UTC (69 KB)
[v3] Mon, 30 Apr 2018 16:04:38 UTC (67 KB)
Intensive Care Medicine Experimental
Aortic volume determines global end-diastolic volume measured by transpulmonary thermodilution
Aleksej Akohov1,
Christoph Barner1,
Steffen Grimmer1,3,
Roland CE Francis1 &
Stefan Wolf2 ORCID: orcid.org/0000-0002-3563-3954
Intensive Care Medicine Experimental volume 8, Article number: 1 (2020)
Global end-diastolic volume (GEDV) measured by transpulmonary thermodilution is regarded as indicator of cardiac preload. A bolus of cold saline injected in a central vein travels through the heart and lung, but also the aorta until detection in a femoral artery. While it is well accepted that injection in the inferior vena cava results in higher values, the impact of the aortic volume on GEDV is unknown. In this study, we hypothesized that a larger aortic volume directly translates to a numerically higher GEDV measurement.
We retrospectively analyzed data from 88 critically ill patients with thermodilution monitoring and who did require a contrast-enhanced thoraco-abdominal computed tomography scan. Aortic volumes derived from imaging were compared with GEDV measurements in temporal proximity.
Median aortic volume was 194 ml (interquartile range 147 to 249 ml). Per milliliter increase of the aortic volume, we found an increase in GEDV of 3.0 ml (95% CI 2.0 to 4.1 ml, p < 0.001). When a femoral central venous line was used for saline bolus injection, GEDV increased by an additional 2.1 ml (95% CI 0.5 to 3.7 ml, p = 0.01) per ml volume of the inferior vena cava. Aortic volume explained 59.3% of the variance of thermodilution-derived GEDV. When aortic volume was included in multivariate regression, GEDV variance was unaffected by sex, age, body height, and weight.
Our results suggest that the aortic volume is a substantial confounding variable for GEDV measurements performed with transpulmonary thermodilution. As the aorta is anatomically located after the heart, GEDV should not be considered to reflect cardiac preload. Guiding volume management by raw or indexed reference ranges of GEDV may be misleading.
Transpulmonary thermodilution is commonly used and recommended in current guidelines for the management of critically ill patients with cardiovascular instability to assess cardiac output (CO) and volume status [1, 2]. The parameter global end-diastolic volume (GEDV), a hypothetical volume assuming all cardiac chambers being simultaneously in diastole, is considered to reflect cardiac preload [3]. Michard et al. described that GEDV indexed to body surface area (GEDVI) more adequately predicted volume responsiveness in patients with septic shock compared with the central venous pressure [4]. In a prospective randomized trial, Goepfert et al. found that guidance with an algorithm including GEDVI reduced complications and length of ICU stay in patients after cardiac surgery [5]. Kaneko et al. identified GEDVI as an important contributor to elevated extravascular lung water (EVLW) in patients with ARDS [6].
However, it was recently shown that GEDVI did not reflect even markedly enlarged left-ventricular end-diastolic volumes measured by cardiac angiography [7]. Furthermore, reference values for GEDVI proposed by expert opinion vary and a reference range applicable to all subjects was repeatedly questioned [8,9,10]. A meta-analysis including 64 studies recognized significantly higher mean GEDVI in septic patients compared with patients undergoing major surgery and concluded the need to adapt therapeutic targets for different patient populations [8]. Huber et al. noticed a dependence of GEDV on age, sex, body height, and body weight in patients in a medical intensive care unit and proposed sex-specific formulas to alleviate the problem of indexation [9]. A prospective observational trial found a large inter-individual variability of GEDV and GEDVI and hypothesized that the aortic volume might be the source of the observed heterogeneity [10]. This potential explanation was based on the fact that the cold saline bolus injected for measurement must transit the aorta to reach the temperature detector placed in a femoral artery. It is well known that the aortic size increases with age and is sex dependent [11]. Patients with an aortic aneurysm present with higher GEDVI values [12]. However, neither the theoretical derivation nor contemporary reviews of GEDV and GEDVI measured by transpulmonary thermodilution do consider the aortic volume [13,14,15,16].
In the present study, we investigate the hypothesis of a relationship between aortic volume and GEDV.
The study was approved by the Ethics Committee of Charité - Universitätsmedizin Berlin (vote EA 1/084/13). The study was performed at the Interdisciplinary Neurointensive Care Unit of Charité – Universitätsmedizin Berlin at Charité Campus Virchow, with inclusion from January 2009 to December 2016. We identified subjects who had monitoring with transpulmonary thermodilution implemented. Additionally, patients were required to have received a contrast-enhanced CT scan of the thorax and abdomen, either as screening for injury after trauma, but also in search of a septic focus. We selected patients with mechanical ventilation and an arbitrarily chosen time difference of maximum 12 h between CT scan and thermodilution measurement (Fig. 1).
Flow diagram of patient identification. CT computed tomography
Transpulmonary thermodilution measurements
As usual in transpulmonary thermodilution, iced saline was injected via a central venous line and the resulting thermal signal was detected by a thermodilution catheter (PVPK 2015 L20-A) in a femoral artery. Both catheters were connected to a PiCCO2 monitor (Pulsion Medical Systems, Munich, Germany).
Cardiac output (CO) is derived from the area under the curve by the Stewart-Hamilton formula [17]. The thermal signal may be characterized further by mean transit time (MTt) and downslope time (DSt), the inverse of its rate of decay [15, 18]. MTt times CO equals the distribution volume of thermal indicator, the intrathoracic thermal volume (ITTV). In a series of sequentially traversed volumes, the largest one determines the DSt [13]. In case of transpulmonary thermodilution, the largest thermal compartment is assumed to be the lung, resulting in the pulmonary thermal volume PTV = CO × DSt. The difference between ITTV and PTV equals the GEDV, which may be calculated as:
$$ \mathrm{GEDV}=\left(\mathrm{CO}\times \mathrm{MTt}\right)-\left(\mathrm{CO}\times \mathrm{DSt}\right) $$
CO, MTt, DSt, GEDV, and EVLW were obtained from the average of a series of at least three venous injections of 20 ml of iced saline [19], with outliers (± 3 SD) discarded. All thermodilution data were extracted from archived log files of the PiCCO2 devices. As suggested by the manufacturer, GEDVI was calculated by dividing GEDV by body surface area based on predicted body weight.
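To make the relationship between the measured quantities explicit, the following is a small illustrative sketch of the volume calculations described above. The injection values and the Du Bois body-surface-area formula are assumptions chosen for demonstration; they are not taken from the study or from the PiCCO2 firmware.

```python
import numpy as np

def thermodilution_volumes(co_l_min, mtt_s, dst_s):
    """Volumes (ml) derived from one transpulmonary thermodilution curve.

    co_l_min: cardiac output [l/min], mtt_s: mean transit time [s],
    dst_s: downslope time [s].
    """
    co_ml_s = co_l_min * 1000.0 / 60.0   # cardiac output in ml/s
    ittv = co_ml_s * mtt_s               # intrathoracic thermal volume
    ptv = co_ml_s * dst_s                # pulmonary thermal volume
    gedv = ittv - ptv                    # global end-diastolic volume
    return ittv, ptv, gedv

# hypothetical series of three bolus injections (CO, MTt, DSt)
series = [(3.3, 39.0, 15.0), (3.4, 38.0, 14.5), (3.2, 40.0, 15.5)]
gedv = np.mean([thermodilution_volumes(*m)[2] for m in series])

# GEDVI: indexed to body surface area from predicted body weight;
# the Du Bois formula below is only one common choice
height_cm, predicted_weight_kg = 165.0, 57.0
bsa_m2 = 0.007184 * height_cm**0.725 * predicted_weight_kg**0.425
print(round(gedv), "ml GEDV,", round(gedv / bsa_m2), "ml/m2 GEDVI")
```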
Of note, in transpulmonary thermodilution, the volume between the aortic valve and the detector in a femoral artery is obviously traversed by the cold indicator bolus. Therefore, measured GEDV may be split in a venous volume, a central part—the volume of interest as surrogate for cardiac preload—and the aortic volume:
$$ {\mathrm{GEDV}}_{\mathrm{measured}}={\mathrm{GEDV}}_{\mathrm{venous}}+{\mathrm{GEDV}}_{\mathrm{central}}+{\mathrm{GEDV}}_{\mathrm{aortic}} $$
The venous part may be assumed to be zero in case of a central venous line in the superior vena cava. However, the aortic part of GEDV remains inevitably included in transpulmonary thermodilution measurements.
Contrast-enhanced thoracic-abdominal CT scans were retrieved from the Picture Archive and Communication System GEPACS (Centricity PACS 3.2 RA 1000 Workstation, GE Healthcare, Chicago, USA). Post-processing of the images was performed with Osirix® MD 6.5.2 (Pixmeo SARL, Geneva, Switzerland). The aorta was identified on axial slides and marked manually as region of interest (ROI) [20, 21]. The resulting sequence of interconnected ROIs together with the slice width was used for volume calculation, with the left coronary artery and the tip of the transpulmonary thermodilution catheter in the femoral artery as longitudinal boundaries. This reconstructed volume is referred to as "aortic volume" (Fig. 2, Additional files 1 and 2). For length determination, a central path was marked manually. Diameters were calculated from cross-sectional areas assuming circular boundaries. When a femoral central venous catheter was present, reconstruction of the volume of the inferior vena cava was performed likewise, using the tip of the catheter and the right atrium as boundaries. In patients with a subclavian or jugular central venous catheter, the correct position of its tip is at the entrance of the right atrium. Consequentially, the additional volume of the vena cava relevant for thermodilution measurements was assumed to be zero.
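The volume reconstruction amounts to summing the segmented cross-sectional areas times the slice spacing, and the diameter follows from the circular cross-section assumption. A minimal sketch is shown below; the slice count and areas are invented for illustration.

```python
import numpy as np

def volume_ml(roi_areas_mm2, slice_spacing_mm):
    """Volume of a stack of segmented cross-sections, in ml (1 ml = 1000 mm^3)."""
    return float(np.sum(roi_areas_mm2)) * slice_spacing_mm / 1000.0

def equivalent_diameter_mm(area_mm2):
    """Diameter of a circle with the given cross-sectional area."""
    return 2.0 * np.sqrt(area_mm2 / np.pi)

# hypothetical aorta: 600 axial slices of 0.625 mm, each with ~500 mm^2 lumen area
areas = np.full(600, 500.0)
print(volume_ml(areas, 0.625))          # ~187 ml, within the reported IQR
print(equivalent_diameter_mm(500.0))    # ~25 mm
```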
Representative reconstructed three-dimensional sagittal computed tomography images of the heart and the aorta. Left side is from a 26-year-old female with meningoencephalitis and septic shock, 59 kg, 170 cm. GEDV 502 ml, GEDVI 293 ml/m2. Right side shows data from a 72-year-old female with aneurysmal subarachnoid hemorrhage, 165 cm, 78 kg. GEDV 1263 ml, GEDVI 787 ml/m2. 3D rotational images are provided in the electronic supplements (see Additional files 1 and 2). Aortic volume, defined as the volume of the aorta between the left coronary artery and the tip of the femoral catheter, is visualized in blue. Proportions reflect real dimensions. Note the difference in size and shape of the aortic volume
Statistical computation was performed with R 3.4.3 (R Core Team, R Software Foundation, Vienna, Austria, 2018). Results are given as median and interquartile range (IQR) or with mean and corresponding 95% confidence intervals (95% CI), as appropriate. No imputation was performed for missing data. Regression analysis was performed with robust linear regression (R package robustbase, version 0.93-5) to account for heteroscedasticity and skewness. Biometric parameters (age, sex, body height, and weight) were investigated simultaneously to account for partial correlation using multivariate models. Mixed effect models to correct for repeated CT measurements in few patients proved not to be superior by the minimized Akaike Information Criterion (AIC) [22]. Therefore, in favor of parsimony, all measurements were regarded as independent. Explained variance is given by adjusted R2. p values less than 0.05 were considered significant.
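As an illustration of the kind of multivariate model used here, the sketch below fits a robust regression on synthetic data. Note that the study itself used R's robustbase (lmrob), whereas statsmodels' RLM is a different (M-estimator) robust fit, and the simulated effect sizes are only loosely modelled on the reported results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
# synthetic covariates loosely mimicking the cohort (ml, years, kg)
aortic = rng.normal(200.0, 60.0, n)
cava = np.where(rng.random(n) < 0.15, rng.normal(127.0, 30.0, n), 0.0)  # femoral lines
age = rng.normal(55.0, 15.0, n)
weight = rng.normal(75.0, 15.0, n)
gedv = 600.0 + 3.0 * aortic + 2.1 * cava + rng.normal(0.0, 150.0, n)

X = sm.add_constant(np.column_stack([aortic, cava, age, weight]))
fit = sm.RLM(gedv, X, M=sm.robust.norms.HuberT()).fit()
print(fit.params)  # the slope on the aortic column should recover roughly 3 ml per ml
```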
We identified 103 CT scans in 88 patients meeting the inclusion criteria (Fig. 1). Demographic data of the patients are shown in Table 1. ICU scores, vasoactive drugs, ventilation parameters, and location of central venous catheters are shown in Table 2. Included in this table are time differences and fluid balance between CT scanning and thermodilution measurements. Data of CT scans and transpulmonary thermodilution measurements are given in Table 3.
Table 1 Patient characteristics
Table 2 Clinical data at time of CT and thermodilution measurement, respectively
Table 3 Aortic and vena cava length and volume derived from CT scans and physiologic values from transpulmonary thermodilution measurements
Aortic volume
Median aortic volume, measured from the aortic valve to the tip of the femoral artery catheter, was 158 ml (IQR 126 to 207 ml) in females and 213 ml (IQR 169 to 287 ml) in males (p < 0.001). Aortic volume increased by 2.3 ml (95% CI 1.7 to 2.8 ml, p < 0.001) per year of patient age. Aortic volume showed no significant relationship to body height (p > 0.05), but increased by 1.2 ml (95% CI 0.3 to 2.2 ml, p = 0.009) per kg of patient body weight. Measurements of aortic volume had a coefficient of repeatability of 2.1%.
Measurements of inferior vena cava
Fifteen measurements were performed with a femoral central venous line. In two patients, we were unable to unequivocally identify the upper boundary of the vena cava at the level of the diaphragm due to enlarged hepatic veins. Thus, an accurate and reproducible volume calculation was impossible. In the remaining 13 patients, median volume of the inferior vena cava was 127 ml (IQR 93 to 155 ml). Analysis of relationships with age, sex, height, and weight was not considered meaningful due to the low number of patients.
Dependencies of GEDV and GEDVI on biometric parameters
Median GEDV in all patients was 1306 ml (IQR 1104 to 1569 ml). Median GEDVI was 730 ml/m2 (IQR 627 to 871 ml/m2).
GEDV increased by 7.4 ml (95% CI 4.1 to 10.7 ml, p < 0.001) per year of patient age. Per kilogram increase in body weight, GEDV increased by 5.2 ml (95% CI 1.7 to 8.6 ml, p = 0.003). After correction for age and weight, GEDV showed no significant dependency on height and sex. These relationships persisted after indexing GEDV by body surface area based on predicted body weight. GEDVI increased by 4.2 ml/m2 (95% CI 2.3 to 6.2 ml/m2, p < 0.001) per year of age and by 2.8 ml/m2 (95% CI 0.9 to 4.7 ml/m2, p = 0.004) per kg body weight, while height and sex showed no significant relationship.
In patients with a femoral central venous line, GEDV was 438 ml (95% CI 235 to 641 ml, p < 0.001) larger than in patients with jugular or subclavian central venous catheter. Likewise, GEDVI was 230 ml/m2 (95% CI 89 to 370 ml/m2, p = 0.002) larger in patients with a femoral venous line.
Time differences, fluid balances, changes in ventilator settings, or the level of vasoactive drugs between thermodilution measurements and CT scans were without significant impact on GEDV (p > 0.05 for each comparison). GEDV measurements showed a coefficient of repeatability of 4.3%.
Dependence of GEDV on central venous and aortic volume
A total of 38.4% of the variance of GEDV was explained by patient-specific biometric characteristics including age, sex, body weight, and body height. We then sequentially added the volumes of either the vena cava, the aorta, or both to this initial model. Inclusion of the volume of the vena cava raised the explained variance of GEDV to 47.8%. After adding the aortic volume to the basic model instead of the volume of the vena cava, explained GEDV variance was 59.3%. Combining both aortic and venous volume led to an explained variance of GEDV of 63.8%. In each model where the aortic volume was included, all biometric parameters lost their significance (Table 4).
Table 4 Statistical significance of confounding variables for GEDV
Analysis of GEDV components
In the final regression model including both aortic volume and the volume of the vena cava, GEDV increased by 3.0 ml (95% CI 2.0 to 4.1 ml, p < 0.001) per ml of aortic volume and by 2.1 ml (95% CI 0.5 to 3.7 ml, p = 0.01) per ml of vena cava volume. Plotting the data suggested a linear relationship between the aortic volume and GEDV (Fig. 3).
Relationship of global end-diastolic volume (GEDV) and aortic volume. Blue line indicates the regression line, with its 95% confidence interval marked in grey. Green dots represent measurements with central venous lines placed in the vena cava superior, red dots in the vena cava inferior
Measured GEDV consists of a "venous," a "central," and an "aortic" part (see Methods). Assuming the linear relationships found above allowed for estimation of single proportions of GEDV. The aortic part was in median 49% (IQR 40 to 58%) of measured GEDV. In case of a femoral central venous line, the venous part estimated in median to 14% (IQR 14 to 17%). The central part was in median 50% (IQR 40 to 57%) of measured GEDV. The central and venous parts did not depend on biometric parameters, while the aortic part had significant relationships with age and weight (p < 0.001 and p = 0.009, respectively).
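The decomposition can be written directly from the fitted slopes. The snippet below applies the reported coefficients (3.0 ml per ml aortic volume, 2.1 ml per ml caval volume) to a single hypothetical patient; this only illustrates the arithmetic and is not a re-analysis of the cohort, whose per-patient medians are given in the text.

```python
def gedv_parts(gedv_measured_ml, aortic_vol_ml, cava_vol_ml=0.0):
    """Split measured GEDV into aortic, venous and central fractions
    using the regression slopes reported above."""
    aortic_part = 3.0 * aortic_vol_ml
    venous_part = 2.1 * cava_vol_ml        # zero for jugular/subclavian lines
    central_part = gedv_measured_ml - aortic_part - venous_part
    total = gedv_measured_ml
    return {"aortic": aortic_part / total,
            "venous": venous_part / total,
            "central": central_part / total}

# hypothetical patient with a jugular line: GEDV 1200 ml, aortic volume 190 ml
print(gedv_parts(1200.0, 190.0))   # roughly half aortic, half "central"
```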
To get further insight, we examined the influence of aortic volume on the different variables required for GEDV calculation: MTt, DSt, and CO. The largest impact was on MTt, with 3.5 s (95% CI 1.4 to 5.5 s, p = 0.001) per 100 ml of aortic volume. Additionally, MTt showed a hyperbolic decline with rising values of CO (p < 0.001). CO was larger in patients with higher aortic volume, by 0.5 l/min (95% CI 0 to 1 l/min, p = 0.041) per 100 ml aortic volume. DSt increased by 0.8 s (95% CI 0 to 1.7 s, p = 0.048) per 100 ml aortic volume.
As main finding, we confirmed the hypothesized relationship between GEDV and aortic volume. Aortic volume determines the value of GEDV to a larger extent than any biometric parameter, including a patient's age, sex, body weight, and height.
Relevance of central venous and aortic volumes for GEDV measurement
It is well accepted and confirmed by our data that femoral central venous lines should be accounted for when interpreting GEDV measurements [23,24,25]. However, our results show that the aortic volume had an even larger, predominant influence by explaining roughly 60% of GEDV variance. As the aortic volume is anatomically placed after the heart, our findings challenge the view of GEDV as a cardiac preload parameter.
It is important to mention that our measurements of the aortic diameter, length, and volume as well as estimated aortic mean transit times are in line with published data [11, 26,27,28,29,30].
Analysis of influences on thermal volume
The indicator bolus's traverse through the aorta led to a larger increase in GEDV than expected from the plain aortic volume. Two interacting causes may be suggested. First, the theory of single-indicator transpulmonary thermodilution requires a closed circulation between injection and detection site [14]. Thoracic and abdominal branches of the aorta invalidate this prerequisite. Second, the flow along the aorta is not laminar but turbulent and helical [31,32,33]. Both potential causes would challenge the assumption of CO times MTt being equal to the traversed volume.
In patients with a femoral central venous line, similar considerations concerning the necessary prerequisites apply. The vena cava inferior has influx from abdominal and hepatic veins, thus not resembling a closed system as required for calculation of GEDV from the thermodilution curve.
Concerns against indexing GEDV
Indexing of a physiological parameter intends to remove inter-individual variations to facilitate comparison between patients and derive normal ranges. From a mathematical point of view, indexing represents a linear regression, which may be defined by two points only. One is the mean of the parameter to be indexed and the mean of the index. The second point is the origin, where both the parameter and the index are zero, usually far away from physiologic ranges. Therefore, the slope of the regression line is mainly determined by the origin as a gross outlier. This may lead to the removal of existing correlations, but also generation of correlations not present in the original data [34,35,36].
Obviously, indexing GEDV can be performed numerically, but this does not imply that the result is meaningful. The relevant confounder of GEDV, the aortic volume, is cumbersome to achieve and usually not known. Therefore, indexing by aortic volume is not applicable. In current practice, GEDV indexation is performed with predicted body surface area derived from height. In our data, height was no significant confounder. In contrast, a dependency of GEDV on age and weight was present before, but also after indexing by predicted body surface area. Furthermore, the central part of GEDV had no relationship with any biometric parameter, while the aortic part was dependent on age and weight. The ratio between both parts varies from patient to patient. The quest for reference ranges of GEDV is further complicated when femoral central venous lines are taken into account. Therefore, there is little to support a scientifically validated and clinically useful indexation.
We interpret our data that the numeric value of GEDV reflects the intravascular volume status of a patient, with preload being a minor contributor and not the dominant part. Future, prospective work may address the impact of volume loading, vasopressors, or mechanical ventilation on venous, central, and aortic components of GEDV. It is likely that this impact is different on each component, given that a controlled volume loss affects the diameter of the vena cava more than that of the abdominal aorta [37]. However, current transpulmonary thermodilution technology does not allow to distinguish between the different parts of measured GEDV. Its value is partially dependent on the aortic volume, which is itself being associated with age, sex, and weight. As a consequence, any treatment decision aiming for standardized normal values of GEDV/GEDVI may be beneficial in one patient but detrimental in another. Variations between successive measurements may have clinical importance [4], but require further study, in our opinion.
Missing knowledge of the effect size rendered planning of a prospective study impossible. However, electronic recording guaranteed data accuracy and we are unaware of any systematic bias concerning patient selection.
Our population was treated in a neurosurgical ICU. While this may be considered a limitation, we want to point out that subjects presented with hemodynamic instability due to various causes, were on vasopressors and required mechanical ventilation. In our opinion, this reflects a typical scenario where transpulmonary thermodilution monitoring may be applied.
Central venous lines used were multi-lumen catheters of different brands. Per clinical standard, we mount the venous thermistor for transpulmonary thermodilution on the side arm of the first 3-way stopcock on the distal lumen. Occasional use of a different port or failure of correct placement of the catheter tip at the entrance of the right atrium in case of jugular or subclavian central venous lines may have induced a minor error we were unable to correct for.
We provide evidence that the aortic volume mainly accounts for the variability of GEDV measured by single-indicator transpulmonary thermodilution with a femoral arterial line. Therefore, GEDV should not be considered to reflect the cardiac preload status of a patient. Furthermore, we were unable to provide a scientific physiological rationale for indexing GEDV. As a consequence, guiding individual volume therapy by reference ranges of GEDV or GEDVI may be misleading.
Individual patient data supporting the conclusions is available in the Zenodo repository [38].
Monnet X, Teboul J-L (2017) Transpulmonary thermodilution: advantages and limits. Crit Care Lond Engl 21:147. https://doi.org/10.1186/s13054-017-1739-5
Marx G, Schindler AW, Mosch C et al (2016) Intravascular volume therapy in adults: guidelines from the Association of the Scientific Medical Societies in Germany. Eur J Anaesthesiol 33:488–521. https://doi.org/10.1097/EJA.0000000000000447
Goedje O, Seebauer T, Peyerl M et al (2000) Hemodynamic monitoring by double-indicator dilution technique in patients after orthotopic heart transplantation. Chest 118:775–781. https://doi.org/10.1378/chest.118.3.775
Michard F, Alaya S, Zarka V et al (2003) Global end-diastolic volume as an indicator of cardiac preload in patients with septic shock. Chest 124:1900–1908. https://doi.org/10.1378/chest.124.5.1900
Goepfert MS, Richter HP, Zu Eulenburg C et al (2013) Individually optimized hemodynamic therapy reduces complications and length of stay in the intensive care unit: a prospective, randomized controlled trial. Anesthesiology 119:824–836. https://doi.org/10.1097/ALN.0b013e31829bd770
Kaneko T, Kawamura Y, Maekawa T et al (2014) Global end-diastolic volume is an important contributor to increased extravascular lung water in patients with acute lung injury and acuterespiratory distress syndrome: a multicenter observational study. J Intensive Care 2:25. https://doi.org/10.1186/2052-0492-2-25
Hilty MP, Franzen DP, Wyss C et al (2017) Validation of transpulmonary thermodilution variables in hemodynamically stable patients with heart diseases. Ann Intensive Care 7:86. https://doi.org/10.1186/s13613-017-0307-0
Eichhorn V, Goepfert MS, Eulenburg C et al (2012) Comparison of values in critically ill patients for global end-diastolic volume and extravascular lung water measured by transcardiopulmonary thermodilution: a metaanalysis of the literature. Med Intensiva 36:467–474. https://doi.org/10.1016/j.medin.2011.11.014
Huber W, Mair S, Götz SQ et al (2017) A systematic database-derived approach to improve indexation of transpulmonary thermodilution-derived global end-diastolic volume. J Clin Monit Comput 31:143–151. https://doi.org/10.1007/s10877-016-9833-9
Wolf S, Riess A, Landscheidt JF et al (2009) Global end-diastolic volume acquired by transpulmonary thermodilution depends on age and gender in awake and spontaneously breathing patients. Crit Care 13:R202. https://doi.org/10.1186/cc8209
Mao SS, Ahmadi N, Shah B, et al (2008) Normal thoracic aorta diameter on cardiac computed tomography in healthy asymptomatic adults: impact of age and gender. Acad Radiol 15:827–834. PMC2577848
Sakka SG, Meier-Hellmann A (2001) Extremely high values of intrathoracic blood volume in critically ill patients. Intensive Care Med 27:1677–1678. https://doi.org/10.1007/s001340101071
Newman EV, Merrell M, Genecin A et al (1951) The dye dilution method for describing the central circulation. Circulation 4:735–746
Meier P, Zierler KL (1954) On the theory of the indicator-dilution method for measurement of blood flow and volume. J Appl Physiol 6:731–744
Isakow W, Schuster DP (2006) Extravascular lung water measurements and hemodynamic monitoring in the critically ill: bedside alternatives to the pulmonary artery catheter. Am J Physiol Lung Cell Mol Physiol 291:L1118–L1131. https://doi.org/10.1152/ajplung.00277.2006
Brown LM, Liu KD, Matthay MA (2009) Measurement of extravascular lung water using the single indicator method in patients: research and potential clinical value. Am J Physiol Lung Cell Mol Physiol 297:L547–L558. https://doi.org/10.1152/ajplung.00127.2009
Stewart GN (1921) The pulmonary circulation time, the quantity of blood in the lungs and the output of the heart. Am J Physiol 58:20–44
Effros RM, Pornsuriyasak P, Porszasz J, Casaburi R (2008) Indicator dilution measurements of extravascular lung water: basic assumptions and observations. Am J Physiol Lung Cell Mol Physiol 294:L1023–L1031. https://doi.org/10.1152/ajplung.00533.2007
Monnet X, Persichini R, Ktari M et al (2011) Precision of the transpulmonary thermodilution measurements. Crit Care 15:R204. https://doi.org/10.1186/cc10421
Ratib O, Rosset A, Heuberger J (2009) OsiriX - The Pocket Guide. Pixmeo SARL, Bernex, Switzerland
Setacci F, Sirignano P, Cappelli A, Setacci C (2012) The wonders of a newly available post-analysis CT software in the hands of vascular surgeons. Eur J Vasc Endovasc Surg 43:404–406. https://doi.org/10.1016/j.ejvs.2011.11.027
Akaike H (1974) A new look at the statistical model identification. Autom Control IEEE Trans On 19:716–723
Schmidt S, Westhoff TH, Hofmann C et al (2007) Effect of the venous catheter site on transpulmonary thermodilution measurement variables. Crit Care Med 35:783–786. https://doi.org/10.1097/01.CCM.0000256720.11360.FB
Saugel B, Umgelter A, Schuster T et al (2010) Transpulmonary thermodilution using femoral indicator injection: a prospective trial in patients with a femoral and a jugular central venous catheter. Crit Care 14:R95. https://doi.org/10.1186/cc9030
Huber W, Phillip V, Höllthaler J et al (2016) Femoral indicator injection for transpulmonary thermodilution using the EV1000/VolumeView(®): do the same criteria apply as for the PiCCO(®)? J Zhejiang Univ Sci B 17:561–567. https://doi.org/10.1631/jzus.B1500244
den Hartog AW, Franken R, de Witte P et al (2013) Aortic disease in patients with Marfan syndrome: aortic volume assessment for surveillance. Radiology 269:370–377. https://doi.org/10.1148/radiol.13122310
Rylski B, Desjardins B, Moser W et al (2014) Gender-related changes in aortic geometry throughout life. Eur J Cardiothorac Surg 45:805–811. https://doi.org/10.1093/ejcts/ezt597
Chan SK, Jaffer FA, Botnar RM et al (2001) Scan reproducibility of magnetic resonance imaging assessment of aortic atherosclerosis burden. J Cardiovasc Magn Reson 3:331–338
Hager A, Kaemmerer H, Rapp-Bernhardt U, et al (2002) Diameters of the thoracic aorta throughout life as measured with helical computed tomography. J Thorac Cardiovasc Surg 123:1060–1066. PMID 12063451
Fleischmann D, Hastie TJ, Dannegger FC et al (2001) Quantitative determination of age-related geometric changes in the normal abdominal aorta. J Vasc Surg 33:97–105. PMID 11137929
Kilner PJ, Yang GZ, Mohiaddin RH et al (1993) Helical and retrograde secondary flow patterns in the aortic arch studied by three-directional magnetic resonance velocity mapping. Circulation 88:2235–2247
Bogren HG, Buonocore MH, Valente RJ (2004) Four-dimensional magnetic resonance velocity mapping of blood flow patterns in the aorta in patients with atherosclerotic coronary artery disease compared to age-matched normal subjects. J Magn Reson Imaging 19:417–427. https://doi.org/10.1002/jmri.20018
Liu X, Sun A, Fan Y, Deng X (2015) Physiological significance of helical flow in the arterial system and its potential clinical applications. Ann Biomed Eng 43:3–15. https://doi.org/10.1007/s10439-014-1097-2
Tanner JM (1949) Fallacy of per-weight and per-surface area standards, and their relation to spurious correlation. J Appl Physiol 2:1–15
Turner ST, Reilly SL (1995) Fallacy of indexing renal and systemic hemodynamic measurements for body surface area. Am J Physiol 268:R978–R988
Dewey FE, Rosenthal D, Murphy DJ et al (2008) Does size matter? Clinical applications of scaling cardiac size and function for body size. Circulation 117:2279–2287. https://doi.org/10.1161/CIRCULATIONAHA.107.736785
Bilgin S, Topal FE, Yamanoğlu A et al (2019) Effect of changes in intravascular volume on inferior vena cava and aorta diameters and the caval/aorta index in healthy volunteers. J Ultrasound Med Off J Am Inst Ultrasound Med. https://doi.org/10.1002/jum.15093
Akohov A et al, "Aortic volume and GEDV - individual patient data", Zenodo Data Repository, www.zenodo.org, deposit 3378028
The authors want to thank Willehad Boemke, M.D., and Raimund Helbok, M.D., for reviewing and commenting on the manuscript.
We acknowledge support from the German Research Foundation (DFG) and the Open Access Publication Funds of Charité – Universitätsmedizin Berlin.
Department of Anesthesiology and Intensive Care Medicine (CCM/CVK), Charité – Universitätsmedizin Berlin, Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany: Aleksej Akohov, Christoph Barner, Steffen Grimmer, Roland CE Francis
Department of Neurosurgery, Charité Campus Mitte, Charité – Universitätsmedizin Berlin, Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany: Stefan Wolf
Department of Anesthesiology, Vivantes Klinikum Neukölln, Vivantes Netzwerk für Gesundheit, Berlin, Germany: Steffen Grimmer
Collection of data was done by A.A, S.G., and S.W. Analysis of data was done by A.A., C.B., and S.W. Interpretation of data was done by all authors. Drafting of the manuscript was done by A.A., C.B., and S.W. Conception of the study was done by S.W. Supervision of the study was done by R.F. and S.W. All authors read and approved the final manuscript.
Correspondence to Stefan Wolf.
The study was approved by the Ethics Committee of Charité Universitätsmedizin Berlin (vote EA 1/084/13).
The study was a retrospective analysis of clinical routine data. Therefore, consent was not required.
The authors have no competing interests to declare.
Supplementary information
Additional file 1. 3-D rotational image of a 26 years old female with meningoencephalitis and septic shock, 170 cm, 59 kg. Slice width 0.625 mm. Aortic volume 77 ml, distance form aortic valve to femoral detector 41 cm, CO 3.3 l/min, MTt 17 s, GEDV 502 ml, GEDVI 293 ml/m2.
Additional file 2. 3-D sagittal rotational image of a 72 years old female with aneurysmal subarachnoid hemorrhage, 165 cm, 78 kg. Slice width 0.625 mm. Aortic volume 211 ml, distance from aortic valve to femoral detector 62 cm, CO 3.3 l/min, MTT 39 s, GEDV 1263 ml, GEDVI 787 ml/m2.
Akohov, A., Barner, C., Grimmer, S. et al. Aortic volume determines global end-diastolic volume measured by transpulmonary thermodilution. ICMx 8, 1 (2020) doi:10.1186/s40635-019-0284-8
Global end-diastolic volume
GEDV
GEDVI
Transpulmonary thermodilution
Vena cava volume
Parameter estimation using the sliding-correlator's output for wideband propagation channels
Xuefeng Yin1,
Cen Ling1,
Myung-Don Kim2 &
Hyun Kyu Chung2
EURASIP Journal on Wireless Communications and Networking volume 2015, Article number: 165 (2015)
In this contribution, a high-resolution parameter estimation algorithm is derived based on the Space-Alternating Generalized Expectation-maximization (SAGE) principle for extracting multipath parameters from the output of a sliding correlator (SC). The SC allows calculating channel impulse responses with a sampling rate lower than that required by the Nyquist criterion, and hence is widely used in real-time wideband (e.g., >500 MHz) channel sounding for fifth-generation wireless communication scenarios. However, since the sounding signal needs to be sent repetitively, the SC-based solution is unsuitable for time-variant channel measurements. The algorithm proposed here estimates multipath parameters by using a parametric model of both the low- and high-frequency components of the SC's output; the latter were considered as distortions and discarded in conventional SC-based channel sounding. The new algorithm allows estimating path parameters with fewer repetitions of the sounding signal and still exhibits higher estimation accuracy than the conventional method. Simulations are conducted to illustrate the root mean square estimation errors and the resolution capability of the proposed algorithm with respect to the bandwidth and the length of the SC's output. These studies pave the way for measuring time-variant wideband propagation channels using SC-based solutions.
Measurement-based channel models are important for verifying the performance of wireless communication systems in realistic propagation scenarios [1, 2]. Geometry-based stochastic channel models, such as the WINNER spatial channel models [3], IMT-Advanced models [4], and COST2100 multiple-input multiple-output (MIMO) models [5], have been proposed in various standards and widely used to generate single- and multi-link channel realizations at the carrier frequency up to 6 GHz with a bandwidth up to 100 MHz.
Recently, research on fifth generation (5G) wireless communications has attracted considerable attention. The European 7th framework project "Mobile and wireless communications Enablers for the Twenty-twenty Information Society (METIS)" announced a white paper which describes the typical applications and propagation environments considered in 5G [6]. According to the definition by the METIS project, the candidate frequency bands for 5G applications range from 0.45 to 85 GHz, and the bandwidth ranges from 0.5 up to 2 GHz [6]. At present, the shortage of measurement-based channel models for these frequencies, particularly in the millimeter (mm)-wave bands, hinders both the progress of 5G standardization and the design of 5G-based communication systems and networks. Characterization of mm-wave channels with bandwidths beyond 0.5 GHz for various types of applications and environments has therefore recently begun to attract considerable research attention.
Data acquisition for wideband channels is usually performed by using equipment such as oscilloscopes, spectrum analyzers, and vector network analyzers. The latter two kinds of equipment usually do not have the capability of recording complex time-domain signals and are thus not suitable for investigating wideband channel characteristics extracted from multipath parameters. For sampling mm-wave signals, oscilloscopes are required to have sampling rates up to 100 GHz, which is not easy to achieve. Furthermore, due to the small storage of oscilloscope devices, measurement of wideband channels becomes very time-consuming. A solution tackling these problems is to down-sample the received wideband signals and store the data at a low speed which allows transferring data in real time from local memory to an external disk. Then, by using a so-called sliding correlation (SC) technique, a time-dilated approximate channel impulse response (CIR) can be calculated by low-pass filtering (LPF) the received data, provided the sounding signals can be sent repetitively. The LPF in the receiver is applied to remove the distortion components which have higher frequencies [7]. It has been shown in [8] that pre-filtering techniques can also be applied in the transmitter side to achieve the same objective. Due to the benefits of low complexity in the receiver design and acceptable costs, SC-based data acquisition has been widely adopted [9–12].
However, the SC-based data acquisition has two problems. First, the higher-frequency components in the SC's output, considered as distortions, still carry information on the channel parameters and thus should be exploited to improve the accuracy of parameter estimation. A drawback that arises when the higher-frequency components are considered is that the time-dilated approximation of the CIR is unavailable and, as a consequence, the conventional peak-searching estimation methods adopted in SC-based channel estimation are inapplicable. Second, the time-dilated CIR generated by the conventional SC requires the sounding signal to be sent repetitively. The number of repetitions, also called the sliding factor, is usually on the order of 10^3 or even higher [7]. In cases where channels are time-variant, the CIR may not be calculated within the channel coherence time. As a consequence, mobile-to-mobile (M2M) channel measurements cannot be conducted by using the SC-based solution. Recently, a Space-Alternating Generalized Expectation-maximization (SAGE) estimation approach was introduced in [13] which is derived based on a parametric model characterizing the SC's output, allowing the estimation of multipath parameters by using the higher-frequency components. However, this solution still relies on the SC's output obtained by sending the sounding signals many times. No thorough investigation has been carried out so far on the feasibility of accurate parameter estimation based on the SC's outputs without sending the sounding signals repetitively.
In this contribution, the SAGE algorithm originally derived in [13] based on a parametric model for both the low- and high-frequency components of the SC's output is elaborated. Its performance in estimating multipath parameters is investigated extensively by using simulations. It is shown that without discarding the higher-frequency components of the SC's output, the estimation accuracy, particularly for delay parameters, can be improved substantially. In addition, another benefit of this novel estimation algorithm, not found previously, is discovered; that is, the estimation of path parameters, including Doppler frequency, can be performed by using only a fraction of the SC's output. Hence, the overall observation span can be kept shorter than the channel coherence time in time-variant cases, and characterizing time-variant channels through SC-based measurements, which could not be performed before, becomes feasible. Simulations are carried out to compare the performance of the proposed algorithm with the conventional method and to investigate the impact of the LPF bandwidth and the length of the SC's output on the root mean square estimation errors (RMSEEs) and the resolution capability of the algorithm.
The rest of the paper is organized as follows. Section 2 describes the parametric signal model. In Section 3, the SAGE algorithm is presented. Section 4 describes the simulation results for the performance of the proposed algorithm. Finally, concluding remarks are given in Section 5. To facilitate understanding of the mathematical notation adopted in this contribution, Table 1 lists all the symbols introduced and the corresponding explanations.
Table 1 Explanation of adopted symbols
Signal model
As elaborated in [9, 10] and [7], the SC performs a specific cross-correlation operation, e.g., between a pseudo-noise (PN) random sequence u(t) with chip rate f c and another sequence u ′(t) with chip rate f c′. According to [7], both sequences contain exactly the same chips, and the chip rates are related as \(f_{c}'=\frac {\gamma -1}{\gamma }f_{c}\), where γ is called the sliding factor. By sample-wise multiplying these two sequences in the time domain for multiple cycles which start with linearly increasing time-offsets and summing the products over individual cycles of u ′(t), a time-dilated approximation a u (τ/γ) of the autocorrelation function a u (τ)=E[u(t)u ∗(t−τ)] can be calculated by low-pass-filtering the SC's output with bandwidth B=[−f c /γ,f c /γ].
In the channel sounding case, the received signal is the convolution of the transmitted sequence u(t) with the CIR h(τ), and the output of the SC after the LPF with bandwidth B is the time-dilated approximation \(\hat {h}\left (\tau /\gamma \right)\) of the CIR. Here, \(\hat {h}\left (\tau /\gamma \right)\) is a time-dilated version of \(\hat {h}\left (\tau \right) = h\left (\tau \right)\ast a_{u}\left (\tau \right)\) with ∗ denoting the convolution operation. It is well-accepted that only \(\hat {h}(\frac {\tau }{\gamma })\) obtained with the LPF bandwidth B can be used to estimate the channel parameters [7, 14]. It is necessary to investigate whether the components obtained with a larger B, e.g., B n =[−n f c /γ,n f c /γ], n>1, are applicable for estimating the characteristics of h(τ). It is worth mentioning that the time-dilated CIRs with bandwidth B n can be obtained by averaging the temporal output of an SC received within the time of L/(f c′n).
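The following minimal sketch illustrates the sliding-correlation principle described above, assuming an oversampled time-domain simulation; the LFSR-based m-sequence generator, the oversampling factor, and the moving-average low-pass filter are illustrative choices and not the sounder hardware of [7].

```python
import numpy as np

def m_sequence(taps=(5, 2), length=31):
    """Generate a +/-1 m-sequence of the given length from a binary LFSR."""
    state = np.ones(max(taps), dtype=int)
    seq = np.empty(length)
    for i in range(length):
        seq[i] = 2 * state[-1] - 1                      # map {0,1} -> {-1,+1}
        fb = np.bitwise_xor.reduce(state[[tp - 1 for tp in taps]])
        state = np.concatenate(([fb], state[:-1]))
    return seq

L, gamma = 31, 93
f_c = 500e6                                             # chip rate of u(t)
f_c_slow = (gamma - 1) / gamma * f_c                    # chip rate of the replica u'(t)
fs = 8 * f_c                                            # simulation sampling rate
T_obs = L * gamma / f_c                                 # time for one full slide
t = np.arange(int(T_obs * fs)) / fs

chips = m_sequence()
u_fast = chips[(np.floor(t * f_c) % L).astype(int)]     # transmitted sequence u(t)
u_slow = chips[(np.floor(t * f_c_slow) % L).astype(int)]  # receiver replica u'(t)

product = u_fast * u_slow                               # sample-wise mixing in the SC
# A moving average over one cycle of u'(t) acts as a crude low-pass filter
# with cutoff near f_c/gamma, i.e. the bandwidth B_1 used in the text.
win = int(fs * L / f_c_slow)
sc_output = np.convolve(product, np.ones(win) / win, mode="same")

# sc_output traces a version of the autocorrelation a_u(tau) dilated in time
# by the factor gamma, i.e. one triangular peak over the slide duration.
print("peak of the time-dilated autocorrelation:", float(sc_output.max()))
```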
Let us consider the case where a time-variant channel consists of M specular paths which are dispersive in the delay and Doppler frequency domains. The baseband representation r(f) of the received signal expressed in frequency domain can be written as
$$ r\left(\,f\right) =\sum\limits_{\ell=1}^{M} \alpha_{\ell} \exp\left\{- j2\pi f\tau_{\ell}\right\} u\left(\,f-\nu_{\ell}\right)+n\left(\,f\right), $$
((1))
where α ℓ , τ ℓ , and ν ℓ denote the complex attenuation, delay, and Doppler frequency of the ℓth propagation path, respectively, and u(f) is the frequency-domain representation of the transmitted maximum-(m-)length PN sequence of L chips. When a rectangular pulse shape is applied, u(f) can be written as [7]
$${} {\fontsize{8.9pt}{9.6pt}\selectfont{\begin{aligned} u\left(\,f\right) = \frac{V_{0}}{L}\sum\limits_{k\in \mathcal{Z}} \text{sinc} \left(\frac{k}{L}\right)\delta\left(f-\frac{f_{c} k}{L}\right)e^{j\frac{k\pi}{L}}\sum\limits_{i=1}^{L}(2 a_{i}-1)e^{-j\frac{2k\pi}{L}i}, \end{aligned}}} $$
where \(\mathcal {Z}\) represents the set of integers, V 0 is the chip magnitude, and a i ∈{0,1}, i=1,…,L, are specified by the m-sequence. The noise component n(f) in (1) is a standard white complex Gaussian random variable:
$$ n\left(\,f\right)= w\left(\,f\right)\sum\limits_{k\in \mathcal{Z}}\delta\left(f-\frac{f_{c} k}{L}\right),\;\;w\left(\,f\right)\sim\mathcal{CN}(0,N_{0}). $$
The expression u ′(f) for the lower chip-rate sequence u ′(t) is similar to u(f) in (2) but with f c substituted by f c′. The SC's output y(f)=r(f)∗u ′(f) is the result of convolution operation in the frequency domain, i.e.,
$$ y\left(\,f\right)=s\left(\,f\right)+n'\left(\,f\right), $$
where the signal component s(f) can be calculated as
$$ s\left(\,f\right)=\sum\limits_{\ell=1}^{M} \alpha_{\ell} p\left(\,f;\tau_{\ell}, \nu_{\ell}\right) $$
$$p\left(\,f;\tau_{\ell}, \nu_{\ell}\right)=\exp\{- j2\pi f\tau_{\ell}\} u\left(\,f-\nu_{\ell}\right)\ast u'\left(\,f\right) $$
which can be calculated by invoking the equality δ(f−f 1)∗δ(f−f 2)=δ(f−(f 1+f 2)) as
$${} \begin{aligned} &p\left(\,f;\tau_{\ell}, \nu_{\ell}\right) =\left(\frac{V_{0}}{L}\right)^{2}\sum\limits_{k,k'\in\mathcal{Z}}e^{-j2\pi \left(\frac{f_{c}k}{L}+\nu_{\ell}\right)\tau_{\ell}-j\frac{\pi}{L}(k+k')}\\&\qquad\qquad\qquad\delta\left(f-\nu_{\ell} - \frac{f_{c}k+f_{c}'k'}{L}\right)\\ &\qquad\qquad\text{sinc}\left(\frac{k}{L}\right)\text{sinc}\left(\frac{k'}{L}\right) \sum\limits_{i=1}^{L}\sum\limits_{i'=1}^{L}\\&\qquad\qquad\qquad\left[(2a_{i}-1)(2a_{i}'-1)e^{-j\frac{\pi}{L}(ki+k'i')}\right] \end{aligned} $$
$$ \begin{aligned} &n'\left(\,f\right)=n\left(\,f\right)\ast u'\left(\,f\right)\\ &=\frac{V_{0}}{L}\sum\limits_{k\in\mathcal{Z}}w\left(\,f\right)\delta\left(\,f-\frac{f_{c}k+f_{c}'k'}{L}\right)\text{sinc}\left(\frac{k}{L}\right)e^{-j\frac{\pi}{L}k'} \\&\quad\sum\limits_{i=1}^{L}(2a_{i}-1)e^{-j\frac{2\pi}{L}(k'i)}. \end{aligned} $$
The parameters Θ=[α ℓ ,ν ℓ ,τ ℓ ;ℓ=1,…,M] in (4) are unknown and need to be estimated. For simplicity, let us assume that the data y=[y(f);f∈(f 1,…,f N )] with N being the total number of frequency bins, obtained within the duration long enough for generating one observation of \(\hat {h}(\frac {\tau }{\gamma })\), is available. Estimation of Θ needs to be carried out given y. Notice that this assumption is realistic in the case where the channel coherence time is so short that the SC cannot generate multiple consecutive CIRs for a stationary channel. However, the parameter estimation algorithm derived in the later part of the paper can be easily extended to the case where multiple CIRs are available.
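As a concrete illustration of the signal model in (1)–(8), the sketch below constructs a synthetic multipath observation; it builds the received signal in the time domain with rectangular chip shaping and transforms it to a frequency grid, which is a hedged approximation of the analytic line spectra above. The path parameters, the random stand-in for the m-sequence, and the SNR are illustrative values, not those of Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)
L, f_c, os_factor = 31, 500e6, 4
fs = os_factor * f_c
chips = rng.choice([-1.0, 1.0], size=L)          # stand-in for the m-sequence
u_t = np.repeat(chips, os_factor)                # rectangular pulse shaping
t = np.arange(u_t.size) / fs

# (complex amplitude, delay [s], Doppler frequency [Hz]) for each of M paths
paths = [(1.0 + 0.0j, 25e-9, 0.0),
         (0.4 - 0.2j, 53e-9, 4e4)]

r_t = np.zeros(t.size, dtype=complex)
for alpha, tau, nu in paths:
    delayed = np.roll(u_t, int(round(tau * fs)))          # integer-sample delay
    r_t += alpha * delayed * np.exp(2j * np.pi * nu * t)  # Doppler phase rotation

snr_db = 30.0
n0 = np.mean(np.abs(r_t) ** 2) * 10 ** (-snr_db / 10)
r_t += np.sqrt(n0 / 2) * (rng.standard_normal(t.size)
                          + 1j * rng.standard_normal(t.size))

r_f = np.fft.fft(r_t)                            # samples of r(f) on the FFT grid
freqs = np.fft.fftfreq(t.size, d=1 / fs)         # bin spacing equals f_c / L, as in (2)
print("frequency bins:", r_f.size, "bin spacing [Hz]:", freqs[1])
```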
The maximum likelihood estimate (MLE) of Θ can be derived based on the signal models (4) to (8). However, obtaining the MLE of Θ requires solving a 4M-dimensional optimization problem. The computational complexity involved prohibits any practical implementation. In the following, we present the SAGE algorithm, which iteratively updates subsets of Θ and outputs an approximation of the MLE of Θ when the estimation process converges [15, 16].
Figure 1 depicts the diagram of the SAGE algorithm derived for the case considered here. To execute the SAGE algorithm, an initial estimate \(\hat {\boldsymbol {\Theta }}^{[0]}\) of the unknown parameters Θ is necessary, which can be obtained by using, e.g., the Bartlett beamforming method [17], or parametric approaches based on successive interference cancelation, such as the non-coherent MLE proposed in [18]. The overall parameter estimates are split into multiple subsets. In each iteration of the SAGE algorithm, the parameter estimates in a selected subset are updated under the condition that the observations of the received signals are available and that the remaining unknown parameters are fixed to the estimates calculated in previous iterations. The SAGE algorithm guarantees that the likelihood of the overall parameter estimates monotonically increases and becomes stabilized after a certain number of iterations, i.e., the so-called "convergence" of the algorithm is achievable. Empirically, when the increment of the likelihood as the iterations continue becomes insignificant, or the changes of the parameter estimates compared with a previous iteration are negligible, we may consider that the algorithm has converged in practice. In such cases, the iterative updating procedure is stopped, and the parameter estimates obtained are output as the final results.
Diagram of the SAGE algorithm
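A schematic skeleton of the iterative procedure in Fig. 1 could look as follows; the callables `reconstruct_path` and `m_step` stand in for the reconstruction of a path's contribution and the update in (13)–(16), and the convergence tolerance is an illustrative choice rather than a prescribed value.

```python
import numpy as np

def sage(y, init_params, reconstruct_path, m_step, max_iter=20, tol=1e-3):
    """Iteratively update the per-path subsets theta_l = (alpha_l, tau_l, nu_l)."""
    params = list(init_params)                   # one (alpha, tau, nu) tuple per path
    for _ in range(max_iter):
        max_change = 0.0
        for ell in range(len(params)):
            # E-step (12): admissible hidden data of path ell, obtained by
            # cancelling the reconstructed contributions of all other paths.
            others = sum(reconstruct_path(p) for k, p in enumerate(params) if k != ell)
            x_hat = y - others
            # M-step (13)-(16): update delay/Doppler by maximizing the objective
            # function and obtain the amplitude from the closed form (14).
            params[ell], change = m_step(x_hat, params[ell])
            max_change = max(max_change, change)
        if max_change < tol:                     # empirical convergence criterion
            break
    return params
```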
In the case considered here, we choose the subset of the parameters to be updated in each iteration of the SAGE algorithm to be θ ℓ =[α ℓ ,ν ℓ ,τ ℓ ], i.e., the parameters of individual paths. The admissible hidden data x ℓ (f) for estimating θ ℓ is naturally defined as the contribution of the ℓth propagation path and the noise components, i.e.,
$$ x_{\ell}\left(\,f\right)=\alpha_{\ell} p\left(\,f;\tau_{\ell}, \nu_{\ell}\right)+n'\left(\,f\right). $$
A SAGE iteration for updating \(\hat {\boldsymbol {\theta }}_{\ell }\) consists of two steps, i.e., the so-called Expectation (E-)step and Maximization (M-) step. In the E-step of the ith iteration, the expectation of the loglikelihood of θ ℓ given y and the estimates of Θ obtained from the ith iteration, denoted with \(\hat {\boldsymbol {\Theta }}^{[i]}\), can be calculated as
$${} {\fontsize{9.2pt}{9.6pt}\selectfont{\begin{aligned} \mathrm{E}\left[\Lambda(\boldsymbol{\theta}_{\ell})|\boldsymbol{y}, \hat{\boldsymbol{\Theta}}^{[i]}\right]=&\mathrm{E}\left[-\frac{N}{2}\log 2\pi-\sum\limits_{f=f_{1}}^{f_{N}}\log \mathrm{E}\left[|n'\left(\,f\right)|\right] - \right.\\ &\left.\sum\limits_{f=f_{1}}^{f_{N}} \frac{\left(x_{\ell}\left(\,f\right)-\alpha_{\ell} p\left(\,f;\tau_{\ell}, \nu_{\ell}\right)\right)^{2}}{\mathrm{E}[|n'\left(\,f\right)|^{2}]}\left|\boldsymbol{y}, \hat{\boldsymbol{\Theta}}^{[i]}\right.\right]. \end{aligned}}} $$
((10))
By dropping the constant terms in the right-hand side of (10), it can be shown that
$${} \mathrm{E}\left[\Lambda\left(\boldsymbol{\theta}_{\ell}\right)|\boldsymbol{y}, \hat{\boldsymbol{\Theta}}^{[i]}\right]\propto -\sum\limits_{f=f_{1}}^{f_{N}}\frac{\left(\hat{x}^{[i]}_{\ell}\left(\,f\right)-\alpha_{\ell} p\left(\,f;\tau_{\ell}, \nu_{\ell}\right)\right)^{2}}{\mathrm{E}[|n'\left(\,f\right)|^{2}]}, $$
where \(\hat {x}_{\ell }^{[i]}\left (\,f\right)= \mathrm {E}\left [x_{\ell }\left (\,f\right)|\boldsymbol {y}, \hat {\boldsymbol \Theta }^{[i]}\right ]\) can be calculated as
$$ \begin{aligned} \hat{x}_{\ell}^{[i]}\left(\,f\right) &= y\left(\,f\right) - \sum\limits_{\tiny \begin{matrix}\ell'\neq\ell\\ \!\!\!\ell'\!\!=1\end{matrix}}^{M} \mathrm{E}\left[\alpha_{\ell'} p\left(f;\tau_{\ell'}, \nu_{\ell'}\right)|\hat{\boldsymbol\Theta}^{[i]}\right]\\ &=y\left(\,f\right) - \sum\limits_{\tiny \begin{matrix}\ell'\neq\ell\\ \!\!\!\ell'\!\!=1\end{matrix}}^{M} \hat{\alpha}_{\ell'}^{[i]} p\left(\,f;\hat{\tau}_{\ell'}^{[i]}, \hat{\nu}_{\ell'}^{[i]}\right) \end{aligned} $$
with \(\hat {\alpha }_{\ell '}^{[i]}=\int \alpha _{\ell '} \delta (\alpha _{\ell '} - \hat {\alpha }_{\ell '}^{[i]})\mathrm {d}\alpha _{\ell '}\) and \(\hat {\tau }_{\ell '}^{[i]}\), \(\hat {\nu }_{\ell '}^{[i]}\) obtained similarly.
For notational convenience, \(\boldsymbol {\hat {x}}^{[i]}\) is used in the sequel to represent \(\boldsymbol {\hat {x}}^{[i]}=\left [\hat {x}^{[i]}\left (\,f\right); f\in \left [f_{1},\dots,f_{N}\right ]\right ]\).
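In code, the E-step in (12) amounts to a serial interference cancellation; a minimal sketch, assuming a function `signature(tau, nu)` that returns the vector p(f; tau, nu) on the measurement grid, is:

```python
import numpy as np

def e_step(y, params, ell, signature):
    """Remove the reconstructed contributions of all paths except path `ell` (eq. (12))."""
    x_hat = np.array(y, dtype=complex)
    for k, (alpha, tau, nu) in enumerate(params):
        if k != ell:
            x_hat -= alpha * signature(tau, nu)
    return x_hat
```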
In the maximization (M-) step of the ith iteration, the estimates \(\hat {\nu }_{\ell }^{[i+1]}\), \(\hat {\tau }_{\ell }^{[i+1]}\), and \(\hat {\alpha }_{\ell }^{[i+1]}\) can be calculated by maximizing the expectation of loglikelihood function obtained in the E-step
$$ \left(\hat{\nu}_{\ell}^{[i+1]}, \hat{\tau}_{\ell}^{[i+1]}, \hat{\alpha}_{\ell}^{[i+1]}\right) = \arg\max\limits_{\nu_{\ell},\tau_{\ell},\alpha_{\ell}}\mathrm{E}\left[\Lambda(\boldsymbol{\theta}_{\ell})|\boldsymbol{y}, \hat{\boldsymbol{\Theta}}^{[i]}\right]. $$
As shown in Appendix 1, \(\hat {\alpha }_{\ell }^{[i+1]}\) can be expressed as a linear function of \(\hat {\nu }_{\ell }^{[i+1]}\), \(\hat {\tau }_{\ell }^{[i+1]}\) as
$$\begin{array}{@{}rcl@{}} \hat{\alpha}_{\ell}^{[i+1]}=\frac{\boldsymbol{p}\left(\hat{\nu}_{\ell}^{[i+1]}, \hat{\tau}_{\ell}^{[i+1]}\right)^{\mathrm{\!H}}\boldsymbol{W}^{-1}\hat{\boldsymbol{x}}^{[i]}}{\boldsymbol{p}\left(\hat{\nu}_{\ell}^{[i+1]}, \hat{\tau}_{\ell}^{[i+1]}\right)^{\mathrm{\!H}}\boldsymbol{W}^{-1}\boldsymbol{p}\left(\hat{\nu}_{\ell}^{[i+1]}, \hat{\tau}_{\ell}^{[i+1]}\right)} \end{array} $$
with p(ν,τ)=[p(f;τ,ν);f=f 1,…,f N ] being a column vector and W being a diagonal matrix with its diagonal elements equal to E[|n ′(f)|2],f=f 1,…,f N . Inserting (14) to the right-hand side of (13), \(\hat {\nu }_{\ell }^{[i+1]}\) and \(\hat {\tau }_{\ell }^{[i+1]}\) can be obtained by solving the following maximization problem
$$ \left(\hat{\nu}_{\ell}^{[i+1]}, \hat{\tau}_{\ell}^{[i+1]}\right) = \arg\max\limits_{\nu_{\ell},\tau_{\ell}}\eta\left(\nu_{\ell},\tau_{\ell}\right), $$
where the objective function η(ν,τ) is shown in Appendix 2 as
$$ \eta(\nu,\tau)=\frac{|\boldsymbol{p}(\nu,\tau)^{\mathrm{H}}\boldsymbol{W}^{-1}\hat{\boldsymbol{x}}^{[i]}|^{2}}{\boldsymbol{p}(\nu,\tau)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\nu,\tau)}. $$
The amplitude estimate \(\hat {\alpha }_{\ell }^{[i+1]}\) can be calculated by using (14).
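A corresponding sketch of the M-step evaluates the objective (16) on a delay/Doppler grid and then applies the closed form (14); `signature(tau, nu)` and the diagonal `w` of the noise covariance W are assumed inputs, and the exhaustive grid search is an illustrative substitute for a refined optimizer. Such a function could back the generic `m_step` callable used in the skeleton above.

```python
import numpy as np

def m_step_grid(x_hat, w, tau_grid, nu_grid, signature):
    """Maximize eta(nu, tau) in (16) on a grid and compute alpha via (14)."""
    w_inv = 1.0 / np.asarray(w)                            # W is diagonal, see (19)
    best_eta, tau_hat, nu_hat = -np.inf, None, None
    for tau in tau_grid:
        for nu in nu_grid:
            p = signature(tau, nu)
            num = np.abs(np.vdot(p, w_inv * x_hat)) ** 2   # |p^H W^-1 x|^2
            den = np.real(np.vdot(p, w_inv * p))           # p^H W^-1 p
            if num / den > best_eta:
                best_eta, tau_hat, nu_hat = num / den, tau, nu
    p = signature(tau_hat, nu_hat)
    alpha_hat = np.vdot(p, w_inv * x_hat) / np.vdot(p, w_inv * p)   # eq. (14)
    return alpha_hat, tau_hat, nu_hat
```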
When convergence is achieved, e.g., when the parameter estimates no longer change as the iterations continue, the estimation procedure is terminated, and the parameter estimates obtained in the current iteration are taken as the final result.
Discussion of the complexity of algorithm implementation and its influence on the SC device
The complexity of the proposed SAGE algorithm increases with the number of paths to estimate, the total number of iterations, and the values of B n and ϱ, which determine the number of data samples to be considered when calculating the objective function (16). Reducing the algorithm complexity can be performed in different ways. For example, instead of estimating a large number of multipath components, we may determine an appropriate model order in advance by applying the Akaike Information Criterion [19]; furthermore, the complexity involved in the parameter estimate updating procedure can be reduced by using advanced searching methods [20, 21].
When being implemented in reality, the proposed estimation method requires the SC's outputs that are the results of filtering the original received sequence by using a LPF of bandwidth B n , n>1. It can be shown that the relationship B n ≤f c′ is maintained if n≤γ−1 is selected. Therefore, enlarging the bandwidth of the SC's output up to B n with n≤γ−1 would not introduce any additional requirement on increasing the sampling rate at the input of the SC. In such a case, the original complexity in the SC device is not influenced by B n , 1≤n≤γ−1. The studies in the sequel are conducted under the constraint 1≤n≤γ−1.
Simulation study
Simulation studies are conducted to evaluate the performance of the proposed algorithm under the influence of different B n settings and the fraction ϱ of SC's output being considered. These two parameters determine how the SC's output is selected and applied for channel estimation. With larger B n , more high-frequency components can be involved in parameter estimation, which may improve the estimation accuracy. In the conventional SC-based channel estimation, ϱ=1 is usually adopted. In the proposed algorithm, ϱ<1 can be selected, which allows reducing the observation time required in each snapshot, and consequently, measurements of a time-variant channel can be performed within channel coherence time. The impact of B n and ϱ on the performance of the proposed algorithm is of importance for understanding the effectiveness of the algorithm. Therefore, we select B n and ϱ as parameters in the simulation studies.
It is worth mentioning that the conventional channel parameter estimation based on the SC's output was performed with the LPF's bandwidth set to B 1=[−f c /γ,f c /γ] [7–9]. Therefore, the simulated algorithm's performance under B=B 1 can be viewed as the performance achievable when the conventional SC-based estimation is applied. Furthermore, the method proposed here allows estimating the Doppler frequencies of multipath components with SC's outputs collected in a time period shorter than that required for calculating a complete estimate of the CIR. Since this function is not supported by the conventional SC-based approaches, no reference results are available for comparison when Doppler frequency estimation results are demonstrated. Unless otherwise mentioned, the parameter settings reported in Table 2 are adopted in the simulations. It is worth mentioning that γ=93 is specifically selected here in order to maintain a tractable computational complexity for the simulations. However, larger values of γ may be selected in practical SC applications.
Table 2 Parameter setting of the simulations
The performance of the proposed SAGE algorithm can be investigated from two perspectives, i.e., the RMSEEs in the case where paths are well separated, and the resolution capability in separating multipath in the case where paths are closely spaced. Sections 4.1, 4.2, and 4.3 are dedicated to the investigation of the RMSEE behavior of the SAGE algorithm in the case with well-separated paths, and Section 4.4 to the resolution capability of the algorithm. When a channel consists of well-separated paths, the received signals of the multipath components are mutually orthogonal, and the behavior of the SAGE algorithm can be represented by that of the maximum-likelihood estimation (MLE) method in single-path scenarios [16]. Therefore, the performance of the MLE in single-path scenarios is studied in Sections 4.1, 4.2, and 4.3.
SC's output with B n and ϱ as parameters
Figure 2 depicts the magnitude frequency spectrum and the channel power delay profile (PDP) calculated by using the SC's output with B n , n=1 and 7, respectively. The synthetic channel consists of one path with delay of 53 ns, 0 Hz Doppler frequency, and complex amplitude equal to 1·10^−6. The SNR is set to 30 dB. It can be observed from Fig. 2b that for B=B 1, the channel PDP exhibits a dominant peak located at an abscissa close to the true delay. For B=B 7, i.e., when the high-frequency components shown in Fig. 2c are considered, the PDP exhibits a single peak with severe fluctuations on the top. From these results, we can see that the conventional method of delay estimation, which relies on finding the maximum of the channel PDP, yields larger estimation errors when the LPF bandwidth increases. In addition, it can be observed from Fig. 2 that in the cases where only a part of the SC's output is considered, the time-dilated IR obtained with B 1 can be used for estimating the parameters of a path provided ϱ T>τ ′, i.e., the acquired part of the SC's output contains the impulse response of the path. In the case of large B n , since higher frequency components are considered in the estimation, the part of the IR with ϱ T<τ ′ may still contain the necessary information for estimating the path parameters. It will be shown later that, along with the increase of B n , a smaller ϱ can be considered for estimating path parameters.
a–d Magnitude frequency spectrum, power delay profile of the CIR calculated by using the low-pass-filtered signals with the bandwidth of B n , n=1 and 7, respectively. The true path parameters are τ=53 ns, ν=0 Hz, and α=1×10^−6
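For reference, the conventional delay estimation discussed above amounts to a peak search over the PDP of the time-dilated CIR; a minimal sketch, assuming the low-pass-filtered SC output `cir` sampled at rate `fs` and the sliding factor `gamma`, is:

```python
import numpy as np

def peak_search_delay(cir, fs, gamma):
    """Conventional estimate: delay of the PDP maximum, corrected for time dilation."""
    pdp = np.abs(np.asarray(cir)) ** 2        # power delay profile of the dilated CIR
    idx = int(np.argmax(pdp))
    return idx / (fs * gamma)                 # undo the dilation by the sliding factor
```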
Objective functions of delay and Doppler frequency with B n and ϱ as parameters
η(τ;ν=ν ′) versus B n
Figure 3 depicts the objective functions of delay obtained with the LPF bandwidth B n , n=1,3,5, and 7, considered as a parameter in a single-path scenario. The synthetic channel is generated with the path's delay equal to 25 ns, Doppler frequency equal to 0 Hz, and complex amplitude equal to 1·10^−6. The SNR equals 30 dB. The path's Doppler frequency ν ′ is assumed to be known. It can be observed from Fig. 3 that the main-lobes of the objective functions exhibit the same zero-to-zero distance on the abscissa regardless of B n . This indicates that the resolution remains the same and is independent of B n . However, the objective function calculated at the correct delay increases along with n. Since the objective function is proportional to the loglikelihood of the parameters as shown in Appendix 2, the observation that the maximum of the objective function increases along with n implies that enlarging the LPF bandwidth can enhance the likelihood of the delay estimate.
Objective functions of delay calculated with different B n in a single-path case with path's delay equal to 25 ns, Doppler frequency 0 Hz, complex amplitude 1·10^−6, and SNR set to 30 dB. The Doppler frequency is assumed to be known in advance
η(ν;τ=τ ′) versus B n
Figure 4 depicts the objective function of Doppler frequency in single-path scenarios. The synthetic channel is generated with the path's complex amplitude equal to 1·10^−6, delay 35 ns, and Doppler frequency 5·10^4 Hz. The SNR is set to 30 dB. The delay τ ′ of the path is assumed to be known in advance. A similar observation to that shown in Fig. 3 is obtained, i.e., the objective functions exhibit more peaky main-lobes for larger B n . This indicates that by enlarging the LPF bandwidth, the likelihood of the Doppler frequency can be improved. This is reasonable as more coherent observations are included in the estimation when B n increases.
Objective functions of Doppler frequency calculated with different B n in a single-path scenario where the path's complex amplitude equals 1·10^−6, delay 35 ns, and Doppler frequency 5·10^4 Hz. The SNR is set to 30 dB. The path delay is assumed to be known in advance
η(τ;ν=ν ′) versus ϱ
Figure 5 depicts the objective function η(τ;ν=ν ′) in the delay domain calculated in a single-path scenario when the length of the SC's output considered is set with ϱ=1, 1/2, 1/3, and 1/4. The settings for the path and SNR are the same as those adopted in the simulations described in Section 4.2.2. It can be observed from Fig. 5 that the longer the length of the SC's output considered, the higher the main peak of the objective function. This indicates that the likelihood increases by considering more data at the output of the SC. Furthermore, it is observed from Fig. 5 that the zero-to-zero distances of the main peaks of all objective functions are identical, i.e., with a duration of 2 ns, corresponding to the limit of the intrinsic delay resolution set by 1/B with B=500 MHz being the bandwidth of the transmitted signal.
Objective function of delay calculated with the LPF bandwidth set to B 6, and ϱ considered as a parameter in a single-path scenario where the path has complex amplitude equal to 1·10^−6, delay 35 ns, and Doppler frequency 5·10^4 Hz. The SNR is set to 30 dB. The Doppler frequency of the path is assumed to be known in advance in the simulation
η(ν;τ=τ ′) versus ϱ
Figure 6 depicts the objective function η(ν;τ=τ ′) of Doppler frequency calculated from the synthetic data in a single-path scenario with an SNR of 30 dB. The true delay and Doppler frequency equal 35 ns and 5·10^4 Hz, respectively. The delay is assumed to be known in the simulations. It can be observed from Fig. 6 that the longer the length of the SC's output considered, the higher the maximum of the objective function. Meanwhile, the zero-to-zero distance of the main peak of the objective functions is observed to decrease as ϱ increases. This is reasonable as the total observation span increases when ϱ takes a larger value, resulting in a higher resolution in estimating the Doppler frequency. It can be seen that even with a lower resolution, detecting the Doppler frequency is still possible for ϱ<1.
Objective function of Doppler frequency calculated with the LPF bandwidth set to B 6, and ϱ considered as a parameter in a single-path scenario where the path has complex amplitude equal to 1·10^−6, delay and Doppler frequency equal to 35 ns and 5·10^4 Hz, respectively. The SNR is set to 30 dB. The delay is assumed to be known in the simulations
η(τ,ν) versus B n
Figure 7 illustrates the objective functions η(ν,τ) calculated with the SC's output in a multi-path scenario with ϱ=1 and B n , n=1, 7. The synthetic channel consists of a certain number of randomly generated paths with delays chosen from [0,62] ns and Doppler frequencies from [−400,400] Hz. The SNR is set to 30 dB with respect to the maximum path power. It can be observed from Fig. 7 that when B n increases, the objective function becomes more peaky, and thus the multipath components can be separated more readily.
Objective functions η(ν,τ) calculated with a B 1 and b B 7 in a multi-path scenario where a certain number of randomly generated paths exist with their delays and Doppler frequencies randomly chosen from [0,62] ns and [−400,400] Hz, respectively. The SNR is set to 30 dB with respect to the maximum path power
Notice that by using the SC's output that allows generating one CIR, the Doppler frequency estimation resolution is very low due to the short observation duration T. By using the simulation settings in Table 2, the total observation span is calculated to be T=L γ/f c =31·93·2·10^−9 s=5.77 μs. Thus, the Doppler frequency estimation resolution is 1/(2T)=43 kHz. Since the differences between the Doppler frequencies of paths are empirically much smaller than 43 kHz, it is important to jointly estimate the delay and Doppler frequency in order to resolve the paths in the delay domain. Furthermore, due to the low Doppler frequency estimation resolution, observations with high SNRs are always preferable in order to obtain smaller estimation errors. Our simulation results here show that the SNR should be kept beyond 10 and 40 dB in order to obtain RMSEE (ν) less than 10 Hz when the LPF bandwidth B n is set with n≥5 and n≤3, respectively.
RMSEEs of delay and Doppler frequency
RMSEE (τ) versus B n
The benefits of enlarging the LPF bandwidth can also be evaluated by examining the RMSEEs of delay and of Doppler frequency. Figure 8 depicts the RMSEE of delay versus the SNR with B n as a parameter. Single-path scenarios are considered where the Doppler frequencies of the synthetic paths are uniformly distributed within [−1×10^5, 1×10^5] Hz and the delays of the paths are uniformly selected from 20 to 70 ns. It is worth mentioning that the maximum Doppler frequency for the synthetic paths was set below the maximum frequency that can be measured unambiguously by the system. In total, 400 Monte-Carlo simulations were conducted. The result obtained by using the conventional estimation method is also illustrated, which searches the estimates of path delays by finding the maxima of the channel PDP. This is only feasible when the LPF bandwidth is set to B 1. Considering that such maxima-searching spectral-based methods are widely used in parameter estimation when the conventional SC is implemented, its performance is taken as a reference of comparison for the parametric estimation algorithm proposed here. It can be observed from Fig. 8 that for a fixed SNR, the RMSEE (τ) when the proposed algorithm is used is at least one order of magnitude lower than that obtained with the conventional method. The worse performance of the conventional method is due to the fact that the time-dilated channel impulse response estimated by using the conventional method consists of 93 samples in the delay domain. In the case where the true path delay is different from an integer multiple of the delay sample spacing, the maxima-searching method results in large estimation errors. However, in the case where a parametric model-based estimation is performed, searching the maximum of the objective function can be performed with refined steps in such a way that more accurate estimates are obtained. In addition, it is observed that RMSEE (τ) decreases when B n increases, indicating that the estimation accuracy can be improved by taking into account the high-frequency components when the proposed parameter estimation method is applied.
RMSEE (τ) versus SNR with B n as a parameter
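The RMSEE curves reported in this section follow the usual Monte-Carlo recipe; a generic sketch, with `simulate_snapshot` and `estimate` standing in for the synthetic data generation and the estimator under test, is:

```python
import numpy as np

def rmsee(true_value, snr_db, simulate_snapshot, estimate, n_snapshots=400):
    """Root mean square estimation error over independent noise realizations."""
    sq_errors = []
    for _ in range(n_snapshots):
        y = simulate_snapshot(true_value, snr_db)   # one synthetic SC observation
        sq_errors.append((estimate(y) - true_value) ** 2)
    return float(np.sqrt(np.mean(sq_errors)))
```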
RMSEE (τ) versus ϱ
As shown in Fig. 5, the resolution capability of the estimator in the delay domain does not change when different values of ϱ are considered. This is confirmed by simulations of RMSEE (τ) for two paths with the separation of the two paths in delay taken as a parameter. Figure 9 depicts the results for an SNR of 30 dB and with B 6. In the simulation, both paths' delays are randomly selected with a specified separation. The fraction ϱ of the IR takes values in \(\left [\frac {1}{4}, \frac {1}{3}, \frac {1}{2}, 1\right ]\). In total, 300 snapshots are run for collecting the random samples for computing the RMSEEs. It can be observed from Fig. 9 that when the separation Δ τ is larger than 2 ns, the RMSEE (τ) reduces to a stable level, which does not change significantly when Δ τ keeps increasing. This is consistent with the fact that the signal bandwidth of 500 MHz provides a resolution of 2 ns. These results indicate that using parts of the sliding correlator's output for parameter estimation does not lead to a reduced resolution. Furthermore, it can also be observed from Fig. 9 that when ϱ increases, the RMSEE decreases. This is because the SNR for parameter estimation increases when more observations are included during the calculation of the objective functions.
RMSEE (τ) of a path 1 and b path 2 versus the separation of two paths in delay with ϱ being a parameter
RMSEE (ν) versus B n
Figure 10 illustrates the RMSEE of Doppler frequency versus SNR calculated from 250 snapshots in single-path scenarios. The LPF bandwidth B n changes with n=1,3,5, and 7. The value of ϱ is fixed to 1. It can be observed from Fig. 10 that for fixed SNRs, the RMSEE (ν) decreases as B n increases. The upper floor of RMSEE (ν) observed for low SNRs in Fig. 10 is due to the Doppler frequency estimation range of [−1,1] kHz specified during the simulations. Furthermore, we observed that the improvements in RMSEE (ν) obtained when the LPF bandwidth changes from B 1 to B 3 and from B 5 to B 7 are insignificant compared with that resulting when the bandwidth increases from B 3 to B 5. This is because the components in the frequency domain do not have the same spectral heights. Although the RMSEE monotonically decreases with respect to the increasing B n , the decreasing rate is not constant and actually depends on the exact range of the abscissa considered.
RMSEE (ν) versus SNR with B n as a parameter
RMSEE (ν) versus ϱ
As shown in Fig. 6, the width of the main peak of the objective function is enlarged when smaller fractions of the SC's output are taken into account in the estimation. Simulation studies are conducted to verify the resolution ability of the estimator in two-path scenarios when ϱ changes. In the simulations, the bandwidth of the SC output is set to B 6, and ϱ is set to [1,1/2,1/3,1/4]. The Doppler frequency resolutions κ ν can be calculated based on κ ν =1/(2T) to be 7×10^5, 3.4×10^5, 2.4×10^5, and 1.7×10^5 Hz for ϱ=1/4,1/2,3/4, and 1, respectively. The SNR is set to −10 dB. Figure 11a, b depicts respectively the RMSEE (ν 1) and RMSEE (ν 2) for the two paths versus their separation in the Doppler frequency domain. The same delay is set for both paths in the simulations and is assumed to be known in advance in the parameter estimation. It can be observed from Fig. 11a, b that both RMSEE (ν 1) and RMSEE (ν 2) decrease when Δ ν increases. Practically, we can define the empirical resolution as the separation of two paths beyond which the resultant RMSEE (ν) for both paths becomes stabilized. It can be observed from Fig. 11 that the empirical resolutions are consistent with the theoretical intrinsic resolutions, which are defined to be the inverse of the total observation span, and moreover, the empirical resolutions are found to increase when ϱ decreases. These results show that it is possible to estimate the Doppler frequency using parts of the SC's output, and that the length of the SC's output determines the resolution of separating paths in the Doppler frequency domain. It is worth mentioning that significant fluctuations can be observed in the RMSEE (ν 1) and RMSEE (ν 2) graphs when the Doppler frequency separation is less than the intrinsic resolution of the estimator. This is due to the biases in the Doppler frequency estimates. When Δ ν between the two synthetic paths is less than the resolution, the maximum of the objective function calculated for estimating the Doppler frequency of the first path in the initialization step is usually found between the true Doppler frequencies. Although the SAGE algorithm can change the estimates with more iterations, such biases may still exist in the final estimation results. In addition, since the biases have different values depending on Δ ν, the RMSEE graphs obtained with Δ ν less than the intrinsic resolution exhibit significant fluctuations.
RMSEE (ν) of a path 1 and b path 2 versus the separation of two paths in Doppler frequency with ϱ being a parameter
RMSEE (τ) versus B n and ϱ
The aforementioned investigations focus on the behavior of the estimator with respect to either B n or ϱ. We now evaluate how ϱ and B n jointly influence the performance of the proposed estimation algorithm. Figure 12 illustrates RMSEE (τ), i.e., the RMSEE of delay in a single-path scenario with B n varying within the range n=1,2,3, and the fraction ϱ of the SC's output taking values among \(\left [\frac {1}{4}, \frac {1}{3}, \frac {1}{2}\right ]\). An SNR of 10 dB is considered in the simulations. In total, 300 snapshots are applied to obtain the RMSEE (τ) graphs. The true parameters of the path are τ=25 ns, ν=0 Hz, |α|=3. The estimation of delay is performed under the assumption that the Doppler frequency is known in advance. It can be observed from Fig. 12 that when only a part of the SC's output is taken for estimating the parameters, the parameters can only be estimated for the bandwidth \(B\geq n\frac {f_{c}}{\gamma }\) with n=3.
RMSEE (τ) calculated in single-path scenarios with known Doppler frequency
RMSEE (ν) versus B n and ϱ
Figure 13 depicts RMSEE (ν), i.e., the RMSEE of Doppler frequency with the bandwidth B n being variable within the range \(\left [n\frac {f_{c}}{\gamma }; n= 1,2,3\right ]\), and the fraction ϱ of the SC's output taking values among \(\left [\frac {1}{4}, \frac {1}{3}, \frac {1}{2}\right ]\). The same path-parameter settings as in the simulations for generating Fig. 12 are applied. The estimation of Doppler frequency is conducted under the assumption that the delay is known in advance. It can be observed from Fig. 13 that when only a part of the SC's output is taken for estimating the parameters, for the bandwidth with n≥3, the parameter estimation returns RMSEEs that start to be stabilized. This is consistent with the observations from Fig. 12. It can also be observed from Fig. 13 that the RMSEE (ν) graphs fluctuate when n varies. A possible reason for this effect is that a larger n does not necessarily lead to more signal components being included in the estimation. From Fig. 2, we observed that when n increases from 1 to 7, the number of main-lobes of the signal components increases from 1 to 5, which implies that the signal contribution to the observations applied in parameter estimation does not increase linearly with respect to n. We postulate that these uneven increments of signal components generate the fluctuations of RMSEE (ν) shown in Fig. 13.
RMSEE (ν) calculated in single-path scenarios with known delay and SNR equal to 10 dB
The SAGE performance in two-path scenarios
RMSEE (τ) and RMSEE (ν) versus B n
The performance of the derived SAGE algorithm is evaluated in a two-path scenario, where the parameters of the paths are (τ 1,ν 1,α 1)=(22 ns,−40 Hz,3) and (τ 2,ν 2,α 2)=(28 ns,40 Hz,1), respectively. The noise components are added with N 0=max(|α 1|^2,|α 2|^2)·10^(−ζ/10), where max(a,b) returns the maximum of the given arguments a and b, and ζ denotes the SNR in dB. To limit the simulation time, the SAGE algorithm was executed to estimate the parameters of the two paths within a maximum of 5 iterations. Since the true paths are set with different magnitudes, the path estimated by the SAGE algorithm with the larger magnitude is considered to be the estimate of the first path, and the other estimated path is the estimate of the second path, i.e., the weaker path.
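The noise level used in these two-path simulations follows directly from the expression above; as a small sketch:

```python
def noise_power(alpha_1, alpha_2, snr_db):
    """N0 = max(|alpha_1|^2, |alpha_2|^2) * 10^(-SNR/10), with SNR given in dB."""
    return max(abs(alpha_1) ** 2, abs(alpha_2) ** 2) * 10 ** (-snr_db / 10)

# e.g. for the setting above with alpha_1 = 3, alpha_2 = 1 and SNR = 20 dB:
print(noise_power(3, 1, 20))   # 0.09
```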
Figure 14 depicts the RMSEEs of delay and Doppler frequency for the two paths versus SNR obtained from 250 simulation snapshots. It can be observed from Fig. 14 that the RMSEEs of the parameters for the second path are always larger than their counterparts for the first path. This is reasonable since path 2 has a lower SNR than path 1 when the synthetic channels were generated. Furthermore, by comparing Fig. 14a, b, we observe that the decrease of the RMSEE of Doppler frequency for the second path when the LPF bandwidth increases from B 5 to B 7 is more significant than that obtained for the first path. We postulate that this effect of a larger improvement in parameter estimation obtained for weaker paths by increasing the LPF bandwidth is due to two reasons, i.e., a higher objective function value for the parameters results when B n is enlarged, and additionally, the interference cancelation in the E-steps of the SAGE algorithm can be performed more efficiently, especially for paths with lower power.
a–d RMSEEs of delay and Doppler frequency in two-path scenarios
RMSEE (τ) and RMSEE (ν) versus ϱ
The performance of the derived SAGE parametric estimation algorithm is further investigated in two-path scenarios. In the simulations, the bandwidth B n is equal to \(6\frac {f_{c}}{\gamma }\), and the fraction ϱ is set to values within \(\left [\frac {1}{4}, \frac {1}{3}, \frac {1}{2}\right ]\). The SNR varies from 0 to 30 dB in steps of 10 dB in the simulations. The true path parameters are set to [τ 1,ν 1,α 1]=[22 ns,−40 Hz,2] and [τ 2,ν 2,α 2]=[28 ns,40 Hz,1]. The RMSEE graphs were generated by using 300 snapshots. It can be observed from Fig. 15 that when the fraction ϱ increases, the RMSEEs reduce. Since a smaller ϱ also reduces the resolution in the Doppler frequency domain, the improvement in RMSEE(ν) attributed to a larger value of ϱ is more significant than that observed for RMSEE(τ).
a–d RMSEEs of delay and Doppler frequency for two paths in two-path scenarios with B 6 and ϱ considered as a parameter
Performance of the SAGE algorithm in a multipath propagation scenario
The performance of the SAGE algorithm is investigated in the case where a channel consists of 10 paths. The average SNR equals 10 dB. The SAGE algorithm was set to estimate 10 paths from the received signals. The results show that running the SAGE algorithm for 10 iterations is sufficient for observing a stabilized likelihood of the parameter estimates. Figure 16 depicts a comparison of the synthetic and estimated multipath components obtained after 10 iterations. It is obvious by comparing Fig. 16a, b that the parameters of the estimated paths are not exactly the same as those of the true paths. This is reasonable since the SAGE algorithm, which approximates the MLE with iterative procedures, has the inherent limitation that the estimates may converge to a local maximum of the likelihood, usually determined by the initial parameter estimates, instead of the global maximum. Furthermore, the erroneous results may also be attributed to the limits of the intrinsic resolutions in the delay and Doppler frequency domains, since paths closely spaced by distances less than the resolutions cannot be resolved accurately by the SAGE algorithm. Furthermore, we can also observe that paths estimated with significant magnitudes in Fig. 16b appear in the vicinity of their counterparts observed in Fig. 16a. Figure 17 illustrates the normalized delay-Doppler frequency power spectra (PS's) P Bartlett(τ,ν) calculated by using the Bartlett beamforming technique [17] of the received signals, \(\hat {P}_{\text {Bartlett}}(\tau,\nu)\) of the reconstructed signal calculated based on the SAGE estimation results, and \(\tilde {P}_{\text {Bartlett}}(\tau,\nu)\) of the residual signals calculated by subtracting the reconstructed signals from the original received signals. For convenience of comparison, these PS's are all normalized by the maximum of P Bartlett(τ,ν) and represented in dB in Fig. 17. It can be observed from Fig. 17 that \(\hat {P}_{\text {Bartlett}}(\tau,\nu)\) is consistent with P Bartlett(τ,ν), especially in the portions of larger spectral height, and the maximum of \(\tilde {P}_{\text {Bartlett}}(\tau,\nu)\) for the residual signals is 34 dB below the maximum of P Bartlett(τ,ν). These results demonstrate that the proposed SAGE algorithm is capable of extracting the dominant components in the multipath channel, although the estimated paths may not have exactly the same parameters as the true paths due to the existence of noise, the limited resolutions caused by the finite signal bandwidth and observation spans, as well as the inherent limitation of the SAGE algorithm.
Comparison of the a synthetic and b estimated multipath components
Comparison of the normalized delay-Doppler frequency power spectra a P Bartlett(τ,ν) of the received signals, b \(\hat {P}_{\text {Bartlett}}(\tau,\nu)\) of the reconstructed signal calculated based on the SAGE estimation results, and c \(\tilde {P}_{\text {Bartlett}}(\tau,\nu)\) of the residual signals calculated by subtracting the reconstructed signals from the original received signals
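The normalized delay-Doppler spectra in Fig. 17 can be computed with the Bartlett beamformer of [17]; a sketch, again assuming a `signature(tau, nu)` function that returns p(f; tau, nu) on the measurement grid, is given below. The spectrum of the residual signals would then be obtained by passing the difference between the received and the reconstructed signals to the same function.

```python
import numpy as np

def bartlett_spectrum_db(y, tau_grid, nu_grid, signature):
    """Delay-Doppler Bartlett power spectrum, normalized to its maximum, in dB."""
    ps = np.empty((len(tau_grid), len(nu_grid)))
    for i, tau in enumerate(tau_grid):
        for j, nu in enumerate(nu_grid):
            p = signature(tau, nu)
            ps[i, j] = np.abs(np.vdot(p, y)) ** 2 / np.real(np.vdot(p, p))
    return 10 * np.log10(ps / ps.max())
```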
In this contribution, a parametric generic model was proposed to describe the output of the sliding correlator (SC), which is usually utilized to calculate time-dilated wideband propagation channel impulse responses (CIRs). Based on the proposed model, a Space-Alternating Generalized Expectation-maximization (SAGE) algorithm was derived for extracting the delays, Doppler frequencies, and complex attenuations of multipath components from an SC output that contains only one observation of the time-dilated CIR. Simulation results have shown that the conventional constraint that only the low-frequency component of the SC's output is applicable for channel estimation is unnecessary when the proposed estimation method is used. Furthermore, the more high-frequency components are considered, the higher the estimation accuracy that can be achieved. Compared with the conventional approach, which estimates the channel based on the time-dilated CIR, the proposed method is not only applicable for estimating the multipaths' Doppler frequencies but also returns more accurate estimates than the conventional method, e.g., the delay estimation errors are at least one order of magnitude smaller than those obtained by using the conventional method. Simulation results also demonstrated that the root mean squared estimation errors (RMSEEs) can be reduced by enlarging the bandwidth of the low-pass filter (LPF) applied in the SC. When only a part of the SC's output is available, the parameters can still be estimated provided the bandwidth of the LPF is no less than three times the transmitted signal bandwidth divided by the sliding factor. In cases where only fractions of the SC's output are considered for estimation, the RMSEEs decrease as the amount of data increases, due to the improved output signal-to-noise ratio and the enhanced resolution capability, particularly in the Doppler frequency domain. These results reveal the potential of applying the proposed high-resolution method in SC-based parameter estimation for mm-wave wideband channel characterization.
Appendix 1: Derivation of (14)
As shown in the right-hand side of (11), the original loglikelihood function by dropping constant terms can be rewritten as
$$\begin{array}{@{}rcl@{}} L(\boldsymbol{\theta}_{\ell}) &=& -\sum\limits_{f=\,f_{1}}^{f_{N}}\frac{\left(\hat{x}^{[i]}_{\ell}\left(\,f\right)-\alpha_{\ell} p\left(\,f;\tau_{\ell}, \nu_{\ell}\right)\right)^{2}}{\mathrm{E}\left[|n'\left(\,f\right)|^{2}\right]} \end{array} $$
$$\begin{array}{@{}rcl@{}} &=& -\left(\hat{\boldsymbol{x}}_{\ell}^{[i]} - \alpha_{\ell} \boldsymbol{p}(\tau_{\ell},\nu_{\ell})\right)^{\mathrm{H}}\boldsymbol{W}^{-1}\left(\hat{\boldsymbol{x}}_{\ell}^{[i]} - \alpha_{\ell} \boldsymbol{p}(\tau_{\ell},\nu_{\ell})\right),\\ \end{array} $$
where W is a N×N diagonal matrix calculated as
$${} {\fontsize{7.9pt}{9.6pt}\selectfont{\begin{aligned} \boldsymbol{W} \,=\, \left[\!\begin{array}{ccccc} \mathrm{E}\left[|n'\left(\,f_{1}\right)|^{2}\right] & 0 & 0 & \dots & 0 \\ 0 & \mathrm{E}\left[|n'\left(\,f_{2}\right)|^{2}\right] & 0 & \dots & 0\\ 0 & 0 & \mathrm{E}\left[|n'\left(\,f_{3}\right)|^{2}\right] & \dots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \dots & \mathrm{E}\left[|n'\left(\,f_{N}\right)|^{2}\right]\end{array}\!\right] \ . \end{aligned}}} $$
When the parameters τ ℓ , ν ℓ are given, α ℓ can be determined by solving the following equation
$$ \frac{\partial L(\boldsymbol{\theta}_{\ell})}{\partial \alpha_{\ell}}=0. $$
From (18) it is easy to show that \(\frac {\partial L(\boldsymbol {\theta }_{\ell })}{\partial \alpha _{\ell }}\) can be calculated as
$${} {\fontsize{8.2pt}{9.6pt}\selectfont{\begin{aligned} \frac{\partial L(\boldsymbol{\theta}_{\ell})}{\partial \alpha_{\ell}} &= \frac{\partial\left(\hat{\boldsymbol{x}}_{\ell}^{[i]}\right)^{\mathrm{H}}\boldsymbol{W}^{-1} \hat{\boldsymbol{x}}_{\ell}^{[i]}}{\partial \alpha_{\ell}} - \frac{\partial\alpha_{\ell}\left(\hat{\boldsymbol{x}}_{\ell}^{[i]}\right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) }{\partial \alpha_{\ell}} \\ &- \frac{\partial\alpha_{\ell}^{*}(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}))^{\mathrm{H}}\boldsymbol{W}^{-1}\hat{\boldsymbol{x}}_{\ell}^{[i]}}{\partial \alpha_{\ell}} + \frac{\partial\alpha_{\ell}^{*}(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}))^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})\alpha_{\ell}}{\partial \alpha_{\ell}}\\ &=-\left(\hat{\boldsymbol{x}}_{\ell}^{[i]}\right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) + \hat{\alpha}_{\ell}^{*}(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}))^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) \end{aligned}}} $$
Applying (21) in (20) yields
$${} \left(\hat{\boldsymbol{x}}_{\ell}^{[i]}\right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) - \hat{\alpha}_{\ell}^{*}(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}))^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})=0, $$
which further leads to the expression of \(\hat {\alpha }_{\ell }^{*}\) as a function of τ ℓ , ν ℓ , and \(\hat {\boldsymbol {x}}_{\ell }^{[i]} \):
$$ \hat{\alpha}_{\ell}^{*} = \frac{\left(\hat{\boldsymbol{x}}_{\ell}^{[i]}\right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})}{ (\boldsymbol{p}(\tau_{\ell},\nu_{\ell}))^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}\left(\tau_{\ell},\nu_{\ell}\right)}. $$
By taking the complex conjugates of both sides in (23), (14) is finally obtained.
Appendix 2: Derivation of (16)
Substituting α ℓ in (21) by the closed form (14) yields for L(θ ℓ )
$$\begin{array}{@{}rcl@{}} L(\boldsymbol{\theta}_{\ell}) &=&\! \left(\!{\boldsymbol{\hat{x}}}_{\ell}^{[i]}\!\right)^{\mathrm{H}}\!\boldsymbol{W}^{-1} {\boldsymbol{\hat{x}}}_{\ell}^{[i]} - \frac{\left(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) \right)^{\mathrm{H}}\boldsymbol{W}^{-1}{\boldsymbol{\hat{x}}}_{\ell}^{[i]}({\boldsymbol{\hat{x}}}_{\ell}^{[i]})^{\mathrm{H}} \boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) }{ \left(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) \right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})}\\ &-&\!\frac{\left({\boldsymbol{\hat{x}}}_{\ell}^{[i]}\right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) \left(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) \right)^{\mathrm{H}}\boldsymbol{W}^{-1}{\boldsymbol{\hat{x}}}_{\ell}^{[i]}} {\left(\boldsymbol{p}(\tau_{\ell},\nu_{\ell})\right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})} \\ &&\quad+\frac{\left({\boldsymbol{\hat{x}}}_{\ell}^{[i]}\right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})}{ \left(\boldsymbol{p}\left(\tau_{\ell},\nu_{\ell}\right) \right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})} \left(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) \right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})\\ &&\qquad\quad\frac{(\boldsymbol{p}\left(\tau_{\ell},\nu_{\ell}\right))^{\mathrm{H}}\boldsymbol{W}^{-1}{\boldsymbol{\hat{x}}}_{\ell}^{[i]}} {(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}))^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})}. \end{array} $$
It is easy to show that the last two terms in (24) are identical to each other with opposite signs. Thus, (24) can be rewritten as
$${} {\fontsize{8.4pt}{9.6pt}\selectfont{\begin{aligned} L(\boldsymbol{\theta}_{\ell}) \,=\, \left(\!{\boldsymbol{\hat{x}}}_{\ell}^{[i]}\!\right)^{\mathrm{H}}\!\boldsymbol{W}^{-1} {\boldsymbol{\hat{x}}}_{\ell}^{[i]} - \frac{\left(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) \right)^{\mathrm{H}}\boldsymbol{W}^{-1}{\boldsymbol{\hat{x}}}_{\ell}^{[i]}\left(\!{\boldsymbol{\hat{x}}}_{\ell}^{[i]}\!\right)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) }{ \bigl(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}) \bigr)^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})}. \end{aligned}}} $$
The term \(\left ({\boldsymbol {\hat {x}}}_{\ell }^{[i]}\right)^{\mathrm {H}}\boldsymbol {W}^{-1} {\boldsymbol {\hat {x}}}_{\ell }^{[i]}\) in the right-hand side of (25) is constant with respect to θ ℓ . By dropping this constant term, we obtain
$$ L(\boldsymbol{\theta}_{\ell}) \propto - \frac{\bigl|(\boldsymbol{p}(\tau_{\ell},\nu_{\ell}))^{\mathrm{H}}\boldsymbol{W}^{-1}{\boldsymbol{\hat{x}}}_{\ell}^{[i]}\bigr|^{2}}{ (\boldsymbol{p}(\tau_{\ell},\nu_{\ell}))^{\mathrm{H}}\boldsymbol{W}^{-1}\boldsymbol{p}(\tau_{\ell},\nu_{\ell})}. $$
From (26), it is obvious that minimization of L(θ ℓ ) with respect to θ ℓ is equivalent to maximization of the objective function η(τ ℓ ,ν ℓ ) defined as shown in (16).
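A quick numerical sanity check of the closed form (14), using a random diagonal W and random vectors, confirms that perturbing the amplitude away from the closed-form value can only decrease the quadratic loglikelihood term in (18); all quantities in this sketch are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
p = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # stand-in for p(f; tau, nu)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # stand-in for x_hat
w = rng.uniform(0.5, 2.0, N)                                # diagonal of W

def loglik_term(alpha):
    """-(x - alpha p)^H W^{-1} (x - alpha p), the quadratic term in (18)."""
    resid = x - alpha * p
    return -np.real(np.vdot(resid, resid / w))

alpha_hat = np.vdot(p, x / w) / np.vdot(p, p / w)           # closed form (14)
for delta in (0.1, 0.1j, -0.05 + 0.02j):
    assert loglik_term(alpha_hat) >= loglik_term(alpha_hat + delta)
print("closed-form amplitude maximizes the quadratic loglikelihood term")
```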
J Andersen, T Rappaport, S Yoshida, Propagation measurements and models for wireless communications channels. IEEE Commun. Mag. 33(1), 42–49 (1995).
C Wang, X Cheng, D Laurenson, Vehicle-to-vehicle channel modeling and measurements: recent advances and future challenges. IEEE Commun. Mag. 47(11), 96–103 (2009).
P Kyösti, J Meinilä, L Hentilä, X Zhao, T Jämsä, C Schneider, M Narandzić, M Milojević, A Hong, J Ylitalo, V-M Holappa, M Alatossava, R Bultitude, Y de Jong, M Rautiainen, WINNER II Channel Models D1.1.2 V1.1. European Commission, Deliverable IST-4-027756 WINNER II D1.1.2 V1.1, IST WINNER II Project, 2007.
International Telecommunication Union, Guidelines for evaluation of radio interface technologies for IMT-Advanced (12/2009), ITU-R M.2135-1 Std, 2009.
L Liu, C Oestges, J Poutanen, K Haneda, P Vainikainen, F Quitin, F Tufvesson, P Doncker, The COST 2100 MIMO channel model. IEEE Trans. Wirel. Commun. 19(6), 92–99 (2012).
J Tommi, K Pekka, K Katsutoshi, Deliverable D1.2 Initial channel models based on measurements. Project Name: Scenarios, requirements and KPIs for 5G mobile and wireless system (METIS). Document Number: ICT-317669-METIS/D1.2, 2014. https://www.metis2020.com/documents/deliverables/.
R Pirkl, G Durgin, Optimal sliding correlator channel sounder design. IEEE Trans. Wirel. Commun. 7(9), 3488–3497 (2008).
R Pirkl, G Durgin, Revisiting the spread spectrum sliding correlator: why filtering matters. IEEE Trans. Wirel. Commun. 8(7), 3454–3457 (2009).
G Dyer, T Gilbert, S Henriksen, E Sayadian, in Antennas and Propagation Society International Symposium, 1998,4. Mobile propagation measurements using CW and sliding correlator techniques (IEEE, Piscataway, 1998), pp. 1896–1899.
S Guillouard, G El-Zein, J Citerne, in Microwave Conference, 1998. 28th European,2. High time domain resolution indoor channel sounder for the 60 ghz band (IEEE, Piscataway, 1998), pp. 341–344.
H Xu, V Kukshya, T Rappaport, Spatial and temporal characteristics of 60-GHz indoor channels. IEEE J. Selected Areas Commun. 20(3), 620–630 (2002).
T Rappaport, F Gutierrez, E Ben-Dor, J Murdock, Y Qiao, J Tamir, Broadband millimeter-wave propagation measurements and models using adaptive-beam antennas for outdoor urban cellular communications. IEEE Trans. Antennas Propag. 61(4), 1850–1859 (2013).
X Yin, Y He, Z Song, M-D Kim, HK Chung, in Proceedings of the Eighth European Conference on Antenna and Propagation, Hague, Netherland,1. A sliding-correlator-based sage algorithm for mm-wave wideband channel parameter estimation (IEEE, Piscataway, 2014), pp. 708–713.
G Martin, in Vehicular Technology Conference Proceedings. VTC 2000-Spring Tokyo. 2000, IEEE 51st,3. Wideband channel sounding dynamic range using a sliding correlator (IEEE, Piscataway, 2000), pp. 2517–2521.
JA Fessler, AO Hero, Space-alternating generalized expectation-maximization algorithm. IEEE Trans. Signal Process. 42(10), 2664–2677 (1994).
BH Fleury, M Tschudin Heddergott, D Dahlhaus, KL Pedersen, Channel parameter estimation in mobile radio environments using the SAGE algorithm. IEEE J. Selected Areas Commun. 17(3), 434–450 (1999).
M Bartlett, Smoothing periodograms from time series with continuous spectra. Nat. 161, 686–687 (1948).
X Yin, BH Fleury, P Jourdan, A Stucki, in Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Beijing, China. Polarization estimation of individual propagation paths using the SAGE algorithm (IEEE, Piscataway, 2003).
H Akaike, A new look at the statistical model identification. IEEE Trans. Autom. Control. AC-19(6), 716–723 (1974).
Q Zuo, X Yin, J Zhou, B-J Kwak, HK Chung, in Antennas and Propagation (EUCAP), Proceedings of the 5th European Conference on. Implementation of golden section search method in sage algorithm (IEEE, Piscataway, 2011), pp. 2028 –2032.
A Richter, M Landmann, RS Thomä, in Proceedings of the 57th IEEE Semiannual Vehicular Technology Conference (VTC), 2. Maximum likelihood channel parameter estimation from multidimensional channel sounding measurements (IEEE, Piscataway, 2003), pp. 1056–1060.
This work was supported by Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) [B0101-15-222, Development of core technologies to improve spectral efficiency for mobile big-bang], the general project of national Natural Science Foundation of China (NSFC) (Grant No. 61471268), the national NSFC key program (Grant No. 61331009), and the international cooperation project "System design and demo-construction for cooperative networks of high-efficiency 4G wireless communications in urban hot-spot environments" granted by the Science and Technology Commission of Shanghai Municipality, China.
College of Electronics and Information Engineering, Tongji University, 4800 Cao An Road, Shanghai, China
Xuefeng Yin
& Cen Ling
Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea
Myung-Don Kim
& Hyun Kyu Chung
Correspondence to Xuefeng Yin.
XY carried out the generic studies, proposed the algorithm, prepared the simulation results, and drafted the manuscript. CL conducted simulations and plotted the figures used in the manuscript. M-DK and HKC conceived of the study and participated in drafting the manuscript. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Yin, X., Ling, C., Kim, M. et al. Parameter estimation using the sliding-correlator's output for wideband propagation channels. J Wireless Com Network 2015, 165 (2015) doi:10.1186/s13638-015-0400-8
Millimeter-wave propagation channel
Maximum-likelihood estimation
High-resolution parameter estimation
Sliding correlator
Low-pass filtering
Multipath
5G Wireless Mobile Technologies
Methodology article
Efficient inference of homologs in large eukaryotic pan-proteomes
Siavash Sheikhizadeh Anari ORCID: orcid.org/0000-0002-6159-57441,
Dick de Ridder1,
M. Eric Schranz2 &
Sandra Smit1
BMC Bioinformatics volume 19, Article number: 340 (2018)
Identification of homologous genes is fundamental to comparative genomics, functional genomics and phylogenomics. Extensive public homology databases are of great value for investigating homology but need to be continually updated to incorporate new sequences. As new sequences are rapidly being generated, there is a need for efficient standalone tools to detect homologs in novel data.
To address this, we present a fast method for detecting homology groups across a large number of individuals and/or species. We adopted a k-mer based approach which considerably reduces the number of pairwise protein alignments without sacrificing sensitivity. We demonstrate accuracy, scalability, efficiency and applicability of the presented method for detecting homology in large proteomes of bacteria, fungi, plants and Metazoa.
We clearly observed the trade-off between recall and precision in our homology inference. Favoring recall or precision strongly depends on the application. The clustering behavior of our program can be optimized for particular applications by altering a few key parameters. The program is available for public use at https://github.com/sheikhizadeh/pantools as an extension to our pan-genomic analysis tool, PanTools.
Detection of homologous genes (genes that share evolutionary ancestry) is fundamental to comparative genomics, functional genomics and phylogenomics. Homologs inherited from a single gene in the last common ancestor of two species are called orthologs, while those inherited from distinct duplicated genes are called paralogs [1]. Orthologs are usually under selection pressure, which conserves their sequence, structure and function; while paralogs can diverge rapidly and lose their previous functions or achieve completely or partially new functions [2].
With increasing evolutionary distance and/or increasing dataset sizes, there will be greater sets of gene and genome changes that can complicate orthology inference [3]. Whole-genome and segmental duplications increase genomic content, local and structural mutations lead to gene losses and gains, and horizontal gene transfers mix genomic content between species. As a result, orthology detection is increasingly difficult in higher organisms and across large evolutionary distances.
In the presence of gene duplications, orthology is not always a one-to-one relationship but rather can be a one-to-many or even many-to-many relationship [4]. As a consequence, an orthology group may contain not only orthologous pairs, but also pairs of homologs duplicated after the speciation of the two species, so-called in-paralogs. In the rest of this text we therefore use the term homology group instead of orthology group to be more precise.
To date, several databases of homology groups have been established, which need to be continually updated to incorporate new genomes [5,6,7,8]. As genomic projects are generating novel data at an unprecedented scale, the analysis of new data means that researchers have to automate the process of inferring homology in their large gene sets. Consequently, in parallel to the static databases there has been a development of standalone tools for automatic detection of homologs [9,10,11]. Accurate homology detection tools rely on all-pairs comparison of proteins. However, calculating all-pair similarity scores quickly becomes a major computational burden as the number of proteomes increases. As the number of eukaryotic proteomes keeps expanding in the coming years, there is a need for even more efficient homology detection methods.
Here, we present an efficient graph-based approach towards homology detection. This method extends the functionality of our pan-genomic data analysis tool, PanTools [12], which integrates genomes, annotations and proteomes in a single graph database to facilitate comparative studies at the levels of structure, variation and function [13]. The motivation of this study was to detect homology groups de novo and efficiently, in large datasets of hundreds of eukaryotic genomes. The presented method scales to large proteome sets while maintaining its accuracy and can be tuned for different application scenarios.
We represent a pan-genome by a hierarchy of genome, annotation and proteome layers stored in a Neo4j graph database to connect different types of data (Fig. 1). The genome layer consists of pan-genome, genome, sequence and nucleotide nodes which contain some essential information about these entities. Nucleotide nodes form the generalized de Bruijn graph [12] which enables the compression and reconstruction of the constituent genomes. The annotation layer, currently, consists of the genomic features like genes, mRNAs, etc. Finally, the proteome layer of the pan-genome is formed by proteins and homology nodes which group the homologous proteins.
Integrating genomic data in a hierarchical pan-genome. The Neo4j graph data model allows to store different types of data in the nodes and edges of a graph
Before homology detection, first the protein nodes should be stored in a pan-genome graph. Instructions for constructing a pan-genome can be found in the Additional file 1. Having the proteins available in the proteome layer of the pan-genome, we take the steps described in Algorithm 1 to cluster them in homology groups.
First, we extract the hexamers of all proteins and, for each hexamer, keep track of the proteins containing that hexamer (lines 1–4). Then, we find all pairs of intersecting proteins (lines 5–10) and calculate their similarity score by aligning them. Two proteins intersect (Fig. 2a-b) if the number of hexamers they share is greater than the product of the intersection parameter (I) and the total number of hexamers of the shorter protein. We connect the intersecting proteins with a similarity score greater than the similarity threshold T (lines 11–15) to form the similarity graph (Fig. 2c). For reasons of efficiency, we have implemented this as three parallel routines A-C, in which B consumes the output of A and C the output of B. A and C employ one working thread and B multiple threads to maximize performance. Next, all the connectivity components of the resulting similarity graph are found using a simple breadth-first search (lines 16–18). This search allows to detect not only the directly connected proteins but also those connected through a path in the graph, the potential distant homologs. Every similarity component is then passed to the MCL (Markov clustering) algorithm [14] to be possibly broken into several homology groups (lines 19–24) (Fig. 2d). MCL has been frequently employed in homology inference methods [11, 15, 16]. Finally, the members of each homology group are connected to a single homology node in the graph (lines 25–27).
a An example of two intersecting proteins, P1 and P2, which share some hexamers. b The intersection graph is built from intersecting pairs of proteins. c The similarity graph consists of similarity components. Each bold edge represents a similarity score greater than the threshold (T). d Homology groups are detected in each similarity component by MCL
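For illustration, the intersection and graph-construction steps of Algorithm 1 can be sketched in Python as follows. This is a simplified in-memory version: the Neo4j storage, the hexamer frequency filtering and the MCL refinement are omitted, `similarity_score` stands for the normalized alignment score defined in the next subsection, and the parameter defaults are placeholders rather than the values used in PanTools.

```python
from collections import defaultdict, deque
from itertools import combinations

K = 6  # peptide k-mer length (hexamers)

def kmers(protein):
    """Set of hexamers occurring in a protein sequence."""
    return {protein[i:i + K] for i in range(len(protein) - K + 1)}

def similarity_components(proteins, similarity_score, I=0.05, T=50):
    """proteins: dict {protein_id: sequence}.  Returns the connected
    components of the similarity graph (candidate homology groups,
    before the MCL refinement step)."""
    hexamer_sets = {p: kmers(seq) for p, seq in proteins.items()}
    # index: hexamer -> proteins containing it (Algorithm 1, lines 1-4)
    index = defaultdict(set)
    for p, hs in hexamer_sets.items():
        for h in hs:
            index[h].add(p)
    # count the number of distinct hexamers shared by each pair of proteins
    shared = defaultdict(int)
    for members in index.values():
        for a, b in combinations(sorted(members), 2):
            shared[(a, b)] += 1
    # intersection test and similarity graph construction (lines 5-15)
    graph = defaultdict(set)
    for (a, b), n_shared in shared.items():
        shorter = min(len(hexamer_sets[a]), len(hexamer_sets[b]))
        if n_shared > I * shorter and similarity_score(proteins[a], proteins[b]) > T:
            graph[a].add(b)
            graph[b].add(a)
    # breadth-first search for similarity components (lines 16-18)
    components, seen = [], set()
    for start in proteins:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            p = queue.popleft()
            for q in graph[p]:
                if q not in comp:
                    comp.add(q)
                    queue.append(q)
        seen |= comp
        components.append(comp)
    return components
```

Each returned component corresponds to one candidate group that would then be passed to MCL for possible splitting.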
Normalizing the raw similarity scores
We compare intersecting pairs of proteins by a Smith-Waterman local alignment algorithm with an affined gap penalty (opening = − 10, extension = − 1) using the BLOSUM62 (Blocks Substitution Matrix 62) scoring matrix. After calculating the raw similarity scores, we normalize them to be independent of the protein lengths. To this end we divide each raw score by the score achieved by aligning the shorter protein to itself and multiply the result by 100; this way, the normalized similarity scores will always be less than or equal to 100. For the sake of simplicity, we use the term similarity score to refer to the normalized similarity score between pairs of proteins.
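As a sketch of how such a normalized score could be computed, the snippet below uses Biopython's pairwise aligner as a stand-in for the aligner implemented in PanTools; the library choice is an assumption made purely for illustration.

```python
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.mode = "local"                          # Smith-Waterman local alignment
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10                    # affine gap penalty: opening
aligner.extend_gap_score = -1                   # affine gap penalty: extension

def normalized_similarity(seq_a, seq_b):
    """Raw local-alignment score scaled by the self-alignment score of the
    shorter protein, so the result is always <= 100."""
    shorter = seq_a if len(seq_a) <= len(seq_b) else seq_b
    self_score = aligner.score(shorter, shorter)
    return 100.0 * aligner.score(seq_a, seq_b) / self_score
```

A function of this form could serve as the `similarity_score` callable in the clustering sketch above.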
Rescaling the similarity scores
The pairwise similarity scores of highly similar homologs, which usually lie in the same similarity component, are very close to each other. This makes it very hard for MCL to detect the underlying substructures in such similarity components. To resolve this problem, we rescale the similarity scores in three different ways (Algorithm 1, line 22). First, we subtract the value T from these scores to emphasize small differences for the MCL process.
Furthermore, we would like the clustering to be relatively insensitive to evolutionary sequence divergence. That is, within a similarity component pairs of homologs from two distant species should be ideally scored nearly as high as pairs from two closely related species. To achieve this, in each similarity component we calculate the average distance between each pair of species as 100 minus the average inter-species similarity score and add it to all the similarity scores between those species within the similarity component.
Finally, to increase the contrast between the final similarity scores, before the similarity component is passed to the MCL algorithm, we raise the scores to the power of C, the contrast parameter. This operation is similar to one round of expansion as explained in [14] and was experimentally observed to increase the specificity of the resulting clusters.
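A minimal sketch of these three rescaling steps is given below, assuming the scores of one similarity component are held in a dictionary keyed by protein pairs; the default values of T and C are placeholders, not the settings used in our experiments.

```python
import numpy as np

def rescale_scores(scores, species, T=50, C=2):
    """scores: dict {(i, j): normalized similarity} within one similarity
    component; species: dict {protein_id: species_id}.
    Returns the rescaled scores to be fed to MCL."""
    # 1) subtract the threshold to emphasize small score differences
    rescaled = {pair: s - T for pair, s in scores.items()}
    # 2) compensate for evolutionary divergence between species pairs
    by_species = defaultdict(list)
    for (i, j), s in scores.items():
        by_species[frozenset((species[i], species[j]))].append(s)
    for (i, j) in rescaled:
        if species[i] != species[j]:
            key = frozenset((species[i], species[j]))
            rescaled[(i, j)] += 100.0 - np.mean(by_species[key])
    # 3) raise to the power of the contrast parameter C before MCL
    return {pair: s ** C for pair, s in rescaled.items()}

from collections import defaultdict  # required import for the sketch above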
Choice of k
Short peptide k-mers may occur in many proteins. This raises the number of intersecting proteins which will be aligned, increasing the resource consumption of the program significantly. On the other hand, long k-mers are more specific and decrease the sensitivity of the program in detecting the intersecting pair of proteins, thereby reducing the recall. As a result, we calculate the smallest k value which keeps the probability of random occurrences of a k-mer below a desirable probability p. For peptide sequences, size of the alphabet α = 20, and considering L = 30,000 the length of the largest known protein [17] and setting p = 0.001, the smallest suitable k will be 6 (see Additional file 1). Therefore, we chose to use hexamers for detecting the intersections.
To reduce the memory needs of the program and increase the specificity of the intersections, we ignore extremely abundant hexamers (for example "QQQQQQ" in the yeast datasets), whose frequency exceeds p × n + c × m, where p = 0.001, n is the total number of proteins, c = 50 is an a priori estimate of the maximum number of occurrences of a hexamer in the proteome of a species, and m is the total number of species (proteomes). Likewise, hexamers with frequency 1 are considered rare and are thereby ignored. This filtering notably improves the efficiency and the precision of the method.
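The choice of k and the abundance threshold can be reproduced with a short calculation. The expression L/α^k used below is a simplified approximation of the random-occurrence probability; the exact derivation is given in Additional file 1.

```python
import math

alpha = 20      # amino-acid alphabet size
L = 30_000      # length of the largest known protein
p = 0.001       # acceptable probability of a random k-mer occurrence

# smallest k with (approximately) L / alpha**k <= p
k = math.ceil(math.log(L / p) / math.log(alpha))   # -> 6, hence hexamers

def abundance_threshold(n_proteins, n_species, c=50, p=0.001):
    """Hexamers occurring more often than this are ignored as uninformative."""
    return p * n_proteins + c * n_species
```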
Measures of accuracy for evaluation
To evaluate the accuracy of the method, we used the recall, precision and F-score measures as defined previously [16, 18] (Fig. 3). Given a set of real and detected homology groups, for each true homology group, THG, we find the detected homology group, DHG, which has the largest overlap with the THG. Then we consider true positives (tp) as the number of proteins in both THG and DHG, false negatives (fn) as the number of proteins in THG but not in DHG, and false positives (fp) as the number of proteins available in DHG but not in THG. Then TP, FP and FN are defined as the summation of the tp's, fp's and fn's over all true homology groups, respectively. Finally, the recall, precision and F-score measures are calculated as follows:
$$ \mathrm{recall}=\mathrm{TP}/\left(\mathrm{TP}+\mathrm{FN}\right) $$
$$ \mathrm{precision}=\mathrm{TP}/\left(\mathrm{TP}+\mathrm{FP}\right) $$
$$ \mathrm{F}\hbox{-} \mathrm{score}=2\times \left(\mathrm{Recall}\times \mathrm{Precision}\right)/\left(\mathrm{Recall}+\mathrm{Precision}\right) $$
Proteins of three distinct homology groups are represented as triangles, circles and squares. Green shapes are true positives (tp) which have been assigned to the true group; red shapes are false positives (fp) for the group they have been incorrectly assigned to, and false negatives (fn) for their true group
Recall represents the ability of the method to put the true homologs together in one group, precision shows its ability to separate the non-homologs, and the F-score is the harmonic mean of these two measures combining them in one. There is always a trade-off between recall and precision, since detecting more TPs often leads to some FPs.
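These measures can be computed directly from the group assignments. The sketch below assumes the true and detected groups are given as sets of protein identifiers; it implements the per-group tp/fp/fn counting described above.

```python
def evaluate(true_groups, detected_groups):
    """true_groups, detected_groups: lists of sets of protein identifiers.
    Returns (recall, precision, F-score)."""
    TP = FP = FN = 0
    for thg in true_groups:
        # detected group with the largest overlap with this true group
        dhg = max(detected_groups, key=lambda g: len(g & thg))
        TP += len(thg & dhg)   # proteins in both THG and DHG
        FN += len(thg - dhg)   # proteins in THG but not in DHG
        FP += len(dhg - thg)   # proteins in DHG but not in THG
    recall = TP / (TP + FN)
    precision = TP / (TP + FP)
    f_score = 2 * recall * precision / (recall + precision)
    return recall, precision, f_score
```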
In the following experiments, we need to know the real groups in various datasets to serve as a ground truth for evaluation. For the S.cerevisiae datasets and the single E.coli dataset, the real groups are defined based on the locus tags of the proteins extracted from the GenBank files (Additional file 2). For A.thaliana datasets the real groups are defined based on the gene identifiers which end with .1, for example AT3G54340.1, which correspond to the first annotated isoform of the genes. For the single Metazoa dataset, we used the identifiers of the 70 protein families of OrthoBench as the real group identifiers.
Here, we present results demonstrating the accuracy, scalability, efficiency and applicability of PanTools for detecting homology in large proteomes of bacteria, fungi, plants and Metazoa (Additional file 1: Table S1). We compare PanTools to the BLAST-based orthology detector OrthoFinder [16] and to DIAMOND-based PanX [19], a pipeline dedicated to microbial data (Additional file 1: Tables S2–S5). First we evaluated the methods on OrthoBench [18], a public benchmark of curated protein families from 12 metazoans. Unfortunately, we were not able to run PanX on this data (M12), as this benchmark only provides the protein sequences but not the gene sequences PanX requires. Next, we tested scalability on 5 datasets of increasing size compiled from 93 Saccharomyces cerevisiae strains [20] and 5 datasets compiled from 19 Arabidopsis thaliana accessions [21]. Additionally, we compared the performance of PanTools and PanX on a large dataset of 600 Escherichia coli; we did not run OrthoFinder, as we estimated it would need ~ 5000 h on this dataset. Finally, we studied the effect of evolutionary distance on homology detection using 12 Brassicaceae species proteomes. Experiments were executed on an Ubuntu 14.04 server, Intel® Xeon® [email protected], with 64GB RAM using 16 processing cores and 32GB of RAM disk.
PanTools is adaptable to handle varying degrees of input divergence
PanTools has four main parameters that affect the homology clustering: intersection rate, similarity threshold, contrast and MCL inflation. To examine the general effect of these parameters on the accuracy of the method on proteomes of diverged species, we used the set of 1695 proteins from the OrthoBench. Figures 4 and 5 present contour plots illustrating the effect of these four parameters on the recall and precision of PanTools, respectively; lighter colors represent higher values.
The effect of intersection rate, similarity threshold, contrast and inflation rate, on the recall of PanTools. Each contour plot belongs to a pair of intersection and threshold values, with the x and y axis representing inflation and contrast parameters
The effect of intersection rate, similarity threshold, contrast and inflation rate, on the precision of PanTools. Each contour plot belongs to a pair of intersection and threshold values, with the x and y axis representing inflation and contrast parameters
The first parameter, intersection rate (I) (in the range of 0.01–0.1), determines the minimum number of hexamers that two proteins need to have in common to be considered intersecting proteins in order to be selected for exact alignment. This number is calculated as the product of the intersection rate and the total number of hexamers of the shorter protein. In general, by choosing lower intersection values the number of pairwise alignments and, in turn, the resource consumption of the program increases, significantly. The lower the intersection value, the higher the recall and the lower the precision.
The second parameter affecting the clustering is the similarity threshold (T) (in the range of 25–99). Two proteins are considered similar if the normalized similarity score of their local alignment exceeds this threshold. Lower thresholds increase the number of detected similarities, boosting the sensitivity of the homology detection. So, the lower the threshold, the higher the recall, but the lower the precision.
The connectivity components of the similarity graph (similarity components) are the candidate homology groups which are then passed to the MCL clustering algorithm to be possibly split into more specific homology groups. To increase the granularity of the clustering and split the similarity components into a larger number of groups, we choose greater MCL inflations (M). Finally, we raise the scores to the power of the contrast parameter (C) to increase the contrast between the final similarity scores. Like for I and T, the lower the inflation and/or contrast, the higher the recall and the lower the precision.
The resulting F-scores (Additional file 1: Figure S1) suggest that higher values of the four parameters are not desirable for grouping the proteome of these distant species. In support of this, we observed that increasing the parameter values improves the F-score of the method when analyzing the proteomes of closely related species.
Based on these observations, we experimentally optimized 8 groups of default parameter settings (d1-d8), ranging from strict to relaxed by linearly decreasing the 4 mentioned parameters (Additional file 1: Table S6). This allows the user to fine-tune the settings for different types of datasets and/or downstream applications. We recommend users to either use Table S6 to choose appropriate settings based on the divergence of the proteomes or try multiple settings and pick one based on the desired resolution from one-to-one orthologs to multi-gene families. In our experiments, we used the most strict setting (d1) for the closely related strains of E.coli and S.cerevisiae, the next strict setting (d2) for A.thaliana datasets, and the most relaxed setting (d8) for the OrthoBench data.
PanTools is efficient and accurate on OrthoBench data
OrthoBench is a resource of 70 curated eukaryotic protein families from 12 metazoans which was established to assess the performance of TreeFam [22], eggNOG [23], OrthoDB [24], OrthoMCL [25], and OMA [26]. We call this benchmark M12 in the rest of this paper. The homology relationships between these protein families are difficult to detect due to differences in their rate of evolution, domain architecture, low-complexity regions/repeats, lineage-specific losses/duplications, and alignment quality [18].
We compared the performance of PanTools to that of OrthoFinder, which previously showed the highest accuracy on this benchmark data. We first created a mapping from the 1695 OrthoBench proteins to the 404,657 proteins of the 12 metazoans available in Ensembl release 60. We then ran PanTools and OrthoFinder independently on these 12 complete proteomes and calculated the recall, precision and F-score using the same procedure as proposed for OrthoFinder. In this experiment, PanTools achieved the same recall as OrthoFinder but at a remarkably higher precision, resulting in a 3% higher overall F-score of 85.5%. Additionally, there were significant differences in run-times. Running on 16 cores, PanTools terminated after 2 h and OrthoFinder after 77.6 h.
PanTools scales to large eukaryotic datasets and maintains accuracy
To examine the scalability of our method to large eukaryotic datasets, we first ran it on 5 datasets of Saccharomyces cerevisiae (Y3, Y13, …, Y93) and on 5 datasets of Arabidopsis thaliana accessions (A3, A7, …, A19) with an increasing number of proteomes. We compared the run-time and accuracy (F-score) of PanTools to those of OrthoFinder and PanX (Fig. 6).
a The run-time and b the F-score of the three methods on the 5 S. cerevisiae datasets. c The run-time and d the F-score of the three methods on the 5 A. thaliana datasets
On the largest yeast dataset (Y93), PanTools was 112 times faster than OrthoFinder (0.9 h vs. 4 days) and 7.6 times faster than PanX, with a slightly higher F-score. Similarly, on the largest Arabidopsis dataset (19 accessions), PanTools was 42 times faster (1 h vs. 2.7 days) than OrthoFinder and 5.2 times faster than PanX while maintaining its higher F-score. Overall, OrthoFinder starts with a low accuracy but seems to level out at a higher value as the number of proteomes grows, albeit at the cost of drastic increase in run-time. Although PanX was almost as accurate as OrthoFinder on the S.cerevisiae data, its accuracy fell below that of OrthoFinder on the A. thaliana data, likely because plants have more diverse proteomes than the bacteria PanX was designed for.
PanTools is applicable to large microbial datasets
To compare the performance of our approach to PanX, a recently published tool dedicated to the microbial data, we applied both tools to the proteomes of 600 E.coli strains downloaded from GenBank (Additional file 2). Both PanX and PanTools processed this large dataset in ~ 15 h, resulting in F-scores of 71.6 and 72.9, respectively. In this experiment, we ran PanX in divide-and-conquer mode to speed it up.
PanTools significantly reduces the number of pairwise comparisons
The efficiency of PanTools is due to its k-mer-based approach, which significantly reduces the number of fruitless protein alignments. Table 1 shows that the numbers of pairwise comparisons in different experiments are thousands-fold less than what is needed in a naïve all-pairs approach.
Table 1 The number of PanTools comparisons compared to a naïve all-pairs approach
To scale to hundreds of eukaryotic or thousands of prokaryotic proteomes using reasonable amount of resources, there were two main limitations to be resolved: first, the local sequence alignment of proteins, which we tried to mitigate by distributing the intersecting pairs among multiple threads to be aligned in parallel; second, the size of the data structure used for detecting the intersecting proteins, which grows linearly with the size of the input data. To reduce the memory needs, currently we ignore extremely abundant and rare hexamers, which are less informative. By using space-efficient data structures, for example MinHash sketches [27], we may be able to further decrease the memory consumption of the program.
PanTools reproduces the majority of groups detected by other tools
In all experiments, PanTools was able to perfectly reproduce the majority of the groups detected by OrthoFinder and PanX. Table 2 shows the percentage of the groups generated by OrthoFinder and PanX which have an identical counterpart in the PanTools groups. Generally, the overlap decreases as the size of data grows, because the probability of having exactly identical groups drops, although the corresponding groups have highly similar compositions.
Table 2 The percentage of OrthoFinder and PanX groups that PanTools reproduces
Parameters can affect the performance of different application scenarios
To investigate the effect of the 8 suggested parameter sets (from strict to relaxed) on homology clustering, we used a large proteome of 12 phylogenetically diverse Brassicaceae species, including the model plant Arabidopsis thaliana, plus Vitis vinifera as an outgroup. We specifically considered four genes with different copy numbers in A.thaliana, including three MADS-box genes – the floral homeotic protein APETALA 3 (AP3), the floral homeotic protein AGAMOUS (AG) and the flowering locus C (FLC) – and one housekeeping gene: the ubiquitin extension protein 1 (UBQ1), and looked into the composition of their homology groups detected by PanTools using the 8 parameter settings from strictest (d1) to the most relaxed (d8). Each column of Table 3 represents a homology group and each entry reflects the count of homologs of the genes AP3, AG, FLC and UBQ1 from different species in that group.
Table 3 Counts of the homologs of 4 genes from Brassicaceae species in each homology group
With all settings, we detected a single AP3 homolog in Arabidopsis, which indicates that this MADS-box gene is significantly differentiated from other MADS-box genes. We also found unique orthologues for most of the other species.
We detected a single ortholog of AG until d5, after which we also identify its ancient paralogs Shatterproof 1 and 2 (SHP1/2). The duplication that gave rise to the split between AG and SHP1/2 is quite old (the gamma triplication shared by most eudicot species). At d6 we also detect STK which comes from an even earlier duplication (perhaps at the origin of angiosperms) [28]. At d7 and d8 we identify many of the various MADS-box genes across different lineages.
FLC is alone until d4, where the transposition duplicate MAF1 (but not yet members of the MAF2–5 clade) is added. Then MAF2–5 members derived from the At-alpha WGD (whole genome duplication) from FLC come up, followed by inclusion of the tandem expansion of these genes. At subsequent settings, we start picking up other MADS-box genes.
UBQ1 is a house-keeping gene that was duplicated by the ancient whole genome duplication (WGD) At-alpha shared across the Brassicaceae (PGDD database) [29]. Our method recovered both the ortholog and its in-paralog (UBQ2) even using the strictest setting (d1), meaning that these genes are very similar despite having diverged around 40 mya. Thus, the function of the two genes is likely highly conserved. From d5 on, PanTools identifies other, more distantly related homologs and ultimately (d8) all members of the larger family (UBQ1-UBQ14) plus a few related genes.
Table 4 shows the distribution of the normalized similarity scores in each of the detected homology groups. It is clear that more relaxed settings allow including more diverse pairs of homologs, which are less similar in the final clusters.
Table 4 Minimum, maximum and average of normalized similarity scores in the homology group of 4 genes using 8 different settings
We presented an efficient method for detecting homology groups across a large number of individuals and/or species. To make homology detection efficient we adopted a k-mer-based approach, which substantially reduces the number of pairwise comparisons. Specifically, we first count the number of peptide hexamers two proteins share, and only if this number is high enough, we perform a local alignment of the so-called intersecting proteins to calculate their exact similarity score.
We clearly observed a trade-off between recall and precision of the homology inference. Favoring recall or precision strongly depends on the application [30]. In a phylogenetic study one may specifically be interested in identifying precise one-to-one orthologs, while others may want to capture a complete protein family to achieve insights into gene-duplication events across species. The four defined parameters (and the 8 default settings) give users the flexibility to control the program's behavior. It is important to note that different types of genes may be under different selection pressures and constraints and have different evolutionary dynamics. Thus, the optimal parameter setting will depend both on the specific gene and on the desired application, as demonstrated by the four genes across the Brassicaceae.
As we store the homology groups in the pan-genome, it is possible to query the pan-genome graph database for statistics on, for example, the size of the homology groups, the copy number of the genes and the conservation rate of the proteins in different groups. In the future, we will extend PanTools with additional functionality to exploit this pan-genome database for comparative genomics on large collections of complex genomes.
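As an illustration of the kind of statistics meant here, the sketch below computes group sizes and per-species copy numbers from in-memory groups; in PanTools itself these figures would be obtained by querying the Neo4j pan-genome database rather than by the illustrative function shown.

```python
from collections import Counter

def group_statistics(groups, species_of):
    """groups: list of sets of protein ids; species_of: {protein_id: species}.
    Yields, per homology group, its size and the per-species copy number."""
    for idx, group in enumerate(groups):
        copies = Counter(species_of[p] for p in group)
        yield {
            "group": idx,
            "size": len(group),
            "n_species": len(copies),
            "copy_number": dict(copies),
        }
```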
Koonin EV. Orthologs, paralogs, and evolutionary genomics. Annu Rev Genet. 2005;39:309–38.
Zhu J, Vinothkumar KR, Hirst J. Structure of mammalian respiratory complex I. Nature. 2016;536(7616):354–8.
Tekaia F. Inferring orthologs: open questions and perspectives. Genomics Insights. 2016;9:17–28.
Tatusov RL. A genomic perspective on protein families. Science. 1997;278(5338):631–7.
Powell S, Forslund K, Szklarczyk D, Trachana K, Roth A, Huerta-Cepas J, et al. EggNOG v4.0: nested orthology inference across 3686 organisms. Nucleic Acids Res. 2014;42:D231–9.
Zdobnov EM, Tegenfeldt F, Kuznetsov D, Waterhouse RM, Simão FA, Ioannidis P, et al. OrthoDB v9.1: cataloging evolutionary and functional annotations for animal, fungal, plant, archaeal, bacterial and viral orthologs. Nucleic Acids Res. 2016;45:D744–9.
Huerta-Cepas J, Capella-Gutiérrez S, Pryszcz LP, Marcet-Houben M, Gabaldón T. PhylomeDB v4: zooming into the plurality of evolutionary histories of a genome. Nucleic Acids Res. 2014;42:D897–902.
Li H. TreeFam: a curated database of phylogenetic trees of animal gene families. Nucleic Acids Res. 2006;34(90001):D572–80.
Remm M, Storm CEV, Sonnhammer ELL. Automatic clustering of orthologs and in-paralogs from pairwise species comparisons. J Mol Biol. 2001;314(5):1041–52.
Roth AC, Gonnet GH, Dessimoz C. Algorithm of OMA for large-scale orthology inference. BMC Bioinformatics. 2008;9(1):518.
Li L, Stoeckert CJ, Roos DS. OrthoMCL: identification of ortholog groups for eukaryotic genomes. Genome Res. 2003;13(9):2178–89.
Sheikhizadeh S, Schranz ME, Akdel M, de Ridder D, Smit S. PanTools: representation, storage and exploration of pan-genomic data. Bioinformatics. 2016;32(17):i487–93.
Marschall T, Marz M, Abeel T, Dijkstra L, Dutilh BE, Ghaffaari A, et al. Computational pan-genomics: status, promises and challenges. Brief Bioinform. 2018;19(1):118–35.
Enright AJ, Van Dongen S, Ouzounis CA. An efficient algorithm for large-scale detection of protein families. Nucleic Acids Res 2002;30(7):1575–1584.
Wang R, Liu G, Wang C, Su L, Sun L. Predicting overlapping protein complexes based on core-attachment and a local modularity structure. BMC Bioinformatics. 2018;19:305.
Emms DM, Kelly S. OrthoFinder: solving fundamental biases in whole genome comparisons dramatically improves orthogroup inference accuracy. Genome Biol. 2015;16(1):157.
Opitz CA, Kulke M, Leake MC, Neagoe C, Hinssen H, Hajjar RJ, et al. Damped elastic recoil of the titin spring in myofibrils of human myocardium. Proc Natl Acad Sci U S A. 2003;100(22):12688–93.
Trachana K, Larsson TA, Powell S, Chen W-H, Doerks T, Muller J, et al. Orthology prediction methods: a quality assessment using curated protein families. BioEssays. 2011;33(10):769–80.
Ding W, Baumdicker F, Neher RA. panX: pan-genome analysis and exploration. Nucleic Acids Res. 2017;46(1):e5.
Strope PK, Skelly DA, Kozmin SG, Mahadevan G, Stone EA, Magwene PM, et al. The 100-genomes strains, an S. cerevisiae resource that illuminates its natural phenotypic and genotypic variation and emergence as an opportunistic pathogen. Genome Res. 2015;125(5):762–74.
Gan X, Stegle O, Behr J, Steffen JG, Drewe P, Hildebrand KL, et al. Multiple reference genomes and transcriptomes for Arabidopsis thaliana. Nature. 2011;477(7365):419–23.
Ruan J, Li H, Chen Z, Coghlan A, Coin LJM, Guo Y, et al. TreeFam: 2008 update. Nucleic Acids Res. 2008;36:D735–40.
Muller J, Szklarczyk D, Julien P, Letunic I, Roth A, Kuhn M, et al. eggNOG v2.0: extending the evolutionary genealogy of genes with enhanced non-supervised orthologous groups, species and functional annotations. Nucleic Acids Res. 2009;38:D190–5.
Waterhouse RM, Zdobnov EM, Tegenfeldt F, Li J, Kriventseva EV. OrthoDB: the hierarchical catalog of eukaryotic orthologs in 2011. Nucleic Acids Res. 2011;39:D283–8.
Chen F. OrthoMCL-DB: querying a comprehensive multi-species collection of ortholog groups. Nucleic Acids Res. 2006;34(90001):D363–8.
Altenhoff AM, Schneider A, Gonnet GH, Dessimoz C. OMA 2011: Orthology inference among 1000 complete genomes. Nucleic Acids Res. 2011;39:D289–94.
Ondov BD, Treangen TJ, Melsted P, Mallonee AB, Bergman NH, Koren S, et al. Mash: fast genome and metagenome distance estimation using MinHash. Genome Biol. 2016;17(1):132.
Cheng S, van den Bergh E, Zeng P, Zhong X, Xu J, Liu X, et al. The Tarenaya hassleriana genome provides insight into reproductive trait and genome evolution of crucifers. Plant Cell. 2013;25(8):2813–30.
Lee TH, Tang H, Wang X, Paterson AH. PGDD: a database of gene and genome duplication in plants. Nucleic Acids Res. 2013;41:D1152–8.
Altenhoff AM, Boeckmann B, Capella-Gutierrez S, Dalquen DA, DeLuca T, Forslund K, et al. Standardized benchmarking in the quest for orthologs. Nat Methods. 2016;13(5):425–30.
This work has been published as a part of the research project called Pan-genomics for crops funded by the Graduate School Experimental Plant Sciences (EPS) in the Netherlands.
The datasets generated and/or analyzed during the current study are available at: http://www.bioinformatics.nl/pangenomics.
Bioinformatics Group, Wageningen University, Wageningen, The Netherlands
Siavash Sheikhizadeh Anari, Dick de Ridder & Sandra Smit
Biosystematics Group, Wageningen University, Wageningen, The Netherlands
M. Eric Schranz
Siavash Sheikhizadeh Anari
Dick de Ridder
Sandra Smit
SSh developed and implemented the method and performed the computational experiments and was a major contributor in writing the manuscript. DdR, ES and SSm as the supervisors contributed to the development, experimental design, and writing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Siavash Sheikhizadeh Anari.
Additional file 1: Supplementary methods, tables, and figures. (DOCX 1107 kb)
Additional file 2: Data description. (XLSX 52 kb)
Sheikhizadeh Anari, S., de Ridder, D., Schranz, M.E. et al. Efficient inference of homologs in large eukaryotic pan-proteomes. BMC Bioinformatics 19, 340 (2018). https://0-doi-org.brum.beds.ac.uk/10.1186/s12859-018-2362-4
Accepted: 09 September 2018
Pan-genome
Protein similarity
Homologous genes
Orthology
k-mer | CommonCrawl |
Subspace-diskcyclic sequences of linear operators
Article 8, Volume 08, Issue 1, Autumn 2017, Page 97-106
Document Type: Research Paper
DOI: 10.22130/scma.2017.23850
Mohammad Reza Azimi
Department of Mathematics, Faculty of Sciences, University of Maragheh, Maragheh, Iran.
A sequence $\{T_n\}_{n=1}^{\infty}$ of bounded linear operators on a separable infinite dimensional Hilbert space
$\mathcal{H}$ is called subspace-diskcyclic with respect to the closed subspace $M\subseteq \mathcal{H}$ if there exists a vector $x\in \mathcal{H}$ such that the disk-scaled orbit $\{\alpha T_n x: n\in \mathbb{N}, \alpha \in\mathbb{C}, | \alpha | \leq 1\}\cap M$ is dense in $M$. The goal of this paper is to study subspace-diskcyclic sequences of operators, in analogy with the well-known results for the single operator case. In the first section of this paper, we study some conditions that imply the diskcyclicity of $\{T_n\}_{n=1}^{\infty}$. In the second section, we survey some conditions and a subspace-diskcyclicity criterion (analogous to the results obtained by some authors in \cite{MR1111569, MR2261697, MR2720700}) which are sufficient for the sequence $\{T_n\}_{n=1}^{\infty}$ to be subspace-diskcyclic (subspace-hypercyclic).
Sequences of operators; Diskcyclic vectors; Subspace-diskcyclicity; Subspace-hypercyclicity
Main Subjects
Functional analysis and operator theory
[1] N. Bamerni, V. Kadets, and A. Kılıçman, On subspaces diskcyclicity, arXiv:1402.4682 [math.FA], 1-11.
[2] N. Bamerni, V. Kadets, A. Kılıçman, and M.S.M. Noorani, A review of some works in the theory of diskcyclic operators, Bull. Malays. Math. Sci. Soc., Vol. 39 (2016) 723-739.
[3] F. Bayart and É. Matheron, Dynamics of linear operators, Cambridge Tracts in Mathematics, Vol. 179, Cambridge University Press, Cambridge, 2009.
[4] L. Bernal-González and K.-G. Grosse-Erdmann, The hypercyclicity criterion for sequences of operators, Studia Math., Vol. 157 No. 1 (2003) 17-32.
[5] P.S. Bourdon, Invariant manifolds of hypercyclic vectors, Proc. Amer. Math. Soc., Vol. 118 No. 3 (1993) 845-847.
[6] G. Godefroy and J.H. Shapiro, Operators with dense, invariant, cyclic vector manifolds, J. Funct. Anal., Vol. 98 No. 2 (1991) 229-269.
[7] K-G. Grosse-Erdmann, Universal families and hypercyclic operators, Bull. Amer. Math. Soc., Vol. 36 No. 3 (1999) 345-381.
[8] R.R. Jiménez-Munguía, R.A. Martínez-Avendaño, and A. Peris, Some questions about subspace-hypercyclic operators, J. Math. Anal. Appl., Vol. 408 No. 1 (2013) 209-212.
[9] C. Kitai, Invariant closed sets for linear operators, ProQuest LLC, Ann Arbor, MI, Thesis (Ph.D.)–University of Toronto, Canada 1982.
[10] F. León-Saavedra and V. Müller, Hypercyclic sequences of operators, Studia Math., Vol. 175 No.1 (2006) 1-18.
[11] B.F. Madore and R.A. Martínez-Avendaño, Subspace hypercyclicity, J. Math. Anal. Appl., Vol. 373 No.2 (2011) 502-511.
[12] H. Petersson, A hypercyclicity criterion with applications, J. Math. Anal. Appl., Vol. 327 No. 2 (2007) 1431-1443.
[13] H. Rezaei, Notes on subspace-hypercyclic operators, J. Math. Anal. Appl., Vol. 397 No. 1 (2013) 428-433.
[14] Z.J. Zeana, Cyclic Phenomena of operators on Hilbert space, Thesis, University of Baghdad, 2002.
Feature visualization in comic artist classification using deep neural networks
Kim Young-Min ORCID: orcid.org/0000-0002-6914-901X1
Deep neural networks have become a standard framework for image analytics. Besides the traditional applications, such as object classification and detection, the latest studies have started to expand the scope of the applications to include artworks. However, popular art forms, such as comics, have been ignored in this trend. This study investigates visual features for comic classification using deep neural networks. An effective input format for comic classification is first defined, and a convolutional neural network is used to classify comic images into eight different artist categories. Using a publicly available dataset, the trained model obtains a mean F1 score of 84% for the classification. A feature visualization technique is also applied to the trained classifier, to verify the internal visual characteristics that succeed in classification. The experimental result shows that the visualized features are significantly different from those of general object classification. This work represents one of the first attempts to examine the visual characteristics of comics using feature visualization, in terms of comic author classification with deep neural networks.
Recent progress in computer vision has facilitated the scientific understanding of artistic visual features in artworks. Artistic style classification and style transfer are two notable examples of this type of analysis. The former aims to classify artworks into one of the predefined classes. The class type can represent the artist, genre, or painting style that effectively represents the aesthetic features of the artwork [1]. The latter aims to migrate a style from one image to another [2, 3]. This models a reference image's statistical features, which are then used to transform other images. This high-level understanding of visual features enables the effective retrieval, processing, and management of artworks. Both examples have been based on machine learning techniques in recent studies, and deep neural networks in particular. However, there is a noticeable limit in current applications, in that most existing approaches deal with fine arts. Popular art forms, such as comics, have been somewhat overlooked in this trend. Considering the present influence of popular art forms, investigating the distinguishing aspects of different types of popular artworks would be useful.
Comics is a medium expressed through juxtaposed pictorial and other images in a sequence, with the objective of delivering information or invoking an aesthetic response in the viewer [4]. This is globally a very popular medium, and is currently increasing in influence thanks to the development of online comics, namely webcomics or webtoons. Despite the popularity of this medium, not many works have investigated the artistic aspects of comics in computer vision. Several aspects have been studied, such as coloring comics automatically [5] or applying style transfer to comics [6]. Anime character creation [7] and avatar creation [8] are examples of other related domains. However, these works are limited in that they do not examine the characteristics that distinguish comics from fine art.
This study attempts to tackle the problem via comic-book page classification in terms of the artistic styles expressed in the pages. A convolutional neural network (CNN), which is a standard technique in image classification, is employed as the classifier. The visual features that facilitate the classification in a trained CNN model are investigated in detail. Feature visualization is a useful tool to interpret an image classifier in ways that humans can understand. At each neuron of a trained network, a feature visualization technique is performed to reveal the neuron's visual properties. Two different input formats, comic book page and comic panel, are tested in our approach. Each image is labeled as the artist who drew the comic book.
Deep neural networks, especially convolutional neural networks, have achieved considerable success in image analysis [9, 10] and other related applications [11, 12]. ResNet [13], which obtained the best result in the ImageNet large scale visual recognition challenge (ILSVRC) in 2015, even exceeds human-level recognition. While the ImageNet challenge aims to classify images into 1000 different object categories, the proposed model classifies the artwork images into fewer than 10 author categories. Therefore, a simple CNN architecture is sufficient for this work. Once the CNN classifier has been trained, the feature visualization technique presented in [14] is applied. In this approach, the pixels of a random noise image are updated by optimization to produce an image that represents each neuron.
The remainder of this paper is organized as follows. "Related works" section introduces recent studies on image classification using deep neural networks and artwork analysis. "Methods" section deals with the proposed deep neural network structure for comic classification, as well as the feature visualization using image optimization for the trained classifier. "Results and discussion" section presents the experimental results of the classification and feature visualization. Finally, the conclusions are presented in "Conclusions" section.
Image classification is a representative domain of deep neural network applications. Since AlexNet [15] won the ILSVRC with a top-5 error rate of 15.4% in 2012, CNNs have become the standard frameworks for image classification. While AlexNet had only eight layers, other variations have added layers or introduced new concepts to enhance the performance. VGG-16 [16] enhanced the classification performance by increasing the layers to 16 and slightly modifying the structure. GoogLeNet [14] introduced inception modules and reduced the classification error to 6.7%. One of the most recent networks, ResNet with "skip connections", produced a top-5 error rate of 3.6%. The latter has 152 layers, but the new structure rather reduced the computational complexity compared to the previous models. These networks have also been successfully applied to other different kinds of recognition tasks, such as object detection and face detection.
Deep neural networks can be applied to artwork classification. Most previous studies have aimed to find effective features to represent paintings well [17, 18]. Following the considerable success of deep learning for image classification, these techniques have been applied to the classification of art images. Firstly, CNN features have been added to the visual features describing art images and have enhanced the classification accuracy [19]. Instead of a CNN, a different classification method, such as a support vector machine, was used as the classifier in that work.
Secondly, CNN classifiers have been directly applied to the art images. Various class types, such as art genre, style, and artist, have been considered. The authors of [1, 20] attempted to classify fine-art images into 27 different art style categories. They employed the WikiArt dataset with 1000 different artists, and obtained better results than previous studies using traditional classifiers. There have also been some studies dealing with other types of visual art, such as photographs [21] or illustrations [22]. These have employed CNN classifiers to identify the authorship of input images.
Meanwhile, until a couple of years ago there were very few studies applying deep learning techniques to popular art forms such as comics. One main reason is the lack of data. Unlike fine arts, most comic books are protected by copyright. Therefore, it is difficult to construct and distribute a large-scale comic dataset. Since the new comic dataset Manga109 was distributed in 2017, many studies in image analytics have begun to refer to this dataset. Image super-resolution is one major research area employing the dataset [23,24,25].
Another major area involves different kinds of analytics for comics themselves. The authors of [26] introduced a new large-scale dataset of contemporary artwork including comic images. While general object recognition is applied in their work, the authors of [27] focused on comic object detection, where four different object types are detected. There are also studies on specialized networks for comic face detection [28, 29] or comic character detection [30].
A previous study [31] revealed well a fundamental difference in comic classification from fine art classification. The work did not involve deep learning but the design of computational features from comic line segments. The authors understood well that the characteristic drawing styles of comics come from lines. This property of comics would make a difference during the training of a classifier.
With the rapid development of deep learning in visual analysis, researchers have started to interpret the trained results in ways that humans can understand. One of the attempts toward this is feature visualization, which represents each neuron of a layer in a trained neural network using the weights. Using this method, the image that most activates each neuron, capturing the trained characteristics of that neuron, can be visualized. Feature visualization has been investigated since the early stages of neural-network-based image analysis, and recently various additional techniques have been proposed. One main direction of current research is to find the images activating each neuron the most [32, 33]. Another is to produce an activation vector which minimizes the difference between a real image and its representation from a neuron [34, 35]. This study employs the image optimization technique proposed in [14] to visualize the neurons in a trained comic classifier.
This study consists of two main parts. First, the CNN models are trained to classify comic images into different categories, which correspond to different authors. Two input image formats are individually tested, to determine the better input image form for comic classification. The classification performance is evaluated using a publicly available comic dataset. Second, the trained models are visualized using a feature visualization technique. The two models with the different input formats are tested to examine the visual characteristics of comics in detail.
It is first necessary to define the format of the input images to classify comic images in terms of the artistic styles. The simplest format would be the entire comic-book page. The entire page of a greyscale comic book is first employed as the input image. Each page of the comic book is scanned and filtered, to select standard pages only. A standard page is one including panels and balloons. Some unusual pages, such as those including images only, are filtered out. The second input format is the comic panel. In general, a comic page includes several panels, each of which contains a segment of action. As the drawing in a page is segmented by panels, it would be reasonable to attempt to use panels as input images. The characteristics of these two formats are compared via both classification and feature visualization.
The original data used for the experiments is the Manga109 dataset [36], which consists of 109 manga (Japanese comic) volumes. All the volumes are drawn by different professional artists. The resolution of the scanned images is 827 × 1170. Eight volumes of the 109 are chosen for the experiments. An important assumption here is that an artist represents a distinct artistic style. Therefore, eight different comic styles are tested in our experiments. The top eight manga volumes are taken from the dataset sorted by title in ascending alphabetical order.
Selected eight comic books for the classification of comic artist styles
Figure 1 presents the examples of the selected comic pages. Each image corresponds to a representative page for each class. The ID of the artist who drew the comics is indicated at the bottom of each image. Each volume has its own distinct characteristics in drawing style. A1, A4, and A8 have the style of Shojo (girl) manga, whereas A2, A5, and A7 have a Shonen (boy) manga style. A6 is difficult to classify into one of the two types. A3 represents a special case of comics, namely four-cell manga. Table 1 shows the number of book pages used in the experiments per class.
Table 1 Number of comic book pages in each class for the experiments
Unlike the entire page format, the panel format needs data preprocessing to prepare input images. In other words, it is necessary to extract comic panels from the comic pages. Publicly available software is used for the extraction [37]. Some post-processing is also conducted to filter out the mis-segmented panels. Moreover, the images need to be reformatted to the same size, because the extracted panels all differ in size. Instead of adjusting resolutions, images smaller than 256 × 256 are eliminated and larger images are cropped to 256 × 256. In the latter case, only the center part of the image is kept. Then, a manual post-processing step removes images that are inappropriate for training, such as backgrounds only, parts of the body, balloons only, and images that are difficult to classify even for a human.
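A minimal sketch of the size filtering and center cropping is given below, assuming Pillow for image handling; the actual preprocessing pipeline of this study is not specified beyond the description above, so the function is purely illustrative.

```python
from PIL import Image

TARGET = 256  # required panel resolution

def prepare_panel(path):
    """Return a 256 x 256 greyscale crop of the panel centre, or None if the
    extracted panel is smaller than the target size and must be discarded."""
    img = Image.open(path).convert("L")   # greyscale
    w, h = img.size
    if w < TARGET or h < TARGET:
        return None                       # too small: filtered out
    left = (w - TARGET) // 2
    top = (h - TARGET) // 2
    return img.crop((left, top, left + TARGET, top + TARGET))
```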
Panel image examples after post-processing
Figure 2 presents examples of the panel images for each class after post-processing. Even after the post-processing, there are some problematic images. For example, sample (a) contains a segmentation error and (c) includes parts of balloons with words. The examples (e) and (g) include a large background with small person area. But these types of images are kept, because it is not possible to eliminate all the problematic cases, and these cases include the distinguishable characteristics of drawing styles anyway. Table 2 shows the number of panel images in each artist class used for the experiments.
Table 2 Number of panel images in each class for the experiments
CNN architecture for comic classification
Figure 3 illustrates the overall process of the proposed approach and the detailed architecture of the CNN model. For each input image format, a CNN model is trained. And the model is used for the feature visualization in the end. As the number of classes is significantly smaller than for other major architectures, a modified version of AlexNet, one of the simplest benchmarks, is used. Filtering in the overall process means eliminating the images smaller than 256 × 256 for the second format.
The overall process of our approach and the CNN architecture for the classification of comic artist styles
The proposed network has five convolutional layers, five pooling layers, two fully connected layers, and an output layer. A ReLU activation function is applied at the end of each convolutional layer, followed by max pooling. The input is a greyscale image of 300 × 400 pixels for the first input format and of 256 × 256 pixels for the second. The numbers of filters in the convolutional layers are 32, 64, 128, 256, and 512, respectively, from the first convolutional layer to the fifth. Each convolution filter uses 5 × 5 patches with a stride of 1, and max pooling is employed with a 2 × 2 filter and a stride of 2, so the image size is halved when passing through a pooling layer. The fully connected layers have 1024 nodes each, with a ReLU activation and 10% dropout.
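A minimal Keras sketch of a network following this description, shown here for the panel format; padding, optimizer, and loss are not specified in the text and are assumptions:

```python
from tensorflow.keras import layers, models


def build_comic_cnn(input_shape=(256, 256, 1), n_classes=8):
    """Modified AlexNet-style classifier following the described architecture."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    for n_filters in (32, 64, 128, 256, 512):
        # 5 x 5 convolution with stride 1 and ReLU, then 2 x 2 max pooling (stride 2).
        model.add(layers.Conv2D(n_filters, 5, strides=1, padding="same",
                                activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=2, strides=2))
    model.add(layers.Flatten())
    for _ in range(2):
        model.add(layers.Dense(1024, activation="relu"))
        model.add(layers.Dropout(0.1))  # 10 % dropout
    model.add(layers.Dense(n_classes, activation="softmax"))
    return model


model = build_comic_cnn()  # a 300 x 400 greyscale input would be used for entire pages
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```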
This architecture was fixed after various pre-experiments with different settings. At each convolutional layer, different combinations of model hyperparameter values were tested, namely the number of filters and whether pooling is applied at the end. In the final architecture, the number of filters is doubled at each successive convolutional layer, and max pooling is always applied, unlike in AlexNet. When two convolutional layers (the third and fourth) have the same number of filters without pooling between them, the performance decreases. When the number of filters at the final convolutional layer decreases, the performance also decreases. Repeatedly applying pooling layers does not degrade the final result.
Feature visualization
Feature visualization is a useful tool for expressing the trained features of a deep neural network in image analytics: it lets us understand how a trained classifier distinguishes the class of an input image. There are many approaches, but ours adopts a simple yet powerful technique developed by the Google Brain team [14]. It performs feature visualization by image optimization and provides various regularization techniques to enhance the visualization quality. To find a representative image for a neuron, the trained weights are fixed and the image pixels are updated via optimization, the reverse of weight training. The input image is first set to greyscale random noise. The update is then repeated many times (20,000 times in our experiments), and the final updated image is the feature visualization result.
The visualization is performed for each neuron, or more specifically for each channel of the trained network, which corresponds to a filter in the case of the convolutional layers. The objective function for the optimization at each channel can be written as follows:
$$\begin{aligned} \mathop {\mathrm{arg\,max}}\limits _{I} \sum _{i} f_{i}(I), \end{aligned}$$
where I is the input image to be updated, and \(f_{i}\) is the ith activation score.
The detailed feature visualization process is presented in Algorithm 1. The visualization is conducted for a selected layer L and a selected channel (filter) ch. Forward and backward passes are applied to the newly defined objective function to find the optimized image \(I^{*}\). For computational efficiency, the mean value of the activation scores at the selected channel becomes the optimization objective. The function "\(reduce\_mean\)" computes the mean of the elements across the dimensions of the structured channel output, L[ : , : , : , ch]. The computed gradient is normalized using its standard deviation. Finally, the image is updated using gradient ascent.
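A minimal TensorFlow sketch of this optimization loop, assuming a Keras classifier such as the one sketched earlier; the learning rate and the small epsilon added for numerical stability are assumptions not stated in the paper:

```python
import tensorflow as tf


def visualize_channel(model, layer_name, channel, steps=20000, lr=1.0,
                      image_shape=(1, 256, 256, 1), start_image=None):
    """Gradient-ascent feature visualization for one channel.

    With start_image=None the input starts as greyscale random noise, as in
    the feature visualization experiments; passing a comic page instead (and
    roughly 200 steps) gives the image-transformation variant described later.
    """
    extractor = tf.keras.Model(model.input, model.get_layer(layer_name).output)
    image = tf.Variable(tf.random.uniform(image_shape) if start_image is None
                        else start_image, dtype=tf.float32)

    for _ in range(steps):
        with tf.GradientTape() as tape:
            activation = extractor(image)
            # Mean of the selected channel's activations is the objective.
            objective = tf.reduce_mean(activation[..., channel])
        grad = tape.gradient(objective, image)
        grad /= tf.math.reduce_std(grad) + 1e-8  # normalize by standard deviation
        image.assign_add(lr * grad)              # gradient ascent step
    return image.numpy()
```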
This section is dedicated to the experimental results for our two main contributions: comic artist style classification and feature visualization for the classifier.
Comic artist classification
All the experiments were conducted on an NVIDIA P100 GPU. For each experiment, 80% of the images were randomly selected for training, and the remainder were used for testing. The experiments were repeated 10 times using random sub-sampling, and the results were averaged. The total number of iterations was 30,000 with a mini-batch size of 20 for the entire-page input format, and 40,000 with the same mini-batch size for the panel input format. Under this setting, the training time was on average 90 min for entire pages and 50 min for panels.
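A sketch of this evaluation protocol, assuming the data are already loaded as NumPy arrays and that build_model returns a compiled Keras classifier; the fixed epoch count below stands in for the iteration budget described above:

```python
import numpy as np
from sklearn.model_selection import train_test_split


def repeated_holdout(images, labels, build_model, n_repeats=10,
                     batch_size=20, epochs=30):
    """Average test accuracy over repeated random 80/20 splits."""
    scores = []
    for seed in range(n_repeats):
        x_tr, x_te, y_tr, y_te = train_test_split(
            images, labels, test_size=0.2, random_state=seed)
        model = build_model()
        model.fit(x_tr, y_tr, batch_size=batch_size, epochs=epochs, verbose=0)
        _, accuracy = model.evaluate(x_te, y_te, verbose=0)
        scores.append(accuracy)
    return float(np.mean(scores))
```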
Entire page input format
Table 3 presents the average performance of the experiments for the entire page format. The precision, recall, and F1 score are calculated for each class, and all values are averaged over 10 different experiments. The total means of the averaged precision, recall, and F1 score are given in the last column of the second subtable. The mean F1 score in the experiments is 0.84. This is an encouraging result, considering the noise introduced by this input format: an entire page includes not only drawings but also text, balloons, panels, and so on, which represent different aspects of the comics.
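For reference, the per-class scores follow from the standard precision/recall/F1 definitions; a minimal sketch with purely illustrative random labels standing in for the real predictions:

```python
import numpy as np
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
y_true = rng.integers(0, 8, size=200)  # placeholder ground-truth labels (A1..A8)
y_pred = rng.integers(0, 8, size=200)  # placeholder model predictions
print(classification_report(y_true, y_pred, labels=list(range(8)),
                            target_names=[f"A{i}" for i in range(1, 9)]))
```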
Class A3 obtained the best performance, with an F1 score of 0.94, whereas class A2 was worst, with a score of 0.77. As A3 corresponds to four-cell manga, it is reasonable to assume that the classifier learned this special format during training. When verifying the result in detail, most of the false negatives for the class A2 were predicted as class A8, and vice versa. Considering that the difference in drawing styles between A2 and A8 is smaller than for the other class pairs (see Fig. 1), the misclassification of A2 is understandable. The other classes with low F1 scores are A4 and A5, with 0.79 and 0.8, respectively. The false negatives for the class A5 were almost all predicted as A7. This explains the relatively low precision of the class A7.
Table 3 Classification performance for the entire page format
Figure 4 presents three misclassified examples, corresponding to the classes A1, A3, and A4, respectively. The first example is an unusual page in which no panels are found. The second is an exceptional case of the four-cell manga class A3; this type of non-standard format was sometimes found in A3. The third example also represents an infrequent case, because it includes only transformed photos. The trained CNN model mainly misclassified these types of unusual images, whose form had not been observed in the training set.
Images with classification errors: a an unusual page where panels are not found, b an exceptional case of the four cell manga class A3, c an infrequent case including transformed photos only
Besides the unusual images, there are also regular pages that are incorrectly predicted, as mentioned above. Let us examine those cases in detail. Figure 5 shows the prediction errors for two pairs of classes, A5–A7 and A2–A8. The images in the upper row show the errors between A5 and A7. The original class of the image is given before the arrow, and the predicted class is given after. Misclassification from A5 to A7 often occurs, whereas the reverse does not. The drawing styles are different from each other but both use many complicated backgrounds and effects. This common property might confuse the classifier when learning the weights. False negatives did not occur as often for class A7 as for class A5, because class A7 has a unique style, representing decorative drawing. The images in the lower row show the errors between A2 and A8. Most false negatives for the class A8 were predicted as A2. The two classes share similar drawing styles compared to the other classes, and they both employ many action lines.
Misclassification errors between A5 and A7 (upper) and those between A2 and A8 (lower)
Using the CNN structure, the classifier could successfully separate the images into different artist classes. There are some mistakes, but the errors are mostly because of the similarities of layouts or of drawing styles among images from different classes. This issue might be solved by adding training examples, or using an enhanced model.
Panel input format
Table 4 presents the average results of the experiments for the panel format. As above, all values were averaged over 10 experiments. Interestingly, the performance is considerably weaker than for the entire page format. The mean F1 score is 0.5, and the mean precision and recall are both 0.48. Despite the weak result, class A3 again obtained the best performance. The result for A3 was impressively high, with an F1 of 0.91. This is most likely because of the uniqueness of the class, which is four-cell manga. Panels in this class were cleanly extracted with few errors, and in general the drawings were contained within the interiors of the panels. This distinctiveness led to the exceptional score.
The classes A1 and A7 also exhibit relatively good results. These have characteristic drawing styles, where A1 prefers simple and thick lines and A7 has very decorative drawing styles with complicated patterns. On the other hand, classes A2 and A5 achieved the worst results. Their F1 scores were 0.29 and 0.32, respectively. The weak performance for A2 was mainly because of its recall of 0.20, which means that 80% of the tested images in class A2 were incorrectly predicted. Most of these were classified into the two classes, A7 and A8, and the misclassified images in the same class shared some common characteristics.
Table 4 Classification performance for the panel format
This outcome was predictable, because in the panel format the overall layout of the page disappears, while the drawing styles and noise remain. Unlike paintings, the layouts of comics, such as panel structures, speech balloons, and action lines, are as important as the drawing styles. By eliminating the overall layout, the classifier must concentrate on the drawing styles and partial layout only, and therefore the training becomes more difficult. As there is not enough training data, finding patterns based mostly on drawing styles becomes nontrivial.
Let us examine in detail the worst recall and precision cases, which are shaded in Table 4. The upper row of Fig. 6 shows the false negative samples for the class A2 (the worst recall). As previously mentioned, their predictions were mostly A7 or A8. The examples (a) and (b) are classified as A7, whereas (c) and (d) are classified as A8. The images misclassified into A7 contain complicated action lines, whereas those misclassified into A8 include large amounts of text. As the class A7 contains many complicated backgrounds and A8 contains more text than the others, it is reasonable to assume that the classifier learned these properties of the classes effectively. The lower row of Fig. 6 shows the false positives for the class A5. These examples lead to the low precision for A5. Unlike the false negatives for A2 in the upper row, it is difficult to determine any pattern in the examples. As the images of A5 usually contain complicated backgrounds but the drawing lines are not very distinctive, the class is likely to share common drawing style properties with the other classes. That would be one reason for the low precision of A5. As a result, the performance gap between classes becomes wider, because the classes with low performance have relatively indistinct drawing styles.
False negatives of class A2 (upper) and false positives of class A5 (lower)
The low performance of the panel format reflects fundamental problems with the training data. There are insufficient examples, and partial layouts such as speech balloons and action lines appear too often. With this simple CNN architecture, it is difficult to extract the internal patterns in the dataset.
This subsection presents the feature visualization results for two different input formats. The visualization of neurons and the image transformation for selected neurons are provided.
Figure 7 presents examples of the feature visualization for the trained classifier with the entire page format. While feature visualization in object recognition networks captures each object type's common characteristics, this is not the case for our comic classification approach. Instead of detecting object shapes, the model extracts common artistic patterns, such as textures used to separate different styles in the training set. In the figure, each row corresponds to the convolutional layer of the same number. That is, the first row represents the first convolutional layer, and so on. Nine representative neurons are selected for each layer. Some neurons do not update the input image, because the trained weights are almost zeros. There is a clear difference between the layers. The captured features in the first layer are relatively fine and dense. The extracted textures become more complicated and bolder in the upper layers. However, toward the end, the delicate patterns disappear, and only global textures remain.
Feature visualization for the convolutional layers. Each row corresponds to a layer. From top to bottom, first layer, second, third, fourth, and fifth layers are represented respectively
Image transformation results for two different neurons of the first convolutional layer. a Original image, b transformation via 7th neuron, c transformation via 11th neuron
Image transformation results for two other neurons (the 20th and sixth) of the first convolutional layer. a Feature visualization of the 20th neuron, b–d transformed images using the 20th neuron in classes A1, A4 and A7. e Feature visualization of the 6th neuron, f–h transformed images using the 6th neuron in classes A1, A4 and A7
Because the detected features of the comic classifier reflect the overall patterns of entire pages, the visualization cannot reflect objects. Therefore, while general feature visualization for object classification detects more sophisticated objects in the later layers, our classifier rather combines the textures found in the earlier layers.
Besides feature visualization, image transformation for each neuron provides an interesting option for analyzing the features captured in the neurons. Figure 8 presents the transformation results for an image using two different neurons in the first convolutional layer. Figure 8a shows the selected source image of class A1, (b) shows the transformation result with the seventh neuron, and (c) shows that of the 11th neuron. The same technique as used for the feature visualization is employed. However, this time the input is not a random noise image, but a comic page itself. After updating the pixels of the input image 200 times, we obtain the transformed result. The feature visualization of the selected neuron is shown at the bottom right of each result.
The two neurons exhibit similar feature visualization results in appearance, but the transformed images are significantly different from each other. While the seventh neuron highlights horizontal lines, and emphasizes the outlines with white curves (b), the 11th neuron highlights diagonal lines (c). Likewise, when classifying an image with a trained model, the image is transformed by emphasizing the particular features of each neuron. Thus, at the final layer of the network, the classification is realized by aggregating these features.
To examine the image transformation in more detail, Fig. 9 illustrates the results for two other neurons (the 20th and sixth). Three images, one each from classes A1, A4, and A7, are used for the transformation. The two neurons were selected by the scores obtained when updating the test image at each neuron; a high score means that the neuron was highly activated by the image. The scores of all the neurons in the first convolutional layer are computed by updating an image, which yields a list of scores for all neurons given an image of a certain class.
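A sketch of how such per-neuron scores could be computed for a given image; the paper derives the score from the update procedure, so using the mean channel activation (the optimization objective) directly is a simplifying assumption made here:

```python
import tensorflow as tf


def channel_scores(model, layer_name, image):
    """Return one activation score per channel of a layer for a single image."""
    extractor = tf.keras.Model(model.input, model.get_layer(layer_name).output)
    activation = extractor(image[None, ...])  # add a batch dimension
    return tf.reduce_mean(activation, axis=[0, 1, 2]).numpy()
```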
The 20th neuron was scored highly by an image of class A1 but not so highly by those of classes A4 and A7. The scores were 95, 22, and 62 for A1, A4, and A7, respectively. The feature visualization and image transformation results for the neuron are shown in the upper row of Fig. 9. An image was selected from each of the classes A1, A4, and A7. When comparing the original images with their transformations, we can see that the dark parts of the images are emphasized during the optimization. Therefore, the image of A4, which includes only small dark areas, naturally obtained the lowest score.
Meanwhile, the sixth neuron was scored highly for all three images, meaning that the neuron was highly activated by each of them. Thus, the features captured in the neuron would reflect the common attributes of the three images that contribute to the final classification. The transformation results are shown in the lower row of Fig. 9. Unlike at the 20th neuron, the images were significantly modified. In the case of A4, the original drawing had nearly disappeared. This might explain the comparatively low classification performance (see Table 3).
The feature visualization of the classifier trained using the panel input format produces almost the same result as for the entire page format, although a different visualization was expected. The reason for eliminating the panel structures was to concentrate more on the drawing styles during training; thus, the extraction of more sophisticated visual features that effectively express the drawings was expected before training. However, the other partial layouts, such as action lines, balloons, and cuts, are still present in the panels. Moreover, as the panel images also contain too great a variety of shapes in each class, the visualization could not detect representative objects for each neuron. The lack of training data could also hinder the extraction of delicate features. Thus, the layouts and drawing styles both influenced the visualization, as in the entire page format. Some examples of the obtained visual features are presented in Fig. 10. The first three convolutional layers are shown starting from the top.
Feature visualization of the first, second, and third convolutional layers from the top. The classifier was trained with panel format
Novelty in comic style feature visualization
So far, the different aspects of feature visualization of the proposed comic classifier have been discussed. The primary difference compared to conventional object classifiers is that it does not capture objects in the neurons. The main reason is that the objective of our authorship classification approach is to categorize images in terms of drawing styles, rather than specific objects. Therefore, different objects are mixed together within a class, such that no specific shapes are detected in the neurons. Nevertheless, the CNN classifier could still capture the internal patterns of the images in its neurons: global textures and patterns that highlight partial properties of the images have been detected via feature visualization.
There are also other characteristics that distinguish our work from general image analytics. First, the target images are in greyscale. This makes the classification more difficult, because color is an important aspect of an artwork's artistic style. Second, the target images consist of drawings, or more specifically lines. Existing deep learning-based approaches dealing with paintings extract visual features based on textures, shapes, and patterns in two-dimensional color. Comics, on the other hand, generally express textures, shapes, and patterns using lines. This work performed a foundational study of the visual features of line-based artworks. For a more detailed analysis, it would be necessary to develop specialized networks designed to deal with line-based artworks such as comics, drawings, and some illustrations.
The feature visualization technique used in this study has also been applied to a trained GoogLeNet [38]. Different approaches to enhance the visualization quality were proposed in that work; diversity terms, regularization, and interactions between neurons are representative examples. Although the proposed comic classifier cannot detect clear object patterns as in that work, those approaches are expected to enhance the quality of the comic feature visualization as well.
This study proposed the use of a CNN for the classification of comic styles. Comic volumes by eight artists were selected from a publicly available comic dataset for the experiments. Two different input data formats were tested to determine the most effective format for the classification: the first was an entire-page format, and the second a panel format. The trained model obtained an 84% mean F1 score for the former format. The experimental results were examined in detail to demonstrate that the classifier could effectively separate the different styles, but made some errors when the styles of different classes were similar. In the case of the panel format, the trained model obtained a weak performance, with an F1 score of 48%. This was mainly because the extracted panel images contained too great a variety of shapes in each class. Comparatively distinctive classes such as A1 and A7 achieved better results, with F1 scores over 60%, and A3 obtained an exceptional score of 91% thanks to its special layout.
The visual characteristics of the trained classifier were also investigated via a feature visualization technique. This is one of the first attempts to visualize a trained artistic style classifier. An image optimization technique was applied to the trained CNN model to determine the visual features with which the classifier identifies the classes of test images. The visualized features were significantly different from those of general object classification: the detected features reflected the internal layouts and drawing styles of the comics, instead of representing objects.
An important drawback of our approach is that the detected features diverge strongly from the actual aesthetic elements. Although the features represent the basis of a CNN classifier effectively, they are different from the real artistic styles that distinguish artworks from a human point of view. Therefore, developing a specialized architecture, designed for the detection of aesthetic features, can be considered for future work. One of the most closely related techniques is style transfer, which transfers a style from one image to another. Combining style transfer and feature visualization for line-based artworks would represent an interesting research topic.
The original dataset is available on demand: http://www.manga109.org/ja/index.html.
CNN: convolutional neural network
ILSVRC: large scale visual recognition challenge
Bar Y, Levy N, Wolf L. Classification of artistic styles using binarized features derived from a deep neural network. In: European conference on computer vision 2014. Springer: Cham; 2014.
Gatys LA, Ecker AS, Bethge M. Image style transfer using convolutional neural networks. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR); 2016. p. 2414–23.
Chen D, Yuan L, Liao J, Yu N, Hua G. Stylebank: an explicit representation for neural image style transfer. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. p. 2770–9.
McCloud S. Understanding comics: the invisible art; 1993.
Hensman P, Aizawa K. cGAN-based manga colorization using a single training image. In: Proceedings of the 14th IAPR international conference on document analysis and recognition; 2017. p. 72–7.
Chen Y, Lai Y-K, Liu Y-J. Cartoongan: generative adversarial networks for photo cartoonization. In: The IEEE conference on computer vision and pattern recognition (CVPR); 2018.
Jin Y, Zhang J, Li M, Tian Y, Zhu H, Fang Z. Towards the automatic anime characters creation with generative adversarial networks. CoRR arxiv: abs/1708.05509; 2017.
Wolf L, Taigman Y, Polyak A. Unsupervised creation of parameterized avatars. In: IEEE international conference on computer vision (ICCV), 2017; 2017. p. 1539–47.
Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the 2014 IEEE conference on computer vision and pattern recognition, CVPR '14; 2014. p. 580–7.
Li H, Lin Z, Shen X, Brandt J, Hua G. A convolutional neural network cascade for face detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2015. p. 5325–34.
Singh J, Singh G, Singh R. Optimization of sentiment analysis using machine learning classifiers. Hum Centric Comput Inf Sci. 2017;7(32):1–12.
Yuan C, Li X, Wu QMJ, Li J, Sun X. Fingerprint liveness detection from different fingerprint materials using convolutional neural network and principal component analysis. Comput Mater Contin. 2017;3:357–72.
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2016. p. 770–8.
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2015.
Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Commun ACM. 2017;60(6):84–90.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In: ICLR; 2015.
Johnson CR, Hendriks E, Berezhnoy I, Brevdo E, Hughes S, Daubechies I, Li J, Postma E, Wang JZ. Image processing for artist identification—computerized analysis of Vincent van Gogh's painting brushstrokes. In: IEEE signal processing magazine; 2008. p. 37–48.
Karayev S, Trentacoste M, Han H, Agarwala A, Darrell T, Hertzmann A, Winnemoeller H. Recognizing image style. In: Proceedings of the British machine vision conference; 2014.
Saleh B, Elgammal AM. Large-scale classification of fine-art paintings: learning the right metric on the right feature. Int J Digit Art Hist. 2015:71–93.
Tan WR, Chan CS, Aguirre HE, Tanaka K. Ceci n'est pas une pipe: a deep convolutional network for fine-art paintings classification. In: 2016 IEEE international conference on image processing (ICIP); 2016. p. 3703–7.
Thomas C, Kovashka A. Seeing behind the camera: Identifying the authorship of a photograph. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016.
Hicsonmez S, Samet N, Sener F, Duygulu P. Draw: deep networks for recognizing styles of artists who illustrate children's books. In: Proceedings of the 2017 ACM on international conference on multimedia retrieval. ICMR '17; 2017. p. 338–46.
Lai W-S, Huang J-B, Ahuja N, Yang M-H. Deep laplacian pyramid networks for fast and accurate super-resolution. In: IEEE conference on computer vision and pattern recognition; 2017.
Zhang Y, Tian Y, Kong Y, Zhong B, Fu Y. Residual dense network for image super-resolution. In: The IEEE conference on computer vision and pattern recognition (CVPR); 2018.
Haris M, Shakhnarovich G, Ukita N. Deep back-projection networks for super-resolution. In: IEEE conference on computer vision and pattern recognition (CVPR); 2018. p. 1664–73.
Wilber MJ, Fang C, Jin H, Hertzmann A, Collomosse J, Belongie SJ. Bam! the behance artistic media dataset for recognition beyond photography. In: IEEE international conference on computer vision (ICCV); 2017. p. 1211–20.
Ogawa T, Otsubo A, Narita R, Matsui Y, Yamasaki T, Aizawa K. Object detection for comics using manga109 annotations. CoRR arxiv: abs/1803.08670; 2018.
Chu W-T, Li W-W. Manga facenet: face detection in manga based on deep neural network. In: Proceedings of the 2017 ACM on international conference on multimedia retrieval. ICMR '17; 2017. p. 412–5.
Nguyen N, Rigaud C, Burie J. Digital comics image indexing based on deep learning. J Imaging. 2018;4(7):89.
Nguyen N, Rigaud C, Burie J. Comic characters detection using deep learning. In: 2017 14th IAPR international conference on document analysis and recognition (ICDAR); 2017. p. 41–6.
Chu W-T, Chao Y-C. Line-based drawing style description for manga classification. In: Proceedings of the 22Nd ACM international conference on multimedia; 2014. p. 781–4.
Erhan D, Bengio Y, Courville A, Vincent P. Visualizing higher-layer features of deep networks. Technical report; 2009.
Yosinski J, Clune J, Nguyen AM, Fuchs TJ, Lipson H. Understanding neural networks through deep visualization. In: Proceedings of ICML—deep learning workshop; 2015.
Dosovitskiy A, Brox T. Inverting visual representations with convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2016. p. 4829–37.
Mahendran A, Vedaldi A. Visualizing deep convolutional neural networks using natural pre-images. Int J Comput Vision. 2016;120(3):233–55.
Matsui Y, Ito K, Aramaki Y, Fujimoto A, Ogawa T, Yamasaki T, Aizawa K. Sketch-based manga retrieval using manga109 dataset. Multimed Tools Appl. 2017;76(20):21811–38.
Furusawa C, Hiroshiba K, Ogaki K, Odagiri Y. Comicolorization: semi-automatic manga colorization. In: SIGGRAPH Asia 2017 technical briefs; 2017. p. 12–1124.
Olah C, Mordvintsev A, Schubert L. Feature visualization. Distill. 2017. https://doi.org/10.23915/distill.00007.
Acknowledgements
This work is partially supported by two projects, Classification of The Artists using Deep Neural Networks, funded by Hanyang University (201600000002255) and Smart Multimodal Environment of AI Chatbot Robots for Digital Healthcare (P0000536), funded by the Ministry of Trade, Industry and Energy (MOTIE).
Graduate School of Technology & Innovation Management, Hanyang University, Wangsimni-ro, Seoul, South Korea
Kim Young-Min
The author read and approved the final manuscript.
Correspondence to Kim Young-Min.
The author declares that there are no competing interests.
Young-Min, K. Feature visualization in comic artist classification using deep neural networks. J Big Data 6, 56 (2019) doi:10.1186/s40537-019-0222-3
Artistic styles
Comic classification
Deep neural networks | CommonCrawl |
Research article | Open Access | Published: 27 January 2016
Statewide program to promote institutional delivery in Gujarat, India: who participates and the degree of financial subsidy provided by the Chiranjeevi Yojana program
Kristi Sidney1,
Veena Iyer2,
Kranti Vora2,
Dileep Mavalankar2 &
Ayesha De Costa1,2
Journal of Health, Population and Nutrition, volume 35, Article number: 2 (2016)
The Chiranjeevi Yojana (CY) is a large public-private partnership program in Gujarat, India, under which the state pays private sector obstetricians to provide childbirth services to poor and tribal women. The CY was initiated statewide in 2007 because of the limited ability of the public health sector to provide emergency obstetric care and high out-of-pocket expenditures in the private sector (where most qualified obstetricians work), creating financial access barriers for poor women. Despite a million beneficiaries, there have been few reports studying CY, particularly the proportion of vulnerable women being covered, the expenditures they incur in connection with childbirth, and the level of subsidy provided to beneficiaries by the program.
A cross-sectional facility-based survey of participants in three districts of Gujarat in 2012–2013. Women were interviewed to elicit sociodemographic characteristics, out-of-pocket expenditures, and CY program details. Descriptive statistics, chi-square tests, and a multivariable logistic regression were performed.
Of the 901 women surveyed in 129 facilities, 150 (16 %) were CY beneficiaries; 336 and 415 delivered in government and private facilities, respectively. Only 36 (24 %) of the 150 CY beneficiaries received a completely cashless delivery. Median out-of-pocket expenditure for vaginal/cesarean delivery among CY beneficiaries was $7/$71. The median degree of subsidy for CY beneficiaries who delivered vaginally/by cesarean was 85 %/71 %, compared to out-of-pocket expenditures of $44/$208 for vaginal/cesarean delivery paid by non-program beneficiaries in the private health sector.
CY beneficiaries experienced a substantially subsidized childbirth compared to women who delivered in non-accredited private facilities. However, despite the government's efforts at increasing access to delivery services for poor women in the private sector, uptake was low and very few women experienced a cashless delivery. While the long-term focus remains on strengthening the public sector's ability to provide emergency obstetric care, the CY program is a potential means by which the state can ensure its poor mothers have access to necessary care if uptake is increased.
Despite the global maternal mortality ratio (MMR) declining from 380 maternal deaths per 100,000 live births in 1990 to 210 deaths in 2013 [1], maternal deaths still remain high in some countries such as India. Almost a fifth of the 287,000 annual maternal deaths occur in India [2–5].
It is known that skilled birth attendance and access to quality emergency obstetric care (EmOC) are critical to the reduction of maternal mortality [6, 7]. Institutional childbirth has been advocated and adopted by governments all over the world, including India, as a strategy to reduce maternal mortality. Considering the unpredictable occurrence of life-threatening obstetric complications, the assumption is that a facility birth will provide a woman access to skilled birth attendance and EmOC, facilitating the management of complications that could ultimately lead to a reduction in mortality [8].
Although governments in many low middle income countries actively encourage facility-based childbirth for this reason, the capacity of public health facilities to provide life-saving EmOC is limited because of structural weaknesses in the health system including a lack of qualified human resources and shortages of infrastructure and supplies [9]. Such a situation exists in the public health system in many parts of India and in the Western Indian state of Gujarat. The public health sector has an extreme shortage of qualified obstetricians [10] and hence little capacity to provide EmOC. However, in comparison, there are over 1500 qualified obstetricians [11] practicing in the for-profit private health sector. This sector operates largely on the basis of out-of-pocket (OOP) payments from users.
The relationship between poverty and maternal death is well known [12]. Recent studies in South Asia [13, 14] have highlighted OOP expenditures for poor women as a barrier to seeking childbirth services in a health facility. In 2005–2006, only 13 % of India's poorest women gave birth in a health facility providing EmOC, while the corresponding figure for the wealthiest women was 84 % [15]. Poor/tribal women (who bear the brunt of maternal morbidity and mortality) face financial barriers to accessing functional EmOC services in the country as these services are largely concentrated in the for-profit private sector [16, 17]. This inequity emphasizes the importance of developing strategies that remove financial barriers to maternal delivery services and enable poor women to receive proper care where it is available.
In order to minimize financial barriers and provide poor/tribal women access to the available EmOC in the private sector, the Government of Gujarat initiated a voucher-like program, Chiranjeevi Yojana (CY, a scheme for long life). Under this public-private partnership, qualified private obstetricians are paid by the state government to provide a cashless delivery for poor/tribal women within the state [18].
Most voucher-like programs worldwide are small and managed by non-governmental organizations or donors [19]. CY in comparison is a large statewide voucher-like program run and financed entirely by the government. Despite nearly a million beneficiaries [20], there have been few reports critically studying the CY public-private partnership [21–27]. While a small pilot evaluation was performed in 2006 [21], only three studies were implemented since the program was rolled out statewide. Two studies examined the impact of CY on increasing institutional delivery [23, 24], and the third was a qualitative study focusing on the perception and experience of private providers with regard to the CY program [27].
This paper aims to advance the state of knowledge on the CY program particularly by establishing the degree of uptake and the level of financial subsidy obtained by beneficiaries by (i) studying the proportion of eligible women who become CY beneficiaries and (ii) ascertaining OOP expenditures and the extent the CY program subsidized childbirth. This is relevant not only for researchers, implementers, and policy makers in India but also for other low-income settings where similar programs are being planned and implemented.
Study setting
Gujarat, India, has a population of 60.3 million [4], a per capita income 25 % higher than the national average, a MMR of 122 per 100,000 live births [2], and an infant mortality rate of 41 per 1000 live births [28]. The state is divided into 26 administrative districts, each with a population of 1–3 million [4]. It is considered one of the high-performing states in India with strong socioeconomic growth over the last decade and a 24 % reduction in MMR between 2004 and 2012 [2]. Sixty percent of all births in the state take place in the private sector [24].
The Chiranjeevi Yojana program
CY is a performance-based financing program that functions in the context of an existing strong private obstetric care sector in Gujarat, India. The rationale for the program has been described above. The state government pays accredited private facilities, run by qualified obstetricians, to provide free childbirth care to women from below poverty line (BPL) households and tribal women. BPL or tribal eligibility is identified by official documentation provided by a government authority [29]. All willing private obstetricians who met the basic requirements outlined by the government could apply to participate in the CY program.
The remuneration package at the time of the study was $5600 per 100 deliveries (described in Additional file 1). The package has been revised upwards periodically since the program's inception. The payment structure creates an embedded disincentive for unnecessary cesareans as the provider receives a fixed payment per 100 deliveries regardless of the delivery mode. The program was implemented statewide in 2007 and has benefited almost a million women [20].
Study design: a cross-sectional study performed in health facilities
Three districts, Sabarkantha, Dahod, and Surendranagar, were purposefully selected from diverse geographic areas. These districts had varying human development indices and different population compositions, i.e., varying proportions of tribes and populations living below the poverty line. As seen in Table 1, the eligible population for the program differed widely among the study districts, as did the number of accredited facilities.
Table 1 Characteristics of the study districts in Gujarat, India
Identifying facilities providing intrapartum care
An initial list of all public and private health facilities that routinely provided intrapartum care was obtained from the district public health officials. These facilities and local pharmacies were approached to identify any remaining private facilities that were not on the initial listing. The number of deliveries performed in the previous 3 months for each of the identified facilities was ascertained. Facilities that performed more than 30 deliveries in the previous 3 months were included in the study.
Study participants
Trained research assistants visited each of the study facilities for a consecutive 5-day period and interviewed women who gave birth at these facilities. A questionnaire was administered to the mother or a family member present in the facility before discharge. Basic sociodemographic characteristics, pregnancy and delivery details, OOP expenses, and whether they received the CY benefit were elicited. More specific details related to the delivery and complications experienced (when applicable) were obtained from a nurse on the labor ward. On average, the administration of the questionnaire took 25 min. During this period, research assistants also enquired whether the facility routinely performed cesarean sections and blood transfusions in the last 3 months or only vaginal deliveries. The study was performed between June 2012 and April 2013.
During the recruitment period, 1632 mothers delivered in the study facilities. Women were excluded from this study for the following reasons: (i) not eligible for the CY program (n = 409, 25 %), (ii) discharged from the facility before being recruited (n = 221, 14 %), or (iii) resided outside the province of Gujarat (n = 101, 6 %).
Eligibility criteria for women to be beneficiaries of the CY program
Women were considered eligible for the program if they reported possessing a government-issued BPL card, tribal certificate, or other officially accepted documentation as formal proof of poverty status.
Beneficiary status by place of delivery
Women were grouped by beneficiary status and place of delivery as follows: CY beneficiary (CYB): women who delivered in a facility participating in the CY program and reported receiving the CY benefit. CY non-beneficiary (CYNB): eligible women who delivered in a CY facility but did not receive the benefit. Private non-beneficiary (PNB): eligible women who delivered in a non-accredited private facility and did not receive the CY benefit. Government non-beneficiary (GNB): eligible women who delivered in a government-run (public sector) facility and hence did not receive the CY benefit.
Facilities were classified into three groups depending on whether they provided cesarean sections (CS) and blood transfusions (BT) in the previous 3 months: non-CS facility: facilities that did not provide CS and conducted only vaginal deliveries. CS facilities: facilities that conducted both vaginal and CS deliveries but did not provide BT. CS & BT facilities: facilities that conducted vaginal and CS deliveries and also provided BT.
Background variables
Education: Women were categorized as having no formal education (i.e., never went to school) or having some formal education.
Caste or tribe: Women were divided into three groups, i.e., tribal (indigenous people), backward caste, and general (not backward caste). Backward castes are specially identified groups in the Indian constitution who have faced social discrimination historically and are still vulnerable. The constitution identifies these groups as they are recipients of positive affirmative action under the law [30]. Backward caste includes scheduled caste and other backward castes.
Household wealth: To assess household wealth, 20 household items, the structural type of dwelling, and sanitation arrangements were included, as used in the National Family Health Survey [15]. Principal component analysis was used to calculate a wealth index score, and women were then categorized into five wealth quintiles.
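A minimal sketch of such a wealth index, assuming the household indicators are available as a numeric DataFrame (the column coding and function name are hypothetical, not taken from the paper):

```python
import pandas as pd
from sklearn.decomposition import PCA


def wealth_quintiles(assets: pd.DataFrame) -> pd.Series:
    """First principal component of asset indicators, cut into five quintiles."""
    standardized = (assets - assets.mean()) / assets.std(ddof=0).replace(0, 1)
    score = PCA(n_components=1).fit_transform(standardized.fillna(0))[:, 0]
    return pd.Series(pd.qcut(score, 5, labels=[1, 2, 3, 4, 5]),
                     index=assets.index, name="wealth_quintile")
```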
Direct obstetric complications: Intrapartum care complications were recorded from a staff member on the labor ward. Hemorrhage (antepartum, intrapartum, or postpartum), prolonged/obstructed labor, postpartum sepsis, and severe pre-eclampsia/eclampsia were all classified as complications.
CY program awareness was recorded as "yes" if the women reported knowledge of the CY program prior to delivery.
To study the OOP expenditures and the degree of subsidy provided by the program, we grouped expenses incurred by each mother as follows:
Health facility expenditure: Both direct and indirect medical expenditures incurred for childbirth in the facility were collected. Direct medical OOP expenditures included expenses for delivery, medicines, supplies, BT, laboratory investigations, and anesthesia. Indirect medical OOP expenses included admission fee, accommodation charge, and food. All health facility expenditures (direct and indirect) are theoretically covered by payments to the obstetrician under the CY program, so that a CY beneficiary receives cashless service for their delivery.
Informal payments were expenditures reported as 'rewards' paid by the women/families to the staff for assisting their care.
Transportation costs included all costs associated with reaching the health facility for delivery.
Degree of subsidy provided by the CY program
The assumption was made that the expense paid for delivery by PNB was the current market price for childbirth services in the private sector. In the absence of the CY program, this would be the minimum price that a mother would have paid OOP if she delivered in the private sector. We calculated the extent to which each mother was subsidized by participating in the CY program, as shown below.
$$ \text{Subsidy \% for vaginal delivery:}\ \left[1-\left(\frac{\text{median CYB health facility expenditure for vaginal delivery}+\text{transportation cost}}{\text{median PNB health facility expenditure for vaginal delivery}+\text{transportation cost}}\right)\right]\times 100\,\% $$
$$ \text{Subsidy \% for CS delivery:}\ \left[1-\left(\frac{\text{median CYB health facility expenditure for CS delivery}+\text{transportation cost}}{\text{median PNB health facility expenditure for CS delivery}+\text{transportation cost}}\right)\right]\times 100\,\% $$
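A small sketch of this calculation, applied per beneficiary as described in the analysis below; the transport figures in the example call are placeholders, not values reported in the paper:

```python
def subsidy_percent(cyb_facility_cost, cyb_transport,
                    median_pnb_facility_cost, median_pnb_transport):
    """Percentage subsidy for one CY beneficiary, following the formula above."""
    paid = cyb_facility_cost + cyb_transport
    market_price = median_pnb_facility_cost + median_pnb_transport
    return (1 - paid / market_price) * 100


# Illustrative only: facility medians from Table 4 ($5 vs $44 for a vaginal
# delivery), with placeholder transport costs of $2 for each group.
print(round(subsidy_percent(5, 2, 44, 2), 1))  # ~84.8 %
```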
Descriptive statistics were used to describe the study sample by place of delivery. Chi-square tests were used to identify significant differences between characteristics of women who delivered in an accredited CY facility and the other two groups (PNB and GNB). Simple proportions were used to describe the proportion of eligible women who became CYB and CYNB in an accredited CY facility. A multivariable logistic regression was performed to identify predictors of receiving the CY benefit within an accredited CY facility. The median and interquartile range (IQR) for health facility expenditures were stratified by vaginal and CS deliveries. Since the health expenditures were not normally distributed, the non-parametric Wilcoxon signed-rank test was used to detect differences between different groups of women. Informal payments and transportation costs were also described. The percentage subsidy provided was calculated for each individual CYB and expressed as a median for the cohort.
The study was described to all study participants. Written informed consent was obtained from the participants before they were enrolled in the study and responded to the questionnaire. Ethical approval was granted by the Indian Institute of Public Health, Gandhinagar, Gujarat, India, with the ethical approval number TRC-IEC No:23/2012 and Karolinska Institutet:2010/1671–31/5.I.
One hundred fifty-eight public and private facilities were identified in the initial listing process. Among those facilities, 21 did not perform a delivery during the 5-day recruitment period and eight declined to participate in the study. As depicted in Table 2, the study participants delivered in 129 different facilities within the three study districts: 37 accredited CY private, 36 government, and 56 non-CY-accredited private facilities. Among the 129 facilities, 48 (37 %) did not perform CS, 8 (6 %) conducted CS but did not provide BT, and 73 (57 %) performed both CS and BT in the last 3 months. The majority (86 %, 31/36) of government facilities did not provide CS or BT, while most private facilities (73 %, 68/93) provided both services.
Table 2 Access to emergency interventions (cesarean sections and blood transfusions) performed by facility type (n=129). Column % presented
As shown in Fig. 1, the final study sample included 901 women who met the CY program eligibility criteria of being BPL or tribal. Of these eligible women, 286 delivered in a facility that participated in the CY program, 150 (16 %) were CYB, and 136 were CYNB. Of the remaining eligible non-beneficiaries, 336 delivered in a government facility (GNB) and 279 delivered in a private facility (PNB).
Study sample by place of delivery and receiving the CY benefit
Characteristics of eligible women for the CY program
Table 3 describes the overall characteristics of the study sample. The sociodemographic characteristics of women who delivered in an accredited CY facility and those who delivered in a non-accredited private facility (PNB) did not significantly differ, with the exception of the proportions of women in the poorest (more in CY facilities) and richest (more in non-accredited private facilities) quintiles. Women who delivered in a government facility (GNB) were significantly poorer, less educated, of higher parity, and more often belonged to tribes when compared to women who delivered in a CY facility. They also utilized antenatal care services less.
Table 3 Characteristics, pregnancy, and delivery details of the study sample (n = 901). Column % presented
The proportion of direct obstetric complications reported was similar for women across all three places of delivery; however, the CS proportion was significantly higher for women who delivered in a non-accredited private facility (PNB) (20 %, n = 55/279) compared to women who delivered in a CY (8 %, n = 22/286) and government facility (GNB) (5 %, n = 18/336).
CY program awareness
More than a third of all women (n = 353) had previous awareness of the program. While 74 % (n = 211) of women who delivered in a CY facility had prior knowledge of the program, only 27 % (n = 154) and 20 % (n = 68) of women who delivered in a non-CY private (PNB) and government facility (GNB) reported the same.
The accredited social health activist (ASHA), a village volunteer, was responsible for informing almost half (n = 172/353) of the women who knew about the CY program. Women also gained knowledge of the program from local community health workers (n = 97), relatives and friends (n = 85), and other sources including the facility itself and the media (n = 52). Among the women who did not have prior knowledge (n = 541), half delivered in a government facility (GNB) and a third delivered in a non-accredited private facility (PNB).
The proportion of CY beneficiaries
A third (n = 286/901) of the women in the study delivered in an accredited CY facility, but only half (n = 150/286) became program beneficiaries. As shown in Additional file 2, women who received the CY benefit did not significantly differ from non-beneficiary (CYNB) women who delivered in the same facility with the exception of education and prior knowledge of the program. Women who received the benefit were more educated and more likely to have had prior knowledge of the program. In a multivariable analysis (Additional file 2), formal education and knowledge about the program were significantly associated with receiving the CY benefit.
The main reasons cited by women who delivered in a CY facility but did not receive the benefit were lack of (i) proper documentation required by the provider to issue the benefit (n = 112) and (ii) awareness about the CY program (n = 46).
Access to cesarean sections and blood transfusions
Among the 901 participants, a quarter (n = 233) delivered in a facility that did not routinely provide CS; 86 % (n = 201) of these were government-operated facilities. Almost all women who delivered in an accredited CY facility (n = 284) and 90 % of PNB (n = 249) had access to CS. Among women who delivered in a CY facility and a non-accredited private facility, 70 % (n = 201) and 72 % (n = 201), respectively, also had access to blood transfusion services.
Out-of-pocket expenditures and degree of subsidy provided by the CY program
Out-of-pocket expenditures
Figure 2 depicts the total OOP costs (health facility expenditure, informal payments, and transport) by type of delivery. Almost a quarter (n = 214) of the women in the study sample received a cashless delivery; the majority of these women delivered in a government facility (n = 178). Only 36 (24 %) of the 150 CYB received a completely cashless delivery. As described in Table 4, the median health facility expenditure for a vaginal/cesarean delivery among beneficiaries (CYB) who delivered in a CY-accredited facility was $5/$69, and $47/$199 for non-beneficiaries (CYNB). The facility OOP expenditures for CYB were significantly different from the facility OOP expenditures for CYNB and PNB. The facility expenditures for women who delivered in a non-accredited private facility (PNB) ($44/$208) did not significantly differ from those of non-beneficiaries (CYNB) who delivered in facilities where the CY program was operational. The median facility expenditure associated with government facility care was $0/$18.
Boxplot for out-of-pocket expenditures by beneficiary status for vaginal and cesarean section deliveries (US$). OOP out-of-pocket, CS cesarean section, CYB CY beneficiary, CYNB delivered in an accredited CY facility, but did not receive the benefit, PNB private non-beneficiary, GNB government non-beneficiary
Table 4 Median and interquartile range (IQR) for health facility expenditures associated with normal and cesarean deliveries, informal payments, and transportation costs in dollars
Degree of subsidy
The degree of subsidy provided by the program differed between vaginal and cesarean deliveries. The median subsidy for women who delivered vaginally was 85 %, with an IQR of 74 to 100 %. Women who had a cesarean section received a median subsidy of 71 %, with values ranging from 66 to 80 %.
Previous literature on CY (i) consists of small-scale studies performed during the initial rollout [21], (ii) relies on secondary data [24, 31], or (iii) did not identify CY beneficiaries [23]. Our study results show the uptake and level of subsidy provided by an innovative public-private partnership program to help remove financial barriers for poor/tribal women to deliver in a health facility. It contributes to the existing body of literature on the CY program with two main findings that have not been reported previously: (1) uptake of the CY program was 16 % among eligible women and (2) the CY program subsidized a substantial portion of the cost for its beneficiaries. However, many eligible women were not able to avail the CY benefit despite delivering in a facility that participated in the program.
Difficulty to reach the poorest populations
There is extensive literature establishing the link between poverty and maternal death. A study from Gujarat State found that poverty is the most important determinant influencing utilization of maternal health services, regardless of social caste or place of residence [32]. This inequality emphasizes the importance of developing innovative strategies that remove financial barriers and enable the most vulnerable women to receive proper access to delivery care. A recent synthesis of literature on demand-side financing programs argued that one of the most significant shortfalls of these programs is inadequate targeting, i.e., the difficulty to reach the poorest and underserved populations [33].
Low uptake in the Chiranjeevi Yojana program
There is a two-step process to receive the CY benefit: (i) Women must choose to deliver in a facility that participates in the program and (ii) then they need to prove their eligibility. We found only a third of the women chose to deliver in a facility that participated in the CY program, which implies that only a third of the study sample had the opportunity to become beneficiaries. Secondly, only half of those eligible women who delivered in a participating facility received subsidized services. Therefore, only a portion of our study sample successfully became beneficiaries despite a high awareness of the program in this group.
Steps to improve uptake
In light of the fact that most public facilities do not provide EmOC, uptake in the CY program needs to be improved so poor women have access to necessary care in the case of an obstetric emergency. While we do not know the explicit reasons why women chose to deliver in a CY facility, as we did not specifically enquire, a much larger proportion of women who delivered in a CY facility had prior knowledge of the program than of those who delivered in non-accredited facilities. As highlighted in a recent review of maternal health voucher programs, community mobilization is one of the most important components of a successful program in terms of uptake and reaching the target population [19]. From our results, prior knowledge of the CY program was also a key determinant for receiving the benefit within a facility. Community-level actors like the ASHA could be better utilized to improve awareness and knowledge of the program by targeting these vulnerable groups.
Barriers to receiving the CY benefit within a participating facility
As demonstrated in our study, delivering in a participating facility does not guarantee that a woman will automatically receive the CY benefit. Lack of the requisite documentation/proof of eligibility was reported as a barrier to receiving the benefit by many eligible women. In a qualitative study, Ganguly et al. found that some doctors who participated in the program felt that women had little knowledge of the eligibility documentation needed to receive the benefit [27]. The role of, and interaction with, the community health worker becomes especially critical in preparing the documents necessary to establish a woman's eligibility for the program so that she can subsequently receive delivery services free of charge. Reducing the paperwork and documentation required for women to enter the program should be considered, as difficulties with paperwork have been reported to prevent eligible women from becoming beneficiaries.
Is the Chiranjeevi Yojana program providing cashless deliveries?
While the CY program significantly subsidized delivery costs and reduced the financial burden for vulnerable women in our study, only 36 of the 150 beneficiaries received a completely cashless delivery. Information asymmetry could be responsible for women not experiencing a cashless delivery. As in other health care settings in low-income countries, there is an extreme asymmetry of information between the health care provider and the patient [34].
Another probable explanation is the insecurity felt by some private health providers about receiving reimbursement from the government. Some providers reported mitigating the risk of non-payment by imposing a cash deposit when pregnant women registered for delivery; if the appropriate eligibility proof was supplied, the deposit was returned [27]. Constructive oversight, in the form of better monitoring by the state, can ensure that cashless deliveries are facilitated under the program.
Poor uptake of the program could also be related to women sharing their experiences of paying for delivery services despite the program's intended objective. It is important for the program to ensure that delivery services are free of cost at the point of care, as out-of-pocket charges could deter women from participating.
CY program reduces OOP expenditures for beneficiaries
A few studies have reported that childbirth expenditures, usually incurred in the private sector, are catastrophic for poor households [13, 35]. In our study, the CY program gave poor women the ability to choose where they delivered and to receive EmOC if needed while avoiding debilitating amounts of debt. Even though a large majority of CY beneficiaries reported incurring some OOP expenditure, we still found a significant reduction in costs for those beneficiaries. Non-beneficiary women who delivered in a private facility paid 6.5 times more for a vaginal birth and three times more for a cesarean section than CY beneficiaries. This finding is consistent with what Bhat et al. previously reported (i.e., the CY program was effective in reducing OOP childbirth expenditures for its beneficiaries) [21]. However, Mohanan et al. found little or no association between the Chiranjeevi Yojana program and the reduction of OOP costs for deliveries [23]. The contradictory results may be explained by the fact that they did not identify CY beneficiaries, by differences in study design [36], and by the timeframe over which the OOP expenses were collected.
Methodology considerations/limitations
This is the first study to estimate the proportion of CY beneficiaries among women who deliver in health facilities.
It has been reported in many Asian countries that families borrow money to pay for maternity-related costs and are thus forced to forgo essential items like food and education to repay the loans. These costs have a ripple effect on the family for years to come [37]. While this study has shown that CY beneficiaries have reduced OOP expenditures compared to non-beneficiaries, it is not known whether the reduction is large enough. Further research is needed to understand the magnitude of the reduction.
This study is facility based; therefore, our sample is restricted to women who reached a facility to deliver. While the proportion of home deliveries in Gujarat is low (10.7 %) [24], the majority of women who delivered at home would probably be eligible for the CY program.
Many studies highlight the limitations (e.g., recall bias and underreporting) associated with collecting health expenditure data [38–40]. Cost data was collected shortly after delivery and triangulated with other family members to minimize recall bias. A disaggregated cost collection design was used to improve accuracy and avoid underreporting of expenditures.
CY program beneficiaries experienced a substantially subsidized childbirth compared to other women who delivered in non-CY-accredited private facilities. However, despite the government's efforts at increasing access to delivery services for poor women in the private sector, uptake was low and very few women actually experienced a cashless delivery. While there is definitely a need to strengthen the provision of EmOC in the public sector, the CY program is a means by which the state can ensure its poor mothers have access to appropriate care at facilities that can provide EmOC. Measures need to be taken to improve uptake.
World Health Organization. Trends in maternal mortality: 1990 to 2013 executive summary. Geneva: 2014. Available: http://apps.who.int/iris/bitstream/10665/112697/1/WHO_RHR_14.13_eng.pdf?ua=1.
Registrar General of India. Special bulletin on maternal mortality: 2010-2012. New Delhi: Office of the Registrar General, India Ministry of Home Affairs, Government of India; 2013.
Government of Gujarat, Vital Statistics Division. Health statistics, Gujarat: 2010-2011. Gandhinagar: 2012. Available: http://gujhealth.gov.in/images/pdf/HEALTH_STATISTICS_2010-11.pdf.
Government of India. Provisional population totals: Gujarat. Census; 2011. Available: http://www.censusindia.gov.in/2011-prov-results/prov_data_products_gujarat.html.
Mavalankar D. State of Maternal Health, Chapter 2, in State of India's Newborns, National Neonatology Forum & Save the Children US, New Delhi/Washington DC: 2004. pp. 27–42.
McCarthy J, Maine D. A framework for analyzing the determinants of maternal mortality. Stud Fam Plann. 1992;23:23–33. doi:10.2307/1966825.
Ronsmans C, Graham WJ. Maternal mortality: who, when, where, and why. Lancet. 2006;368:1189–200. doi:10.1016/S0140-6736(06)69380-X.
Campbell OMR, Graham WJ. Strategies for reducing maternal mortality: getting on with what works. Lancet. 2006;368:1284–99. doi:10.1016/S0140-6736(06)69381-1.
Paxton A, Bailey P, Lobis S, Fry D. Global patterns in availability of emergency obstetric care. Int J Gynecol Obstet. 2006;93:300–7. doi:10.1016/j.ijgo.2006.01.030.
Government of India. Rural health statistics bulletin 2006. New Delhi: 2006;20, p. 43. Available: http://www.cbhidghs.nic.in/hia2005/content.asp.
Bhat R, Verma BB, Reuben E. Hospital efficiency: an empirical analysis of district hospitals and grant-in-aid hospitals in Gujarat. J Health Manag. 2001;3:167–97. Available at: http://jhm.sagepub.com/cgi/doi/10.1177/097206340100300202. Accessed 22 January 2014.
Graham WJ, Fitzmaurice AE, Bell JS, Cairns JA. The familial technique for linking maternal death with poverty. Lancet. 2004;363:23–7. doi:10.1016/S0140-6736(03)15165-3.
Bonu S, Bhushan I, Rani M, Anderson I. Incidence and correlates of "catastrophic" maternal health care expenditure in India. Health Policy Plan. 2009;24:445–56. Available at: http://www.ncbi.nlm.nih.gov/pubmed/19687135. Accessed 25 June 2011.
Borghi J, Storeng KT, Filippi V. Overview of the costs of obstetric care and the economic and social consequences for households. Stud Heal Serv Organ Policy. 2008;24:23–46. Available at: http://www.itg.be/itg/Uploads/Volksgezondheid/shsop24/03_Overview of the costs of obstetric care and the economic and social consequences for households.pdf.
International Institute for Population Sciences (IIPS) and Macro International. National Family Health Survey (NFHS-3), 2005-06: India: Volume I. Mumbai; 2007. Available: http://pdf.usaid.gov/pdf_docs/PNADK385.pdf.
Vora K, Mavalankar D, Ramani K, Upadhyaya M, Sharma B, Iyengar S, et al. Maternal health situation in India: a case study. J Health Popul Nutr. 2009;27:184–201.
International Institute for Population Sciences (IIPS). District Level Household and Facility Survey (DLHS-3), 2007-08: India. Mumbai: 2010. Available: http://www.rchiips.org/pdf/india_report_dlhs-3.pdf.
Government of Gujarat. Socio-economic review 2010-2011. Gujarat State. Gandhinagar: Directorate of Economics and Statistics, Government of Gujarat; 2011.
Bellows BW, Conlon CM, Higgs ES, Townsend JW, Nahed MG, Cavanaugh K, et al. A taxonomy and results from a comprehensive review of 28 maternal health voucher programmes. J Health Popul Nutr. 2013;31:106–28.
Government of Gujarat. Vibrant Gujarat Summit Indian Institute of Public Health Gandhinagar. 2015.
Bhat R, Mavalankar DV, Singh PV, Singh N. Maternal healthcare financing: Gujarat's Chiranjeevi scheme and its beneficiaries. J Health Popul Nutr. 2009;27:249–58.
Mavalankar D, Singh A, Patel SR, Desai A, Singh PV. Saving mothers and newborns through an innovative partnership with private sector obstetricians: Chiranjeevi scheme of Gujarat, India. Int J Gynecol Obstet. 2009;107:271–6. Available at: http://dx.doi.org/10.1016/j.ijgo.2009.09.008.
Mohanan M, Bauhoff S, Forgia L, Singer K, Miller G. Effect of Chiranjeevi Yojana on institutional deliveries and neonatal and maternal outcomes in Gujarat, India: a difference-in-differences analysis. Bull World Health Organ. 2013;1–13. doi:10.2471/BLT.13.124644.
Vora K, Ryan K, Santacatterina M, De Costa A. The state-led large scale public private partnership "Chiranjeevi program" to increase access to institutional delivery among poor women in Gujarat, India: how has it done? What can we learn? PLoS One. 2014. doi:10.1371/journal.pone.0095704.
Acharya A, McNamee P. Assessing Gujarat's "Chiranjeevi" scheme. Econ Polit Wkly. 2009;48. Available at: http://www.indiaenvironmentportal.org.in/files/Chiranjeevi.pdf.
Singh A, Mavalankar D V, Bhat R, Desai A, Patel SR, Singh V, et al. Providing skilled birth attendants and emergency obstetric care to the poor through partnership with private sector obstetricians in Gujarat, India. Geneva: 2009. p. 960-964. doi:10.2471/BLT.08.060228.
Ganguly P, Jehan K, de Costa A, Mavalankar D, Smith H. Considerations of private sector obstetricians on participation in the state led "Chiranjeevi Yojana" scheme to promote institutional delivery in Gujarat, India: a qualitative study. BMC Pregnancy Childbirth. 2014;14:352. Available at: http://bmcpregnancychildbirth.biomedcentral.com/articles/10.1186/1471-2393-14-352. Accessed 6 November 2014.
Registrar General of India. Sample registration system. New Delhi: SRS Bulletin; 2011.
Government of India. Below poverty line (BPL) 2002 census. New Delhi: 2002. Available: http://bpl.nic.in/.
Government of India Ministry of Law and Justice. Constitution of India. New Delhi: 2007. Available: http://lawmin.nic.in/coi/coiason29july08.pdf.
Singh A, Mavalankar DV, Bhat R, Desai A, Patel SR, Singh V, et al. Providing skilled birth attendants and emergency obstetric care to the poor through partnership with private sector obstetricians in Gujarat, India. World Health: 2009, 960–964. doi:10.2471/BLT.08.060228.
Saxena D, Vangani R, Mavalankar DV, Thomsen S. Inequity in maternal health care service utilization in Gujarat: analyses of district-level health survey data. Glob Health Action. 2013;6:1–9. Available at: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3591509/?tool=pmcentrez&report=abstract.
Morgan L, Stanton ME, Higgs ES, Balster RL, Bellows BW, Brandes N, et al. Financial incentives and maternal health: where do we go from here? J Health Popul Nutr. 2013;31:8–22. Available: http://www.ncbi.nlm.nih.gov/pubmed/24992800.
Das J, Hammer J. Quality of primary care in low-income countries. Annu Rev Econom. 2014. Available: http://wws-roxen.princeton.edu/chwpapers/papers/Das_Hammer_AnnualReviewEconomics_FinalRevision.pdf.
Skordis-Worrall J, Pace N, Bapat U, Das S, More NS, Joshi W, et al. Maternal and neonatal health expenditure in Mumbai slums (India): a cross sectional study. BMC Public Health. 2011;11:150. doi:10.1186/1471-2458-11-150.
De Costa A, Vora K, Schneider E, Mavalankar D. Gujarat's Chiranjeevi Yojana—a difficult assessment in retrospect. Bull World Health Organ. 2015;93:436A–436B. doi:10.2471/BLT.14.137745.
Rannan-Eliya RP, Kasthuri G, Alwis S De. Impact of maternal and child health private expenditure on poverty and inequity: maternal and child health expenditure in Bangladesh. Technical Report C. Mandaluyong City, Philippines: 2012. Available: http://www.adb.org/sites/default/files/publication/30159/impact-mnch-private-expenditure-poverty-inequity-bangladesh-tr-c.pdf.
Lu C, Chin B, Li G, Murray CJ. Limitations of methods for measuring out-of-pocket and catastrophic private health expenditures. Bull World Health Organ. 2009;87:238–44. doi:10.2471/BLT.08.054379. Accessed 4 November 2014.
Xu K, Ravndal F, Evans DB, Carrin G. Assessing the reliability of household expenditure data: results of the World Health Survey. Health Policy. 2009;91:297–305. Available: http://www.ncbi.nlm.nih.gov/pubmed/19217184. Accessed 1 November 2014.
Winter J. Response bias in survey-based measures of household consumption. Econ Bull. 2004;3:1–12.
The authors would like to thank all of the study participants. We would like to acknowledge the Department of Health and Family Welfare, Government of Gujarat. The research leading to these results has received funding from the European Community's Seventh Framework Program under grant agreement no [261304]. We would also like to acknowledge the MATIND research and field staff at the Indian Institute of Public Health—Gandhinagar for their tireless efforts during data collection.
Kristi Sidney and Ayesha De Costa: Public Health Sciences, Karolinska Institutet, Widerströmska, Tomtebodavägen 18A, plan 4, SE-171 77, Stockholm, Sweden
Veena Iyer, Kranti Vora and Dileep Mavalankar: Indian Institute of Public Health Gandhinagar, Public Health Foundation of India, Ahmedabad, Gujarat, India
Correspondence to Kristi Sidney.
KS, KV, VI, DM, and AD conceived and designed the experiments. VI and KS performed the experiments. KS analyzed the data. KS and AD wrote the paper. All authors have approved this manuscript.
Remuneration package for Chiranjeevi Yojana program at the time of the study. (DOCX 14 kb)
Characteristics of CY Beneficiaries and CY Non-Beneficiaries who delivered in participating CY facilities and logistic multivariable regression for receiving the CY benefit (n=286). Column % presented. (DOC 64 kb)
Keywords: Demand-side financing; Chiranjeevi Yojana
309 MYC Inhibition Overcomes IMiD Resistance in Heterogeneous Multiple Myeloma Populations
JCTS 2022 Abstract Collection
Lorraine Davis, Zachary J. Walker, Denis Ohlstrom, Brett M. Stevens, Peter A. Forsberg, Tomer M. Mark, Craig T. Jordan, Daniel W. Sherbenou
Journal: Journal of Clinical and Translational Science / Volume 6 / Issue s1 / April 2022
Published online by Cambridge University Press: 19 April 2022, p. 54
OBJECTIVES/GOALS: Immunomodulatory drugs (IMiDs) are critical to multiple myeloma (MM) disease control. IMiDs act by inducing Cereblon-dependent degradation of IKZF1 and IKZF3, which leads to IRF4 and MYC downregulation (collectively termed the "Ikaros axis"). We therefore hypothesized that IMiD treatment fails to downregulate the Ikaros axis in IMiD resistant MM. METHODS/STUDY POPULATION: To measure IMiD-induced Ikaros axis downregulation, we designed an intracellular flow cytometry assay that measured relative protein levels of IKZF1, IKZF3, IRF4 and MYC in MM cells following ex vivo treatment with the IMiD Pomalidomide (Pom). We established this assay using Pom-sensitive parental and dose-escalated Pom-resistant MM cell lines before assessing Ikaros axis downregulation in CD38+CD138+ MM cells in patient samples (bone marrow aspirates). To assess the Ikaros axis in the context of MM intratumoral heterogeneity, we used a 35-marker mass cytometry panel to simultaneously characterize MM subpopulations in patient samples. Lastly, we determined ex vivo drug sensitivity in patient samples via flow cytometry. RESULTS/ANTICIPATED RESULTS: Our hypothesis was supported in MM cell lines, as resistant lines showed no IMiD-induced decrease in any Ikaros axis proteins. However, when assessed in patient samples, Pom treatment caused a significant decrease in IKZF1, IKZF3 and IRF4 regardless of IMiD sensitivity. Mass cytometry in patient samples revealed that individual Ikaros axis proteins were differentially expressed between subpopulations. When correlating this with ex vivo Pom sensitivity of MM subpopulations, we observed that low IKZF1 and IKZF3 corresponded to Pom resistance. Interestingly, most of these resistant populations still expressed MYC. We therefore assessed whether IMiD resistant MM was MYC dependent by treating with MYCi975. In 88% (7/8) of patient samples tested, IMiD resistant MM cells were sensitive to MYC inhibition. DISCUSSION/SIGNIFICANCE: While our findings did not support our initial hypothesis, our data suggest a mechanism where MYC expression becomes Ikaros axis independent to drive IMiD resistance, and resistant MM is still dependent on MYC. This suggests targeting MYC directly or indirectly via a mechanism to be determined may be an effective strategy to eradicate IMiD resistant MM.
MARINE ORGANIC CARBON AND RADIOCARBON—PRESENT AND FUTURE CHALLENGES
Ellen R M Druffel, Steven R Beaupré, Hendrik Grotheer, Christian B Lewis, Ann P McNichol, Gesine Mollenhauer, Brett D Walker
Journal: Radiocarbon / Volume 64 / Issue 4 / August 2022
Published online by Cambridge University Press: 25 January 2022, pp. 705-721
We discuss present and developing techniques for studying radiocarbon in marine organic carbon (C). Bulk DOC (dissolved organic C) Δ14C measurements reveal information about the cycling time and sources of DOC in the ocean, yet they are time consuming and need to be streamlined. To further elucidate the cycling of DOC, various fractions have been separated from bulk DOC, through solid phase extraction of DOC, and ultrafiltration of high and low molecular weight DOC. Research using 14C of DOC and particulate organic C separated into organic fractions revealed that the acid insoluble fraction is similar in 14C signature to that of the lipid fraction. Plans for utilizing this methodology are described. Studies using compound specific radiocarbon analyses to study the origin of biomarkers in the marine environment are reviewed and plans for the future are outlined. Development of ramped pyrolysis oxidation methods are discussed and scientific questions addressed. A modified elemental analysis (EA) combustion reactor is described that allows high particulate organic C sample throughput by direct coupling with the MIniCArbonDAtingSystem.
Edited by Audrey Walker, Albert Einstein College of Medicine, New York, Steven Schlozman, Jonathan Alpert, Albert Einstein College of Medicine, New York
Book: Introduction to Psychiatry
1 - Introduction
By Audrey M. Walker, Steven C. Schlozman, Jonathan E. Alpert
Print publication: 12 August 2021, pp 1-8
The first edition of Introduction to Psychiatry is a textbook designed to reach medical students, house staff, primary care clinicians, and early-career mental health practitioners. It is the editors' hope that this text will enable its readers to understand the neuroscientific basis of psychiatry, best practices in the psychiatric assessment and treatment of the patient, the current understanding of core psychiatric diagnoses, and the important underlying issues of population health, public policy, and workforce recruitment and training that must be tackled to bring these advances to all.
Why create a textbook of psychiatry specifically for clinicians not trained for the mental health field? To answer this question, one must understand the troubling challenges facing the mental health workforce, the changing face of mental health care delivery, the enormous comorbidity between psychiatric illnesses and other health conditions, and the impact on non-psychiatric medical illnesses when a comorbid psychiatric disorder is present.
Introduction to Psychiatry
Preclinical Foundations and Clinical Essentials
Edited by Audrey Walker, Steven Schlozman, Jonathan Alpert
Print publication: 12 August 2021
The current global crisis in mental health has seen psychiatry assume an increasingly integral role in healthcare. This comprehensive and accessible textbook provides an evidence-based foundation in psychiatry for medical students and serves as an excellent refresher for all mental health professionals. Written by medical school faculty and experts in the field, with comprehensive coverage from neurobiology to population health, this essential textbook is an invaluable guide to the evaluation, treatment and current understanding of the major disorders in psychiatry. The book introduces the basics of clinical assessment and all major modalities of evidence based treatment, along with topics often not covered adequately in textbooks such as gender and sexuality, and global mental health. Chapters are complemented by easy to navigate tables, self-assessment questions, and a short bibliography of recommended reading. An essential resource for medical students, trainees, and other medical professionals seeking a clear and comprehensive introduction to psychiatry.
Making the mundane remarkable: an ethnography of the 'dignity encounter' in community district nursing
Emma Stevens, Elizabeth Price, Elizabeth Walker
Journal: Ageing & Society , First View
Published online by Cambridge University Press: 08 July 2021, pp. 1-23
The concept of dignity is core to community district nursing practice, yet it is profoundly complex with multiple meanings and interpretations. Dignity does not exist absolutely, but, rather, becomes socially (de)constructed through and within social interactions between nurses and older adult patients in relational aspects of care. It is a concept, however, which has, to date, received little attention in the context of the community nursing care of older adults. Previous research into dignity in health care has often focused on care within institutional environments, very little, however, explores the variety of ways in which dignity is operationalised in community settings where district nursing care is conducted 'behind closed doors', largely free from the external gaze. This means dignity (or the lack of it) may go unobserved in community settings. Drawing on observational and interview data, this paper highlights the significance of dignity for older adults receiving nursing care in their own homes. We will demonstrate, in particular, how dignity manifests within the relational aspects of district nursing care delivery and how tasks involving bodywork can be critical to the ways in which dignity is both promoted and undermined. We will further highlight how micro-articulations in caring relationships fundamentally shape the 'dignity encounter' through a consideration of the routine and, arguably, mundane aspects of community district nursing care in the home.
Remnant radio galaxies discovered in a multi-frequency survey
Murchison Widefield Array
GAMA Legacy ATCA Southern Survey
Australian SKA Pathfinder
Benjamin Quici, Natasha Hurley-Walker, Nicholas Seymour, Ross J. Turner, Stanislav S. Shabala, Minh Huynh, H. Andernach, Anna D. Kapińska, Jordan D. Collier, Melanie Johnston-Hollitt, Sarah V. White, Isabella Prandoni, Timothy J. Galvin, Thomas Franzen, C. H. Ishwara-Chandra, Sabine Bellstedt, Steven J. Tingay, Bryan M. Gaensler, Andrew O'Brien, Johnathan Rogers, Kate Chow, Simon Driver, Aaron Robotham
Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021
Published online by Cambridge University Press: 09 February 2021, e008
The remnant phase of a radio galaxy begins when the jets launched from an active galactic nucleus are switched off. To study the fraction of radio galaxies in a remnant phase, we take advantage of an $8.31$ deg$^2$ subregion of the GAMA 23 field which comprises surveys covering the frequency range 0.1–9 GHz. We present a sample of 104 radio galaxies compiled from observations conducted by the Murchison Widefield Array (216 MHz), the Australia Square Kilometer Array Pathfinder (887 MHz), and the Australia Telescope Compact Array (5.5 GHz). We adopt an 'absent radio core' criterion to identify 10 radio galaxies showing no evidence for an active nucleus. We classify these as new candidate remnant radio galaxies. Seven of these objects still display compact emitting regions within the lobes at 5.5 GHz; at this frequency the emission is short-lived, implying a recent jet switch off. On the other hand, only three show evidence of aged lobe plasma by the presence of an ultra-steep spectrum ($\alpha<-1.2$) and a diffuse, low surface brightness radio morphology. The predominant fraction of young remnants is consistent with a rapid fading during the remnant phase. Within our sample of radio galaxies, our observations constrain the remnant fraction to $4\%\lesssim f_{\mathrm{rem}} \lesssim 10\%$; the lower limit comes from the limiting case in which all remnant candidates with hotspots are simply active radio galaxies with faint, undetected radio cores. Finally, we model the synchrotron spectrum arising from a hotspot to show that they can persist for 5–10 Myr at 5.5 GHz after the jets switch off; radio emission arising from such hotspots can therefore be expected in an appreciable fraction of genuine remnants.
2 - On Discourse-Intensive Approaches to Environmental Decision-Making: Applying Social Theory to Practice
from Part I - Methods
By Steven E. Daniels, Gregg B. Walker
Edited by Katharine Legun, Julie C. Keller, University of Rhode Island, Michael Carolan, Colorado State University, Michael M. Bell, University of Wisconsin, Madison
Book: The Cambridge Handbook of Environmental Sociology
Print publication: 03 December 2020, pp 29-46
The rise of multi-party processes in which people with quite different ties to a region, natural resource-related industry, or environmental issue work collaboratively to hammer out mutually acceptable agreements is arguably one of the biggest shifts in environmental management over the past twenty-five years. This chapter engages in some sensemaking around this diverse and evolving phenomenon in two ways. First, an approach to designing collaborative natural resource-related discourse with a particularly strong theoretical foundation (Collaborative Learning) is presented to illustrate how theory is manifest in practice. Second a recent best practices/common features list is examined through the perspectives of four social science theorists: Max Weber, Pierre Bourdieu, Niklas Luhmann, and Muzafer Sherif. The practical recommendations that emerge from this list is largely consistent with the larger social and communicative dynamics articulated by these theorists.
Innovation in Urban Transit at the Start of the Twentieth Century: A Case Study of Metropolitan Street Railway's Stealth Hostile Takeover of Third Avenue Railroad
TIMOTHY A. KRUSE, STEVEN KYLE TODD, MARK D. WALKER
Journal: Enterprise & Society / Volume 23 / Issue 2 / June 2022
Published online by Cambridge University Press: 28 October 2020, pp. 357-407
Print publication: June 2022
In 1900, a syndicate of investors used open market purchases and manipulative trading strategies to exploit an ongoing financial crisis at the Third Avenue Railroad Company and stealthily gain control of the company. The acquisition occurred during the first great merger wave in U.S. history and represented the street railway industry's response to a new technology, namely electrification. The lax regulatory environment of the period allowed operators and insiders to profit handsomely and may have benefited consumers, but possibly harmed some minority shareholders. Our case study illuminates an unusual acquisition, when capital markets were less transparent.
Calibration database for the Murchison Widefield Array All-Sky Virtual Observatory
Marcin Sokolowski, Christopher H. Jordan, Gregory Sleap, Andrew Williams, Randall Bruce Wayth, Mia Walker, David Pallot, Andre Offringa, Natasha Hurley-Walker, Thomas M. O. Franzen, Melanie Johnston-Hollitt, David L. Kaplan, David Kenney, Steven J. Tingay
Published online by Cambridge University Press: 11 June 2020, e021
We present a calibration component for the Murchison Widefield Array All-Sky Virtual Observatory (MWA ASVO) utilising a newly developed PostgreSQL database of calibration solutions. Since its inauguration in 2013, the MWA has recorded over 34 petabytes of data archived at the Pawsey Supercomputing Centre. According to the MWA Data Access policy, data become publicly available 18 months after collection. Therefore, most of the archival data are now available to the public. Access to public data was provided in 2017 via the MWA ASVO interface, which allowed researchers worldwide to download MWA uncalibrated data in standard radio astronomy data formats (CASA measurement sets or UV FITS files). The addition of the MWA ASVO calibration feature opens a new, powerful avenue for researchers without a detailed knowledge of the MWA telescope and data processing to download calibrated visibility data and create images using standard radio astronomy software packages. In order to populate the database with calibration solutions from the last 6 yr we developed fully automated pipelines. A near-real-time pipeline has been used to process new calibration observations as soon as they are collected and upload calibration solutions to the database, which enables monitoring of the interferometric performance of the telescope. Based on this database, we present an analysis of the stability of the MWA calibration solutions over long time intervals.
Effect of cerebral white matter changes on clinical response to cholinesterase inhibitors in dementia
M.E. Devine, J.A. Saez Fonseca, R.W. Walker, T. Sikdar, T. Stevens, Z. Walker
Journal: European Psychiatry / Volume 22 / Issue S1 / March 2007
Published online by Cambridge University Press: 16 April 2020, p. S305
Cerebral white matter changes (WMC) represent cerebrovascular disease (CVD) and are common in dementia. Cholinesterase inhibitors (ChEIs) are effective in Alzheimer's Disease (AD) with or without CVD, and in Dementia with Lewy Bodies/Parkinson's Disease Dementia (DLB/PDD). Predictors of treatment response are controversial.
To investigate the effect of WMC severity on response to ChEIs in dementia.
CT or MRI brain scans were rated for WMC severity in 243 patients taking ChEIs for dementia. Raters were blind to patients' clinical risk factors, dementia subtype and course of illness. Effects of WMC severity on rates of decline in cognition, function and behaviour were analysed for 140 patients treated for nine months or longer. Analysis was performed for this group as a whole and within diagnostic subgroups AD and DLB/PDD. The main outcome measure was rate of change in Mini Mental State Examination (MMSE) score. Secondary measures were rates of change in scores on the Cambridge Cognitive Examination (CAMCOG), Instrumental Activities of Daily Living (IADL) and Clifton Assessment Procedures for the Elderly – Behaviour Rating Scale (CAPE-BRS).
There was no significant correlation between severity of WMC and any specified outcome variable for the cohort as a whole or for patients with AD. In patients with DLB/PDD, higher WMC scores were associated with more rapid cognitive decline.
Increased WMC severity does not predict response to ChEIs in AD, but may weaken response to ChEIs in patients with DLB/PDD.
Outcomes of an electronic medical record (EMR)–driven intensive care unit (ICU)-antimicrobial stewardship (AMS) ward round: Assessing the "Five Moments of Antimicrobial Prescribing"
Misha Devchand, Andrew J. Stewardson, Karen F. Urbancic, Sharmila Khumra, Andrew A. Mahony, Steven Walker, Kent Garrett, M. Lindsay Grayson, Jason A. Trubiano
Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 10 / October 2019
Published online by Cambridge University Press: 13 August 2019, pp. 1170-1175
The primary objective of this study was to examine the impact of an electronic medical record (EMR)–driven intensive care unit (ICU) antimicrobial stewardship (AMS) service on clinician compliance with face-to-face AMS recommendations. AMS recommendations were defined by an internally developed "5 Moments of Antimicrobial Prescribing" metric: (1) escalation, (2) de-escalation, (3) discontinuation, (4) switch, and (5) optimization. The secondary objectives included measuring the impact of this service on (1) antibiotic appropriateness, and (2) use of high-priority target antimicrobials.
A prospective review was undertaken of the implementation and compliance with a new ICU-AMS service that utilized EMR data coupled with face-to-face recommendations. Additional patient data were collected when an AMS recommendation was made. The impact of the ICU-AMS round on antimicrobial appropriateness was evaluated using point-prevalence survey data.
For the 202 patients, 412 recommendations were made in accordance with the "5 Moments" metric. The most common recommendation made by the ICU-AMS team was moment 3 (discontinuation), which comprised 173 of 412 recommendations (42.0%), with an acceptance rate of 83.8% (145 of 173). Data collected for point-prevalence surveys showed an increase in prescribing appropriateness from 21 of 45 (46.7%) preintervention (October 2016) to 30 of 39 (76.9%) during the study period (September 2017).
The integration of EMR with an ICU-AMS program allowed us to implement a new AMS service, which was associated with high clinician compliance with recommendations and improved antibiotic appropriateness. Our "5 Moments of Antimicrobial Prescribing" metric provides a framework for measuring AMS recommendation compliance.
UV Photochemical Oxidation and Extraction of Marine Dissolved Organic Carbon at UC Irvine: Status, Surprises, and Methodological Recommendations
Brett D Walker, Steven R Beaupré, Sheila Griffin, Ellen R M Druffel
Journal: Radiocarbon / Volume 61 / Issue 5 / October 2019
Published online by Cambridge University Press: 15 April 2019, pp. 1603-1617
The first ultraviolet photochemical oxidation (UVox) extraction method for marine dissolved organic carbon (DOC) as CO2 gas was established by Armstrong and co-workers in 1966. Subsequent refinement of the UVox technique has co-evolved with the need for high-precision isotopic (Δ14C, δ13C) analysis and smaller sample size requirements for accelerator mass spectrometry radiocarbon (AMS 14C) measurements. The UVox line at UC Irvine was established in 2004 and the system reaction kinetics and efficiency for isolating seawater DOC rigorously tested for quantitative isolation of ∼1 mg C for AMS 14C measurements. Since then, improvements have been made to sampling, storage, and UVox methods to increase overall efficiency. We discuss our progress, and key UVox system parameters for optimizing precision, accuracy, and efficiency, including (1) ocean to reactor: filtration, storage and preparation of DOC samples, (2) cryogenic trap design, efficiency and quantification of CO2 break through, and (3) use of isotopic standards, blanks and small sample graphitization techniques for the correction of DOC concentrations and Fm values with propagated uncertainties. New DOC UVox systems are in use at many institutions. However, rigorous assessment of quantitative UVox DOC yields and blank contributions, DOC concentrations and carbon isotopic values need to be made. We highlight the need for a community-wide inter-comparison study.
The Phase II Murchison Widefield Array: Design overview
Randall B. Wayth, Steven J. Tingay, Cathryn M. Trott, David Emrich, Melanie Johnston-Hollitt, Ben McKinley, B. M. Gaensler, A. P. Beardsley, T. Booler, B. Crosse, T. M. O. Franzen, L. Horsley, D. L. Kaplan, D. Kenney, M. F. Morales, D. Pallot, G. Sleap, K. Steele, M. Walker, A. Williams, C. Wu, Iver. H. Cairns, M. D. Filipovic, S. Johnston, T. Murphy, P. Quinn, L. Staveley-Smith, R. Webster, J. S. B. Wyithe
Published online by Cambridge University Press: 23 November 2018, e033
We describe the motivation and design details of the 'Phase II' upgrade of the Murchison Widefield Array radio telescope. The expansion doubles to 256 the number of antenna tiles deployed in the array. The new antenna tiles enhance the capabilities of the Murchison Widefield Array in several key science areas. Seventy-two of the new tiles are deployed in a regular configuration near the existing array core. These new tiles enhance the surface brightness sensitivity of the array and will improve the ability of the Murchison Widefield Array to estimate the slope of the Epoch of Reionisation power spectrum by a factor of ∼3.5. The remaining 56 tiles are deployed on long baselines, doubling the maximum baseline of the array and improving the array u, v coverage. The improved imaging capabilities will provide an order of magnitude improvement in the noise floor of Murchison Widefield Array continuum images. The upgrade retains all of the features that have underpinned the Murchison Widefield Array's success (large field of view, snapshot image quality, and pointing agility) and boosts the scientific potential with enhanced imaging capabilities and by enabling new calibration strategies.
The Engineering Development Array: A Low Frequency Radio Telescope Utilising SKA Precursor Technology
Randall Wayth, Marcin Sokolowski, Tom Booler, Brian Crosse, David Emrich, Robert Grootjans, Peter J. Hall, Luke Horsley, Budi Juswardy, David Kenney, Kim Steele, Adrian Sutinjo, Steven J. Tingay, Daniel Ung, Mia Walker, Andrew Williams, A. Beardsley, T. M. O. Franzen, M. Johnston-Hollitt, D. L. Kaplan, M. F. Morales, D. Pallot, C. M. Trott, C. Wu
Published online by Cambridge University Press: 17 August 2017, e034
We describe the design and performance of the Engineering Development Array, which is a low-frequency radio telescope comprising 256 dual-polarisation dipole antennas working as a phased array. The Engineering Development Array was conceived of, developed, and deployed in just 18 months via re-use of Square Kilometre Array precursor technology and expertise, specifically from the Murchison Widefield Array radio telescope. Using drift scans and a model for the sky brightness temperature at low frequencies, we have derived the Engineering Development Array's receiver temperature as a function of frequency. The Engineering Development Array is shown to be sky-noise limited over most of the frequency range measured between 60 and 240 MHz. By using the Engineering Development Array in interferometric mode with the Murchison Widefield Array, we used calibrated visibilities to measure the absolute sensitivity of the array. The measured array sensitivity matches very well with a model based on the array layout and measured receiver temperature. The results demonstrate the practicality and feasibility of using Murchison Widefield Array-style precursor technology for Square Kilometre Array-scale stations. The modular architecture of the Engineering Development Array allows upgrades to the array to be rolled out in a staged approach. Future improvements to the Engineering Development Array include replacing the second stage beamformer with a fully digital system, and to transition to using RF-over-fibre for the signal output from first stage beamformers.
Coenobichnus currani (new ichnogenus and ichnospecies): Fossil trackway of a land hermit crab, early Holocene, San Salvador, Bahamas
Sally E. Walker, Steven M. Holland, Lisa Gardiner
Journal: Journal of Paleontology / Volume 77 / Issue 3 / May 2003
Published online by Cambridge University Press: 20 May 2016, pp. 576-582
Land hermit crabs (Coenobitidae) are widespread and abundant in Recent tropical and subtropical coastal environments, yet little is known about their fossil record. A walking trace, attributed to a land hermit crab, is described herein as Coenobichnus currani (new ichnogenus and ichnospecies). This trace fossil occurs in an early Holocene eolianite deposit on the island of San Salvador, Bahamas. The fossil trackway retains the distinctive right and left asymmetry and interior drag trace that are diagnostic of modern land hermit crab walking traces. The overall size, dimensions and shape of the fossil trackway are similar to those produced by the modern land hermit crab, Coenobita clypeatus, which occurs in the tropical western Atlantic region. The trackway was compared to other arthropod traces, but it was found to be distinct among the arthropod traces described from dune or other environments. The new ichnogenus Coenobichnus is proposed to accommodate the asymmetry of the trackway demarcated by left and right tracks. The new ichnospecies Coenobichnus currani is proposed to accommodate the form of Coenobichnus that has a shell drag trace.
November 2019, 13(4): 759-778. doi: 10.3934/amc.2019044
Identity-based key aggregate cryptosystem from multilinear maps
Sikhar Patranabis and Debdeep Mukhopadhyay
Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, West Bengal 721302, India
Received: October 2018. Revised: March 2019. Published: June 2019.
A key-aggregate cryptosystem (KAC) is the dual of the well-known notion of broadcast encryption (BE). In KAC, each plaintext message is encrypted with respect to some identity, and a single aggregate key can be generated for any arbitrary subset $ \mathcal{S} $ of identities, such that any ciphertext designated for any identity in $ \mathcal{S} $ can be decrypted using this aggregate key. A KAC scheme is said to be efficient if all public parameters, ciphertexts and aggregate keys have polynomial overhead, and can be generated using poly-time algorithms.
A KAC scheme is said to be identity-based if it remains efficient even when the number of unique identities supported by the system is exponential in the security parameter $ \lambda $. Unfortunately, existing KAC constructions do not satisfy this property. In particular, adapting these constructions to the identity-based setting leads to public parameters with exponential overhead.
In this paper, we propose new identity-based KAC constructions using multilinear maps that are secure in the generic multilinear map model, and are fully collusion resistant against any number of colluding parties. Our first construction is based on asymmetric multilinear maps, with a poly-logarithmic overhead for the public parameters, and a constant overhead for the ciphertexts and aggregate keys. Our second construction is based on the more generalized symmetric multilinear maps, and offers tighter security bounds in the generic multilinear map model. This construction has a poly-logarithmic overhead for the public parameters and the ciphertexts, while the overhead for the aggregate keys is still constant.
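To make the KAC syntax concrete, here is a minimal, illustrative sketch of a key-aggregate interface in Python. It is emphatically not the construction from this paper: the hash-derived masks merely stand in for multilinear-map group elements, the toy aggregate key grows with the subset instead of being constant-size, encryption secretly uses the master key rather than public parameters, and all names are invented for the example.

```python
import hashlib
from dataclasses import dataclass
from typing import Dict, Iterable, Optional

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def _mask(seed: bytes, n: int) -> bytes:
    # Stretch a 32-byte seed to n bytes (toy stand-in for a group element).
    return (seed * (n // len(seed) + 1))[:n]

@dataclass
class Ciphertext:
    identity: str    # the identity this message is encrypted to
    payload: bytes   # message XORed with an identity-specific mask

class ToyKAC:
    def __init__(self, master_secret: bytes):
        self.msk = master_secret                      # held by the trusted authority

    def _seed(self, identity: str) -> bytes:
        return hashlib.sha256(self.msk + identity.encode()).digest()

    def encrypt(self, identity: str, message: bytes) -> Ciphertext:
        # A real KAC encrypts with public parameters only; the toy cheats and uses msk.
        return Ciphertext(identity, _xor(_mask(self._seed(identity), len(message)), message))

    def aggregate_key(self, subset: Iterable[str]) -> Dict[str, bytes]:
        # Constant-size in a real KAC; here simply one seed per identity in the subset.
        return {i: self._seed(i) for i in subset}

def decrypt(agg_key: Dict[str, bytes], ct: Ciphertext) -> Optional[bytes]:
    seed = agg_key.get(ct.identity)
    if seed is None:
        return None                                   # key does not cover this identity
    return _xor(_mask(seed, len(ct.payload)), ct.payload)

kac = ToyKAC(b"master-secret")
ct = kac.encrypt("alice@example.com", b"hello")
key_S = kac.aggregate_key({"alice@example.com", "bob@example.com"})
assert decrypt(key_S, ct) == b"hello"
```

The point of the identity-based setting targeted in the paper is that public parameters, ciphertexts, and aggregate keys stay polynomial (poly-logarithmic or constant) even though the identity space is exponential in $ \lambda $; the toy above ignores those size constraints and only illustrates the data flow between encryption, key aggregation, and decryption.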
Keywords: Key-aggregate, identity-based, multilinear maps, collusion-resistant, standard model, generic group model.
Mathematics Subject Classification: Primary: 11T71, 68P25, 94A60; Secondary: 68R01.
Citation: Sikhar Patranabis, Debdeep Mukhopadhyay. Identity-based key aggregate cryptosystem from multilinear maps. Advances in Mathematics of Communications, 2019, 13 (4) : 759-778. doi: 10.3934/amc.2019044
Table 1. Theorem 1: Upper Bounds on Contributions to Length of $ L $

Query Stage | Maximum Contribution to $ |L| $
SetUp | $ m+2 $
Oracle Query Phase | $ Q_E+Q_M+Q_P $
Aggregate Key Query Phase (1 and 2) | $ Q_K $
Challenge | $ 5 $
Total | $ Q_E+Q_M+Q_P+Q_K+m+7 $

SetUp | $ 2m+1 $
Aggregate Key Query Phase (1 and 2) | $ 2Q_K $
Challenge | $ m+5 $
Total | $ Q_E+Q_M+Q_P+2Q_K+3m+6 $
IZA Journal of Development and Migration
The regional impact of cultural diversity on wages: evidence from Australia
Amanuel Elias and Yin Paradies
IZA Journal of Migration, volume 5, Article number 12 (2016)
Abstract
This paper investigates the impact of cultural diversity on labour market outcomes, particularly on wages across regions, using large longitudinal data. We apply an instrumental variable approach and account for individual and time fixed effects. Our findings indicate that the current level of cultural diversity positively affected current regional weekly wages; however, the positive effect holds only partially when diversity is lagged. The results appear to be robust in all estimations controlling for heterogeneity factors and accounting for the self-selection of individuals into places with better economic opportunities. Our findings on the effect of lagging the diversity measure may explain the variation in the literature, where some studies report that cultural diversity increases wages across time while others do not.
JEL Classification: J610, R23, Z190
Introduction
Economic theory indicates that cultural diversity is related to economic performance (see Note 1). Large cities with culturally diverse populations are typically the centres of rapid economic growth and employment. But they can also be centres of attraction for more labour and diversity. Therefore, endogeneity and reverse causality have been the focus of substantial research in the economics of diversity (Longhi 2013; Ottaviano and Peri 2006). In addition, whether the net effect of diversity is good or bad for the economy in general, and the labour market in particular, continues to stir debate among researchers (Alesina and La Ferrara 2004; Herring 2009; Longhi 2013; Ottaviano and Peri 2006). Generally, the literature in this area has focused on four outcomes of interest: the labour market, innovation, social capital/tolerance, and economic growth. In this paper, we focus on the labour market, examining the effects of cultural diversity on regional wages. The pathway through which this relationship plays out depends on both the demographic composition and the cultural distance that underlie the diversity. Competing theories have suggested that cultural diversity is beneficial for long-term economic growth but can reduce trust in the short term (Putnam 2007). In a situation where a culturally diverse climate contributes to a variety of skills in the workforce, diversity has a positive impact on economic growth. However, the impact becomes negative if it leads to conflict and polarisation (Alesina and La Ferrara 2005).
Previous cross-sectional studies suggest that cultural diversity increases employment and wages at the regional level, thereby leading to economic growth (Bellini et al. 2013; Kohler 2012; Ottaviano and Peri 2006). A range of reasons have been proposed for this, including factors that affect the labour market, businesses, and industry. For example, cultural diversity has been linked with greater employee commitment and improved productivity as well as greater creativity, innovation, and problem solving arising from a wider pool of skills and with the diffusion of these capabilities (Damelang and Haas 2012; Herring 2009; Perotin et al. 2003; Putnam 2007; Richard 2000; Suedekum et al. 2014). Diversity has also been associated with an increased variety of preferences, better customer satisfaction, larger market share, increased sales revenue, and greater relative profits (Bertone and Leahy 2001; Herring 2009; Page 2008). In relation to human and social capital, it has been linked with improved student wellbeing in schools (Juvonen et al. 2006) as well as augmented social capital (Putnam 2007).
In contrast, the ethnic fractionalisation literature indicates that cultural diversity can lead to interracial conflict or racism, at least in the short-term, followed by a decline in economic performance due to reduced investment and public spending (Alesina and La Ferrara 2005; Fearon 2003; Montalvo and Reynal-Querol 2005; Stahl et al. 2010). In a dynamic model with a lagged measure of diversity, Campos et al. (2011) found that diversity has a significant negative impact on economic growth. A range of reasons can be posited for this negative economic impact. Some studies, for example, have linked ethnic diversity with reduced social cohesion leading to conflicts (Kochan et al. 2003; Lieber 2009; Roberson and Kulik 2007). At the organisational level, diversity that is poorly managed can reduce staff morale and productivity, provoke conflict between employees and managers, and harm social cohesion (Kochan et al. 2003; Roberson and Kulik 2007; Wrench 2005). Diversity may also result in the perpetration of, and exposure to, prejudice and racism, marginalisation of minorities, deterioration of social capital, and political conflict (Stahl et al. 2010), with any potential benefits offset by the costs of such phenomena (Campos et al. 2011; Montalvo and Reynal-Querol 2005; Triana et al. 2015).
This study extends the empirical evidence in this literature in two ways. First, most studies have used cross-sectional data in analysing the economic impact of diversity. We use a large longitudinal dataset that allows us to investigate this relationship accounting for variation over time. Second, Australia has unique migrant characteristics due to its geographic isolation from source countries, which allows it to control the flow of migrants through specific gateways. In addition, per capita, it is one of the largest migrant-receiving countries in the world with 26 % of the population born overseas and an additional 20 % having at least one parent born overseas (Australian Bureau of Statistics, ABS 2014). In the capital cities, the average is even higher with the migrant population accounting for 28.9 % of the urban residents. More than 19 % of the overseas-born population, aged 5 years and over, speak a language other than English at home (ABS 2012b). However, the economic impact of this diversity has yet to be investigated. Therefore, in this study, we analyse the causal impact of diversity on wages by examining the regional variation in the cultural composition of the labour force.
We use the Household, Income and Labour Dynamics in Australia (HILDA) data which clusters individuals based on their postcodes. Using the country of birth variable from census data, we create a local government area (LGA)-level measure of diversity (fractionalisation index). Following Pischke and Velling (1997), Card (2001), Ottaviano and Peri (2006), and Bellini et al. (2013), we specify an instrumental variable estimation to overcome the endogeneity of diversity in our model. Following Longhi (2013), we address time and individual effects by specifying fixed effects in addition to an ordinary least squares (OLS) model. We also account for heterogeneity in the effects of diversity using sub-sample analysis based on the ancestry, mobility, skill, and residency of respondents in our data.
The current evidence in relation to diversity varies depending on the type and context under which diversity is studied. Studies such as Alesina and La Ferrara (2004), Fearon (2003), and Montalvo and Reynal-Querol (2005) conceptualise diversity in terms of ethnic composition, which they link with interethnic conflicts. Earlier studies focused mainly on the diversity of immigrants in a country and indicate a zero or negative (but small) correlation between the influx of immigrants and native wages and no association between the proportion of immigrants and native rates of employment (Altonji and Blank 1999; Borjas et al. 1997; Friedberg and Hunt 1995). However, assuming a perfect substitutability between migrants and native workers, Borjas et al. (2008) found a negative effect of immigrant share on men and women's wages. Pischke and Velling (1997), on the other hand, found the national impact of immigration to be minimal in a German regional study which accounted for the self-selection of migrants into local labour markets. A more recent cross-national study by Sanderson (2013) showed that immigration raises the overall living standards in host countries in the long-term (although this is attenuated in high fertility contexts).
Other studies have focused on diversity in terms of country of birth. Ottaviano and Peri (2006) who investigated the impact of immigrants on 160 US cities found that "on average, cultural diversity has a net positive effect on the productivity of U.S.-born citizens because it is positively correlated with both the average wage received and the average rent paid by U.S.-born individuals" (p. 11). They concluded that US-born urban residents living in areas where the share of foreign-born residents increased (in 1970–1990) had a substantial rise in their wages and the rental prices they pay. Using similar approaches, Bellini et al. (2013) found a positive wage effect of diversity in 12 European regions and Nathan (2011) found a positive impact of diversity on wages for a range of British studies. Another study by D'Amuri et al. (2010) found that the flow of new immigrants depressed the employment levels and wages of old immigrants while having no meaningful effect on the employment and wages of natives. This is contradicted by Longhi (2013) who analysed the labour market effects from seven international studies. After accounting for individual and time heterogeneity, she found that the lagged measure of diversity was negatively associated with wages and employment. Similarly, Angrist and Kugler (2003) found diversity to be weakly but negatively associated with the level of employment in European data. Borjas et al. (2008) further report that even accounting for long-run adjustments, an increased supply of immigrants lowers native wages. In this study, we test some of the previous findings using longitudinal data, over a 10 year period, to account for individual and time effects and the time lag in measuring the effect of diversity.
The rest of this paper is organised as follows: Sections 2 and 3 describe the data and analytical methods, respectively, detailing the measurement issues related to diversity. Section 4 presents the results while Section 5 concludes and discusses the implications of this research.
Description of data
Three main datasets are used in this paper. The primary data source is the 2001–2011 HILDA Survey. The second is demographic data from the 2001 and 2011 censuses while the third data includes a set of annual population estimates compiled from the ABS creative commons release for the period 2001–2011.
The HILDA Survey
This paper uses the un-confidentialised version of the HILDA Survey (Release 11), which has postcode data that allows for area-level analysis (see Watson 2012 for a detailed description of the HILDA Survey). This postal area data is exploited to measure an index of diversity as the key variable in measuring the distribution of foreign nationals and to merge with an instrument and an alternative index of diversity computed from census data. The initial sample interviewed in the first wave (wave 1) consisted of 7682 households while the corresponding sample of enumerated persons was 19,914 people. Out of these, 24 % are children aged below 15 years. Of those who are eligible, the response rate was 92.3 % (n = 13,969). In subsequent waves, new household members and children increased the sample size while overseas emigration and deaths decreased it. In addition, attrition rates of 3.7–13.2 % across waves contributed to the decline of the sample size in subsequent waves (see Note 2). On average, 13,438 respondents were interviewed every year, of which 7228 individuals continued to participate in the survey each year without missing any wave. However, the combined sample size for the 11 waves (waves 1–11), including those who were added in subsequent waves, is 26,028. Out of this, a long panel was constructed for the 11 waves amassing an overall sample of 286,308 observations.
Since the focus of this paper is the labour market performance of those who are potentially active in the labour force, the sample was restricted to the working-age group. The total sample size for waves 1–11 in the long panel of those aged 16–45 years is 79,636 (27.8 %). Of these, those who are employed and earning wages account for 64.7 % (n = 51,538). Finally, in the multivariate analyses, the sample was further restricted to allow for a longer time period (3-year lag), with a final sample of 44,634 (56 %).
Country of birth data in ABS census 2001 and 2011
Census data was used to generate an index of diversity at the local government area (LGA) level as well as an instrument based on the projected population. This data, which includes the regional distribution of Australians by country of birth, was obtained from the ABS via TableBuilder using the 2001 and 2011 census data (ABS 2012a). A total of 293 countries classified under the four-digit Standard Australian Classification of Countries (SACC) are available in the census (ABS 2011a), along with a total of 2516 postcodes (see Note 3). These data are also available at the LGA level.
The LGA is a geographically contiguous classification which divides Australia into 676 regional categories. Each LGA, also known as a local council, handles community-related tasks and town planning within its jurisdiction. Diverse local entities including cities, towns, suburbs, shires, and villages make up these local councils. We originally obtained the census data at the postcode level. To make the analysis relevant to regional labour market and community characteristics, we decided to broaden the classification to the LGA level. The LGA-level data also accounts for socio-demographic and regional policy differences across Australia. Using the 2006 ABS Postal Area Concordances that map LGAs and postcodes, the postcode data was merged and then collapsed into 676 LGAs across Australia. The LGA-level data was used to generate the regional distribution of diversity and the main instrument. Then the LGA and postal area data were used to merge these census-based data with the HILDA data. Although the merging variable is the postcode, the diversity index used for analysis is at the LGA level. The diversity measure from the census is used for spatial analysis, visually portraying changes in diversity over the 2001–2011 period.
Annual population estimates 2001–2011
Australia's annual population estimates for the period 2001 to 2011 were obtained from the ABS website. These datasets include an aggregate distribution of the population estimates by country of birth at the national level (see Note 4). A total of 255 countries of origin were represented in these datasets. From this distribution, the annual growth rate of the population was estimated and was utilised along with the 2001 census data in the calculation of time-variant shift-share instruments for each wave (see Section 3.7 for a detailed discussion).
The cultural diversity literature uses the fractionalisation index in measuring the impact of diversity on economic outcomes. Ethno-linguistic fractionalisation (ELF) is defined as the "probability that two randomly chosen citizens in a country belong to a different ethnic group where(in) group belonging is attributed by language" (Neumann and Graeff 2013). Vigdor (2008) uses a slightly different approach, the probability that a randomly selected individual is an immigrant, to estimate an assimilation index in the USA. Others have used the country of birth data to measure cultural diversity (Alesina et al. 2013; Bellini et al. 2013; Damelang and Haas 2012; Longhi 2013; Ottaviano and Peri 2006). Given the availability of data in the Australian context, this study uses country of birth/nationality instead of ethnicity/language (see Note 5). Specifically, the proportion of the nationals of each country of origin (birth) in each LGA in Australia is used to compute a fractionalisation index. This index (hereafter diversity index) has a similar theoretical interpretation to the Herfindahl Index which is widely used in marketing research to measure the market/monopoly power of firms located in specific areas (Gomez-Mejia and Palich 1997) and is given by
$$ DI_{rt} = 1 - \sum_{i=1}^{N} C_{irt}^{2} \qquad \forall\ i = 1, 2, \dots, N,\ t = 1, 2, \dots, T \qquad (1) $$
where \( C_{irt} \) represents the proportion of the nationals of country i in region (LGA) r in a given year t. The values fall in the range [0, 1] with "zero" indicating perfect homogeneity and "1" indicating perfect heterogeneity. We use the HILDA panel to construct the index of diversity. For the sake of visual comparison, we also estimated the indices for the 2001 and 2011 censuses (see Fig. 1). However, given the annual time series nature of our data, our main analyses are based on the indices constructed from HILDA.
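To make Eq. (1) concrete, the sketch below (our illustration, not code from this study; the LGA labels, counts, and column names are hypothetical) computes the fractionalisation index from a long-format table of country-of-birth counts per region and year.

```python
# A minimal sketch of the fractionalisation (diversity) index in Eq. (1):
# DI = 1 - sum of squared population shares of each country of birth.
import pandas as pd

# Hypothetical input: one row per (lga, year, country of birth) with a head count.
df = pd.DataFrame({
    "lga":     ["A", "A", "A", "B", "B"],
    "year":    [2001, 2001, 2001, 2001, 2001],
    "country": ["Australia", "China", "India", "Australia", "UK"],
    "count":   [800, 150, 50, 950, 50],
})

def fractionalisation(group: pd.DataFrame) -> float:
    shares = group["count"] / group["count"].sum()
    return 1.0 - (shares ** 2).sum()

diversity = (
    df.groupby(["lga", "year"])
      .apply(fractionalisation)
      .rename("diversity_index")
      .reset_index()
)
print(diversity)
# LGA A: 1 - (0.80^2 + 0.15^2 + 0.05^2) = 0.335; LGA B: 1 - (0.95^2 + 0.05^2) = 0.095
```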
Fig. 1 Change in the distribution of fractionalisation in Australia: 2001 to 2011. a Fractionalisation in the 2001 census; b fractionalisation in the 2011 census
These maps are constructed based on ABS census data. Although we had diversity data available for 676 LGAs, the 2011 Australian Standard Geographical Classification (ASGC, ABS 2011b) digital boundaries allow for only 560 LGAs upon which the maps reported in Fig. 1 are based. The first figure, Fig. 1a, is the distribution of the diversity index based on the 2001 census while Fig. 1b is based on the 2011 census. A comparison of the two figures indicates that cultural diversity increased in several regions over the decade. This is particularly visible in the metropolitan areas including Sydney and Melbourne (see Figs. 2 and 3, respectively). Both interregional mobility and international migration have contributed to this demographic change (Hugo and Harris 2011). Therefore, the analysis of diversity in this study accounts for both factors, by using a predicted instead of actual diversity index.
Fig. 2 Fractionalisation in the Sydney metropolitan area in 2001 (a) and 2011 (b)
Fig. 3 Fractionalisation in the Melbourne metropolitan area in 2001 (a) and 2011 (b)
Share of migrants
The "share of migrants" is an alternative measure to assess whether the proportion of immigrants in a region per se has any effect on weekly wages. In addition, a diversity index is estimated for migrants excluding the Australian-born population. This is then included to see whether diversity among migrants (as opposed to diversity in general) contributes to labour market outcomes.
Weekly wages
The main dependent variable in this analysis is the log of weekly wages. Originally, HILDA respondents were asked a series of questions such as "For your [job/main job] what was the total gross amount of your most recent gross pay before tax or anything else was taken out?" Responses were recorded as "gross weekly wages and salaries" for the responding persons. For the complete panel, the mean weekly wage was $651.6 (SE = $254.1).
Other control variables
In addition to diversity (fractionalisation) and the share of migrants, standard demographic variables (age, age squared, gender, marital status) are included as control variables. English language fluency is also included, as the ability to speak English well is usually associated with labour market outcomes for migrants (Dustmann and Fabbri 2003). Foreign-born HILDA respondents were asked how well they spoke English with four response options ranging from "very well" to "not at all". The third and fourth options ("not well" and "not at all") are collapsed because those who responded with the fourth option were negligible (0.02 %).
Analytic framework
In analysing the HILDA data, this study aims to test the hypothesis that cultural diversity can have a positive impact on labour market outcomes by boosting regional economic growth. The labour market channel involves a dynamic interaction between employment and wages. However, in this study, the main focus is the impact of diversity on wages, taking into consideration regional variations. The effect of diversity on wages can be estimated via panel data analysis that accounts for individual and regional effects over different time periods. The first model estimated is a simple OLS model of the log of weekly wages (\( \ln(w_{irt}) \)) for each employed respondent aged 16–45 years.
$$ \ln(w_{irt}) = \alpha_{1i} + \beta_1 \mathrm{div}_r + \delta_1 X_{irt} + \varepsilon_{1irt} \qquad (2) $$
where the main variable of interest is the diversity index \( \mathrm{div}_r \). As suggested in the literature, further explanatory variables (\( X_{irt} \)) are included such as weekly number of hours worked and job tenure as well as time indicator variables. In addition, age and its square, dummies for female, marital status, and region as well as English language skill and education indicators are included where appropriate.
Apart from the observable characteristics, there can be individual heterogeneity that can affect the relationship between diversity and labour market outcomes. Longhi (2013) shows that the positive wage effects of diversity reported in cross-sectional studies (Nathan 2011) can be explained by individual differences. A fixed effect (FE) model is therefore estimated in this study capturing the unobserved individual characteristics among HILDA respondents. All the explanatory time-variant variables included in OLS are also included in the FE models.
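To illustrate the difference between the two estimators, the following toy simulation (ours, not the HILDA analysis; all variable names and coefficients are invented) fits a pooled OLS version of Eq. (2) and then a fixed effects version via the within transformation, i.e. demeaning each variable at the individual level before running OLS.

```python
# Pooled OLS vs. individual fixed effects (within transformation) on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_id, n_t = 200, 8
panel = pd.DataFrame({
    "pid":  np.repeat(np.arange(n_id), n_t),
    "year": np.tile(np.arange(n_t), n_id),
})
ability = np.repeat(rng.normal(0.0, 0.5, n_id), n_t)   # unobserved individual effect
panel["diversity"] = rng.uniform(0, 1, len(panel)) + 0.3 * ability
panel["hours"] = rng.normal(38, 5, len(panel))
panel["ln_wage"] = (6.0 + 0.2 * panel["diversity"] + 0.01 * panel["hours"]
                    + ability + rng.normal(0, 0.1, len(panel)))

cols = ["diversity", "hours"]

# 1) Pooled OLS: biased here because 'diversity' is correlated with 'ability'.
ols = sm.OLS(panel["ln_wage"], sm.add_constant(panel[cols])).fit()

# 2) Fixed effects: subtract each individual's mean from every variable, then OLS.
within = panel[["ln_wage"] + cols] - panel.groupby("pid")[["ln_wage"] + cols].transform("mean")
fe = sm.OLS(within["ln_wage"], within[cols]).fit()

print("OLS diversity coefficient:", round(ols.params["diversity"], 3))
print("FE  diversity coefficient:", round(fe.params["diversity"], 3))
```

In this simulated setting the within transformation removes the time-invariant individual effect, which is the same logic as the FE specifications reported later.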
Endogeneity of cultural diversity
The impact of cultural diversity on an economy is confounded due to the possibility of reverse causality, whereby Eq. (2) results in a spurious correlation (Friedberg and Hunt 1995). Our purpose is to determine the effect of diversity on wages, but a two-way causality between diversity and wages is possible. While diversity can directly affect economic performance, it is also possible that people from diverse backgrounds can self-select to live in places with economic opportunities.
The impact of diversity on economic outcome can be positive or negative. On the positive side, diversity can augment economic performance as it can stimulate creativity and problem solving. Diversity can also boost economic growth by drawing labour from a pool of immigrants. On the negative side, it can deplete trust and social capital due to ethnic/racial fragmentation. This can in turn weaken economic performance. Whether the positive effects of diversity on economic performance outweigh the negative ones, at one level, is a simple empirical question. However, when economic outcomes directly or indirectly affect diversity rather than the reverse, there arises an econometric issue.
In this study, the issue of reverse causality arises if variations in regional weekly wages result in the concentration of people from diverse cultural backgrounds in specific regions. For example, in Australia, there is no restriction on the mobility of immigrants within the country, and potentially, immigrants can move to places with more perceived economic opportunities (see Hugo and Harris 2011). The HILDA data, for example, shows substantial internal migration across waves among the respondents. Therefore, reverse causality cannot be ruled out from a regression of economic performance on cultural diversity. Instead of diversity causing variation in regional labour market outcomes, economic conditions such as the prospect of employment may be driving the regional distribution of diversity. This poses an econometric issue, endogeneity, in estimating the causal effect of cultural diversity on employment outcomes. The effect of the explanatory variable, diversity, as measured by the share of foreign country citizens in a region (LGA), is confounded by the possibility of migrants' concentration in response to economic incentives. Therefore, the coefficient of diversity cannot be consistently estimated due to correlation with the error term in the wage regression where the share of migrants is endogenous. This entails the violation of the Gauss-Markov (zero conditional mean) assumption in OLS (Wooldridge 2010).
A suitable procedure to correct the endogeneity problem is to apply instrumental variable (IV) estimation (see Baltagi 2008; Wooldridge 2010). The main challenge in applying this procedure is the identification of a valid instrument. If such an instrument can be found, the confounding, for example, between diversity and economic performance can be disentangled and a causal relationship between these two variables established. In this study, the shift-share method is used to instrument for the index of cultural diversity and the share of migrants. Following Bellini et al. (2013), Ottaviano and Peri (2006), Longhi (2013), and, recently, Alesina et al. (2013), two-stage least squares (2SLS) estimation is applied to OLS and FE models.
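For concreteness, a 2SLS estimator can be written out in two explicit stages as in the sketch below; this is a schematic with invented column names rather than the estimation code behind the tables that follow, and its naive second-stage standard errors mean a dedicated IV routine would be preferred in practice.

```python
# Schematic 2SLS: (i) regress the endogenous regressor on the instrument(s) plus
# exogenous controls, (ii) use its fitted values in the wage equation.
import pandas as pd
import statsmodels.api as sm

def two_stage_least_squares(df, y, endog, instruments, exog):
    # First stage: endogenous variable on instruments + exogenous controls.
    X1 = sm.add_constant(df[instruments + exog])
    first = sm.OLS(df[endog], X1).fit()
    fitted = first.fittedvalues.rename(endog + "_hat")

    # Second stage: outcome on the fitted endogenous variable + exogenous controls.
    X2 = sm.add_constant(pd.concat([fitted, df[exog]], axis=1))
    second = sm.OLS(df[y], X2).fit()
    return first, second

# Hypothetical usage, where 'pred_diversity' is a shift-share instrument:
# first, second = two_stage_least_squares(
#     panel, y="ln_wage", endog="diversity",
#     instruments=["pred_diversity"], exog=["hours", "tenure"],
# )
# print(second.params["diversity_hat"])
```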
Identification strategy
For an instrumental variable estimation to be specified for Eq. (2), two assumptions should be satisfied. First, the instrument chosen should be correlated with cultural diversity, the key explanatory variable, and second, it should not be correlated with economic performance. In addition, a correctly specified model should not omit relevant variables. Several instruments have been developed in the literature to solve the endogeneity issue in relation to cultural diversity. Altonji and Card (1991) use the 1970 immigrant stock in the USA while Hunt (1992) uses regional temperature and French repatriates of 1962 in a French-Algerian migration study. Ottaviano and Peri (2006) use distance from gateway cities in the USA while Longhi (2013) uses "the proportion of minorities joining the 'New Deal Program'" in the UK. As detailed in the introductory section, mixed (positive and negative) results were obtained by these studies regarding the impact of diversity on economic outcomes.
An instrument suitable for the data used in this paper is the shift-share variable first utilised by Card (2001) in assessing the local labour market impact of immigrant flows in the USA. This instrument was later used in modelling the causal effect of cultural diversity on wages and rental prices for US cities (Ottaviano and Peri 2006) and European regions (Bellini et al. 2013). The shift-share analysis assumes that the regional migrant distribution can be used to generate an exogenous variable using two-time-period data. For example, the 2001 Australian census datasets have the regional distribution of Australians based on their country of birth which, along with ABS annual population estimates, can be used to construct a measure of diversity (see Note 6). The latter is composed of nationally aggregated distributions and has annual estimates by country of birth for the period 1992–2014. In this study, we use the period 2001–2011. These datasets offer two variables that are relevant here. One is the total population in each region by country of birth, and the other is the total population in each region. From these variables, it is possible to calculate the annual population growth in Australia by country of origin. Then these annualised estimates can be used along with the baseline (2001) regional population data to estimate the predicted population for each year up to 2011. Since these predicted figures are based on the historical (year 2001) regional distribution rather than the actual regional distribution, they are not confounded by population growth that could have resulted from economically driven mobility. Therefore, they are assumed to be exogenous to regional economic shocks.
The primary purpose is to estimate the predicted version of the share of migrants in Australia. First, the overall growth rate in the Australian population between time t (which is 2001) and time t + 1 is required. Formally, this rate \( g_i \) is given by
$$ g_i = \frac{p_i^{t+1} - p_i^{t}}{p_i^{t}} \qquad (3) $$
where \( {p}_i^{t+1} \) and \( {p}_i^t \) represent the total number of the Australian population born in country i in the years t and t + 1, respectively. The next step is to generate the predicted number of Australian residents born in country i and residing in region r based on Eq. (3). This is given by the formula
$$ p_{ir}^{*t+1} = p_{ir}^{t}\left(1 + g_i\right) \qquad (4) $$
where * indicates that the value is predicted for the year t + 1. Summing this value (\( p_{ir}^{*t+1} \)) across all countries of birth provides the predicted total population for each region (LGA) in the next year.
$$ P_r^{*t+1} = \sum_i p_{ir}^{*t+1} \qquad (5) $$
where \( P_r^{*t+1} \) indicates the predicted total number of all residents in each region in t + 1. This value, which differs from the actual population in that year, is used to calculate the predicted diversity index (computed as in Eq. (1)). Furthermore, this analysis is repeated to estimate the predicted migrant share in each region. The value is then used to calculate the predicted diversity index among migrants. Finally, the two instruments, namely the predicted diversity index and predicted share of migrants, are merged into the individual-level HILDA data based on the postal area variable (see Note 7).
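The construction in Eqs. (3)–(5) can be sketched in code as follows; this is our illustration only, and the input frames (census_2001_lga, abs_national_estimates), the column names, and the usage lines are hypothetical stand-ins for the census and ABS files described above.

```python
# Shift-share sketch: grow the 2001 regional country-of-birth counts at each
# origin country's national growth rate, then recompute the predicted regional
# diversity index (and, analogously, the predicted migrant share).
import pandas as pd

def predicted_counts(base_2001: pd.DataFrame, national: pd.DataFrame, year: int) -> pd.DataFrame:
    """base_2001: [lga, country, count] for 2001; national: [country, year, pop]."""
    # Eq. (3): national growth rate of each origin country between 2001 and `year`.
    pop_0 = national.query("year == 2001").set_index("country")["pop"]
    pop_1 = national.query("year == @year").set_index("country")["pop"]
    growth = ((pop_1 - pop_0) / pop_0).rename("g").reset_index()

    # Eq. (4): scale the 2001 regional counts by (1 + g_i).
    out = base_2001.merge(growth, on="country", how="left")
    out["pred_count"] = out["count"] * (1.0 + out["g"].fillna(0.0))
    out["year"] = year
    return out

def predicted_diversity(pred: pd.DataFrame) -> pd.Series:
    # Eq. (5): predicted regional totals; then Eq. (1) applied to predicted shares.
    shares = pred["pred_count"] / pred.groupby("lga")["pred_count"].transform("sum")
    return 1.0 - (shares ** 2).groupby(pred["lga"]).sum()

# Hypothetical usage (the instrument is then merged into HILDA on postcode/LGA):
# pred_2004 = predicted_counts(census_2001_lga, abs_national_estimates, year=2004)
# iv_diversity_2004 = predicted_diversity(pred_2004)
```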
Generally, the indices of diversity and the instruments generated using the 2001 census data and population estimates are correlated, satisfying the relevance criteria. However, the correlation coefficients are larger for diversity measure based on annual population estimates with r = 0.40 compared to diversity based on a 3-year lag where r = 0.30. On the other hand, the exogeneity criteria are also met, as the instruments are not correlated with the error term. The correlation coefficients between the residual and the two instruments (predicted diversity index and predicted share of migrants) are r = 0.06 and r = 0.06, respectively.
Results
As indicated in Section 2.1, this analysis is based on the age-restricted sample of HILDA respondents. The overall age of the whole sample (waves 1–11) ranges from 15 to 99 years. Excluding those below 16 years as well as those aged 46 years and over, the final sample size is n = 79,636 (27.8 %). However, in all estimations, the reported sample sizes differ from the original due to the application of sampling weights and lagging (dropping waves 1–3). Weighted descriptive statistics summarising the characteristics of the sample utilised in this study are presented in Tables 1 and 2 (see Note 8).
Table 1 Weighted mean and standard errors of demographic and socio-economic variables
Table 2 Weighted descriptive statistics of metric variables in the HILDA individual person respondent sample aged 15–45 years (waves 4–11)
All the variables reported in Table 1 are strictly time-invariant except education. The table shows that HILDA participants on average are roughly evenly distributed by gender, with females making up 52 % of the sample. More than three quarters (78.9 %, and 78.2 % for all waves) are born in Australia, with 2.2 % of these identifying as Indigenous. The biggest source of the migrant sample is made up of those from Asia-Pacific countries (10.9 %). Married respondents outnumber those who were never married in both the whole (52.8 to 35.2 %) and restricted samples (53 to 36.2 %). Overall, 14.5 % of the total sample and 14.2 % of the restricted sample speak English well or above. These figures indicate that 61 and 69.5 % of those who are foreign-born speak English well or above, respectively. In addition, 22.7 and 20.8 % of each sample did not complete high school, while 26.4 and 27.6 % have a college degree or above. Finally, 66.1 % of the whole and 65.5 % of the restricted sample reside in major urban areas.
Table 2 reports the weighted descriptive statistics of the metric variables which are utilised in further analysis. The standard errors are estimated using jackknife replication. The mean age of the sample in the panel is roughly 34 years. Of those who are in the labour force, about 4 % were unemployed at the time they were interviewed. For those who are employed (employment status = 1), the average number of hours worked is 38.5, with a weekly average wage (in logs) of 6.67. In monetary terms, weekly wages and salaries ranged between $2 and $10,070, with more than 32.1 % of the sample earning below the 2001–2011 average annual minimum wage of $502.3 a week. Respondents who were unemployed at the time of interview indicated an average reservation wage (the lowest wage per hour they considered acceptable) of 2.82 (in log, or $16.8), above the average annual (2001–2011) minimum wage of $13.22 per hour.
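The jackknife replication mentioned above re-estimates a statistic repeatedly with one unit (or replicate group) deleted at a time and uses the spread of those re-estimates as the standard error. The toy leave-one-out sketch below (ours; the wage values are invented) is only a simplified stand-in for the survey-weighted replicate scheme actually used with HILDA.

```python
# Leave-one-out jackknife standard error of a sample mean (illustrative only).
import numpy as np

def jackknife_se(x: np.ndarray) -> float:
    n = len(x)
    # Recompute the mean n times, dropping one observation each time.
    loo_means = np.array([np.delete(x, i).mean() for i in range(n)])
    return float(np.sqrt((n - 1) / n * np.sum((loo_means - loo_means.mean()) ** 2)))

weekly_wages = np.array([450.0, 610.0, 720.0, 530.0, 880.0, 640.0])  # hypothetical values
print(round(jackknife_se(weekly_wages), 2))
```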
The mean values of the diversity indices in the sample are 0.36, indicating a fairly large concentration of foreign nationals in the HILDA data. On average, the LGA level share of foreign nationals in Australia was as high as 23 %, roughly double the 2011 OECD average of 12.5 % (OECD 2013).
Additional descriptive statistics to compare Table 1 with the 2001 and 2011 censuses are reported in Table 8 in the Appendix. For brevity, we will not discuss the table in detail. However, we note that the proportions reported for some of the variables are roughly equivalent to those reported for the HILDA data. The distribution of ancestry and geographic origin in the 2001 census are equivalent to the unrestricted HILDA sub-sample while the distribution of the HILDA variables gender, state, and section of state are equivalent to both censuses. In case of marital status, the categories "separated" and "never married" in the HILDA differ from the 2001 census by 5 and 3.6 %, respectively.
Table 3 reports the wage effects of diversity and other covariates estimated using ordinary least squares (OLS) and fixed effect (FE) models. The OLS results indicate that cultural diversity has a positive impact on weekly wages while the FE results indicate no impact on wages. Although we found results that are robust to estimator type for the annualised diversity measure (not reported here), introducing a 3-year lag to diversity appears to eliminate the positive results in the FE estimation. In the OLS estimates, column 1 indicates that cultural diversity has a strong positive effect on weekly wages for the whole sample controlling for age, age-squared, gender, and marital status (β = 0.20, p < 0.01). Column 2 introduces human capital variables including English language skill, education, weekly hours worked, and job tenure, with the result still indicating a strong positive relationship but a smaller coefficient (β = 0.09, p < 0.01). Further introduction of a non-linear specification in column 3 indicates the persistence of a strong positive relationship with a larger coefficient size, with the quadratic coefficient indicating an upper bound for the positive effect of diversity.
Table 3 Weighted OLS and FE estimates showing the impact of diversity on wages. Dependent variable: log of weekly wages
Column 4 introduces additional controls including a dummy variable for movers, ancestry, and a person's residence to account for heterogeneity in the impact of diversity. The dummy variable "movers" (=1 if a person ever moved to another LGA) controls for self-selection bias that can arise due to the movement of individuals to places with better paying regions. "Ancestry" refers to whether a respondent is Australian-born (ancestry = 1) or a migrant (ancestry = 0). The residency dummy variable classifies respondents into rural (residence = 0) and urban residents (residence = 1). Controlling for these variables appears to have minimal effect on the results, with just a slight decline in the coefficient size.
The saturated OLS model (column 4, F = 240.1) explained 60.7 % of the variance, with diversity having a statistically significant effect. The corresponding FE models (columns 5–7) all indicated no relationship between cultural diversity and weekly wages. This indicates that controlling for individual differences eliminated the positive effects of diversity. The result corroborates previous OLS and FE results obtained using UK data (Longhi 2013). Although contemporaneous diversity appears to have a strong effect on weekly wages, the effect is not robust to the estimator type when the diversity measure is lagged. In an alternative analysis where contemporaneous diversity is categorised (results not shown), we found diversity to be statistically significant in the OLS but not in the FE estimation. This contradicts the expectation that higher levels of diversity yield negative labour market outcomes due to communication issues and polarisation that can possibly arise in the workplace (Zhan et al. 2015).
Furthermore, we run alternative regressions using the share of migrants instead of the diversity index. Table 4 which reports similar results indicates a strong positive relationship between migrant share and log weekly wages. As in Table 3, the share of migrants in HILDA yields strongly significant coefficients in the OLS models (β = 0.33, p < 0.1, F = 238.1 for the saturated model and β = 0.44, p < 0.1, F = 243.8 for the non-linear model). The saturated model fit (column 4) explains 60.7 % of the variance in the wage regression. However, none of the FE models show a significant association between migrant share and weekly wages.
Table 4 Weighted OLS and FE estimates showing the impact of share of migrants on wages. Dependent variable: log of weekly wages
Overall, both Tables 3 and 4 indicate that the relationship between cultural diversity or the share of migrants and weekly wages among HILDA respondents aged 16–45 years varies depending on the estimator type used. We find a consistent relationship in the OLS but not in the FE models. The effects in the OLS models are stronger when non-linearity is accounted for, indicating a decline in the effect of diversity beyond a certain point, and slightly smaller when heterogeneity is controlled for. The coefficients for movers, ancestry, and residence are statistically significant, indicating variation in the effect of diversity for different groups.
Instrumental variable estimates
Diversity based on country of birth data is considered to be endogenous as economic opportunities can attract people from different countries. This can then result in more prosperous regions with higher weekly wages becoming culturally more diverse rather than diversity causing higher weekly wages. Our findings (Table 3) show that the models based on the diversity index generated from the HILDA data appear to indicate the existence of endogeneity. An endogeneity test of the wage model with the diversity index as an independent variable yields a Durbin score chi-squared of 164.6 (p < 0.01) which implies that the diversity index is endogenous. This is also the case with the share of migrants which yields a Durbin score chi-squared of 36.6 (p < 0.01). Instrumental variable estimation is therefore specified for both the OLS and FE models. Table 5 reports the IV results.
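The Durbin-type test reported above can be approximated by hand with a regression-based (control function) check, as in the sketch below; this is a generic illustration with invented variable names, not the exact statistic computed for Table 5.

```python
# Regression-based endogeneity check: add the first-stage residual to the wage
# equation and test its coefficient; a significant residual suggests endogeneity.
import pandas as pd
import statsmodels.api as sm

def endogeneity_check(df, y, endog, instruments, exog):
    # First stage residual of the suspected endogenous regressor.
    X1 = sm.add_constant(df[instruments + exog])
    v_hat = sm.OLS(df[endog], X1).fit().resid.rename("v_hat")

    # Augmented wage equation including the residual.
    X2 = sm.add_constant(pd.concat([df[[endog] + exog], v_hat], axis=1))
    aux = sm.OLS(df[y], X2).fit()
    return aux.tvalues["v_hat"], aux.pvalues["v_hat"]

# Hypothetical usage:
# t_stat, p_val = endogeneity_check(panel, "ln_wage", "diversity",
#                                   ["pred_diversity"], ["hours", "tenure"])
```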
Table 5 Instrumental variable estimation showing the impact of diversity on wages. Dependent variable: log of weekly wages
Models 1 and 3 report results estimated using the predicted diversity index (shift-share) instrument based on census data and population estimates with a 3-year lag. In all the models, the results in Tables 3 and 4 are replicated consistently. The index of diversity is strongly significant indicating a positive impact of diversity on weekly wages in the OLS model (β = 0.63, p < 0.01). For the FE model, we detect no relationship. In comparison to the corresponding models in Tables 3 and 4, instrumenting has increased the coefficient size in the IV-OLS model. Yet, the explanatory power is not affected. From the overidentification restriction tests of the instrument reported, it can be seen that both models are overidentified and the null hypotheses (\( H_0 \): the models are underidentified) can be rejected at less than the 1 % level. Further, in both models, the presence of weak instruments can be rejected at the 1 % level, implying that the instruments are relevant.
Columns 2 and 4 report IV models estimated using the predicted share of migrants. The results are similar to those for the index of diversity replicating the findings in Table 4. While no relationship is evident in the FE model, coefficients in the IV-OLS model (β = 0.69, p < 0.01) indicate that the share of migrants is positively related to the log of weekly wages. Relevance is also maintained, with the null that the models are underidentified and that the instrument is weak satisfactorily rejected at less than the 1 % level.
Generally, instrumenting yields statistically significant OLS coefficients and has corrected the endogeneity with no effect on the FE models. Overall, our results indicate that there is a causal relationship between cultural diversity and weekly wages at the regional level, after correcting for the possibility of endogeneity through the shift-share method using the predicted population from the baseline data rather than the actual population data. However, the results are not robust to the type of estimator we use.
Heterogeneity tests
The effect of diversity can vary for individuals depending on their individual and group characteristics, such as ancestry, skill level, self-selection due to mobility, and residency. To account for these heterogeneities, we re-estimate the wage model for different sub-samples. Tables 6 and 7 report the results for the re-estimated models.
Table 6 Weighted OLS and FE estimates comparing the impact of diversity on wages for different groups. Dependent variable: log of weekly wages
Columns 1–4 of Table 6 report findings comparing the Australian-born and migrant samples. The OLS models (columns 1 and 2) indicate that the index of diversity is a strong positive predictor of weekly wages for both samples with the effect on wages larger among those born in Australia (β = 0.65, p < 0.01) than among migrants (β = 0.44, p < 0.01). The FE model is marginally significant for the Australian-born sample while not statistically significant for the migrant sample.
Columns 5–8 compare the findings for those who moved between LGAs in at least 1 year and those who did not move throughout the HILDA Survey. Again, the OLS models indicate strong positive relationship with the effect of diversity on weekly wages slightly larger among movers (β = 0.64, p < 0.01) than non-movers (β = 0.61, p < 0.01). However, the FE models are not statistically significant for both groups.
A similar analysis in Table 7 compares two sets of groups based on skill (columns 1–4) and residency (columns 5–8). For the first set of groups, i.e. skilled vs. unskilled groups, we found positive effect of cultural diversity on weekly wages in the OLS model. The effect of diversity is larger among those who are skilled (β = 0.72, p < 0.01) by a factor of 1.7 compared to the unskilled (β = 0.42, p < 0.01). Similarly, the second set, urban vs. rural residents indicate a positive effect of diversity in OLS, with effects larger among rural (β = 0.65, p < 0.01) compared to urban residents (β = 0.41, p < 0.01). However, in the IV-FE models, although the instruments are strongly significant, the coefficients of both measures are not statistically significant at the 5 % level.
Diversity is a complex concept as it varies depending on the context in which it is studied. It can be expressed in the form of differences in race, linguistic background, national origin, ethnic background, or culture. This fluid notion of what constitutes diversity has been addressed by researchers using a range of conceptualisations. Vigdor (2008) uses culture to define diversity, focusing on the latter as a "measure of cultural dissimilarity" between groups or individuals. Alesina and La Ferrara (2004) use ethnicity in measuring diversity as a fractionalisation based on ethnic origin. Alesina et al. (2013), Bellini et al. (2013), Longhi (2013), and Ottaviano and Peri (2006) take the country of birth to construct an index of diversity. In all of these studies, diversity is conceptualised as a continuum of dissimilarity which ranges from perfect homogeneity to perfect heterogeneity.
This study applied the Ottaviano and Peri (2006) version of diversity (based on country of birth) to Australian data. This has some limitations in terms of capturing detailed differences in race, ethnicity, and language existing due to the heterogeneity of migrants from any particular country of origin. However, country of birth is an important source of diversity in Australia where race and ethnicity categories are not regularly used in labour market analyses. This is also the case in the HILDA Survey which collects the most comprehensive Australian labour-market-related household data.
Although there is underrepresentation of migrants in HILDA (21.3 % migrants), at least two patterns emerge from this study. First, on average, there is substantial diversity in Australian regions, as high as 0.36. However, there is a high degree of variability in the index of diversity at the LGA level within the [0, 1] range. Second, the effect of diversity, as measured by the degree of concentration of foreign nationals at the LGA level, on wages varies depending on the model specification. All OLS regressions indicate a strong positive impact of diversity on weekly wages among HILDA respondents. On average, respondents who reside in more diverse environments tend to earn better weekly wages than those who live in more homogenous regions. This result shows that more than half of the variation is explained by the specified models. However, although these findings appear to be robust in non-linear specifications with time and a range of other controls, they are not robust when a fixed effect model is estimated.
Generally, the results of this longitudinal study replicate two aspects of previous, mostly cross-sectional findings, using US and European data. First, our main results partially corroborate Ottaviano and Peri (2006) who found that a substantial rise in wages was experienced by US citizens living in areas where the share of migrants rose between 1970 and 1990. Similar findings were obtained by Bellini et al. (2013) for European regions constituting 12 countries in a study that used regional GDP per capita as a proxy for regional wages. Second, the findings that indicate a positive effect of diversity tend to disappear after controlling for individual fixed effects. This robustness issue supports the findings reported by Longhi (2013).
The main concerns with this kind of analysis, as discussed in the literature, are endogeneity and selection bias. We addressed the problem of endogeneity by applying instrumental variable estimation to 10 years of longitudinal data using the shift-share method. We used the predicted rather than actual fractionalisation data to instrument for the diversity index, with results that are consistent after instrumenting. Using a spatial analysis of cultural diversity, employed in only a few studies to date (Longhi 2013; Ottaviano and Peri 2006; Pischke and Velling 1997), we found the positive effect of country-of-birth diversity on weekly wages reported in previous studies to be limited by the estimator type used. This is particularly the case with the introduction of a 3-year lag to the index of diversity, which yields valid instruments and statistically significant OLS coefficients. In this estimation, people living in relatively more diverse regions tend to earn better wages than those who live in less diverse regions. To account for selection bias, we controlled for those who moved LGAs. Further, we controlled for heterogeneity issues by including ancestry, skill differences, and residency in addition to other demographic and human capital factors. Our results appear to be robust to self-selection and heterogeneity issues, although the positive effect held only in the OLS models and not in the FE models.
Finally, our findings also indicate that although contemporaneous levels of cultural diversity strongly and positively affect weekly wages, the impact of previous levels of diversity on weekly wages is not conclusive. This may explain the variation in the literature where some studies report that cultural diversity increases wages across time (Bellini et al. 2013; Ottaviano and Peri 2006) while others do not (Longhi 2013). Future research should explore this further to examine whether the type of data used to construct diversity and the time lag employed have an effect on the estimated effects of diversity.
The phrase cultural diversity is used throughout this paper in reference to a heterogeneous group of people making a society. We use the country of birth as a measure although ethnicity, race, religion, language, and/or nationality can also be used to assess this form of diversity.
Attrition is relatively higher among single persons, unemployed, younger (15–24 year olds), low skilled, and Indigenous people.
Although 2156 postcodes (including "offshore" and "no usual address") are available in the 2006 and 2011 censuses, seven postcodes have missing values. The ABS's TableBuilder instrument enables researchers to build population-level tables of diverse demographic and socio-economic issues based on the 2006 and 2011 censuses. The ABS provided us with a country of origin by postal area data for the 2001 census. However, between the 2001 and 2011 data, there is a discrepancy of 206 postcodes, with 63 postcodes excluded in 2011 while 143 additional postcodes included.
These datasets are not disaggregated by region of residence. Therefore, they only indicate the total residents and new arrivals irrespective of their residence or mobility inside Australia.
The country of birth data is arguably a crude indicator of cultural diversity in that Australian-born respondents have a range of different ethnicities (and to a lesser extent languages) that are not accounted for in this study. In addition, the contribution of migrants to diversity will be overestimated to some extent in that some of them share ethnicity and language with Australian-born respondents.
The annual population estimates were obtained from the ABS creative commons web page.
NB: the index measures diversity at the LGA level although, originally, the individual identifier in HILDA is the postcode.
The observations vary from the restricted final sample due to missing values and population weighting.
ABS. Census QuickStats. Canberra: Australian Bureau of Statistics; 2001. www.abs.gov.au, Accessed 19 Jan 2016.
ABS. Census of population and housing: population growth and distribution 2001, ABS Catalogue No. 2035.0, Canberra: Australian Bureau of Statistics; 2003.
ABS. Standard Australian Classification of Countries (SACC), 2011. Canberra: Australian Bureau of Statistics; 2011a.
ABS. Australian Standard Geographical Classification (ASGC) Digital Boundaries, Australia, July 2011. Canberra: Australian Bureau of Statistics; 2011b.
ABS. Census of population and housing 2011. Canberra: Australian Bureau of Statistics; 2012a.
ABS. Reflecting a nation: stories from the 2011 census, 2012–2013. Canberra: Australian Bureau of Statistics; 2012b.
ABS. Australian social trends. Canberra: Australian Bureau of Statistics; 2014.
Alesina A, La Ferrara E. Ethnic diversity and economic performance. Cambridge: National Bureau of Economic Research; 2004.
Alesina A, La Ferrara E. Ethnic diversity and economic performance. J Econ Lit. 2005;43:762–800.
Alesina A, Harnoss J, Rapoport H. Birthplace diversity and economic prosperity. Cambridge: National Bureau of Economic Research; 2013.
Altonji JG, Card D. The effects of immigration on the labor market outcomes of less-skilled natives. In: Abowd JM, Freeman RB, editors. Immigration, trade, and the labor market. Chicago: University of Chicago Press; 1991. pp. 201–34.
Altonji JG, Blank RM. Race and gender in the labor market. In: Ashenfelter OC, editor. Handbook of labor economics, vol. 3. Amsterdam: North-Holland; 1999. pp. 3143–259.
Angrist JD, Kugler AD. Protective or counter‐productive? Labour market institutions and the effect of immigration on EU natives. Econ J. 2003;113:F302–31.
Baltagi B. Econometric analysis of panel data, Vol. 4, New York: John Wiley & Sons; 2008.
Bellini E, Ottaviano GI, Pinelli D, Prarolo G. Cultural diversity and economic performance: evidence from European regions. In: Geography, institutions and regional economic performance. Berlin: Springer; 2013. p. 121–41.
Bertone S, Leahy M. Social equity, multiculturalism and the productive diversity paradigm: reflections on their role in corporate Australia. In: Phillips SK, editor. Everyday diversity: Australian multiculturalism in practice. Altona: Common Ground Publishing Pty. Ltd; 2001. p. 113–44.
Borjas GJ, Freeman RB, Katz LF, DiNardo J, Abowd JM. How much do immigration and trade affect labor market outcomes? Brookings Pap Econ Act. 1997;1997:1–90.
Borjas GJ, Grogger J, Hanson GH. Imperfect substitution between immigrants and natives: a reappraisal. Cambridge: National Bureau of Economic Research; 2008.
Campos NF, Saleh A, Kuzeyev V. Dynamic ethnic fractionalization and economic growth. J Int Trade Econ Dev. 2011;20:129–52.
Card D. Immigrant inflows, native outflows, and the local market impacts of higher immigration. J Labor Econ. 2001;19:22–61.
D'Amuri F, Ottaviano GI, Peri G. The labor market impact of immigration in Western Germany in the 1990s. Eur Econ Rev. 2010;54:550–70.
Damelang A, Haas A. The benefits of migration: cultural diversity and labour market success. Eur Soc. 2012;14:362–92.
Dustmann C, Fabbri F. Language proficiency and labour market performance of immigrants in the UK. Econ J. 2003;113:695–717.
Fearon JD. Ethnic and cultural diversity by country. J Econ Growth. 2003;8:195–222.
Friedberg RM, Hunt J. The impact of immigrants on host country wages, employment and growth. J Econ Perspect. 1995;9:23–44.
Gomez-Mejia LR, Palich LE. Cultural diversity and the performance of multinational firms. J Int Bus Stud. 1997;28:309–35.
Herring C. Does diversity pay? Race, gender, and the business case for diversity. Am Sociological Rev. 2009;74:208–24.
Hugo G, Harris K. Population distribution effects of migration in Australia. Adelaide: University of Adelaide; 2011.
Hunt J. The impact of the 1962 repatriates from Algeria on the French labor market. Ind Labor Relations Rev. 1992;45:556–72.
OECD. International migration outlook 2013. Paris: OECD Publishing; 2013.
Juvonen J, Nishina A, Graham S. Ethnic diversity and perceptions of safety in urban middle schools. Psychological Sci. 2006;17:393–400.
Kochan T et al. The effects of diversity on business performance: report of the diversity research network. Hum Resour Manage. 2003;42:3–21.
Kohler P. Economic discrimination and cultural differences as barriers to migrant integration: is reverse causality symmetric? Geneva: Graduate Institute of International and Development Studies Working Paper; 2012.
Lieber LD. The hidden dangers of implicit bias in the workplace. Employment Relations Today. 2009;36:93–8.
Longhi S. Impact of cultural diversity on wages, evidence from panel data. Reg Sci Urban Econ. 2013;43:797–807.
Montalvo JG, Reynal-Querol M. Ethnic diversity and economic development. J Dev Econ. 2005;76:293–323.
Nathan M. The economics of super-diversity: findings from British cities, 2001–2006. London: Spatial Economics Research Centre, London School of Economics; 2011.
Neumann R, Graeff P. Method bias in comparative research: problems of construct validity as exemplified by the measurement of ethnic diversity. J Math Sociology. 2013;37:85–112.
Ottaviano GI, Peri G. The economic value of cultural diversity: evidence from US cities. J Econ Geogr. 2006;6:9–44.
Page SE. The difference: how the power of diversity creates better groups, firms, schools, and societies. Princeton: Princeton University Press; 2008.
Perotin V, Robinson A, Loundes J. Equal opportunities practices and enterprise performance: a comparative investigation on Australian and British data. Int Labour Rev. 2003;142:471–505.
Pischke J-S, Velling J. Employment effects of immigration to Germany: an analysis based on local labor markets. Rev Econ Stat. 1997;79:594–604.
Putnam RD. E pluribus unum: diversity and community in the twenty‐first century. The 2006 Johan Skytte Prize Lecture. Scand Political Stud. 2007;30:137–74.
Richard OC. Racial diversity, business strategy, and firm performance: a resource-based view. Acad Manage J. 2000;43:164–77.
Roberson L, Kulik CT. Stereotype threat at work. Acad Manage Perspect. 2007;21:24–40.
Sanderson MR. Does immigration promote long-term economic development? A global and regional cross-national analysis, 1965–2005. J Ethnic Migration Stud. 2013;39:1–30.
Stahl GK, Maznevski ML, Voigt A, Jonsen K. Unraveling the effects of cultural diversity in teams: a meta-analysis of research on multicultural work groups. J Int Bus Stud. 2010;41:690–709.
Suedekum J, Wolf K, Blien U. Cultural diversity and local labour markets. Reg Stud. 2014;48:173–91.
Triana MC, Jayasinghe M, Pieper JR. Perceived workplace racial discrimination and its correlates: a meta‐analysis. J Organizational Behav. 2015;36:491–513.
Vigdor JL. Measuring immigrant assimilation in the United States. New York: Manhattan Institute for Policy Research; 2008.
Watson N. Longitudinal and cross-sectional weighting methodology for the HILDA Survey. 2012.
Wooldridge JM. Econometric analysis of cross section and panel data. 2nd ed. Cambridge: MIT Press; 2010.
Wrench J. Diversity management can be bad for you. Race Class. 2005;46:73–84.
Zhan S, Bendapudi N, Yy H. Re‐examining diversity as a double‐edged sword for innovation process. J Organizational Behav. 2015;36:1026–49.
This paper uses unit record data from the Household, Income and Labour Dynamics in Australia (HILDA) Survey. The HILDA project was initiated and is funded by the Australian Government Department of Families, Housing, Community Services and Indigenous Affairs (FaHCSIA) and is managed by the Melbourne Institute of Applied Economic and Social Research (Melbourne Institute). The findings and views reported in this article, however, are those of the authors and should not be attributed to either FaHCSIA or the Melbourne Institute. The first author was supported by an Australian Postgraduate Award (Industry) as part of linkage project LP100200057 funded by the Australian Research Council (ARC), Victorian Health Promotion Foundation, and the Australian Human Rights Commission. He is also grateful to Mrs. Lidia Abraham for her assistance in the analysis of this research. The second author is supported by an ARC Future Fellowship (FT130101148). We would like to thank an anonymous referee and the editor for the useful remarks.
Responsible editor: Denis Fougère
Alfred Deakin Institute for Citizenship & Globalisation, Faculty of Arts and Education, Deakin University, Burwood, Australia
Amanuel Elias
& Yin Paradies
Correspondence to Amanuel Elias.
The IZA Journal of Migration is committed to the IZA Guiding Principles of Research Integrity. The authors declare that they have observed these principles.
Descriptive analysis of two Australian censuses
Table 8 Demographic characteristics
Elias, A., Paradies, Y. The regional impact of cultural diversity on wages: evidence from Australia. IZA J Migration 5, 12 (2016) doi:10.1186/s40176-016-0060-4
Accepted: 31 March 2016
Instrumental variable
Shift-share
Darboux theorem
Darboux theorem may refer to one of the following assertions:
Darboux theorem on local canonical coordinates for symplectic structure;
Darboux theorem on intermediate values of the derivative of a function of one variable.
For Darboux theorem on integrability of differential equations, see Darboux integral.
Darboux theorems for symplectic structure
2010 Mathematics Subject Classification: Primary: 37Jxx,53Dxx [MSN][ZBL]
Recall that a symplectic structure on an even-dimensional manifold $M^{2n}$ is a closed nondegenerate $C^\infty$-smooth differential 2-form $\omega$: $$ \omega\in\varLambda^2(M),\qquad \rd \omega=0,\qquad \forall\, 0\ne v\in T_p M\quad \exists w\in T_p M:\ \omega_p(v,w)\ne0. $$
The matrix $S(z)$ of a symplectic structure, $S_{ij}(z)=\omega(\frac{\partial}{\partial z_i},\frac{\partial}{\partial z_j})=-S_{ji}(z)$ in any local coordinate system $(z_1,\dots,z_{2n})$ is antisymmetric and nondegenerate[1]: $\omega=\frac12\sum_{i,j=1}^{2n} S_{ij}(z)\,\rd z_i\land \rd z_j$.
The standard symplectic structure on $\R^{2n}$ in the standard canonical coordinates $(x_1,\dots,x_n,p_1,\dots,p_n)$ is given by the form $$ \omega=\sum_{i=1}^n \rd x_i\land \rd p_i.\tag 1 $$
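For the standard structure (1), written in the coordinates ordered as $(x_1,\dots,x_n,p_1,\dots,p_n)$, the matrix $S$ introduced above takes the explicit block form $$ S=\begin{pmatrix}0 & E_n\\ -E_n & 0\end{pmatrix},\qquad \det S=1, $$ and the top wedge power $\underbrace{\omega\land\cdots\land\omega}_{n}=n!\,\rd x_1\land\rd p_1\land\cdots\land\rd x_n\land\rd p_n$ is a volume form, in accordance with the nondegeneracy criterion recalled in footnote [1].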
Local equivalence
Theorem (Darboux theorem[2], sometimes also referred to as the Darboux-Weinstein theorem[3]).
Any symplectic structure is locally $C^\infty$-equivalent to the standard symplectic structure (1): for any point $a\in M$ there exists a neighborhood $M\supseteq U\owns a$ and "canonical" coordinate functions $(x,p):(U,a)\to (\R^{2n},0)$, such that in these coordinates $\omega$ takes the form $\sum \rd x_i\land\rd p_i$.
In particular, any two symplectic structures $\omega_1,\omega_2$ on $M$ are locally equivalent near each point: there exists the germ of a diffeomorphism $h:(M,a)\to(M,a)$ such that $h^*\omega_1=\omega_2$.
Relative versions
Together with the "absolute" version, one has a "relative" version of the Darboux theorem[2][4]: if $M$ is a smooth manifold with two symplectic structures $\omega_1,\omega_2$, and $N$ is a submanifold on which the two 2-forms coincide[5], then near each point $a\in N\subseteq M$ one has a diffeomorphism $h:(M,a)\to(M,a)$ transforming $\omega_1$ to $\omega_2$ and identical on $N$: $$ \omega_1\Big|_{TN}=\omega_2\Big|_{TN}\ \implies\ \exists h\in\operatorname{Diff}(M,a):\quad h^*\omega_1=\omega_2,\quad h|_N\equiv\operatorname{id}. $$
The assertion of the Darboux theorem on local normalization of antisymmetric 2-forms should be compared with a similar question about symmetric nondegenerate forms, which (if positive) define a Riemannian metric on $M$. It is well known that, although at a given point $a$ the Riemannian metric can be brought to the canonical form $\left<v,v\right>=\sum_{i=1}^n v_i^2$, such a transformation is in general impossible in any open neighborhood of $a$: the obstruction, among other things, is represented by the curvature of the metric (which is zero for the "constant" standard Euclidean metric).
In the same way, the relative Darboux theorem means that submanifolds of the symplectic manifold have no "intrinsic" geometry: any two submanifolds $N,N'$ with equivalent (eventually, quite degenerate) restrictions of $\omega$ on $TN$, resp., $TN'$, can be transformed to each other by a diffeomorphism preserving the symplectic structure.
↑ Another way to formulate the nondegeneracy is to require that the highest wedge power $\omega\land\cdots\land\omega$ ($n$ times) is a nonvanishing volume form.
↑ Arnold V. I., Givental A. B., Symplectic Geometry, Dynamical systems, IV, 1–138, Encyclopaedia Math. Sci., 4, Springer, Berlin, 2001. MR1866631. Chap. 2, Sect. 1
↑ Guillemin V., Sternberg S., Geometric asymptotics, Mathematical Surveys, No. 14. American Mathematical Society, Providence, R.I., 1977. xviii+474 pp. MR0516965, Chap. IV, Sect. 1.
↑ McDuff, D., Salamon, D., Introduction to symplectic topology (Second edition). Oxford Mathematical Monographs. Oxford University Press, New York, 1998. x+486 pp. MR1698616, Sect. 3.2.
↑ This means that the 2-forms $\omega_i$ take the same value on any pair of vectors tangent to $N$. This condition is weaker than coincidence of the forms $\omega_i$ at all points of $N$.
Darboux theorem for intermediate values of differentiable functions
2010 Mathematics Subject Classification: Primary: 26A06 [MSN][ZBL]
If $f:[a,b]\to\R$ is a function which is differentiable at all points of the segment $[a,b]\subseteq\R$ (the right and left derivatives are assumed at the endpoints $a,b$ respectively), then its derivative assumes all intermediate values[1] (i.e., the range of the derivative $f'=\frac{\rd f}{\rd x}:[a,b]\to\R$ is a connected set).
For functions $f\in C^1[a,b]$ whose derivative is continuous, this is a simple consequence of the intermediate value theorem applied to the continuous function $f'$. For functions whose derivative exists at all points but is discontinuous, e.g., $f(x)=x^2\sin(1/x)$, $0\ne x\in[-1,1]$, $f(0)=0$, the assertion follows from the Fermat principle (the derivative of a differentiable function at an interior extremal point vanishes), applied to a suitable combination $f(x)-\alpha x$. See also the Darboux property.
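To spell out why this standard example works: for $x\ne0$ one computes $$ f'(x)=2x\sin\frac1x-\cos\frac1x,\qquad f'(0)=\lim_{h\to0}\frac{h^2\sin(1/h)}{h}=0, $$ so the derivative exists everywhere, yet near the origin it oscillates between values arbitrarily close to $-1$ and $1$ and therefore has no limit at $0$; nevertheless, by the theorem its range over any subinterval of $[-1,1]$ is still an interval.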
↑ Darboux's theorem (2012). In Encyclopædia Britannica. Retrieved from [the EB site].
Darboux theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Darboux_theorem&oldid=30975
This article was adapted from an original article by L.D. Kudryavtsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Retrieved from "https://encyclopediaofmath.org/index.php?title=Darboux_theorem&oldid=30975"
Dynamical systems and ordinary differential equations
Dynamical systems and ergodic theory
Real functions
Langevin equation
From Encyclopedia of Mathematics
In 1908 P. Langevin [a1] proposed the following equation to describe the natural phenomenon of Brownian motion (the irregular vibrations of small dust particles suspended in a liquid):
$$ \tag{a1 } \frac{dv ( t) }{dt} = - \gamma v ( t) + L ( t). $$
Here $ v ( t) $ denotes the velocity at time $ t $ along one of the coordinate axes of the Brownian particle, $ \gamma > 0 $ is a friction coefficient due to the viscosity of the liquid, and $ L ( t) $ is a postulated "Langevin force", standing for the pressure fluctuations due to thermal motion of the molecules comprising the liquid. This Langevin force was supposed to have the properties
$$ \mathbf E ( L ( t)) = 0 \ \ \textrm{ and } \ \ \mathbf E ( L ( t) L ( s)) = D \cdot \delta ( t - s). $$
The Langevin equation (a1) leads to the following diffusion (or "Fokker–Planck" ) equation (cf. Diffusion equation) for the probability density on the velocity axis:
$$ \tag{a2 } { \frac \partial {\partial t } } \rho _ {t} ( v) = \ \gamma \frac \partial {\partial v } ( v \rho _ {t} ( v)) + { \frac{1}{2} } D ^ {2} \frac{\partial ^ {2} }{\partial v ^ {2} } \rho _ {t} ( v). $$
The equations (a1) and (a2) provided a conceptual and quantitative improvement on the description of the phenomenon of Brownian motion given by A. Einstein in 1905. The quantitative understanding of Brownian motion played a large role in the acceptance of the theory of molecules by the scientific community. The numerical relation between the two observable constants $ \gamma $ and $ D $, namely $ D = 2 \gamma kT/M $ (where $ T $ is the temperature and $ M $ the particle's mass), gave the first estimate of Boltzmann's constant $ k $, and thereby of Avogadro's number.
The Langevin equation may be considered as the first stochastic differential equation. Today it would be written as
$$ dv ( t) = - \gamma v ( t) dt + D dw ( t), $$
where $ w ( t) $ is the Wiener process (confusingly called "Brownian motion" as well). The solution of the Langevin equation is a Markov process, first described by G.E. Uhlenbeck and L.S. Ornstein in 1930 [a2] (cf. also Ornstein–Uhlenbeck process).
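As an illustrative aside, the Ornstein–Uhlenbeck process solving this stochastic differential equation is straightforward to simulate with an Euler–Maruyama scheme. The sketch below writes the noise amplitude as $\sigma$ (playing the role of the constant $D$ in the displayed equation) and uses arbitrary parameter values; it is a numerical illustration only, not part of the original article.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma, sigma = 1.0, 0.5      # friction coefficient and noise amplitude (illustrative values)
dt, n_steps = 1e-3, 100_000  # time step and number of steps

v = np.empty(n_steps)
v[0] = 0.0
for k in range(n_steps - 1):
    dw = rng.normal(0.0, np.sqrt(dt))                   # Wiener increment, variance dt
    v[k + 1] = v[k] - gamma * v[k] * dt + sigma * dw    # Euler-Maruyama step for dv = -γ v dt + σ dw

# For the stationary Ornstein-Uhlenbeck process the variance is σ²/(2γ).
print("empirical variance :", v[n_steps // 2:].var())
print("theoretical σ²/(2γ):", sigma**2 / (2 * gamma))
```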
The Langevin equation is a heuristic equation. The program to give it a solid foundation in Hamiltonian mechanics has not yet fully been carried through. Considerable progress was made by G.W. Ford, M. Kac and P. Mazur [a3], who showed that the process of Uhlenbeck and Ornstein can be realized by coupling the Brownian particle in a specific way to an infinite number of harmonic oscillators put in a state of thermal equilibrium.
In more recent years, quantum mechanical versions of the Langevin equation have been considered. They can be subdivided into two classes: those which yield Markov processes and those which satisfy a condition of thermal equilibrium. The former are known as "quantum stochastic differential equations" [a4], the latter are named "quantum Langevin equations" [a5].
[a1] P. Langevin, "Sur la théorie de mouvement Brownien" C.R. Acad. Sci. Paris , 146 (1908) pp. 530–533
[a2] G.E. Uhlenbeck, L.S. Ornstein, "On the theory of Brownian motion" Phys. Rev. , 36 (1930) pp. 823–841
[a3] G.W. Ford, M. Kac, P. Mazur, "Statistical mechanics of assemblies of coupled oscillators" J. Math. Phys. , 6 (1965) pp. 504–515
[a4] C. Barnett, R.F. Streater, I.F. Wilde, "Quasi-free quantum stochastic integrals for the CAR and CCR" J. Funct. Anal. , 52 (1983) pp. 19–47
[a5] R.L. Hudson, K.R. Parthasarathy, "Quantum Itô's formula and stochastic evolutions" Commun. Math. Phys. , 93 (1984) pp. 301–323
Langevin equation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Langevin_equation&oldid=47575
This article was adapted from an original article by H. Maassen (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Retrieved from "https://encyclopediaofmath.org/index.php?title=Langevin_equation&oldid=47575"
Open Access 29.11.2022 | Review
Humanistic and Economic Burden of Atopic Dermatitis for Adults and Adolescents in the Middle East and Africa Region
Authors: Baher Elezbawy, Ahmad Nader Fasseeh, Essam Fouly, Mohamed Tannira, Hala Dalle, Sandrine Aderian, Laila Carolina Abu Esba, Hana Al Abdulkarim, Alfred Ammoury, Esraa Altawil, Abdulrahman Al Turaiki, Fatima Albreiki, Mohammed Al-Haddab, Atlal Al-Lafi, Maryam Alowayesh, Afaf Al-Sheikh, Mahira Elsayed, Amin Elshamy, Maysa Eshmawi, Assem Farag, Issam Hamadah, Meriem Hedibel, Suretha Kannenberg, Rita Karam, Mirna Metni, Noufal Raboobee, Martin Steinhoff, Sherif Abaza, Mohamed Farghaly, Zoltán Kaló
Published in: Dermatology and Therapy | Issue 1/2023
Atopic dermatitis (AD) is a chronic skin disease that poses a significant burden on both patients and the society. AD causes the highest loss in disability-adjusted life years compared with other skin diseases. This study aimed to estimate the economic and humanistic burden of AD in adults and adolescents in seven countries in the Middle East and Africa region (Egypt, Lebanon, Saudi Arabia, Kuwait, Algeria, South Africa, and United Arab Emirates).
We conducted a literature review to identify country-specific data on this disease. Subsequently, meetings were organized with experts from each country to complete the missing data. The data were aggregated and calculation models were created to estimate the value of the humanistic and economic burden of the disease in each country. Finally, we conducted meetings with local experts to validate the results, and the necessary adjustments were made.
On average, a patient with AD loses 0.19 quality-adjusted life years (QALYs) annually owing to this disease. The average annual healthcare cost per patient is highest in the United Arab Emirates, with an estimated value of US $3569 and a population-level indirect cost of US $112.5 million. The included countries allocated a range of 0.20–0.77% of their healthcare expenditure to AD-related healthcare services and technologies. The indirect cost of AD represents approximately 67% of the total disease cost and, on average, approximately 0.043% (range 0.022–0.059%) of the gross domestic product (GDP) of each country.
Although the humanistic and economic burdens differ from country to country, AD carries a significant socioeconomic burden in all countries. The quality of life is severely affected by the disease. If AD is controlled, the costs, especially indirect costs, could decrease and the disease burden could be alleviated significantly.
The online version contains supplementary material available at https://doi.org/10.1007/s13555-022-00857-0.
Key Summary Points
The burden of atopic dermatitis has not been sufficiently quantified in Africa and the Middle East.
The quality of life of patients and caregivers is severely affected by atopic dermatitis.
Atopic dermatitis carries a significant socioeconomic burden worldwide.
There is an opportunity to decrease the disease burden through proper management.
By controlling diseases, the costs and quality of life loss burden can be alleviated significantly.
Atopic dermatitis (AD) is a chronic skin disease that significantly decreases the quality of life of patients [ 1 ]. It may also lead to economic losses for patients and societies, especially in a severe state [ 2 ]. AD is occasionally mistaken for a pediatric disease because it is very common in children; however, recent studies have shown that AD is also common in adults, with a prevalence ranging from 2.1% to 4.9% [ 3 ]. This disease creates a significant humanistic and economic burden for individual patients and society [ 4 , 5 ]. The Global Burden of Disease study estimated that AD has the highest burden of disability-adjusted life years (DALYs) among skin diseases, exceeding that of psoriasis (75% higher), urticaria (82% higher), and scabies (more than 100% higher) [ 6 ]. Globally, the age-standardized rate of disability-adjusted life years is higher for AD than for other serious diseases, such as liver cirrhosis and alcohol-associated chronic liver diseases [ 6 ].
The treatments for AD include a wide range of topical and systemic agents, targeted therapies, and phototherapies. The treatment costs vary among these options, from inexpensive topical anti-inflammatory agents and emollients to expensive targeted therapies [ 7 ]. In addition to direct healthcare costs, AD also implies a hidden indirect cost that represents a considerable proportion of the total cost [ 8 ].
The prevalence of AD and its manifestations are affected by the climate. The disease tends to manifest more in dry weather [ 9 , 10 ]; therefore, the burden may vary according to the climate of each country. The burden of AD in the Middle East and Africa has been discussed in a recent literature review [ 11 ], and other reviews have estimated its prevalence or burden in specific cities [ 12 , 13 ]; however, to our knowledge, this is the first study to quantify the burden of the disease in adults and adolescents in specific countries in the region. Country-specific burden data are essential to allow decision-makers to make evidence-based decisions and efficiently allocate the available resources.
This study aimed to estimate the economic and humanistic burden of AD in adults and adolescents in seven countries in the Middle East and Africa region: Algeria, Egypt, Kuwait, Lebanon, Saudi Arabia (KSA), South Africa, and the United Arab Emirates (UAE).
Primary and secondary data were used to estimate the disease burden. We conducted a literature search and expert interviews to obtain and validate the data on humanistic and economic burdens in the seven selected countries. Additionally, calculation models were created using Microsoft Excel to quantify the burden in each country. We used a bottom-up approach to estimate the humanistic and economic burdens. The values of quality-adjusted life years (QALYs) lost, as well as the healthcare costs and indirect costs incurred by an average patient with AD, were multiplied by the number of patients with AD in a country to estimate the total burden. In general, this study had a conservative approach: if we could not find an accurate estimate of an input, its lower estimate was used; therefore, the actual burden is safely more than the estimate we have provided.
For the bottom-up calculation, the data on the number of adults and adolescents with AD in each country were required. These prevalence data should be stratified by age group because the quality of life and prevalence differ significantly among age groups. We used prevalence data estimates for the seven countries from the Global Burden of Disease study [ 14 ]. The 2019 prevalence data (latest reports) are presented in Table 1. The prevalence details by age and sex are shown in Table S1.
Table 1 Patients with AD aged 10–74 years in the selected countries: 2019 prevalence of AD (n), male and female patients. Source: Global Burden of Disease Results Tool (Global Health Data Exchange) [ 34 ]. AD atopic dermatitis
To estimate the humanistic burden of AD in the seven selected countries, we multiplied the number of patients in each country by the average loss in quality of life annually (the value of utility lost per patient in 1 year).
There were no country-level data regarding the values of the annual utilities lost owing to AD; therefore, we opted to use data from international studies to calculate the age-standardized QALYs lost. We specifically searched for studies reporting the quality of life subgrouped by age because the utility loss differs among different age groups.
Beikert et al. [ 15 ] reported the quality-of-life values for patients with AD sub-grouped by age as EuroQoL 5-dimension (EQ-5D) visual analog scale values. To use these data to estimate the utility loss per age group, we converted the data into 0–1 utility values. There was no ready-made tool for this conversion; therefore, a regression model was built on the basis of five studies identified in the literature [ 16 ‐ 20 ]. Each of these studies included EQ-5D index utility values and EQ-5D visual analog scale results for the same group of patients. We used these values to create a regression model and converted the EQ-5D visual analog scale values to EQ-5D index utility values.
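The conversion model could be reproduced along the following lines; the five (VAS, utility) pairs below are placeholders rather than the values actually extracted from the five source studies.

```python
import numpy as np

# Placeholder study-level means: (EQ-5D VAS on 0-100, EQ-5D index utility on 0-1).
# The real model was fitted to the paired values reported in the five source studies.
vas     = np.array([45.0, 55.0, 65.0, 75.0, 85.0])
utility = np.array([0.55, 0.63, 0.71, 0.80, 0.88])

# Simple linear regression utility = a + b * VAS, via least squares.
b, a = np.polyfit(vas, utility, deg=1)

def vas_to_utility(v):
    """Convert an EQ-5D VAS score to a 0-1 index utility with the fitted line."""
    return float(np.clip(a + b * v, 0.0, 1.0))

print(vas_to_utility(70.0))  # e.g. convert an age-group mean VAS of 70
```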
Beikert et al. reported only values for patients aged ≥ 18 years; therefore, we used the data from another study (Ezzedine et al.) [ 21 ] to determine the quality of life for patients aged 10–18 years. Ezzedine et al. reported the utility values for patients aged 12–14 and 15–17 years. These values were used as proxies for the quality of life for those in the 10–14 and 15–19 age groups, respectively, to match the prevalence age structure grouping. In the study by Ezzedine et al., the quality-of-life values were reported on the basis of the children and adult versions of the Dermatology Life Quality Index (DLQI) questionnaire results, which were converted into EQ-5D index utility values through a specialized online tool [ 22 ].
After collecting the utility values for all patient age groups, we calculated the utility loss from the general population (the utility each patient with AD loses owing to the disease compared with the utility of the general population). The utility of the general population for each age group was reported by Janssen et al. [ 23 ] in 20 countries worldwide. We calculated the average utility for all countries, and assumed that this would be the baseline utility for each age group. The study reported values for those aged 18–75 years. We assumed that the patients in the 10–14 and 15–19 age groups would have the same quality of life as the 18–24 age subgroup.
Finally, to calculate the utility loss owing to AD, the utility value for a patient with AD in each subgroup was subtracted from that for the general population in the same subgroup. The humanistic burden in each country was calculated by multiplying the number of patients in each age group by the average utility lost for the same age group over 1 year. The product represents the QALYs lost per country per year owing to AD. The age-standardized utility loss per patient for each country was calculated by dividing the total QALYs lost by the number of patients with AD in each country. This value was calculated to allow comparability between countries.
To calculate the monetary value of QALYs lost owing to AD, the annual QALYs lost in the previous step were multiplied by the gross domestic product (GDP) per capita for each country in 2019 USD. To allow for comparability between countries, the total monetary value of QALYs lost was divided by each country's GDP, and countries were compared by the monetary value of QALYs lost as a percentage of GDP. We obtained GDP and GDP per capita values from the 2018 World Health Organization Global Health Expenditure database [ 24 ].
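A compact sketch of this bottom-up humanistic burden calculation is shown below; the per-age-group patient counts and utility values are hypothetical figures chosen only to mirror the structure of Table 2, and the GDP per capita is likewise an assumed figure.

```python
# Bottom-up humanistic burden, following the steps described above.
age_groups = {
    # age band: (patients with AD, general-population utility, AD-patient utility)
    "10-19": (120_000, 0.92, 0.77),
    "20-44": (300_000, 0.91, 0.72),
    "45-74": (180_000, 0.85, 0.65),
}

qalys_lost = sum(n * (u_pop - u_ad) for n, u_pop, u_ad in age_groups.values())
patients = sum(n for n, _, _ in age_groups.values())

age_standardised_loss = qalys_lost / patients   # utility lost per patient per year
gdp_per_capita = 8_000.0                        # assumed 2019 USD figure for the sketch
monetary_value = qalys_lost * gdp_per_capita

print(f"QALYs lost per year: {qalys_lost:,.0f}")
print(f"Utility loss per patient: {age_standardised_loss:.3f}")
print(f"Monetary value of QALYs lost (USD): {monetary_value:,.0f}")
```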
The healthcare cost items included outpatient visits, hospitalization, topical treatments, systemic treatments, targeted therapy, and phototherapy sessions. As the economic data are not transferable across countries, we collected the local data on the costs from each country. We conducted a series of structured interviews with experts from each country to estimate the healthcare costs of AD. The questionnaire used in the interview was based on a scoping review conducted to identify the relevant cost components related to the disease. This questionnaire was validated by a healthcare professional who recommended that the questionnaire should be stratified by severity levels (mild, moderate, and severe) because each level requires different interventions and, therefore, has different costs.
We conducted interviews with two or three healthcare professionals from each country. For each country, at least two experts were interviewed. If the results of the two estimates differed significantly (more than double the average), a third interview with a different expert was conducted. Among the three results, the lowest two results were chosen as per the conservative approach of the study.
The data collected during the interviews included the severity distribution among patients and the details of healthcare costs, such as healthcare resource utilization, outpatient visits, length of hospital stay, lab tests, and topical and systemic treatments for each severity level.
The public unit costs of treatments or services for patients with AD were collected for each country from online official price lists, online pharmacy prices, and hospital prices or expert interviews, if all the previous data were unavailable. The questionnaire template and details of each domain can be found in Tables S2 and S3. To allow for comparability between countries, the cost values were converted to 2019 USD using the annual average exchange rate from the World Bank database [ 25 ]. The values of healthcare costs for AD as a percentage of the total healthcare expenditure were calculated for each country to assess the relative healthcare cost burden. We obtained data on healthcare expenditures from the 2018 World Health Organization Global Health Expenditure database [ 24 ].
The questionnaire was sent to each healthcare professional to understand its structure, and an online structured 2-h interview was conducted with each healthcare professional to complete the questionnaire. The interviewers completed the questionnaires on the basis of the experts' answers. A total of 17 clinical experts were interviewed. These experts were selected on the basis of a convenience sampling technique in each country, choosing accessible healthcare professionals who have experience in dermatology.
The questionnaires aimed to provide data on the annual average cost burden of AD per patient per country. To estimate the total healthcare cost per country, we multiplied the number of patients in each country by the average cost per patient (obtained from the questionnaire).
Not all patients with AD are diagnosed, and not all patients are treated [ 3 ]. The untreated population will, of course, incur no healthcare costs. Hanifin et al. estimated the percentage of AD cases diagnosed by a physician to be 37.1% [ 26 ]. Accordingly, the healthcare costs in our study were multiplied by 37.1% to adjust for the proportion of diagnosed and treated patients.
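A minimal sketch of the country-level healthcare cost calculation described above is given below; the patient count and per-patient cost are placeholder values, while the 37.1% diagnosed-and-treated share is the Hanifin et al. estimate used in the study.

```python
# Bottom-up healthcare cost for one country, following the steps above.
n_patients = 500_000            # adults/adolescents with AD (hypothetical count)
avg_cost_per_patient = 1_000.0  # severity-weighted annual cost, 2019 USD (hypothetical)
diagnosed_share = 0.371         # share of AD cases diagnosed and treated (Hanifin et al.)

total_healthcare_cost = n_patients * avg_cost_per_patient * diagnosed_share
print(f"Annual healthcare cost (USD): {total_healthcare_cost:,.0f}")
```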
On the basis of the literature search conducted, the indirect costs of AD are mainly related to productivity loss owing to absenteeism and presenteeism of patients and their caregivers. Absenteeism was defined as the number of days the patient was absent from work or school, and presenteeism was defined as the number of days the patient was at work or school, but was not productive [ 27 ].
The average annual presenteeism and absenteeism values for each patient with AD were calculated on the basis of a literature search of several studies that included numerical data on presenteeism and absenteeism owing to AD. A list of studies reporting absenteeism and presenteeism data is presented in Table S4. Few studies mentioned data on absenteeism for caregivers; most studies that included these data focused only on children. Therefore, because our study adopted a conservative approach and included adults and adolescents, the caregiver burden was excluded from our calculations. The reported presenteeism and absenteeism values were estimated on the basis of the weighted average of the AD severity.
The following example shows how presenteeism and absenteeism values were estimated from each study:
If patients with AD of mild severity represent 50% of the study population, and are absent for 5 days on average owing to AD, patients with moderate AD represent 35% and are absent for 15 days, and patients with severe AD represent 15% and are absent for 25 days, then the average absenteeism value would be calculated as 50% × 5 + 35% × 15 + 15% × 25 = 11.5 days of absenteeism annually for an average patient with AD.
The average productivity lost by patients in the literature was adapted to local settings, considering the prevalence of working age, employment rate, sex, and labor force participation rate (LFPR) [ 28 ‐ 30 ]. These inputs were used to calculate the AD-related indirect costs owing to absenteeism and presenteeism.
To calculate the value of indirect costs for a whole country population, the approach was to multiply the number of patients in the working age group (aged 15–65 years) by the cost of 1 day of presenteeism or absenteeism, and the annual number of days lost. The cost of 1 day was calculated on the basis of the average salary in the country and the number of working days per year. Simultaneously, the number of working patients was adjusted to the LFPR and unemployment rate by sex.
The following equation was created and used to calculate the productivity lost:
$$\text{Productivity lost} = \left[\text{LFPR}_{\text{male}}\times(1-\text{unemployment rate})\times\text{prevalence}_{\text{male}} + \text{LFPR}_{\text{female}}\times(1-\text{unemployment rate})\times\text{prevalence}_{\text{female}}\right]\times(\text{absenteeism or presenteeism days})\times\text{average daily salary}.$$
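As a rough illustration of how this formula operates in the bottom-up model, the sketch below plugs in placeholder values; only the 6.1 absenteeism days and 22.9 presenteeism days are taken from the Results reported later, while the LFPR, unemployment, prevalence, and salary figures are hypothetical.

```python
def productivity_cost(lfpr_m, lfpr_f, unemployment, prev_m, prev_f,
                      days_lost, avg_daily_salary):
    """Annual indirect cost of AD from absenteeism or presenteeism,
    following the equation above."""
    working_patients = (lfpr_m * (1 - unemployment) * prev_m
                        + lfpr_f * (1 - unemployment) * prev_f)
    return working_patients * days_lost * avg_daily_salary

# Placeholder inputs for one country (illustrative only).
absenteeism = productivity_cost(
    lfpr_m=0.75, lfpr_f=0.30, unemployment=0.10,
    prev_m=90_000, prev_f=110_000,          # working-age patients by sex
    days_lost=6.1, avg_daily_salary=60.0)   # days per year and 2019 USD per day

presenteeism = productivity_cost(
    lfpr_m=0.75, lfpr_f=0.30, unemployment=0.10,
    prev_m=90_000, prev_f=110_000,
    days_lost=22.9, avg_daily_salary=60.0)

print(f"Indirect cost (USD/year): {absenteeism + presenteeism:,.0f}")
```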
Our results are based on several sources. Local experts from each country validated the extracted and synthesized data. We conducted meetings with experts (payers and healthcare professionals) in the field to validate our results regarding the humanistic and economic burden in light of their local settings and culture. The healthcare professionals involved in the initial data collection did not contribute to validation.
Two research team members managed and coordinated each validation meeting (principal researcher and senior researcher). The meetings were conducted online with local experts who provided feedback about the results, recommended some changes, and provided better or more updated references for some data points. The meetings were recorded and transcribed, and all the key points of the validators were addressed. The research findings and calculations were updated after the validation meetings, and the estimates were adjusted on the basis of recommendations.
An example of the changes recommended by validators and applied to the results is using the unemployment rate reported by the Department of Statistics in South Africa [ 28 ] rather than another older estimate. Additionally, in South Africa experts recommended adding the average dispensing fee to drug prices instead of using the single exit price. In Lebanon, experts advised on using the average salary provided by the Salary Explorer website [ 31 ]. A summary of the results of the validation meetings and modifications can be found in Table S5.
Compliance with Ethics Guidelines
This study is based on previously conducted research and does not include any new studies with human participants or animals performed by any of the authors.
The humanistic burden of AD is expressed as the utility loss per age group. The estimated utility value of an average patient with AD ranges from 0.54 to 0.77 (adjusted from Beikert et al. [ 15 ] and Ezzedine et al. [ 21 ]). Compared with the average population, the patients with AD are estimated to lose between 0.09 and 0.28 QALYs annually owing to AD. The details of the lost utility per patient are presented in Table 2.
Table 2 Estimated annual utility lost per patient with AD, by age group: age range (years), average non-patient utility^a, average patient utility^b, and average utility lost per patient, for age bands up to ≥ 75 years
^a Values adapted from Janssen et al. [ 23 ]
^b Values adapted from Beikert et al. [ 15 ] and Ezzedine et al. [ 21 ]
At the country level, the aggregated QALY loss is higher in countries with larger populations. Egypt suffered the highest QALY loss, and Kuwait had the lowest QALY loss owing to AD. The aggregated AD humanistic burden is approximately 334,000 QALYs lost annually in the seven countries included in this study. The age-standardized utility loss per patient per country ranged from 0.185 to 0.189. The average utility loss per patient for the seven countries was estimated at 0.187. The details of humanistic burden including QALYs lost per country and utility lost per patient are shown in Fig. 1.
Fig. 1 Annual lost QALYs per country and utility loss per patient owing to AD. AD atopic dermatitis, QALY quality-adjusted life year
The cost of AD per patient largely depends on the economic status and the prices of healthcare services of each country. The costs for each severity level were determined, and the weighted average was calculated to provide a single estimate for an average patient. The average annual healthcare cost was calculated for each country; the healthcare cost domains are detailed in Table S3.
In Algeria, the annual cost per patient is US $312. This cost is the lowest among the seven countries. The results showed that the UAE and Kuwait had a remarkably high average cost per patient compared with other countries in the region: US $3569 and US $2880 per patient, respectively. In most of the questionnaires conducted, the use of targeted therapies, with prices much higher than those of other topical or systemic interventions, was considered one of the main cost drivers. In countries where targeted therapies are more frequently used, the average cost per patient tends to be much higher than that in countries where targeted therapies are not commonly prescribed.
For country-level costs, the UAE also had the highest annual cost at US $112.5 million, followed by Saudi Arabia and Egypt with US $99.5 million and US $95.5 million, respectively. The lowest annual cost was in Lebanon at US $13.6 million. The total healthcare costs of the seven countries combined were estimated at more than US $460 million. The annual healthcare cost estimates are presented in Table 3.
Table 3 Average annual healthcare cost for AD per patient and per country: average annual cost per patient and annual cost per country (million). All costs are in 2019 USD
Using the absolute healthcare cost values for these countries, which do not share the same income level or healthcare expenditure, makes it difficult to compare the burdens of these countries. Therefore, we calculated the healthcare cost burden of AD as the ratio of the annual healthcare expenditure in each country. Egypt showed the highest cost for AD per healthcare expenditure at 0.77%, and South Africa and Saudi Arabia showed the lowest, at only 0.2%. On average, the healthcare cost of AD accounts for approximately 0.4% of the total health expenditure in these countries. The details are shown in Fig. 2.
Fig. 2 Annual cost of AD as a percentage of total health expenditure. AD atopic dermatitis
The literature search showed an annual productivity loss of 6.1 days of absenteeism and 22.9 days of presenteeism owing to AD for an average patient (average of all severity-level patients). This means that, on average, each patient with AD loses approximately 28.9 days of productivity annually because of the disease.
Compared with the other countries included in this study, Saudi Arabia had the highest annual loss in indirect costs owing to AD (US $364 million), followed by the UAE (US $228 million) and South Africa (US $152 million). Kuwait, Egypt, Algeria, and Lebanon had much lower values, ranging from US $33 million in Lebanon (the lowest) to US $62 million in Kuwait. To show the relative effect of the disease on each country, these values were divided by the respective GDP of each country. The indirect cost of AD as a percentage of GDP was the highest in Lebanon (0.061%) and lowest in Egypt (0.022%). The average indirect cost, as a percentage of the national GDP for the seven countries, was 0.041%. The details of indirect costs are shown in Fig. 3.
Fig. 3 Absenteeism, presenteeism, and total indirect costs as absolute values, and total indirect costs as a percentage of national GDP. GDP gross domestic product. All costs are in 2019 USD
The total burden of AD comprises the total economic burden (healthcare and indirect costs) and the monetary value of the QALYs lost owing to the disease.
The economic burden of countries owing to AD was calculated as the sum of healthcare and indirect costs of each country. The total economic burden of AD in Saudi Arabia was observed to be the highest, at US $463 million annually. The aggregated economic burden of the seven countries exceeds US $1.4 billion annually.
Indirect costs represented a significant portion of the total economic burden, ranging from 37% in Egypt to 79% in Saudi Arabia. On average, the indirect costs represented 67% of the total AD cost.
The monetary value of QALYs lost was calculated as the product of QALYs lost and GDP per capita for each country. The QALYs lost were translated into a monetary loss ranging from US $66.9 million in Lebanon to approximately US $1.5 billion in Saudi Arabia.
Table 4 presents a summary of the healthcare and indirect costs and their contribution to the total economic burden as a percentage as well as the monetary value of the QALYs lost. The sum of these values (total economic burden and monetary value of QALYs lost) provides an estimate of the total burden of AD in adults and adolescents in each country.
Table 4 Total annual monetary burden of AD as the sum of the economic burden and the monetary value of QALYs lost (humanistic burden): healthcare costs^a, indirect costs^a, total economic burden^b, and monetary value of QALYs lost. All costs are shown in millions of 2019 USD. AD atopic dermatitis, QALY quality-adjusted life year
^a USD (% of total economic burden)
^b The sum of healthcare costs and indirect costs
As the seven countries differ in their economic status and size, the relative burden of the disease was calculated by dividing the estimated values for each country by its GDP. The AD healthcare costs ranged from 0.013% to 0.038% of the GDP in these countries. The indirect costs ranged from 0.022% to 0.061%. The total economic burden ranges from 0.046% to 0.085%. The loss was much higher when including the humanistic burden in the calculation because each QALY lost owing to the disease was translated into monetary losses. The estimated monetary value of the QALYs lost ranged from 0.104% to 0.191% of each country's GDP. On the basis of this, the total burden of the disease ranges from 0.164% to 0.265% of the national GDP in these countries. The monetary value of QALYs lost represented a considerable share of this total burden, with the humanistic burden representing approximately 2.4 times the total economic burden in all countries. Details of the relative burden of AD are presented in Table 5.
Table 5 AD healthcare costs, indirect costs, and total economic burden as a percentage of national GDP: cost as % of GDP for healthcare cost, indirect cost, total economic burden^a, and monetary value of QALYs lost. AD atopic dermatitis, GDP gross domestic product, QALY quality-adjusted life year
^a The sum of healthcare costs and indirect costs
Our results show that AD in adults and adolescents causes a significant burden in all seven countries that were studied in the Middle East and Africa region. These results were obtained despite the heterogeneous age structures, income levels, and population sizes in these countries. The aggregated results show that, on average, patients with AD lose 19% of their health-related quality of life owing to their disease. This value is comparable to the utility decrements of more severe conditions, such as kidney transplantation [ 32 ]. The value of the total QALYs lost per country was associated with population size, with Egypt (most populous among the included countries) experiencing the greatest loss and Kuwait (least populous) experiencing the lowest loss.
The average healthcare cost per patient was highest in higher-income countries (the UAE and Kuwait). Medical interventions in these countries seem to be relatively more expensive, resulting in higher costs per patient. On the basis of the questionnaire results, more advanced treatments, such as targeted therapies and phototherapy, are more common in higher-income countries. The healthcare cost of AD represents 0.20–0.77% of the total healthcare expenditure in the countries studied here, with an unweighted average of 0.4%, which is comparable to other significant contributors to healthcare expenditure. For example, in Germany in 2019, screening programs represented 0.6% of the total healthcare expenditure and maternity services represented 0.3% [ 33 ]. For country-level healthcare costs, the calculated values were affected by the population size and income level. The UAE had the highest burden owing to its high GDP per capita, followed by Saudi Arabia, which has a lower GDP per capita, but a larger population, and Egypt, which has the largest population, but a lower GDP per capita.
The indirect costs are also related to income level and population size. Among the countries studied, Saudi Arabia had the highest indirect costs related to AD. This is probably owing to the fact that among the seven countries, Saudi Arabia is the only country that has a combination of a relatively large population and a high per capita GDP. Egypt, for example, has the largest population, but has a low average annual salary; therefore, the indirect costs were not high.
Presenteeism contributed more than absenteeism to indirect costs. The indirect costs represent a significantly greater portion of the total burden than healthcare costs in most countries, accounting for up to 79% of the total economic burden in Saudi Arabia. Only Algeria and Egypt had lower indirect costs than healthcare costs. However, the indirect costs of AD pose a substantial societal burden, representing an average of 61% of the economic burden.
The total burden was significantly affected when humanistic burden was translated into an economic figure. In the UAE and Egypt, the monetary value of QALYs lost exceeded three times the aggregated healthcare and indirect costs. The humanistic burden represented 2.4 times the total economic burden on average for all countries. This shows that AD is associated with a significant hidden burden that may be considered much higher than the direct, tangible burden.
Owing to the scarcity of local data for the included countries, the age-standardized QALYs lost and lost productivity were calculated by adjusting the international data to local demographics. This approach may not have captured the exact local burden and, more importantly, may have ignored, to some extent, the differences in disease severity across countries. The estimated burden is probably an underestimation owing to the prevalence estimates from the Global Burden of Disease study, which are significantly lower than those of most other studies reporting the prevalence of AD. However, owing to the lack of age-stratified prevalence data in other studies, we used the best available estimates.
When we calculated the total economic burden, we assumed that the healthcare costs of AD were equal to the total direct costs, excluding other cost components that may contribute to direct costs, such as direct nonmedical costs.
On the basis of the experts' opinions, other factors were not accounted for in the study, such as the effect on mental health, use of antidepressants, side effects of treatments, effect on career choice, and psychological effect on caregivers. However, these are partially accounted for in humanistic burden estimates.
Another factor confirming that our economic burden estimate for AD should be considered as a minimum estimate is the extra expense incurred by patients owing to the disease (e.g., personal care products and other informal costs). These expenses are usually difficult to calculate, but negatively affect a patient's financial state.
For these reasons, further local studies are recommended to obtain a more accurate estimate of the burden of AD that considers the local healthcare system and various cultural aspects, specifically in terms of productivity loss and quality of life burden.
AD carries a considerable burden, mainly owing to the poor quality of life and significant productivity loss in patients. However, unlike diseases with high mortality, resource allocation is less prioritized for AD because the disease mainly affects the quality of life rather than the life years of the patients.
This study explored the humanistic and economic burdens of AD in adult and adolescent patients, combining the estimates of the minimum economic burden expected from healthcare and indirect costs related to the disease, which is significant in the geographic regions of the Middle East and Africa, as elsewhere. More evidence-based studies in the Middle East and Africa are needed for lobbying governments to allocate resources to help ease the burden of the disease. In addition, several interventions can be studied to alleviate this burden in these countries. These interventions should aim to optimize the treatment of AD to decrease the burden.
AbbVie funded this research and participated in the review and approval of the publication. All authors had access to relevant data and participated in the drafting, review, and approval of this publication. AbbVie funded the journal's rapid service.
The Authors would like to thank all contributors for their commitment and dedication to this publication. Editage provided English language editing to produce this manuscript using funding from AbbVie.
All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work, and have given their approval for this version to be published. Zoltán Kaló, Sherif Abaza, Baher Elezbawy and Ahmad N Fasseeh conceptualized the study design. BE and ANF conducted the literature search. EF, BE and ANF conducted the interviews and validation meetings with the experts. Mohamed Tannira, Hala Dalle, Sandrine Aderian and Sherif Abaza facilitated the interviews and the validation meetings. Baher Elezbawy, Ahmad N Fasseeh and Zoltán Kaló conducted the analysis and drafted the manuscript. Laila Carolina Abu Esba, Hana Al Abdulkarim, Alfred Ammoury, Esraa Altawil, Abdulrahman Al Turaiki, Fatima Albreiki, Mohammed Al-Haddab, Atlal Al-Lafi, Maryam Alowayesh, Afaf Al-Sheikh, Mahira Elsayed, Amin Elshamy, Maysa Eshmawi, Assem Farag, Issam Hamadah, Meriem Hedibel, Suretha Kannenberg, Rita Karam, Mirna Metni, Noufal Raboobee, Martin Steinhoff, and Mohamed Farghaly revised the information presented and suggested edits related to their respective countries. All authors revised and approved the final version of the manuscript.
AbbVie sponsored the analysis and interpretation of data, and participated in the review and approval of the final version. Ahmad N Fasseeh, Sherif Abaza, and Zoltán Kaló are shareholders in Syreon Middle East. Baher Elezbawy and Essam Fouly are employees at Syreon Middle East. Mohamed Tannira, Hala Dalle, and Sandrine Aderian are AbbVie employees and may hold AbbVie stock. For Laila Carolina Abu Esba, Hana Al Abdulkarim, Alfred Ammoury, Esraa Altawil, Abdulrahman Al Turaiki, Fatima Albreiki, Mohammed Al-Haddab, Atlal Al-Lafi, Maryam Alowayesh, Afaf Al-Sheikh, Mahira Elsayed, Amin Elshamy, Maysa Eshmawi, Assem Farag, Issam Hamadah, Meriem Hedibel, Suretha Kannenberg, Rita Karam, Mirna Metni, Noufal Raboobee, Martin Steinhoff, and Mohamed Farghaly, there are no conflicts of interest, and no authorship payments were made.
This study is based on previously conducted research and does not contain any new studies with human participants or animals performed by any of the authors.
These data were previously presented at the Virtual ISPOR Europe 2021 conference.
All data generated or analysed during this study are included in this published article/as supplementary information files.
Open AccessThis article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
Metformin-loaded β-TCP/CTS/SBA-15 composite scaffolds promote alveolar bone regeneration in a rat model of periodontitis
Part of a collection: Tissue Engineering Constructs and Cell Substrates
Wanghan Xu, Wei Tan, Chan Li, Keke Wu, Xinyi Zeng & Liwei Xiao
Journal of Materials Science: Materials in Medicine, volume 32, Article number: 145 (2021)
Periodontitis is a progressive infectious inflammatory disease, which leads to alveolar bone resorption and loss of periodontal attachment. It is imperative for us to develop a therapeutic scaffold to repair the alveolar bone defect of periodontitis. In this study, we designed a new composite scaffold loading metformin (MET) by using the freeze-drying method, which was composed of β-tricalcium phosphate (β-TCP), chitosan (CTS) and the mesoporous silica (SBA-15). The scaffolds were expected to combine the excellent biocompatibility of CTS, the good bioactivity of β-TCP, and the anti-inflammatory properties of MET. The MET-loaded β-TCP/CTS/SBA-15 scaffolds showed improved cell adhesion, appropriate porosity and good biocompatibility in vitro. This MET composite scaffold was implanted in the alveolar bone defects area of rats with periodontitis. After 12 weeks, Micro-CT and histological analysis were performed to evaluate different degrees of healing and mineralization. Results showed that the MET-loaded β-TCP/CTS/SBA-15 scaffolds promoted alveolar bone regeneration in a rat model of periodontitis. To our knowledge, this is the first report that MET-loaded β-TCP/CTS/SBA-15 scaffolds have a positive effect on alveolar bone regeneration in periodontitis. Our findings might provide a new and promising strategy for repairing alveolar bone defects under the condition of periodontitis.
Periodontitis is a common chronic infectious disease with tissue destruction around teeth such as the gingiva, periodontal ligament, and alveolar bone [1, 2]. Among these tissues, alveolar bone defects are often unable to heal naturally through the body's repair mechanism. For oral treatment, alveolar bone resorption is the severe and irreversible damage associated with periodontitis. The complex oral inflammation environment poses a massive challenge for oral restoration and subsequent orthodontic treatment. At the same time, the loss of alveolar bone caused by periodontitis places a heavy burden on patients. Repairing the residual alveolar bone defect in an inflammatory environment is a problem plaguing dentists, directly affecting the stability of subsequent dental implants.

Bone grafting is one of the most effective methods to treat bone defects caused by periodontitis [3]. Although autogenous bone transplantation is extensively used to treat bone defects, it has disadvantages, including insufficient sources and related complications at the donor site [4, 5]. Furthermore, allogeneic bone is limited for clinical application due to rejection reactions and the risk of infection [6, 7]. With the development of biomaterial medicine, artificial bone scaffolds, serving as an alternative solution, have shown promising potential to treat periodontal bone defects. Many natural and synthetic materials, such as collagen, glass ceramics, calcium sulfate, and natural and artificial polymers, are considered cell carriers and bone conduction materials [8].

Like the mineral phase of bone tissue, β-tricalcium phosphate (β-TCP) has appropriate biocompatibility, bioactivity, and osteoconductive ability. It also has unique features in biodegradation, solubility, and absorbance. Therefore, β-TCP or its composites are widely investigated as biomedical materials for bone defect repair and periodontal therapy [9, 10].

Chitosan (CTS) plays a crucial role in many biological activities, including antimicrobial action and wound healing [11]. This material is applied to hemostasis, control of hypertension, and drug delivery, and has been extensively used as a bone scaffold [12, 13]. More importantly, due to its positive effect on wound healing and anti-inflammation, CTS has received significant attention in dentistry. It can be applied in all fields of dentistry, including preventive dentistry, conservative dentistry, endodontics, surgery, periodontology, prosthodontics and orthodontics. CTS has demonstrated a considerable ability to reduce cariogenic bacteria and a lack of cytotoxicity, which might enable its use in the prevention of dental caries. A CTS-containing dentifrice had a higher protective potential against the demineralization of enamel compared to a dentifrice without CTS [14].

As a representative silica-based mesoporous material, mesoporous silica (SBA-15) is structured with uniform hexagonal pores and a tunable diameter of 5–15 nm. This mesoporous silica sieve has been attracting growing attention owing to its unique physico-chemical properties, such as high specific surface area, chemical inertness, narrow pore size distribution, sufficient active sites for grafting a variety of functional chemical groups, thermodynamic stability, and low cost. Based on these excellent characteristics, SBA-15 and its related hybrid materials have been broadly applied in selective adsorption, catalysis, drug delivery, imaging, and sensors [15].
Because of its good hydrothermal stability, drugs and growth factors of appropriate size are incorporated into the pores of the particles to form drug carrier complexes [16, 17].
Metformin (MET), an old antidiabetic drug, is widely used to treat type 2 diabetes by glycemic control [18]. Additionally, this drug has been shown to improve bone quality and decrease the risk of bone fractures in patients with diabetes. Increasingly studies have shown that MET can promote type I collagen synthesis and osteogenic differentiation in osteoblasts [19]. MET had an osteogenic effect by activating the AMP-activated kinase signal pathway to induce mesenchymal stem cells' mineralization and osteogenic differentiation [20]. In vivo studies have also reported that MET had a positive effect against alveolar bone loss through osteoblasts differentiation in rats of periodontitis [21]. As discussed above, both in vitro and in vivo studies had demonstrated that MET might exhibit pro-osteogenic potential, which could be considered in treating bone loss disease.
In this study, we developed a new MET-loaded β-TCP/CTS/SBA-15 composite scaffold and then demonstrated that this synthesized scaffold could promote alveolar bone repair with periodontitis. We performed an animal model of alveolar bone defect with periodontitis and evaluated the role of this synthesized scaffold in alveolar bone repair. The potential effect of this MET-loaded composite scaffold on the alveolar bone repair with periodontitis has never been identified by previous investigators. This new composite scaffold might provide a new promising method for future bone regeneration therapy.
All materials were purchased from Aladdin. CTS was in powder form (200–400 mPa·s), and MET came as a powder, crystals or chunks (purity: >97%). β-TCP was a white amorphous powder, odorless and tasteless (biomedical grade; particle size: 2–10 μm). SBA-15 was a nanoscale white powder (particle size: 6–11 nm; relative crystallinity: ≥90%). Unless otherwise specified, all reagents were used without further treatment.
Preparation of MET-loaded β-TCP/CTS/SBA-15 scaffolds
The MET-loaded β-TCP/CTS/SBA-15 scaffolds were prepared via a freeze-drying method as published protocols [22]. The experimental scaffolds were prepared by mixing β-TCP, CTS, SBA-15, and MET at a mass ratio of 5:10:5:2. Briefly, MET and SBA-15 were thoroughly mixed in a centrifuge tube at low temperature for half an hour. The solution was fully mixed with CTS, β-TCP, and acetic acid. The mixture was poured into a mold and placed in a freeze-dryer cold trap. After pre-freezing at −20 °C for 2 h, porous scaffolds were obtained by lyophilizing at −80 °C for 24 h. Glutaraldehyde was dripped on the surface of the scaffold at low temperature for 1 h and rinsed with enzyme-free water twice. The above freeze-drying steps were repeated. The whole operation was carried out under strict sterile conditions, and the scaffold was stored at −80 °C before further experiments.
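As a small arithmetic illustration of the 5:10:5:2 mass ratio described above, the snippet below computes the mass of each component for an assumed total batch mass; the 2.2 g batch size is a hypothetical example, not a value from the study.

```python
# Component masses for a β-TCP : CTS : SBA-15 : MET mass ratio of 5 : 10 : 5 : 2.
# The 2.2 g total batch mass is a hypothetical example for illustration only.
ratio = {"β-TCP": 5, "CTS": 10, "SBA-15": 5, "MET": 2}
total_mass_g = 2.2
total_parts = sum(ratio.values())

for component, parts in ratio.items():
    print(f"{component}: {parts / total_parts * total_mass_g:.2f} g")
```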
Characterization of scaffolds
Metformin release from scaffolds
MET release from the scaffolds was measured. The scaffolds (N = 10) were each placed in a centrifuge tube, and 5 mL of deionized water was added to each tube. At the specified timepoints between 1 and 168 h, 1 mL of water was collected from each centrifuge tube to measure MET release with a Thermo spectrophotometer (NanoDrop 2000c). After each collection, 1 mL of fresh deionized water was added back to the tube.
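Because 1 mL of the release medium is withdrawn and replaced at every timepoint, the cumulative amount of MET released has to be corrected for the drug removed in earlier samples. A minimal Python sketch of that bookkeeping is shown below; the timepoints and concentrations are hypothetical values for illustration, not measurements from this study.

```python
# Cumulative metformin release with correction for the sampled volume.
# Assumed values: 5 mL total volume, 1 mL withdrawn and replaced at each
# timepoint; concentrations (mg/mL) are hypothetical.

total_volume = 5.0      # mL of deionized water per scaffold
sample_volume = 1.0     # mL withdrawn at each timepoint

timepoints_h = [1, 4, 8, 24, 48, 72, 120, 168]
measured_conc = [0.005, 0.012, 0.020, 0.031, 0.038, 0.042, 0.043, 0.043]

cumulative_mg = []
removed_mg = 0.0  # drug mass already taken out in previous samples
for conc in measured_conc:
    # Mass currently in the vessel plus everything removed so far.
    cumulative_mg.append(conc * total_volume + removed_mg)
    removed_mg += conc * sample_volume

for t, m in zip(timepoints_h, cumulative_mg):
    print(f"{t:>4} h: {m:.3f} mg released cumulatively")
```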
Mechanical behavior testing
The compressive strength and elastic modulus of the scaffolds were determined using a universal mechanical testing machine (MTS, USA). Before the mechanical test, the samples were ground to obtain a flat surface, cleaned ultrasonically to remove debris, and dried. The test samples were cylinders with a diameter of 6 mm and a height of 3 mm. A flat plate head was used, the crosshead speed was set at 0.5 mm/min, and load was slowly applied up to 500 N. The test ended when the deformation exceeded 10%. The linear region of the stress-strain curve was used to determine the elastic modulus. The elastic modulus reported for each group is the mean value of 5 scaffolds.
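To illustrate how an elastic modulus can be read off the linear region of a compressive stress-strain curve, the sketch below fits a straight line to the initial portion of a synthetic data set; the 2% strain cutoff and all data values are assumptions for demonstration only, not output from the MTS machine.

```python
import numpy as np

# Hypothetical stress-strain data for a porous scaffold (strain unitless, stress in MPa).
strain = np.array([0.000, 0.005, 0.010, 0.015, 0.020, 0.030, 0.050, 0.080, 0.100])
stress = np.array([0.00, 0.04, 0.08, 0.12, 0.16, 0.22, 0.30, 0.36, 0.38])

# Assume the region up to 2% strain is linear; in practice the cutoff is
# chosen from the measured curve.
linear = strain <= 0.02
slope, intercept = np.polyfit(strain[linear], stress[linear], 1)

print(f"Elastic modulus ~ {slope:.2f} MPa (slope of the linear region)")
print(f"Compressive strength ~ {stress.max():.2f} MPa (peak stress)")
```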
Scanning electron microscopy (SEM)
The surface and internal morphology of the scaffold and the adhesion of bone marrow mesenchymal stem cells (BMSCs) to the scaffold were observed by scanning electron microscope (SEM, S-3400N). The control scaffolds were sputter-coated with a 10 nm gold-palladium layer, as previously reported [23]. The scaffolds with cells were rinsed three times with phosphate buffer saline (PBS) and fixed in 2.5% glutaraldehyde and 1% osmium. The samples were then dehydrated with ethanol and dried overnight. After gold sputtering was completed, the samples were examined by SEM. Six fields of view were randomly selected from the electron micrographs, and the maximum pore aperture of the scaffolds in each field was measured.
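A trivial summary of the pore-size measurement described above might look like the following; the six maximum apertures are hypothetical values rather than the actual SEM measurements.

```python
import numpy as np

# Hypothetical maximum pore apertures (μm) measured in six randomly chosen SEM fields.
max_apertures_um = [180, 220, 205, 240, 195, 260]

print(f"Mean maximum pore size ≈ {np.mean(max_apertures_um):.0f} μm "
      f"(range {min(max_apertures_um)}–{max(max_apertures_um)} μm)")
```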
Porosity measurement
The porosity of each sample was determined by the liquid displacement method. The porosity (ε%) was evaluated from the difference between the wet (Ww) and dry (Wd) weights of the scaffolds. For the Ww measurement, the scaffolds were immersed in anhydrous ethanol until they reached saturation and then weighed. The total porosity of the scaffold was determined using the following formula, where ρ represents the density of ethanol (g/cm3) and V represents the scaffold volume before immersion.
$$\varepsilon\% = \frac{W_w - W_d}{\rho V} \times 100$$
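A short numerical example of this liquid-displacement calculation is given below; the scaffold weights are hypothetical, the ethanol density is taken as 0.789 g/cm3, and the volume corresponds to the 6 mm × 3 mm cylindrical geometry used for the scaffolds.

```python
import math

# Liquid-displacement porosity: epsilon% = (Ww - Wd) / (rho * V) * 100
# Hypothetical scaffold weights for illustration only.
W_dry = 0.050        # g, dry scaffold weight (assumed)
W_wet = 0.103        # g, weight after saturation in ethanol (assumed)
rho_ethanol = 0.789  # g/cm^3, density of anhydrous ethanol

# Cylinder 6 mm in diameter and 3 mm in height, as used for the scaffolds.
radius_cm, height_cm = 0.3, 0.3
V = math.pi * radius_cm**2 * height_cm  # cm^3

porosity = (W_wet - W_dry) / (rho_ethanol * V) * 100
print(f"Porosity ≈ {porosity:.1f}%")
```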
A Fourier transform infrared (FTIR) spectrophotometer was used to detect the spectra of components of the MET, β-TCP, CTS and SBA-15. Scaffolds in different groups were placed in the groove of the sample plate and pressed. The spectra of β-TCP/CTS/SBA-15 and MET/β-TCP/CTS/SBA-15 scaffolds were recorded in the wavelength range 500–4000 cm−1.
X-ray diffraction (XRD) was used to measure the sample's phase composition over a diffraction angle (2θ) range of 5–80°, and the phase composition of the sample was analyzed using software according to the diffraction pattern.
Three-month-old male Sprague-Dawley rats (300 ± 20 g) were included in the following experiments. All animal experiments were approved by the Ethics Committee of the Department of Experimental Zoology of Central South University and were conducted in accordance with the guidelines for the ethical treatment of animals. The animals were given free access to food and water and were housed at room temperature (25 ± 2 °C) and 60 ± 5% humidity under a 12-h light/12-h dark cycle. All rats were acclimatized to the environment for 7 days before the experiment.
Harvest and culture of rat BMSCs and Alizarin Red staining
As previously reported, the whole bone marrow culture method was used to isolate rat BMSCs [24]. After removing the skin and soft tissues on the bone surface, both ends of the long bones of the lower limbs were cut off. The bone marrow cavities were rinsed with PBS (Hyclone) under sterile conditions, and the washing fluid was immediately collected. The eluate was then centrifuged at 1200 rpm for 5 min, and the supernatant was discarded. The pellet was resuspended in complete culture medium supplemented with 10% fetal bovine serum (FBS, Gibco) and 1% penicillin-streptomycin (Gibco). The suspension was seeded in petri dishes and cultured in a saturated-humidity incubator at 37 °C with 5% CO2. Third-generation rat BMSCs were cultured in osteogenic induction medium (DMEM supplemented with 10% FBS, 50 µM ascorbic acid, 10 mM β-glycerophosphate and 10⁻⁷ mol/L dexamethasone (Sigma)) for 14 days. The medium was replaced every 2 or 3 days. On day 14 of induction, Alizarin Red staining was used to evaluate calcium nodules on the scaffolds and in the petri dishes.
Biocompatibility
Cell viability assay
The cytotoxicity of the scaffolds was assessed using the cell counting kit-8 (CCK-8, Dojindo) assay. Briefly, BMSCs were seeded on the scaffolds in 96-well plates at a density of 2000 cells per well. According to the manufacturer's protocol, on days 1, 3, 5 and 7 of incubation, 110 μL of solution containing fresh medium and CCK-8 dye at a 10:1 ratio was added to each well and incubated in a CO2 incubator for 4 h. The absorbance was measured at 450 nm (OD450nm) using a microplate reader.
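The OD450 readings from such an assay are typically summarized as mean ± SD per timepoint before plotting a proliferation curve; a minimal sketch is given below with hypothetical replicate values, not the readings from this study.

```python
import numpy as np

# Hypothetical OD450 readings (replicate wells) at each incubation day.
od450 = {
    1: [0.21, 0.23, 0.22],
    3: [0.45, 0.48, 0.44],
    5: [0.79, 0.82, 0.77],
    7: [1.10, 1.05, 1.12],
}

for day, values in od450.items():
    values = np.asarray(values)
    print(f"Day {day}: OD450 = {values.mean():.2f} ± {values.std(ddof=1):.2f}")
```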
Live/dead cell staining
BMSCs were seeded on the scaffolds (5 × 10⁵ cells/scaffold) in 96-well plates. After 3 days of incubation at 37 °C and 5% CO2, the scaffolds were stained with a Calcein-AM/PI double staining kit (Solarbio). According to the instructions, the scaffolds were rinsed with PBS, stained with dye solution for 20 min in the dark. At the wavelength of 490 ± 10 nm, Calcein-AM detected yellow-green living cells, and propidium iodide (PI) detected red dead cells. The number and proportion of living and dead cells on the scaffold were analyzed by a confocal laser scanning microscope (LSM 780, AxioObserver, Zeiss).
RT-PCR analysis
Rat BMSCs (5 × 10⁵) were seeded in a mixture of osteogenic medium and the extract of either MET/β-TCP/CTS/SBA-15 or β-TCP/CTS/SBA-15 scaffolds, and cultured in 6-well plates for 7 and 14 days in vitro. Cell pellets were obtained by removing the medium, digesting the samples with trypsin and centrifuging them; the total RNA of the collected BMSCs was extracted using TRIzol reagent (TaKaRa, Japan). Reverse transcription was performed using the RevertAid Reverse Transcriptase Kit (Thermo, USA), and RT-PCR was performed using SYBR® Green PCR Master Mix (TaKaRa). All results were analyzed with StepOne software (Applied Biosystems, version 2.1) using the comparative CT method with GAPDH as the internal control. The expression levels of the osteogenesis-related genes Runx2, Col1a1, and BMP-2 were analyzed. The primer sequences used in the PCR analysis are listed in Table 1.
Table 1 Sequences of the primers used in real-time qRT-PCR
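For readers unfamiliar with the comparative CT method mentioned above, the sketch below computes a 2^-ΔΔCt fold change for a target gene normalized to GAPDH and to the control group; all Ct values are hypothetical and serve only to show the arithmetic.

```python
# Comparative CT (2^-ΔΔCt) method with GAPDH as the internal control.
# Ct values below are hypothetical, for illustration only.

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Return the fold change of the target gene versus the control group."""
    delta_ct_sample = ct_target - ct_gapdh            # normalize to GAPDH
    delta_ct_control = ct_target_ctrl - ct_gapdh_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: a target gene in the MET scaffold group vs. the control scaffold group.
fold = relative_expression(ct_target=24.1, ct_gapdh=17.9,
                           ct_target_ctrl=25.6, ct_gapdh_ctrl=18.0)
print(f"Fold change ≈ {fold:.2f}")
```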
Alveolar bone regeneration in vivo
According to a previously published method, the periodontal model was induced by ligatures [25, 26]. Briefly, under sterile conditions, the rats were anesthetized with an intraperitoneal injection of 1% pentobarbitone (0.4 mL/100 g). The maxillary area was exposed, and a ligature (3-0) was placed around the cervix of the first molar of each rat. The ligature was knotted at the mesial site and was gently pressed into the gingival sulcus below the gingival margin. It was necessary to peel off the gingival tissue at the cervix of the molar, causing mechanical damage to the gingival periodontal tissue and further aggravating the destruction of periodontal connective tissue and the loss of alveolar bone. The ligation was checked every 2–3 days, and the molars were re-ligated if the ligature fell off. At the same time, the food intake and gingival health status of the rats were observed. After 4 weeks of gingival stimulation, dental calculus and plaque accumulated at the ligated site, which stimulated inflammation and eventually developed into periodontitis.
The animals with periodontitis were randomly divided into three groups (n = 3) and treated as follows: (1) Experimental group: the alveolar bone defects were implanted with MET/β-TCP/CTS/SBA-15 scaffolds; (2) Control group: the alveolar bone defects were implanted with β-TCP/CTS/SBA-15 scaffolds; (3) Blank group: no material was implanted. After the periodontal model was established, the rats were anesthetized. The alveolar bone defects of 3 mm in diameter and depth were prepared with dental drills at the mesial site of maxillary first molars. In this process, continuous spraying of saline and intermittent and low-speed drilling were used to lower the temperature. Then the prepared scaffolds were immediately implanted into the bone defect, and the wound was sutured carefully. In the blank group, no material was implanted. Penicillin at 10,000 U/day was injected intraperitoneally for 3 days to prevent infection. The animals were sacrificed 12 weeks after the operation, and the alveolar bones were evaluated and tested.
Radiographic and histological analysis
For the qualitative evaluation of the implanted scaffolds' stability and of bridging between the defect and the scaffold, the alveolar bone was scanned by micro-computed tomography (Micro-CT, Bruker Skyscan 1176). At 12 weeks after surgery, alveolar bone samples were obtained, and Micro-CT was used to examine mineral formation in the defect area. The bone volume fraction (BV/TV), trabecular number (Tb.N), and trabecular thickness (Tb.Th) in the defect area were calculated by Micro-CT analysis software (CT Analyser). The samples were fixed with formalin and decalcified with 17% ethylenediaminetetraacetic acid solution for 4 weeks. Then the samples were dehydrated with graded ethanol and embedded in paraffin. Sections 5 μm thick were cut from the middle of the scaffolds and stained with hematoxylin and eosin (H&E) and Masson's trichrome.
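As a rough illustration of what the BV/TV metric represents, the sketch below computes a bone volume fraction from a binarized volume of interest; the random array is only a placeholder for a segmented Micro-CT stack and does not reflect the CT Analyser workflow.

```python
import numpy as np

# Hypothetical binarized Micro-CT volume of interest (1 = bone voxel, 0 = background).
rng = np.random.default_rng(0)
voi = (rng.random((64, 64, 64)) > 0.7).astype(np.uint8)  # placeholder segmentation

bone_voxels = voi.sum()
total_voxels = voi.size
bv_tv = bone_voxels / total_voxels * 100

print(f"BV/TV ≈ {bv_tv:.1f}% of the volume of interest is segmented as bone")
```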
All data were expressed as the mean ± standard deviation (SD), and all experiments were performed at least three times. One-way analysis of variance (ANOVA) was performed to compare values using the Statistical Package for the Social Sciences (SPSS 23.0). A P-value of <0.05 was considered statistically significant.
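A minimal Python equivalent of the one-way ANOVA performed in SPSS is sketched below using scipy; the group values are hypothetical BV/TV measurements, not the study data.

```python
from scipy import stats

# Hypothetical BV/TV (%) values for the three groups (n = 3 each).
met_scaffold = [42.1, 45.3, 40.8]   # MET/β-TCP/CTS/SBA-15
scaffold_only = [28.4, 31.2, 26.9]  # β-TCP/CTS/SBA-15
blank = [15.7, 14.2, 17.0]          # no implant

f_stat, p_value = stats.f_oneway(met_scaffold, scaffold_only, blank)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would be considered statistically significant, as in the study.
```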
Cumulative drug release curve and mechanical behavior testing
A cylindrical scaffold with a diameter of 6 mm and a length of 3 mm was prepared by the freeze-drying method (Fig. 1A). The cumulative concentration of MET released from the scaffold is shown in Fig. 1B. The release was fast during the first 24 h and continued to increase until about 72 h; the MET concentration then reached a plateau at 72–168 h. The mechanical performance tests (Fig. 1C) showed that the incorporation of MET did not affect the compressive strength or elastic modulus when comparing the β-TCP/CTS/SBA-15 and MET/β-TCP/CTS/SBA-15 scaffolds (P > 0.05).
A Observation of scaffolds materials. B Metformin release from the MET/β-TCP/CTS/SBA-15 scaffolds. C Test the mechanical properties of the scaffolds in the MET/β-TCP/CTS/SBA-15 group and the β-TCP/CTS/SBA-15 group and plot the stress-strain curve
Characterization of the MET/β-TCP/CTS/SBA-15 scaffolds
The internal structures of the scaffolds were analyzed by SEM (Fig. 2A). The scaffolds had an evident pore structure with sufficient, interconnected pores, which provided a suitable microenvironment for nutrient exchange and promoted the adhesion and proliferation of transplanted cells. BMSCs adhered well to the scaffold and attached to its surface through pseudopodia. The porosity of the scaffolds was about 80% (Fig. 2B), potentially meeting the requirements for tissue engineering applications reported in the literature. The porosity of the scaffolds in the MET/β-TCP/CTS/SBA-15 group and the β-TCP/CTS/SBA-15 group was similar, indicating that the incorporation of MET did not affect the porosity of the scaffold.
A SEM images of the MET/β-TCP/CTS/SBA-15 scaffolds seeded with BMSCs. The arrow indicated the pseudopodia. B The porosity of the MET/β-TCP/CTS/SBA-15 scaffolds and the β-TCP/CTS/SBA-15 scaffolds, respectively. C FTIR of MET/β-TCP/CTS/SBA-15 scaffolds and β-TCP/CTS/SBA-15 scaffolds. D XRD of MET/β-TCP/CTS/SBA-15 scaffolds and β-TCP/CTS/SBA-15 scaffolds
FTIR of the scaffolds was shown in Fig. 2C. The characteristic peaks of β-TCP could be seen at 602 cm−1, the characteristic peaks of CTS could be seen at 2883 and 1374 cm−1, the characteristic peaks of SBA-15 could be seen at 1044, 812, and 449 cm−1 and the peaks of MET could be seen at 3366, 1620, and 553 cm−1. The XRD test results were shown in Fig. 2D. The wave peaks corresponding to β-TCP, CTS, and SBA-15 could be seen in scaffolds in both groups. With the increase in the MET content in the MET/β-TCP/CTS/SBA-15 group samples, the size of the MET wave peaks increased. In contrast, the heights of the peaks corresponding to β-TCP, CTS, and SBA-15 decreased slightly, indicating that we successfully prepared the scaffold by combining the three components. With the addition of MET, the proportions of β-TCP, CTS, and SBA-15 gradually decreased, and the heights of their characteristic peaks decreased. The characteristic peaks of MET appeared in the MET/β-TCP/CTS/SBA-15 scaffold, indicating successful incorporation of MET into the scaffold.
Characterization of rat BMSCs and Alizarin Red Staining
The cells of passage 3 reached 70–80% confluence within a week of culture, and most cells showed a fibroblast-like shape and formed colonies (Fig. 3A). Rat BMSCs seeded on the culture dish and on the scaffold were incubated in osteogenic induction medium and then stained with Alizarin Red 14 days later. The mineralized matrix synthesized by BMSCs covered the surface of the MET scaffold and was thick and abundant. In contrast, in the control group, there were far fewer mineral nodules on the surface of the β-TCP/CTS/SBA-15 scaffold, and the scaffold surface was smoother (Fig. 3B).
A Morphology of the third generation rat BMSCs. B The scaffolds in the β-TCP/CTS/SBA-15 group and the MET/β-TCP/CTS/SBA-15 group were cultured for 14 days after inoculation with rat BMSCs, stained with alizarin red. The arrow indicated the mineral nodules. C Incubate rat BMSCs with ordinary osteogenic induction medium, scaffold extract of β-TCP/CTS/SBA-15 group plus osteogenic medium, and MET/β-TCP/CTS/SBA-15 group scaffold extract plus osteogenic medium. Then stain BMSCs with alizarin red
The typical Alizarin Red stained image of BMSCs mineral synthesis was shown in Fig. 3C. The osteogenic induction medium, the osteogenic induction medium mixed with the control scaffold extract, and the osteogenic induction medium mixed with the MET scaffold extract were used to incubate BMSCs, respectively. After 14 days, orange-yellow calcium deposits in the cell dishes could be observed under the microscope, which confirmed that the BMSCs we extracted could be induced to differentiate into osteoblasts and had osteogenic potential. A large number of calcium deposits could be observed in the petri dishes of the experimental group, which had more calcium deposits than the β-TCP/CTS/SBA-15 scaffold group. Moreover, it could be observed that compared with the culture dishes in which stem cells were incubated only with ordinary osteogenic induction medium, the latter two had more abundant mineral deposits. It suggested that the scaffold material had particular osteoinductive potential.
The cell viability of stem cells on the composite scaffold was assessed by CCK-8 assay and live/dead cell staining, respectively. Over time, the number of cells on the scaffolds increased during 7 days of incubation, consistent with the increase in absorbance (Fig. 4A). However, there was no statistical difference in cell growth rate between the two groups. This data proved that the incorporation of MET had no adverse effects on cell viability and proliferation. Results of the live/dead cell staining on the third day showed that a large number of live cells (green fluorescence) and a few dead cells (red fluorescence) on the MET/β-TCP/CTS/SBA-15 scaffolds could be observed under a confocal microscope (Fig. 4B). We found that cell immunofluorescence staining was consistent with the results of the CCK-8 assay. The results of these cytotoxicity tests indicated that the composite scaffolds had good biocompatibility.
A The proliferation of BMSCs attached on scaffolds was confirmed by CCK-8 assay. B Representative live/dead images of BMSCs on MET/β-TCP/CTS/SBA-15 scaffolds, with live cells stained green and dead cells stained red
As shown in Fig. 5, the RT-PCR results showed that the BMP-2, COL1a1, and Runx2 gene expression levels in BMSCs in the MET/β-TCP/CTS/SBA-15 group were higher than those in the β-TCP/CTS/SBA-15 group, with statistically significant differences between the two groups.
A Effects of the scaffolds on mRNA expression of the BMP-2 gene on days 7 and 14. B Effects of the scaffolds on mRNA expression of the COL1a1 gene on days 7 and 14. C Effects of the scaffolds on mRNA expression of the Runx2 gene on days 7 and 14. Symbol "**" shows the significant difference between groups (P < 0.05). Symbol "****" shows the significant difference between groups (P < 0.01)
Alveolar bone defect regeneration using MET/β-TCP/CTS/SBA-15 scaffold
In the rat model of periodontitis induced by ligation (Fig. 6A), red and swollen gums, bleeding on probing, and periodontal pockets could be observed. Micro-CT scan showed that compared with the control group, the alveolar bone of the rats in the periodontitis group was low and flat. Part of the alveolar bone was lost, root furcations were detectable (Fig. 6B), and the rat model of periodontitis was successfully established. The scaffolds of each group were implanted into the alveolar bone defect of Sprague-Dawley rats with periodontitis. At 12 weeks, the bone specimens were examined by Micro-CT. NRecon reconstructed the images to create 3D images of alveolar bone. Most of the bone defect in the MET/β-TCP/CTS/SBA-15 group had been repaired, while most of the bone defect in the β-TCP/CTS/SBA-15 group still existed and the bone defect repair was limited, and there was almost no bone defect repair in the blank group (Fig. 6C).
A The operation of the periodontal model. B Scan the periodontal tissues around the molars of the periodontitis group and normal healthy rats with Micro-CT. C Repair of alveolar bone defect in MET/β-TCP/CTS/SBA-15 group, β-TCP/CTS/SBA-15 group and blank group
Also, the BV/TV and the Tb.N of the MET/β-TCP/CTS/SBA-15 group were better than those of other groups (Fig. 7A). Histological results further confirmed the Micro-CT results. H&E staining showed that a certain amount of new bone was formed at both the center and periphery of the bone defect in the MET/β-TCP/CTS/SBA-15 group, while a small amount of new bone was mainly formed around the junction of the β-TCP/CTS/SBA-15 scaffold and the bone defect. Masson staining results showed that the MET/β-TCP/CTS/SBA-15 group had abundant collagen fiber formation. Unlike the normal bone tissue, there was a little bone formation in the central area of the defect, and the arrangement of new bone was disordered in the β-TCP/CTS/SBA-15 group. (Fig. 7B).
A Measurements of BV/TV, Tb.Th, and Tb.N of the alveolar bone. BV/TV trabecular bone volume fraction, Tb.N trabecular number, Tb.Th trabecular thickness. B Images of H&E staining, Masson staining of the alveolar bone. The red arrow indicated new bone formation. Symbol "**" shows the significant difference between groups (P < 0.05)
As one of the two primary diseases of the oral cavity, periodontitis causes periodontal tissue destruction, tooth looseness and loss, and gum inflammation. Alveolar bone destruction is its most crucial diagnostic feature. The treatment of periodontitis is an active area of research, and it is essential to find reasonable and effective drugs. Inhibition of bone resorption and promotion of bone regeneration are the key points in treating bone defects induced by periodontitis.

MET is a commonly used antidiabetic drug. In vitro and in vivo studies have shown that MET stimulates mesenchymal stem cells to differentiate into osteoblasts and can be used for tissue regeneration [27, 28]. MET at a concentration of 50 mg/kg decreased bone loss, oxidative stress, and the inflammatory response of ligature-induced periodontitis in rats [19]. Recent clinical evidence also indicated that locally delivered MET significantly improved the radiological and clinical parameters of chronic periodontitis. MET gel applied in the periodontal pockets of patients with chronic periodontitis could significantly reduce the probing depth (PD), increase the clinical attachment level, and improve the reduction of vertical intrabony defect depth [29, 30]. Systemic administration might cause unnecessary drug loading in other parts of the body, causing undesirable side effects and significantly reducing the efficacy of the drug at the infected site. In contrast, the dose required for topical administration to produce a therapeutic effect at the target site is much lower [31, 32]. Therefore, composite scaffold materials are widely used for the repair of bone defects.

In this study, we synthesized the new MET-loaded β-TCP/CTS/SBA-15 scaffolds by using the freeze-drying method, which played a role in limiting periodontitis and repairing alveolar bone defects. β-TCP and CTS were the matrix materials of the scaffold. β-TCP mainly played a role in promoting bone healing. In addition to its anti-inflammatory and wound healing effects, CTS had the effect of adsorbing drugs, and it could slowly release MET. SBA-15 is a mesoporous silica with the strongest drug adsorption capacity and was the main component of the scaffold responsible for drug adsorption and sustained release. Our study is the first to report on the use of MET-containing scaffolds to repair alveolar bone defects under periodontitis.
The ideal scaffold material not only repairs bone defects and stabilizes the skeleton but also has non-toxicity and good biocompatibility. The use of a drug is a double-edged sword. Low-concentration drugs are non-toxic to all cells but may not be effective. The high concentration of the drug may be toxic to all cells, but the effect is significant. Therefore, we need to find a suitable concentration of MET, which can reduce drug's toxicity to cells and ensure that the drug exerts a specific effect and promotes the osteogenic differentiation of stem cells. According to previous experimental reports, 50–500 μM MET effectively promoted stem cells proliferation and osteogenic differentiation [19]. Compared with untreated cells, stem cells treated with 250 and 500 μM MET significantly increased mineralization at these concentrations [28]. The drug release curve of the MET/β-TCP/CTS/SBA-15 scaffold indicated that the cycle of drug release was long, and the final concentration of MET could reach 0.043 mg/mL, which was within the above range. The experimental results showed that both the BMSCs planted on the surface of the scaffold or cultured in the scaffold extract had the apparent bone formation and mineralization, and the mineral nodules in the β-TCP/CTS/SBA-15 group were significantly less than the MET/β-TCP/CTS/SBA-15 group. The above information proved that the concentration of MET we selected could promote the osteogenic differentiation of stem cells, which was in line with our purpose.
The ideal scaffold has porosity and pore size characteristics that create an ideal proliferation space for cells and facilitate the exchange of nutrients and metabolite removal. The porosity of the MET/β-TCP/CTS/SBA-15 scaffold was 84.3 ± 4.2%, and the pore size of the scaffold ranged from 100 to 300 μm. These characteristics were consistent with the ideal scaffold requirements [33]. We evaluated the biocompatibility of the scaffold in vitro. SEM observations showed that the surface of the MET/β-TCP/CTS/SBA-15 scaffold was rough, providing a recognition site for cell adhesion, and the BMSCs attached to the scaffold had good morphology and vitality. Through close observation at continuous time points, the results of the CCK-8 experiment indicated that the density and viability of living cells contained in the MET/β-TCP/CTS/SBA-15 group and the β-TCP/CTS/SBA-15 group were not statistically different. The stained images of live and dead cells under the confocal microscope were consistent with the above results, which indicated the excellent biocompatibility of the MET/β-TCP/CTS/SBA-15 scaffold composite scaffold.
The scaffold needs to have osteogenic differentiation ability to promote bone defect repair. Through the previous experiments, we observed that MET/β-TCP/CTS/SBA-15 scaffolds could promote the osteogenic differentiation of BMSCs in vitro. The RT-PCR results indicated that the mRNA expression levels of the bone-related genes Runx2, Col1a1 and BMP-2 in cells within the MET/β-TCP/CTS/SBA-15 scaffolds were significantly higher than those in cells in the β-TCP/CTS/SBA-15 group. The above results indicated that the MET/β-TCP/CTS/SBA-15 group had a better osteogenic performance than the β-TCP/CTS/SBA-15 group, mainly due to the effect of MET. In vivo results were consistent with in vitro results. This new MET/β-TCP/CTS/SBA-15 scaffold simulated the basic structure of bone tissue and promoted the repair of alveolar bone defects in a periodontitis environment. Micro-CT calculation analysis and histological analysis showed that a large amount of new bone was formed in the MET/β-TCP/CTS/SBA-15 scaffold group. The ability to repair alveolar bone defects was significantly higher than that of other groups. Recently, some scholars designed a biodegradable CTS-based MET intrapocket dental film, which effectively inhibited the loss of alveolar bone in rats with periodontitis [34]. From a preventive medicine point of view, their intrapocket dental film focused on preventing the loss of alveolar bone caused by periodontitis, which was primary prevention. The MET/β-TCP/CTS/SBA-15 scaffolds focused on repairing the bone defect in the complex periodontitis environment, which belonged to tertiary prevention. Their experimental results suggested that MET and CTS could inhibit the loss of alveolar bone under periodontitis. The MET/β-TCP/CTS/SBA-15 scaffold containing these two components could further promote alveolar bone defect repair under periodontitis conditions.
This study demonstrated that MET was an effective drug that could promote osteogenic differentiation of BMSCs along osteogenic lineages. MET enhanced the performance of β-TCP/CTS/SBA-15 scaffolds in vitro without the highly cytotoxic effects of chemotherapy drugs. Studies based on different models often produce different results, and further studies are needed to elucidate the mechanism of MET promoting osteogenesis at the molecular level.
In summary, we successfully developed bifunctional MET/β-TCP/CTS/SBA-15 scaffolds using the freeze-drying method. The scaffolds exhibited a porous structure and a sustained MET release property. In addition, the scaffolds were found to support BMSCs adhesion and promote osteogenesis in vitro. More importantly, we found that the MET/β-TCP/CTS/SBA-15 scaffolds could promote the repair of alveolar bone defects in a periodontitis environment. Our study suggests that the MET-loaded β-TCP/CTS/SBA-15 scaffolds could promote alveolar bone regeneration in a rat model of periodontitis. This new scaffold is a promising candidate for the treatment of alveolar bone defects in a periodontitis environment.
Chapple IL, Bouchard P, Cagetti MG, Campus G, Carra MC, Cocco F. et al. Interaction of lifestyle, behaviour or systemic diseases with dental caries and periodontal diseases: consensus report of group 2 of the joint EFP/ORCA workshop on the boundaries between caries and periodontal diseases. J Clin Periodontol. 2017;44:S39–s51. https://doi.org/10.1111/jcpe.12685.
Cochran DL. Inflammation and bone loss in periodontal disease. J Periodontol. 2008;79:1569–76. https://doi.org/10.1902/jop.2008.080233.
Barone A, Covani U. Maxillary alveolar ridge reconstruction with nonvascularized autogenous block bone: clinical results. J Oral Maxillofac Surg. 2007;65:2039–46. https://doi.org/10.1016/j.joms.2007.05.017.
Nkenke E, Neukam FW. Autogenous bone harvesting and grafting in advanced jaw resorption: morbidity, resorption and implant survival. Eur J Oral Implantol. 2014;7:S203–217.
Ng MH, Duski S, Tan KK, Yusof MR, Low KC, Rose IM. et al. Repair of segmental load-bearing bone defect by autologous mesenchymal stem cells and plasma-derived fibrin impregnated ceramic block results in early recovery of limb function. BioMed Res Int. 2014;2014:345910. https://doi.org/10.1155/2014/345910.
Lohmann CH, Andreacchio D, Köster G, Carnes DL,Jr., Cochran DL, Dean DD. et al. Tissue response and osteoinduction of human bone grafts in vivo. Arch Orthop Trauma Surg. 2001;121:583–90. https://doi.org/10.1007/s004020100291.
Zhong W, Sumita Y, Ohba S, Kawasaki T, Nagai K, Ma G. et al. In vivo comparison of the bone regeneration capability of human bone marrow concentrates vs. platelet-rich plasma. PloS One. 2012;7:e40833. https://doi.org/10.1371/journal.pone.0040833.
Ku KL, Wu YS, Wang CY, Hong DW, Chen ZX, Huang CA. et al. Incorporation of surface-modified hydroxyapatite into poly(methyl methacrylate) to improve biological activity and bone ingrowth. R Soc Open Sci. 2019;6:182060. https://doi.org/10.1098/rsos.182060.
Yaszemski MJ, Payne RG, Hayes WC, Langer R, Mikos AG. Evolution of bone transplantation: molecular, cellular and tissue strategies to engineer human bone. Biomaterials. 1996;17:175–85. https://doi.org/10.1016/0142-9612(96)85762-0.
Balhaddad AA, Kansara AA, Hidan D, Weir MD, Xu HHK, Melo MAS. Toward dental caries: exploring nanoparticle-based platforms and calcium phosphate compounds for dental restorative materials. Bioact Mater. 2019;4:43–55. https://doi.org/10.1016/j.bioactmat.2018.12.002.
Madihally SV, Matthew HW. Porous chitosan scaffolds for tissue engineering. Biomaterials. 1999;20:1133–42. https://doi.org/10.1016/s0142-9612(99)00011-3.
Schimpf U, Nachmann G, Trombotto S, Houska P, Yan H, Björndahl L. et al. Assessment of oligo-chitosan biocompatibility toward human spermatozoa. ACS Appl Mater Interfaces. 2019;11:46572–84. https://doi.org/10.1021/acsami.9b17605.
Ghaee A, Nourmohammadi J, Danesh P. Novel chitosan-sulfonated chitosan-polycaprolactone-calcium phosphate nanocomposite scaffold. Carbohydr Polym. 2017;157:695–703. https://doi.org/10.1016/j.carbpol.2016.10.023.
Wieckiewicz M, Boening KW, Grychowska N, Paradowska-Stolarz A. Clinical application of chitosan in dental specialities. Mini Rev Med Chem. 2017;17:401–9. https://doi.org/10.2174/1389557516666160418123054.
Yuan S, Wang M, Liu J, Guo B. Recent advances of SBA-15-based composites as the heterogeneous catalysts in water decontamination: a mini-review. J Environ Manag. 2020;254:109787. https://doi.org/10.1016/j.jenvman.2019.109787.
Song SW, Hidajat K, Kawi S. Functionalized SBA-15 materials as carriers for controlled drug delivery: influence of surface properties on matrix-drug interactions. Langmuir: ACS J Surf colloids. 2005;21:9568–75. https://doi.org/10.1021/la051167e.
Mellaerts R, Aerts CA, Humbeeck J Van, Augustijns P, Mooter G Van den, Martens JA. Enhanced release of itraconazole from ordered mesoporous SBA-15 silica materials. Chem Commun. 2007;1375–7. https://doi.org/10.1039/b616746b.
Kim YD, Park KG, Lee YS, Park YY, Kim DK, Nedumaran B. et al. Metformin inhibits hepatic gluconeogenesis through AMP-activated protein kinase-dependent regulation of the orphan nuclear receptor SHP. Diabetes. 2008;57:306–14. https://doi.org/10.2337/db07-0381.
Cortizo AM, Sedlinsky C, McCarthy AD, Blanco A, Schurman L. Osteogenic actions of the anti-diabetic drug metformin on osteoblasts in culture. Eur J Pharmacol. 2006;536:38–46. https://doi.org/10.1016/j.ejphar.2006.02.030.
Marycz K, Tomaszewski KA, Kornicka K, Henry BM, Wroński S, Tarasiuk J. et al. Metformin decreases reactive oxygen species, enhances osteogenic properties of adipose-derived multipotent mesenchymal stem cells in vitro. Increases Bone Density Vivo Oxid Med Cell Longev. 2016;2016:9785890. https://doi.org/10.1155/2016/9785890.
Araújo AA, Pereira A, Medeiros C, Brito GAC, Leitão RFC, Araújo LS. et al. Júnior, Effects of metformin on inflammation, oxidative stress, and bone loss in a rat model of periodontitis. PloS One. 2017;12:e0183506. https://doi.org/10.1371/journal.pone.0183506.
Hu Q, Li B, Wang M, Shen J. Preparation and characterization of biodegradable chitosan/hydroxyapatite nanocomposite rods via in situ hybridization: a potential material as internal fixation of bone fracture. Biomaterials. 2004;25:779–85. https://doi.org/10.1016/s0142-9612(03)00582-9.
Guex AG, Puetzer JL, Armgarth A, Littmann E, Stavrinidou E, Giannelis EP. et al. Highly porous scaffolds of PEDOT:PSS for bone tissue engineering. Acta Biomater. 2017;62:91–101. https://doi.org/10.1016/j.actbio.2017.08.045.
Thibault RA, Scott Baggett L, Mikos AG, Kasper FK. Osteogenic differentiation of mesenchymal stem cells on pregenerated extracellular matrix scaffolds in the absence of osteogenic cell culture supplements. Tissue Eng Part A. 2010;16:431–40. https://doi.org/10.1089/ten.TEA.2009.0583.
Nakamura-Kiyama M, Ono K, Masuda W, Hitomi S, Matsuo K, Usui M. et al. Changes of salivary functions in experimental periodontitis model rats. Arch Oral Biol. 2014;59:125–32. https://doi.org/10.1016/j.archoralbio.2013.11.001.
Boas Nogueira AV, Chaves de Souza JA, Kim YJ, Damião de Sousa-Neto M, Chan Cirelli C, Cirelli JA. Orthodontic force increases interleukin-1β and tumor necrosis factor-α expression and alveolar bone loss in periodontitis. J Periodontol. 2013;84:1319–26. https://doi.org/10.1902/jop.2012.120510.
Gao Y, Li Y, Xue J, Jia Y, Hu J. Effect of the anti-diabetic drug metformin on bone mass in ovariectomized rats. Eur J Pharmacol. 2010;635:231–6. https://doi.org/10.1016/j.ejphar.2010.02.051.
Bak EJ, Park HG, Kim M, Kim SW, Kim S, Choi SH. et al. The effect of metformin on alveolar bone in ligature-induced periodontitis in rats: a pilot study. J Periodontol. 2010;81:412–9. https://doi.org/10.1902/jop.2009.090414.
Pradeep AR, Nagpal K, Karvekar S, Patnaik K, Naik SB, Guruprasad CN. Platelet-rich fibrin with 1% metformin for the treatment of intrabony defects in chronic periodontitis: a randomized controlled clinical trial. J Periodontol. 2015;86:729–37. https://doi.org/10.1902/jop.2015.140646.
Pradeep AR, Rao NS, Naik SB, Kumari M. Efficacy of varying concentrations of subgingivally delivered metformin in the treatment of chronic periodontitis: a randomized controlled clinical trial. J Periodontol. 2013;84:212–20. https://doi.org/10.1902/jop.2012.120025.
Barat R, Srinatha A, Pandit JK, Anupurba S, Mittal N. Chitosan inserts for periodontitis: influence of drug loading, plasticizer and crosslinking on in vitro metronidazole release. Acta Pharmaceutica. 2007;57:469–77. https://doi.org/10.2478/v10007-007-0037-1.
Hau H, Rohanizadeh R, Ghadiri M, Chrzanowski W. A mini-review on novel intraperiodontal pocket drug delivery materials for the treatment of periodontal diseases. Drug Deliv Transl Res. 2014;4:295–301. https://doi.org/10.1007/s13346-013-0171-x.
Shahriarpanah S, Nourmohammadi J, Amoabediny G. Fabrication and characterization of carboxylated starch-chitosan bioactive scaffold for bone regeneration. Int J Biol Macromol. 2016;93:1069–78. https://doi.org/10.1016/j.ijbiomac.2016.09.045.
Khajuria DK, Patil ON, Karasik D, Razdan R. Development and evaluation of novel biodegradable chitosan based metformin intrapocket dental film for the management of periodontitis and alveolar bone loss in a rat model. Arch Oral Biol. 2018;85:120–9. https://doi.org/10.1016/j.archoralbio.2017.10.009.
These authors contributed equally: Wanghan Xu, Wei Tan
Department of Orthodontics, Medical Center of Stomatology, The Second Xiangya Hospital, Central South University, Changsha, 410011, Hunan, PR China
Wanghan Xu, Xinyi Zeng & Liwei Xiao
Department of Stomatology, Affiliated Xiaoshan Hospital, Hangzhou Normal University, Hangzhou, 311202, Zhejiang, PR China
Wanghan Xu
Department of Spine Surgery, The Third Xiangya Hospital of Central South University, Changsha, 410011, Hunan, PR China
Wei Tan
Department of Metabolism and Endocrinology, Hunan Provincial Key Laboratory for Metabolic Bone Diseases, National Clinical Research Center for Metabolic Diseases, the Second Xiangya Hospital, Central South University, Changsha, 410011, Hunan, China
Chan Li
Department of Cardiovascular Medicine, The Second Xiangya Hospital, Central South University, Changsha, Hunan, 410011, PR China
Keke Wu
Xinyi Zeng
Liwei Xiao
Correspondence to Liwei Xiao.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Xu, W., Tan, W., Li, C. et al. Metformin-loaded β-TCP/CTS/SBA-15 composite scaffolds promote alveolar bone regeneration in a rat model of periodontitis. J Mater Sci: Mater Med 32, 145 (2021). https://doi.org/10.1007/s10856-021-06621-8 | CommonCrawl |
The Church of the larger Hilbert space
== Introduction ==
John Smolin coined the phrase "Going to the Church of the Larger Hilbert Space" for the dilation constructions of channels and states, which not only provide a neat characterization of the set of permissible quantum operations but are also a most useful tool in quantum information science.
According to Stinespring's dilation theorem, every completely positive and trace-preserving map, or channel, can be built from the basic operations of (1) tensoring with a second system in a specified state, (2) unitary transformation, and (3) reduction to a subsystem. Thus, any quantum operation can be thought of as arising from a unitary evolution on a larger (dilated) system. The auxiliary system to which one has to couple the given one is usually called the ancilla of the channel. Stinespring's representation comes with a bound on the dimension of the ancilla system, and is unique up to unitary equivalence.
== Stinespring's dilation theorem ==
We present Stinespring's theorem in a version adapted to completely positive and trace-preserving maps between finite-dimensional quantum systems. For simplicity, we assume that the input and output systems coincide. The theorem applies more generally to completely positive (not necessarily trace-preserving) maps between C*-algebras.
Stinespring's dilation: Let T : S(H) → S(H) be a completely positive and trace-preserving map between states on a finite-dimensional Hilbert space H. Then there exists a Hilbert space K and a unitary operation U on H ⊗ K such that
$T(\varrho) = \mathrm{tr}_{\mathcal{K}}\, U (\varrho \otimes |0\rangle \langle 0|) U^{\dagger}$
for all $\varrho \in S(\mathcal{H})$, where $\mathrm{tr}_{\mathcal{K}}$ denotes the partial trace over the $\mathcal{K}$-system.
The ancilla space $\mathcal{K}$ can be chosen such that $\dim \mathcal{K} \leq \dim^{2} \mathcal{H}$. This representation is unique up to unitary equivalence.
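As a concrete illustration (an added sketch, not part of the original article), the dilation formula can be evaluated numerically with NumPy; the function name, the H ⊗ K tensor ordering, and the choice of |0⟩ as the ancilla state are illustrative assumptions.

import numpy as np

def apply_channel_stinespring(rho, U, dim_H, dim_K):
    # T(rho) = tr_K[ U (rho ⊗ |0><0|) U† ]; tensor-product ordering is H ⊗ K.
    ket0 = np.zeros((dim_K, 1), dtype=complex)
    ket0[0, 0] = 1.0
    ancilla = ket0 @ ket0.conj().T                    # |0><0| on the ancilla K
    evolved = U @ np.kron(rho, ancilla) @ U.conj().T  # unitary evolution on H ⊗ K
    evolved = evolved.reshape(dim_H, dim_K, dim_H, dim_K)
    return np.trace(evolved, axis1=1, axis2=3)        # partial trace over K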
== Kraus decomposition ==
It is sometimes useful not to go to a larger Hilbert space, but to work with operators between the input and output Hilbert spaces of the channel itself. Such a representation can be immediately obtained from Stinespring's theorem: We introduce a basis $|k\rangle$ of the ancilla space $\mathcal{K}$ and define the Kraus operators $t_k$ in terms of Stinespring's unitary $U$ as
$\langle a | t_k | b \rangle := \langle a \otimes k | U | b \otimes 0 \rangle$
The Stinespring representation then becomes the operator-sum decomposition or Kraus decomposition of the quantum channel T:
Kraus decomposition: Every completely positive and trace-preserving map T : S(H) → S(H) can be given the form
$T(\varrho) = \sum_{k=1}^{K} t_{k} \, \varrho \, t_{k}^{\dagger}$
for all $\varrho \in S(\mathcal{H})$. The $K \leq \dim^{2} \mathcal{H}$ Kraus operators $t_k : \mathcal{H} \rightarrow \mathcal{H}$ satisfy the completeness relation $\sum_k t_k^{\dagger} t_k = \mathbb{1}$.
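Continuing the same illustrative sketch (again an addition, with the same assumed H ⊗ K ordering and illustrative function names), the Kraus operators can be read off from a Stinespring unitary and the channel applied in operator-sum form.

import numpy as np

def kraus_from_stinespring(U, dim_H, dim_K):
    # <a|t_k|b> = <a ⊗ k|U|b ⊗ 0>, with the H ⊗ K ordering assumed above
    U4 = U.reshape(dim_H, dim_K, dim_H, dim_K)        # U4[a, k, b, l] = <a ⊗ k|U|b ⊗ l>
    return [U4[:, k, :, 0] for k in range(dim_K)]

def apply_channel_kraus(rho, kraus_ops):
    # operator-sum form T(rho) = sum_k t_k rho t_k†
    return sum(t @ rho @ t.conj().T for t in kraus_ops)

# Completeness check: sum_k t_k† t_k should equal the identity on H, e.g.
# np.allclose(sum(t.conj().T @ t for t in kraus_ops), np.eye(dim_H))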
== Purification of quantum states ==
Quantum states are channels $\varrho: \mathbb{C} \rightarrow S(\mathcal{H})$ with one-dimensional input space C (cf. Channel (CP map)). We may thus apply Stinespring's dilation theorem to conclude that $\varrho$ can be given the representation
$\varrho = \mathrm{tr}_{\mathcal{K}} |\psi\rangle\langle \psi |$,
where ∣ψ⟩ = U∣0⟩ is a pure state on the combined system H ⊗ K. In other words, every mixed state $\varrho$ can be thought of as arising from a pure state ∣ψ⟩ on a larger Hilbert space. This special version of Stinespring's theorem is usually called the GNS construction of quantum states, after Gelfand and Naimark, and Segal.
For a given mixed state with spectral decomposition $\varrho = \sum_k p_k \, |k\rangle \langle k| \, \in S(\mathcal{H})$, such a purification is given by the state
$|\psi \rangle = \sum_k \, \sqrt{p_k} \, |k \rangle \otimes |k\rangle \, \in \mathcal{H} \otimes \mathcal{H}$.
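As an added illustration, such a purification can be computed directly from the spectral decomposition; taking the eigenvectors returned by the solver as the $|k\rangle$ basis is an assumption of this sketch.

import numpy as np

def purify(rho):
    # returns |psi> in H ⊗ H whose partial trace over the second factor is rho
    p, V = np.linalg.eigh(rho)                        # rho = sum_k p_k |k><k|; columns of V are |k>
    psi = np.zeros(rho.shape[0] ** 2, dtype=complex)
    for k, pk in enumerate(p):
        if pk > 1e-15:                                # skip numerically zero eigenvalues
            psi += np.sqrt(pk) * np.kron(V[:, k], V[:, k])
    return psi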
M. A. Nielsen, I. L. Chuang: Quantum Computation and Quantum Information; Cambridge University Press, Cambridge 2000
K. Kraus: States, Effects, and Operations; Springer, Berlin 1983
E. B. Davies: Quantum Theory of Open Systems; Academic Press, London 1976
V. Paulsen: Completely Bounded Maps and Operator Algebras; Cambridge University Press, Cambridge 2002
M. Keyl: Fundamentals of Quantum Information Theory; Phys. Rep. 369 (2002) 431-548; quant-ph/0202122
W. F. Stinespring: Positive Functions on C*-algebras; Proc. Amer. Math. Soc. 6 (1955) 211
I. M. Gelfand, M. A. Naimark: On the Imbedding of Normed Rings into the Ring of Operators in Hilbert space; Mat. Sb. 12 (1943) 197
I. E. Segal: Irreducible Representations of Operator Algebras; Bull. Math. Soc. 61 (1947) 69
Channel (CP map)
Measurements and preparations
Outsourcing in Shiraz University of Medical Sciences; a before and after study
Omid Barati1,
Maryam Najibi1,2,
Ali Reza Yusefi1,2,
Hajar Dehghan1,2,3 &
Sajad Delavari1
Journal of the Egyptian Public Health Association volume 94, Article number: 13 (2019)
Outsourcing is a form of partnership between the public and private sectors. It should be monitored and supervised to enhance the quality of outsourced services and to prevent new problems in this area. Shiraz University of Medical Sciences (SUMS) hospitals have increasingly used outsourcing in recent years.
The present research aimed at comparing outsourced departments of SUMS in terms of economic indicators, accessibility of services, and service quality during the years 2010–2012.
A before and after descriptive and analytical design was applied in outsourced departments of SUMS in 2014. First, 17 indicators were extracted by Delphi technique. Then, all outsourced units were assessed using economic, access to services, and quality indicators during 2010 to 2012.
After outsourcing, in all pharmacies and dentistry units except one, losses decreased and benefits increased from the public sector's viewpoint. The number of personnel decreased for one pharmacy and two laboratories, while it remained unchanged for dentistry units. The total number of clients increased for all pharmacies and laboratories and decreased for one dentistry unit. Patient satisfaction for pharmacies, laboratories, and dentistry units was 73.4%, 80.3%, and 78.5%, respectively. Also, employer satisfaction with the contractor was 60%, 68%, and 93.3% for pharmacies, laboratories, and dentistry units, respectively.
Outsourcing, as an effective strategy, resulted in increased personnel, client, and stakeholder satisfaction. It also increased benefits and decreased costs for the public sector. It is recommended that rules for the implementation of this strategy and for monitoring the private sector should be defined.
In recent decades, reform in health systems mainly focused on financing, resource allocation, service delivery, and equity. In all reforms, arrangement of public and private sector and balance between them is a critical issue [1]. Reform in service delivery was performed via a policy named decentralization which mainly aimed at improving efficiency and responsiveness [1]. Privatization is one of the main types of decentralization which could be defined as adaptation of public management with market rules. Accordingly, privatization is not just a change in institutional possession but it could change the management manner, goals, and incentives [2].
Outsourcing is one of the forms of public-private partnership (PPP). In fact, outsourcing is a purchasing mechanism in which an organization purchases a special service at an agreed quality and quantity for a determined period from a service provider which is out of the organization [3, 4] and controlled via a contract or collaborative management [5]. Outsourcing could merge private sector advantages—such as efficiency [1, 6] and consumer satisfaction [7, 8]—into the public sector and avoid its disadvantages—such as inattention to equity and social responsibility [9]. This could result in generating internal market or quasi-market in the public sector that promotes competition [10, 11]. Also, outsourcing could improve accessibility, equity, equality, and efficiency and meanwhile create an atmosphere for collaboration of the private sector with the public sector [3].
Studies on outsourcing support it as it improves productivity and quality and decreases the costs. For example, in Greece, Maschuris and Kondylis stated that outsourcing in public hospital could result in improvement in service quality and patient satisfaction [7]. Taiwanese public hospitals decreased the number of needed staff along with improvement in their productivity and morale [12]. Similarly, an Indian study indicated that public hospitals that use outsourcing could decrease direct and indirect costs by about 40% [13]. So, outsourcing in health market is growing in all developing countries as a type of reform [14] for improving effectiveness [15, 16] and efficiency [16].
Outsourcing could resolve several issues in health systems and hospitals that result in the development of private sector partnership. As Aksan et al. stated, private sector partnership in Turkey is growing during recent years [17]. Mayson et al. suggest advances in treatment technology, difference in accessibility of healthcare, and resource scarcity as reasons for private sector growth. Even in developed countries, outsourcing is a strategy for reducing the executive workload of government. Bellenghi et al. assert that about 80% of US hospitals devolve health information services to expert deliverer based on the directive of the Association of Health Information Outsourcing Services (AHIOS) [18]. Nevertheless, inappropriate management of outsourcing could hinder managers to achieve their goals and create some deficiencies [5].
In Iran, outsourcing is growing as stated in several upstream documents to improve service quality and patient satisfaction and to decrease healthcare costs [14, 19]. Thus, policymakers need to appraise outsourcing to understand its weaknesses and strengths and consequently make decisions for its improvement. Shiraz University of Medical Sciences has outsourced health services based on upstream regulations in IR Iran. It outsources a number of service delivery units annually, based on managers' decisions using different types of contracts with the private sector. Since Iranian medical universities are in the beginning of outsourcing, they need to have a better understanding of its consequences and outcomes. Given the importance and wide range of outsourcing, it seems necessary to evaluate outsourcing in terms of its goal attainment and determining the suitable type of contract. The present research aimed at comparing outsourced departments of Shiraz University of Medical Sciences (SUMS) from economic view, accessibility of services, and service quality during the period of 2010 to 2012.
Types of outsourcing
Type of outsourcing in Shiraz University of Medical Sciences was mostly lease, management, and collaboration, any type of which has its own profits and loss based on the type of the service.
In the lease contract, the contractor rents the governmental medical center by paying some fees and operates the medical center; in return, the contractor has the right to collect the income. In this case, all the commercial risks are transferred to the contractor. The responsibility for capital costs would remain with the governmental sector. In the management contract, the governmental sector pays the private organization to manage a medical center unit and the government offers all the services needed as well. In this type, the decisions for hiring medical and health specialist workforce and procurement and purchasing of the medication and medical supplies are made by the government. However, the responsibility for the capital costs and commercial risks still remains with the government. In collaborative outsourcing, the profits are divided between the private sector and government based on their agreement [20].
Collaboration between governmental and private sectors in the form of contract could bring about potential risks. For instance, the presence of the private sector next to the governmental sector in an unorganized and uncoordinated pattern can cause cost pressure and overload on the governmental sector. Consideration of the determined framework and content could be an important starting point in creating coordination and solidarity in the process of contracting. Also, identifying the type of contract between governmental and private sectors in the health sector, type of payment in these contracts, method of monitoring, and supervision on contracts will have a potential effect on the contracts which they should specify in the contents of the contract precisely. In this case, there will not be any ambiguity and uncertainty for the parties of the contract. In addition, it could be used as guidance and instruction which help to improve the relationship between governmental and private sectors in the health field [21].
Study design and data collection
A before and after descriptive design was applied in outsourced departments of SUMS. All outsourced departments which comprised of five pharmacies, five laboratories, and three dentistry departments during 2010 to 2012 were surveyed. First, a review of studies about outsourcing was conducted and 17 indices were identified for outsourcing assessment. Afterward, these indices were evaluated using the Delphi method in two rounds. Study population was experts who have sufficient knowledge about outsourcing to finalize indices. Experts were members of outsourcing workgroup of SUMS, hospital managers, deputies of treatment and logistics of SUMS, and professors and researchers in the field of healthcare and hospital management. Inclusion criteria were having experience of more than 1 year, having related education in the field of health management, and having sufficient information and knowledge about outsourcing. Based on inclusion criteria, 30 experts were selected using purposive sampling and were informed about the Delphi objectives and methods. Finally, 25 experts agreed to collaborate in the study.
A Delphi questionnaire for prioritizing criteria of outsourcing evaluation was administered among experts for the first round. The questionnaire was closed response with three choices including "agree," "vague," and "disagree" for each question. At the first round, data were analyzed and agreements below 30% were ignored, between 31% and 70% were entered into the second round, and more than 71% were confirmed. Then, the second round of Delphi was done with criteria based on three choices of the first round. Finally, 10 criteria were selected for outsourcing evaluation which was categorized into three domains including economic, accessibility, and service quality. Economic criteria were investment expenditures, current costs, salary and compensation costs, overhead costs (water, electricity, and gas), revenue, profit, and loss before and after outsourcing. Accessibility criteria were the number of personnel, number of clients, and activity hours. Service quality criteria included patients' satisfaction and employers' satisfaction from the contractor. After finalizing the criteria, a form was designed for gathering data about criteria before and after outsourcing based on available documents.
For measuring patient satisfaction in outsourced departments, a survey was done on patients who are referred to outsourced departments for getting services. Based on sampling formula, a sample of 384 was selected for each department with 0.95 degree of accuracy. Accordingly, a sample of 1152 was selected using stratified sampling based on share of pharmacy, laboratories, and dentistry departments.
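This sample size is consistent with the standard Cochran formula for estimating a proportion at 95% confidence with a 5% margin of error and maximal variance (p = 0.5); the paper does not state the formula explicitly, so the following derivation is an inference:

$$ n=\frac{z^{2}p(1-p)}{d^{2}}=\frac{1.96^{2}\times 0.5\times 0.5}{0.05^{2}}\approx 384 $$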
Patient satisfaction questionnaire included 15 closed-answer questions with Likert choices "totally agree," "agree," "no difference," "disagree," and "totally disagree". Questionnaire validity was approved using expert opinions, and reliability was tested with Cronbach's alpha which was calculated as 0.85. Since satisfaction before outsourcing had not been measured, satisfaction comparison was not possible. Also, because of heterogeneity issues, we could not compare outsource departments with public ones. Thus, the comparison was not done and only satisfaction after outsourcing was analyzed. For measuring employer (SUMS) satisfaction, a checklist was used and filled based on monthly assessments by SUMS inspectors.
Data were analyzed using descriptive statistics via MS Excel and SPSS software version 13 (SPSS Inc., Chicago, IL, USA).
Profit/loss change percentage is calculated based on the following formula:
$$ \mathrm{Profit}/\mathrm{loss}\ \mathrm{change}\ \mathrm{percentage}=\frac{\mathrm{Profit}/\mathrm{loss}\ \mathrm{after}\ \mathrm{outsourcing}-\mathrm{Profit}/\mathrm{loss}\ \mathrm{before}\ \mathrm{outsourcing}}{\mathrm{Profit}/\mathrm{loss}\ \mathrm{before}\ \mathrm{outsourcing}}\times 100 $$
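For example, with purely hypothetical figures, a unit whose annual profit rises from 100 to 150 million rials after outsourcing shows a change of (150 − 100)/100 × 100 = 50%, while a drop from 100 to 70 gives −30%.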
During 2010 to 2012, 13 medical units in Shiraz University of Medical Sciences were outsourced to the private sector, among which, 67.9% were leased including 5 pharmacies, 1 dentistry, and 3 lab units; 15.4% were collaborative including 2 dentistry units and 1 physiotherapy unit; and 7.7% were management type including 2 lab units (Table 1).
Table 1 The outsourced medical units in Shiraz University of Medical Sciences, Iran, during 2010–2012
In terms of economic indicators, the highest change in profit belonged to Zeinabie Hospital's pharmacy with a 1812% gain, and the lowest belonged to Ali Asghar Hospital's pharmacy with −78%. Moreover, in terms of access indicators, Hafez Hospital showed the largest change in personnel, with 175% growth in the number of staff, and Ali Asghar Hospital's pharmacy showed the smallest. As to the customers, except for Ghir Karzin Hospital's pharmacy, the remaining pharmacies experienced an increased number of customers after outsourcing; the highest growth in customers belonged to Lamerd Hospital's pharmacy.
In terms of indicators of the quality of service, the satisfaction of patients was good in 60% of pharmacies, while 40% were evaluated as moderate. Furthermore, based on the committee of reduction tenure report of Shiraz University of Medical Sciences, the employer satisfaction of the contractor's performance was moderate, good, and weak in 60%, 20%, and 20% of pharmacies, respectively (Table 2).
Table 2 Comparison of the performance of outsourced pharmacies in Shiraz University of Medical Sciences before and after outsourcing, Iran, 2010–2012
In the outsourced labs, in terms of economic indicators, the highest percentage change in profit and loss belonged to Lamerd Hospital's lab with 214% profit, and the lowest belonged to Ghotb Aldin Hospital's lab with 76% profit. The income of the university was zero in the labs with the management type of outsourcing: all the income from Ghotb Aldin Hospital's lab was allocated to the private sector, and in Dabiran Medical Center's lab, the rural insurance capitation was paid to the contractor for service delivery; thus, the total income and profit of the university was zero.
Also, in terms of access indicators, Ebn-e-Sina Hospital had the highest growth in the number of staff and Ghotb Aldin Hospital, with a decrease of staff from 11 to 8 persons, had the lowest growth in the number of staff. In terms of the client's referral to the labs, there were an increased number of clients after outsourcing for all labs; the highest amount belonged to Shooshtari Hospital's lab.
In terms of service quality indicators, the patient's satisfaction with the performance of 80% of labs was evaluated as good and perfect while 20% of the labs were evaluated as moderate. Moreover, based on the committee of reduction tenure report of Shiraz University of Medical Sciences, the satisfaction of the employer from the contractor's performance in 60% of the units was evaluated as good, 20% as moderate, and 20% as weak (Table 3).
Table 3 Comparison of the lab's performance in Shiraz University of Medical Sciences before and after the outsourcing, Iran, 2010–2012
In terms of economic indicators in the outsourced dentistry units, the highest percentage of change in profit and loss belonged to Khonj Medical Center's dentistry units with 798% profit, and the lowest percentage of change in profit and loss belonged to Sede Eghlid dentistry unit with − 60% of loss.
In terms of access indicators, there was no change in the number of staff in Sede Eghlid dentistry unit and also the number of clients decreased after outsourcing.
In terms of the employer satisfaction (Shiraz University of Medical Sciences) from the contractor, the results showed that the lowest grade belonged to Sede Eghlid dentistry unit, and in two other units, it was evaluated as perfect. Besides, in terms of patient satisfaction, there were no major changes between the units (Table 4).
Table 4 Comparison of the dentistry unit's performance in Shiraz University of Medical Sciences before and after the outsourcing process, Iran, 2010–2012
Comparison of the situation of the outsourced units of Shiraz University of Medical Sciences before and after the outsourcing process revealed that the type of outsourcing in this university was mostly lease, management, and collaboration, any type of which has its own profits and loss based on the type of the service.
The results also revealed that outsourcing of health services based on the leasing method is a successful strategy for pharmacies because, in most of the pharmacy units, we observed growth in profit and also improvement of the access indicator. Tourani et al. found that pharmacy outsourcing at Firoozgar Hospital in Tehran had resulted in cost saving in personnel, medication, and leasing costs. Besides, the number of personnel (as an index for accessibility) was increased from 9 to 14 while their educational level and the number of performed prescriptions were increased. Moreover, the time spent by the manager for managing pharmacy affairs was decreased [3]. A study in the USA showed that outsourcing resulted in a decrease in pharmacy costs and number of personnel that saved $59,000 in the first year [22]. As all these studies show, pharmacy costs decreased after outsourcing, which corresponds with the current findings.
In the current research, the only exception to decreasing pharmacy costs was Ali Asghar Hospital's pharmacy. Studies showed that the governmental sector lost out due to an inaccurate expert appraisal of the rent and the selection of an inappropriate contractor. Due to pharmaceutical sanctions, the contractor faced delays in collecting insurance claims from the insurance companies and was confronted with a lack of liquidity and a shortage of medicine supply, which finally caused the employer's dissatisfaction and cancelation of the contract for the next year. Based on Maschuris and Kondylis' research in Greece, some factors such as cost of contract (suggested price), quality of services, diversity and range of services, past history, financial state, and also the contractor's reputation should be considered in choosing the contractor by the medical units to fulfill outsourcing goals such as cost savings, patients' satisfaction, resolution of the lack of funds and staff, and management focus on core activities [7].
With respect to the service quality indicator in the present study, patient satisfaction, most of the units were in a desirable level and the employer satisfaction from the contractor's performance was reported in a moderate level.
The study also showed that 40% of the outsourced pharmacies were moderate regarding the quality of service indicators. Mohaghegh et al. revealed that the pharmacies' outsourcing approach including increase of patient satisfaction may fail even though it promised care improvements; thus, the measures such as clear and comprehensive contracts should be taken, and practical mechanisms such as monitoring and evaluation should be accomplished [23]. One of the factors that could hinder improving customer satisfaction due to outsourcing is the limited number of private suppliers and hence lack of competitive market [24]. Therefore, by considering patient satisfaction in evaluating the outsourced unit performance, it is suggested that in the provision of the contract, patient satisfaction has to be defined as the contractor's controlling tool, and in the case of low satisfaction, penalty system will be specified, and by emphasizing on performance contract, payment mechanism will be modified in a way that the payments to private sectors in addition to prescription cost will be related to customer's satisfaction.
In the present research, the outsourced labs had a loss before outsourcing due to low tariffs for lab services, the high cost of consumable materials, and high annual personnel costs. Therefore, the university used the lease or management outsourcing strategy. In comparing these two types of outsourcing, lease outsourcing was more successful because the increase in its profitability was remarkably higher than that of management outsourcing. Some factors, including staff's lower income and benefits compared with the governmental sector, longer working hours, diversity in services, savings on raw materials, and patient marketing strategies, generated profit for the private sector. Also, in this outsourcing, besides an increase in clients and a decrease in costs, significant growth was seen in the profit of the private sector.
Omrani et al. showed that after labs' outsourcing, laboratories income increased 51% due to more customers, longer working hours, not referring to other labs, and diversity in the lab's experiments [25]. Another study in Iran showed the positive effect of signing a contract in lab leasing to the private sector by comparing income and expenses of these units before and after the contract [26].
In the management outsourcing of Ghotb Aldin Hospital's lab, although the goals of outsourcing such as cost saving and prevention of loss have been fulfilled, it is possible to convert this unit to a profitable one by using some solutions such as increase in the service diversity, patient marketing, and leasing outsourcing. Also, in management outsourcing, Dabiran Medical Center's lab was profitable for the university but it was outsourced to the private sector to improve service access and quality indicators as well as governmental sector obligation to decrease tenure. In this regard, rural per capita amount of insurance was paid to the private sector and the contractor was obliged to deliver high-quality services, attract maximum clients, and pay current fees of the lab. In management outsourcing, even though there were no financial benefits for the government, an improvement in the access to service indicators by using two non-governmental workforces, quick response, and more client attraction was experienced.
In terms of quality of service indicators, the satisfaction indicators were in a desirable level in the outsourced labs. The research in Iran by Omrani et al. showed that responsibility and behavior of the lab's personnel improved after outsourcing [25]. In order to improve patient and personnel satisfaction, it is suggested to hire specialized workforce and try to decrease medical errors. A study in Arizona revealed that the medical error increased after lab outsourcing. The number of misdiagnosis increased, causing physical and financial problems to the patients due to the use of unprofessional workforce in the labs [27]. Based on the abovementioned results, it is recommended that if the patients' satisfaction and the service quality are low in the outsourced labs, it is possible to decrease the contractor's payment as a fine solution.
Findings indicate that, among the dentistry units outsourced by leasing and by partnership, profitability increased in the two partnership units (Khonj and Bigherd). Besides, in these two units, accessibility and the number of patients increased, and both SUMS's satisfaction and patients' satisfaction with the delivered services were high. In the lease outsourcing of Sede Eghlid Dentistry Center, the unit was profitable for the university before outsourcing. Due to the placement of this unit in a deprived area, the obligation to decrease government tenure, and the absence of an appropriate contractor, the university outsourced the unit as a lease unit in order to prevent closure and improve access-to-service indicators. After outsourcing, the condition of this unit in terms of patient and employee satisfaction was at a desirable level.
Finally, the results of the study showed that in most of the outsourced units, along with the increase in the number of staff in the private sector and the increase in the number of clients, profitability increased and costs were reduced for the governmental sector; therefore, outsourcing of medical units can be an effective strategy. As several studies revealed, if the outsourcing strategy is accompanied by risk and cost assessment and a careful, measured approach, it can be an effective strategy resulting in benefits for management, staff, contractors, patients, and even hospitals [28,29,30]. Moreover, it is suggested that hospitals, especially governmental ones, can use outsourcing as a strategy for full-time service, reduction of human resource constraints, more managerial attention to hospital management, time-saving, productivity improvement, and better staff morale. Also, an outsourcing strategy can be used as an approach to help hospitals attract new resources without paying any costs [12].
Outsourcing in SUMS curative units has increased benefit, accessibility indicators, and service quality. Hence, outsourcing could be suggested as a reform mechanism in the health system. Moreover, defining indicators for evaluation of outsourcing and continuous monitoring of indicators are highly recommended for better analysis by policymakers.
Jan S. Book reviews: health economics for developing countries, a practical guide. Health Econ. 2002;11(2):181–2.
Perrot J. Is contracting a form of privatization? Bull World Health Organ. 2006;84(11):910–1.
Tourani S, Maleki M, Ghodousi-Moghadam S, Gohari M. Efficiency and effectiveness of the Firoozgar teaching hospital's pharmacy after outsourcing, Tehran, Iran. J Health Adm. 2010;12(38):59–70.
Deshpande V, Schwarz LB, Atallah MJ, Blanton M, Frikken KB. Outsourcing manufacturing: secure price-masking mechanisms for purchasing component parts. Prod Oper Manag. 2011;20(2):165–80.
Roberts V. Managing strategic outsourcing in the healthcare industry. J Healthc Manag. 2001;46(4):239–49.
Perrot J. Different approaches to contracting in health systems. Bull World Health Organ. 2006;84(11):859–66.
Moschuris SJ, Kondylis MN. Outsourcing in public hospitals: a Greek perspective. J Health Organ Manag. 2006;20(1):4–14.
Akbulut Y, Terekli G, Yıldırım T. Outsourcing in Turkish hospitals: a systematic review. Ankara J Health Serv. 2012;11(2):25–33.
Preker AS, Harding A. Innovations in Health Service Delivery: The Corporatization of Public Hospitals. Health, Nutrition, and Population. Washington, DC: World Bank; 2003.
Saltman RB, Bankauskaite V, Vrangbaek K, editors. Decentralization in health care: strategies and outcomes. Maidenhead England: Open University Press/McGraw-Hill; 2007.
Roberts M, Hsiao W, Berman P, Reich M, editors. Getting health reform right: a guide to improving performance and equity. New York: Oxford University Press; 2004.
Hsiao CT, Pai JY, Chiu H. The study on the outsourcing of Taiwan's hospitals: a questionnaire survey research. BMC Health Serv Res. 2009;9(1):78.
Chandra H. Financial management analysis of outsourcing of the hospital services for cost containment and efficiency: case study of Sanjay Gandhi post-graduate institute of medical sciences, Lucknow, India. J Financ Manag Analy. 2007;20(1):70–7.
Siddiqi S, Masud TI, Sabri B. Contracting but not without caution: experience with outsourcing of health services in countries of the Eastern Mediterranean Region. Bull World Health Organ. 2006;84(11):867–75.
Albreht T. Privatization processes in health care in Europe-a move in the right direction, a 'trendy' option, or a step back? Eur J Pub Health. 2009;19(5):448–50.
Laamanen R, Simonsen-Rehn N, Suominen S, Øvretveit J, Brommels M. Outsourcing primary health care services-how politicians explain the grounds for their decisions. Health Policy. 2008;88(2–3):294–307.
Aksan HA, Ergin I, Ocek Z. The change in capacity and service delivery at public and private hospitals in Turkey: a closer look at regional differences. BMC Health Serv Res. 2010;10(1):300.
Bellenghi GM, Coffey B, Fournier JE, McDavid JP. Release of information: are hospitals taking a hit? Healthc Financ Manage. 2008;62(11):118–22.
Barati Marnani A, Gudaki H. Comparative study on privatization of health care provision on contract basis. J Health Adm. 2005;8(21):105–10.
Chalkley M, McVicar D. Choice of contracts in the British National Health Service: an empirical study. J Health Econ. 2008;27(5):1155–67.
Vatankhah S, Maleki MR, Tofighi SH, Barati O, Rafiei S. The study of management contract conditions in healthcare organizations of selected countries. J Health Inf Manag. 2012;9(3):431.
Gates DM, Smolarek RT, Stevenson JG. Outsourcing the preparation of parenteral solutions. Am J Health Syst Pharm. 1996;53(18):2176–8.
Mohaghegh B, Asadbaygi M, Barati Marnani A, Birjandi M. The impact of outsourcing the pharmaceutical services on outpatients' satisfaction in Lorestan rural health centers. J. Hosp. 2011;10(3):1–10.
Young S. Outsourcing in the Australian health sector: the interplay of economics and politics. Int J Pub Sec Manag. 2005;18(1):25–36.
Omrani MMH, Khazar S, Ghalami S, Farajzadeh F. Laboratories performance after outsourcing in the hospitals of Shahid Beheshti University of Medical Sciences. J Lab Sci. 2013;7(2):42–8.
Roointan AR. Management improvement and use of resources with outsourcing to private sector in Aligoodarz health care grid. In: Proceedings of the 1st Nationwide Conference of Resource Management in Hospitals. Tehran: 2002. p. 279.
Chasin BS, Elliott SP, Klotz SA. Medical errors arising from outsourcing laboratory and radiology services. Am J Med. 2007;120(9):819.e9–e11.
Yigit V, Tengilimoglu D, Kisa A, Younis MZ. Outsourcing and its implications for hospital organizations in Turkey. J Health Care Finance. 2007;33(4):86–92.
Roberts JG, Henderson JG, Olive LA, Obaka D. A review of outsourcing of services in health care organizations. J Outsourc Organ Inf Manag. 2013;2013:1.
Liu X, Hotchkiss DR, Bose S. The impact of contracting-out on health system performance: a conceptual framework. Health Policy. 2007;82(2):200–11.
The researchers thank the authorities of Shiraz University of Medical Sciences and the respectful management and all participants who kindly helped us to conduct the study.
This study is supported by Shiraz University of Medical Sciences with the grant number of 93–7287.
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Health Human Resources Research Center, School of Management and Medical Informatics, Shiraz University of Medical Sciences, Shiraz, Iran
Omid Barati, Maryam Najibi, Ali Reza Yusefi, Hajar Dehghan & Sajad Delavari
Health Services Management, Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
Maryam Najibi, Ali Reza Yusefi & Hajar Dehghan
Health Care Management and Informatics School, Almas Building, Alley 29, Qasrodasht Ave, Shiraz, Iran
Hajar Dehghan
Omid Barati
Maryam Najibi
Ali Reza Yusefi
Sajad Delavari
OB participated in the design of the study. MN participated in the design of the study, performed the statistical analysis and coordination, and helped to draft the manuscript. AY helped to draft the manuscript. HD participated in the design of the study, performed the statistical analysis and coordination, and helped to draft the manuscript. SD helped to draft the manuscript. All authors read and approved the final manuscript.
Correspondence to Hajar Dehghan.
The paper does not involve the use of any animal or human data or tissue. But, all procedures performed in the study, were in accordance with the ethical standards of the Shiraz University of Medical Sciences ethics committee (ethics committee code: 93–7287) and the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Not Applicable. The paper does not involve the use of any individual person's data.
Barati, O., Najibi, M., Yusefi, A.R. et al. Outsourcing in Shiraz University of Medical Sciences; a before and after study. J. Egypt. Public. Health. Assoc. 94, 13 (2019). https://doi.org/10.1186/s42506-019-0010-0
October 2018, 5(4): 331-341. doi: 10.3934/jdg.2018020
A non-iterative algorithm for generalized pig games
Fabián Crocce and Ernesto Mordecki ,
Centro de Matemática, Facultad de Ciencias, Universidad de la República, Iguá 4225, CP 11400, Montevideo, Uruguay
* Corresponding author: Ernesto Mordecki
Received October 2017 Revised September 2018 Published October 2018
We provide a polynomial algorithm to find the value and an optimal strategy for a generalization of the Pig game. Modeled as a competitive Markov decision process, the corresponding Bellman equations can be decoupled leading to systems of two non-linear equations with two unknowns. In this way we avoid the classical iterative approaches. A simple complexity analysis reveals that the algorithm requires $O(\mathbf{s}\log\mathbf{s})$ steps, where $\mathbf{s}$ is the number of states of the game. The classical Pig and the Piglet (a simple variant of the Pig played with a coin) are examined in detail.
Keywords: Dice games, simple stochastic games, polynomial algorithm.
Mathematics Subject Classification: Primary: 91A15, 91A60; Secondary: 90C47.
Citation: Fabián Crocce, Ernesto Mordecki. A non-iterative algorithm for generalized pig games. Journal of Dynamics & Games, 2018, 5 (4) : 331-341. doi: 10.3934/jdg.2018020
D. Auger, P. Coucheney and Y. Strozecki, Finding optimal strategies of almost acyclic simple stochastic games, Theory and applications of models of computation, Lecture Notes in Comput. Sci., 8402 (2014), 67–85.
M. de Berg, M. van Kreveld, M. Overmars and O. Schwarzkopf, Computational Geometry: Algorithms and Applications, (2nd. rev. ed.) Springer, Berlin, 2000. doi: 10.1007/978-3-662-04245-8.
A. Condon, The complexity of stochastic games, Information and Computation, 96 (1992), 203-224. doi: 10.1016/0890-5401(92)90048-K.
A. Condon, On algorithms for simple stochastic games, Advances in Computational Complexity Theory, J. Cai (Ed.), DIMACS Series in Discrete Mathematics and Theoretical Computer Science, AMS, 14 (1993), 51–71.
J. Filar and K. Vrieze, Competitive Markov Decision Processes, Springer, New York, 1997.
H. Gimbert and F. Horn, Simple stochastic games with few random vertices are easy to solve, Foundations of software science and computational structures, 5–19, Lecture Notes in Comput. Sci., 4962, Springer, Berlin, 2008. doi: 10.1007/978-3-540-78499-9_2.
J. Haigh and M. Roters, Optimal strategy in a dice game, Journal of Applied Probability, 37 (2000), 1110-1116. doi: 10.1239/jap/1014843089.
N. Halman, Simple stochastic games, parity games, mean payoff games and discounted payoff games are all LP-type problems, Algorithmica, 49 (2007), 37-50. doi: 10.1007/s00453-007-0175-3.
T. D. Hansen, P. B. Miltersen and U. Zwick, Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor, Innovations in Computer Science (ICS'11), (2011), 253–263.
A. J. Hoffman and R. M. Karp, On nonterminating stochastic games, Management Science, 12 (1966), 359-370. doi: 10.1287/mnsc.12.5.359.
R. Ibsen-Jensen and P. B. Miltersen, Solving simple stochastic games with few coin toss positions, Algorithms–ESA 2012, LNCS, 7501 (2012), 636–647. doi: 10.1007/978-3-642-33090-2_55.
G. Louchard, Recent studies on the dice race problem and its connections, Math. Appl. (Warsaw), 44 (2016), 63-86. doi: 10.14708/ma.v44i1.1124.
T. M. Liggett and S. A. Lippman, Stochastic games with perfect information and time average payoff, SIAM Review, 11 (1969), 604-607. doi: 10.1137/1011093.
J. Matoušek, M. Sharir and E. Welzl, A subexponential bound for linear programming, Algorithmica, 16 (1996), 498-516. doi: 10.1007/BF01940877.
J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, Princeton, New Jersey, 1944.
T. Neller and C. Presser, Optimal play of the dice game pig, The UMAP Journal, 25 (2004), 25-47.
M. Roters, Optimal stopping in a dice game, Journal of Applied Probability, 35 (1998), 229-235. doi: 10.1239/jap/1032192566.
L. S. Shapley, Stochastic games, Proceedings of the National Academy of Sciences, USA, 39 (1953), 1095-1100. doi: 10.1073/pnas.39.10.1953.
R. Tripathi, E. Valkanova and V. S. Anil Kumar, On strategy improvement algorithms for simple stochastic games, Journal of Discrete Algorithms, 9 (2011), 263-278. doi: 10.1016/j.jda.2011.03.007.
H. Tijms, Dice games and stochastic dynamic programming, Morfismos, 11 (2004), 1-14.
H. Tijms and J. van der Wal, A real-world stochastic two-person game, Probab. Engrg. Inform. Sci., 20 (2006), 599-608. doi: 10.1017/S0269964806060372.
O. J. Vrieze, S. H. Tijs, T. E. S. Raghavan and J. A. Filar, A finite algorithm for the switching control stochastic game, Operations-Research-Spektrum, 5 (1983), 15-24.
Figure 1. Function $y = f_{b, a}(x)$ (solid line) intersects $x = f_{a, b}(y)$ (dashed line) at the solution $x = v(a, b)$, $y = v(b, a)$ in one instance of the Piglet game
Algorithm 1 General backward algorithm.
1: for $b$ from $1$ to $N$ do
2:   for $a$ from $1$ to $b$ do
3:     Find $v(a, b, \tau)\colon 0\leq \tau <a$ and $v(b, a, \tau)\colon 0\leq \tau <b$
4:   end for
5: end for
Algorithm 2 Solving step 3 for fixed a, b.
1: for $i$ from $1$ to $a$ do
2:   Find the points defining $f_{a, b, i}$
3: end for
4: for $i$ from $1$ to $b$ do
5:   Find the points defining $f_{b, a, i}$
6: end for
7: Find $x$ and $y$ that solve system (7)
8: for $i$ from $1$ to $a-1$ do
9:   compute $v(a, b, a-i)$
10: end for
11: for $i$ from $1$ to $b-1$ do
12:   compute $v(b, a, b-i)$
13: end for
Table 1. Pig Game with different targets
Target of the game ($N$)    Value of the game $v(N, N)$
10        0.70942388
100       0.53059207 ¹
200       0.52152913
1000      0.50963900
¹ Obtained by Neller and Presser [16]
Table 2. Values of $v(a, b)$ for the piglet game with $N = 3$
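Values such as those reported in Table 2 can also be approximated by plain fixed-point iteration on the Piglet Bellman equations; this is exactly the iterative approach that the paper's non-iterative algorithm avoids, so the sketch below is only an illustrative cross-check. The rule set assumed here (heads adds 1 to the turn total, tails forfeits the turn total and passes the turn, holding banks the turn total, and the first player to reach N points wins) and the state convention v[(a, b)] are assumptions, not taken from the paper.

def piglet_values(N, sweeps=100_000, tol=1e-12):
    # Brute-force value iteration for the Piglet Bellman equations (illustration only).
    # v[(a, b)] ~ probability that the player about to move wins, holding a banked
    # points against an opponent with b banked points.
    v = {(a, b): 0.5 for a in range(N) for b in range(N)}
    for _ in range(sweeps):
        delta = 0.0
        for a in range(N):
            for b in range(N):
                lose_turn = 1.0 - v[(b, a)]            # tails: turn passes, scores unchanged
                w = 1.0                                 # turn total N - a: holding wins outright
                for tau in range(N - a - 1, 0, -1):     # backward over turn totals within a turn
                    hold = 1.0 - v[(b, a + tau)]        # bank tau points, opponent to move
                    flip = 0.5 * w + 0.5 * lose_turn    # heads continues, tails loses the turn
                    w = max(hold, flip)
                new = 0.5 * w + 0.5 * lose_turn         # at turn total 0 the player must flip
                delta = max(delta, abs(new - v[(a, b)]))
                v[(a, b)] = new
        if delta < tol:
            break
    return v

# Example: piglet_values(3)[(0, 0)] approximates the first player's optimal winning probability.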
{ "3.3.01:_Introduction_to_Measures_of_Central_Tendency" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.3.02:_Measures_of_Central_Tendency-_Mode" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.3.03:_Measures_of_Central_Tendency-_Median" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.3.04:_Measures_of_Central_Tendency-_Mean" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.3.05:_Summary_of_Measures_of_Central_Tendency" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()" }
{ "3.01:_Introduction_to_Descriptive_Statistics" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.02:_Math_Refresher" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.03:_What_is_Central_Tendency" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.04:_Interpreting_All_Three_Measures_of_Central_Tendency" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.05:_Introduction_to_Measures_of_Variability" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.06:_Introduction_to_Standard_Deviations_and_Calculations" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.07:_Practice_SD_Formula_and_Interpretation" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.08:_Interpreting_Standard_Deviations" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.09:_Putting_It_All_Together-_SD_and_3_M\'s" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()", "3.10:_Measures_of_Central_Tendency_and_Variability_Exercises" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.<PageSubPageProperty>b__1]()" }
3.3.5: Summary of Measures of Central Tendency
[ "article:topic", "license:ccbyncsa", "showtoc:yes", "authorname:forsteretal", "source[1]-stats-7093", "source[2]-stats-7093", "licenseversion:40", "source@https://irl.umsl.edu/oer/4" ]
https://stats.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fstats.libretexts.org%2FSandboxes%2Fmoja_at_taftcollege.edu%2FPSYC_2200%253A_Elementary_Statistics_for_Behavioral_and_Social_Science_(Oja)_WITHOUT_UNITS%2F03%253A_Descriptive_Statistics%2F3.03%253A_What_is_Central_Tendency%2F3.3.05%253A_Summary_of_Measures_of_Central_Tendency
In the previous section we saw that there are several ways to define central tendency. This section defines the three most common measures of central tendency: the mean, the median, and the mode. The relationships among these measures of central tendency and the definitions given in the previous section will probably not be obvious to you.
This section gives only the basic definitions of the mean, median and mode. A further discussion of the relative merits and proper applications of these statistics is presented in a later section.
The arithmetic mean is the most common measure of central tendency. It is simply the sum of the numbers divided by the number of numbers. The symbol "\(\mu \)" (pronounced "mew") is used for the mean of a population. The symbol "\(\overline{\mathrm{X}}\)" (pronounced "X-bar") is used for the mean of a sample. The formula for \(\mu \) is shown below:
\[\mu=\dfrac{\sum \mathrm{X}}{N} \nonumber\]
where \(\Sigma \mathbf{X} \) is the sum of all the numbers in the population and \(N\) is the number of numbers in the population.
The formula for \(\overline{\mathrm{X}} \) is essentially identical:
\[\overline{\mathrm{X}}=\dfrac{\sum \mathrm{X}}{N} \nonumber\]
where \(\Sigma \mathbf{X} \) is the sum of all the numbers in the sample and \(N\) is the number of numbers in the sample. The only distinction between these two equations is whether we are referring to the population (in which case we use the parameter \(\mu \)) or a sample of that population (in which case we use the statistic \(\overline{\mathrm{X}}\)).
As an example, the mean of the numbers 1, 2, 3, 6, 8 is 20/5 = 4 regardless of whether the numbers constitute the entire population or just a sample from the population.
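For readers who want to reproduce these numbers, here is a minimal Python sketch (my own illustration, not part of the original text), using the five numbers from the example above:

```python
import statistics

scores = [1, 2, 3, 6, 8]

# Arithmetic mean: the sum of the numbers divided by how many numbers there are.
mean = sum(scores) / len(scores)    # 20 / 5 = 4.0
print(mean)                         # 4.0
print(statistics.mean(scores))      # 4.0 -- same result from the standard library
```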
Figure \(\PageIndex{1}\) shows the number of touchdown (TD) passes thrown by each of the 31 teams in the National Football League in the 2000 season. The mean number of touchdown passes thrown is 20.45 as shown below.
\[\mu=\dfrac{\sum X}{N}=\dfrac{634}{31}=20.45 \nonumber \]
Figure \(\PageIndex{1}\): Number of touchdown passes. (CC-BY-NC-SA Foster et al. from An Introduction to Psychological Statistics)
Although the arithmetic mean is not the only "mean" (there is also a geometric mean, a harmonic mean, and many others that are all beyond the scope of this course), it is by far the most commonly used. Therefore, if the term "mean" is used without specifying whether it is the arithmetic mean, the geometric mean, or some other mean, it is assumed to refer to the arithmetic mean.
Median
The median is also a frequently used measure of central tendency. The median is the midpoint of a distribution: the same number of scores is above the median as below it. For the data in Figure \(\PageIndex{1}\), there are 31 scores. The 16th highest score (which equals 20) is the median because there are 15 scores below the 16th score and 15 scores above the 16th score. The median can also be thought of as the 50th percentile.
When there is an odd number of numbers, the median is simply the middle number. For example, the median of 2, 4, and 7 is 4. When there is an even number of numbers, the median is the mean of the two middle numbers. Thus, the median of the numbers 2, 4, 7, 12 is:
\[\dfrac{4+7}{2}=5.5 \nonumber \]
When there are numbers with the same values, each appearance of that value gets counted. For example, in the set of numbers 1, 3, 4, 4, 5, 8, and 9, the median is 4 because there are three numbers (1, 3, and 4) below it and three numbers (5, 8, and 9) above it. If we only counted 4 once, the median would incorrectly be calculated at 4.5 (4+5 divided by 2). When in doubt, writing out all of the numbers in order and marking them off one at a time from the top and bottom will always lead you to the correct answer.
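The same median rules can be checked with Python's standard library (an illustrative sketch using the lists from the examples above):

```python
import statistics

print(statistics.median([2, 4, 7]))              # odd count: the middle number -> 4
print(statistics.median([2, 4, 7, 12]))          # even count: mean of 4 and 7 -> 5.5
print(statistics.median([1, 3, 4, 4, 5, 8, 9]))  # repeated values are each counted -> 4
```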
The mode is the most frequently occurring value in the dataset. For the data in Figure \(\PageIndex{1}\), the mode is 18 since more teams (4) had 18 touchdown passes than any other number of touchdown passes. With continuous data, such as response time measured to many decimals, the frequency of each value is one since no two scores will be exactly the same (see discussion of continuous variables). Therefore the mode of continuous data is normally computed from a grouped frequency distribution. Table \(\PageIndex{1}\) shows a grouped frequency distribution for the target response time data. Since the interval with the highest frequency is 600-700, the mode is the middle of that interval (650). Though the mode is not frequently used for continuous data, it is nevertheless an important measure of central tendency as it is the only measure we can use on qualitative or categorical data.
Table \(\PageIndex{1}\): Grouped Frequency Distribution
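As a quick sketch of both rules (the discrete data below is made up for illustration; only the 600-700 interval midpoint comes from the text):

```python
import statistics

# Discrete data: 18 occurs more often than any other value, so the mode is 18.
print(statistics.mode([14, 18, 18, 18, 18, 20, 21, 33]))      # -> 18

# Grouped continuous data: report the midpoint of the most frequent interval.
intervals = {(500, 600): 14, (600, 700): 20, (700, 800): 12}  # hypothetical frequencies
lo, hi = max(intervals, key=intervals.get)
print((lo + hi) / 2)                                          # -> 650.0
```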
In the section "What is central tendency," we saw that the center of a distribution could be defined three ways:
the point on which a distribution would balance
the value whose average absolute deviation from all the other values is minimized
the value whose squared difference from all the other values is minimized.
The mean is the point on which a distribution would balance, the median is the value that minimizes the sum of absolute deviations, and the mean is the value that minimizes the sum of the squared deviations.
Table \(\PageIndex{2}\) shows the absolute and squared deviations of the numbers 2, 3, 4, 9, and 16 from their median of 4 and their mean of 6.8. You can see that the sum of absolute deviations from the median (20) is smaller than the sum of absolute deviations from the mean (22.8). On the other hand, the sum of squared deviations from the median (174) is larger than the sum of squared deviations from the mean (134.8).
Table \(\PageIndex{2}\): Absolute & squared deviations from the median of 4 and the mean of 6.8.
Value | Absolute Deviation from Median | Absolute Deviation from Mean | Squared Deviation from Median | Squared Deviation from Mean
2 | 2 | 4.8 | 4 | 23.04
3 | 1 | 3.8 | 1 | 14.44
4 | 0 | 2.8 | 0 | 7.84
9 | 5 | 2.2 | 25 | 4.84
16 | 12 | 9.2 | 144 | 84.64
Total | 20 | 22.8 | 174 | 134.8
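Both minimization properties can be verified numerically; here is a small NumPy check using the five numbers from Table 2 (the printed sums should match the table's totals):

```python
import numpy as np

x = np.array([2, 3, 4, 9, 16])
median, mean = np.median(x), np.mean(x)                    # 4.0 and 6.8

print(np.abs(x - median).sum(), np.abs(x - mean).sum())    # 20.0 vs 22.8  (median is smaller)
print(((x - median) ** 2).sum(), ((x - mean) ** 2).sum())  # 174.0 vs 134.8 (mean is smaller)
```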
Figure \(\PageIndex{2}\) shows that the distribution balances at the mean of 6.8 and not at the median of 4. The relative advantages and disadvantages of the mean and median are discussed in the section "Comparing Measures" later in this chapter.
Figure \(\PageIndex{2}\): The distribution balances at the mean of 6.8 and not at the median of 4.0. (CC-BY-NC-SA Foster et al. from An Introduction to Psychological Statistics)
When a distribution is symmetric, then the mean and the median are the same. Consider the following distribution: 1, 3, 4, 5, 6, 7, 9. The mean and median are both 5. The mean, median, and mode are identical in the bell-shaped normal distribution.
Comparing Measures of Central Tendency
How do the various measures of central tendency compare with each other? For symmetric distributions, the mean and median are the same, as is the mode except in bimodal distributions. Differences among the measures occur with skewed distributions. Figure \(\PageIndex{3}\) shows the distribution of 642 scores on an introductory psychology test. Notice this distribution has a slight positive skew.
Figure \(\PageIndex{3}\): A distribution with a positive skew. (CC-BY-NC-SA Foster et al. from An Introduction to Psychological Statistics)
Measures of central tendency are shown in Table \(\PageIndex{3}\). Notice they do not differ greatly, with the exception that the mode is considerably lower than the other measures. When distributions have a positive skew, the mean is typically higher than the median, although it may not be in bimodal distributions. For these data, the mean of 91.58 is higher than the median of 90. This pattern holds true for any skew: the mode will remain at the highest point in the distribution, the median will be pulled slightly out into the skewed tail (the longer end of the distribution), and the mean will be pulled the farthest out. Thus, the mean is more sensitive to skew than the median or mode, and in cases of extreme skew, the mean may no longer be appropriate to use.
Table \(\PageIndex{3}\): Measures of central tendency for the test scores.
The distribution of baseball salaries (in 1994) shown in Figure \(\PageIndex{4}\) has a much more pronounced skew than the distribution in Figure \(\PageIndex{3}\).
Figure \(\PageIndex{4}\): A distribution with a very large positive skew. This histogram shows the salaries of major league baseball players (in thousands of dollars). (CC-BY-NC-SA Foster et al. from An Introduction to Psychological Statistics)
Table \(\PageIndex{4}\) shows the measures of central tendency for these data. The large skew results in very different values for these measures. No single measure of central tendency is sufficient for data such as these. If you were asked the very general question: "So, what do baseball players make?" and answered with the mean of $1,183,000, you would not have told the whole story since only about one third of baseball players make that much. If you answered with the mode of $250,000 or the median of $500,000, you would not be giving any indication that some players make many millions of dollars. Fortunately, there is no need to summarize a distribution with a single number. When the various measures differ, our opinion is that you should report the mean and median. Sometimes it is worth reporting the mode as well. In the media, the median is usually reported to summarize the center of skewed distributions. You will hear about median salaries and median prices of houses sold, etc. This is better than reporting only the mean, but it would be informative to hear more statistics.
Table \(\PageIndex{4}\): Measures of central tendency for baseball salaries (in thousands of dollars).
This page titled 3.3.5: Summary of Measures of Central Tendency is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Foster et al. (University of Missouri's Affordable and Open Access Educational Resources Initiative) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
Threshold dynamics of a delayed nonlocal reaction-diffusion cholera model
Weiwei Liu 1, Jinliang Wang 1, and Yuming Chen 2,*
School of Mathematical Science, Heilongjiang University, Harbin 150080, China
Department of Mathematics, Wilfrid Laurier University, Waterloo, ON N2L 3C5 Canada
* Corresponding author: Yuming Chen
Received December 2019 Revised July 2020 Published November 2020
Fund Project: This research was partially supported by the Graduate Students Innovation Research Program of Heilongjiang University (No. YJSCX2020-211HLJU) (WL); the National Natural Science Foundation of China (Nos. 12071115, 11871179), Natural Science Foundation of Heilongjiang Province (Nos. LC2018002, LH209A021), Heilongjiang Provincial Key Laboratory of the Theory and Computation of Complex Systems (JW); and NSERC of Canada (No. RGPIN-2019-05892) (YC)
Taking account of spatial heterogeneity, latency in infected individuals, and the time for shed bacteria to reach the aquatic environment, we build a delayed nonlocal reaction-diffusion cholera model. A feature of this model is that the incidences are of general nonlinear forms. By using the theories of monotone dynamical systems and uniform persistence, we obtain threshold dynamics determined by the basic reproduction number $ \mathcal{R}_0 $. Roughly speaking, the cholera will die out if $ \mathcal{R}_0<1 $ while it persists if $ \mathcal{R}_0>1 $. Moreover, we derive the explicit formulae of $ \mathcal{R}_0 $ for two concrete situations.
Keywords: Spatial heterogeneity, basic reproduction number, uniform persistence, threshold dynamics.
Mathematics Subject Classification: Primary: 92D30; Secondary: 35B35, 35B40, 35Q92, 37C75.
Citation: Weiwei Liu, Jinliang Wang, Yuming Chen. Threshold dynamics of a delayed nonlocal reaction-diffusion cholera model. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020316
Is spectral density sometimes normalized by sampling rate rather than bin size?
I'm a scientist conducting an experiment that requires some signal processing. My expertise is not in signal processing, thus here I am. We've basically re-created an experiment conducted by other scientists, attempting to check their results. Here is a link to their paper: Ultrasensitive Inverse Weak-Value Tilt Meter
In short, a laser bounces off of some mirrors, one of which is oscillating at a controlled sinusoidal frequency, onto a quadrant detector, which outputs an electrical signal to an oscilloscope where we record it. So, you end up with a noisy record that has a tiny, known sine wave hiding in it.
Everything I've read indicates that to calculate spectral density, you must:
Get the spectrum by performing an FFT* on the record
Normalize the spectrum by the bin size, which is sampling rate divided by number of samples (Fs/N)
*For clarification, when I refer to an FFT, I'm referring to the single sided, absolute value, of the FFT, normalized by the number of sample points, N. So, we took the FFT of the signal, threw away the negative frequencies, doubled the positive frequency values (except DC and Nyquist), and divided by N. I checked this method by feeding signals directly from a function generator to the oscilloscope and verifying that the resulting peaks matched the frequency and amplitude of the inputs.
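To make that procedure concrete, here is a sketch in Python/NumPy of the single-sided amplitude spectrum as I read the description above; it is my own illustration, not the asker's actual code, and the 50 Hz test tone at the end is only there to confirm that a tone on a bin center comes out at its input amplitude:

```python
import numpy as np

def single_sided_amplitude(x, fs):
    """|FFT|/N with negative frequencies dropped and every bin except DC
    (and Nyquist, when N is even) doubled -- the normalization described above."""
    N = len(x)
    X = np.abs(np.fft.rfft(x)) / N           # rfft keeps only the non-negative frequency bins
    stop = -1 if N % 2 == 0 else None        # exclude the Nyquist bin only when it exists
    X[1:stop] *= 2
    return np.fft.rfftfreq(N, d=1 / fs), X

fs, N = 1000, 4000
t = np.arange(N) / fs
freqs, amp = single_sided_amplitude(0.8 * np.sin(2 * np.pi * 50 * t), fs)
print(amp[np.argmin(np.abs(freqs - 50))])    # ~0.8, matching the input amplitude
```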
But, in the paper linked above, they seem to have normalized their spectrum by sampling rate only. I say this because at the top of the first column on page 3, they point out that the sampling rate is 1 kHz, and in the footnote on page 3, they point out that the peak in their spectral density plot (Figure 4) is 1.6 nrad / sqrt(1kHz). They make no mention of bin size or number of samples (N). Since I'm trying to directly compare my numbers to theirs, I need to know definitively what is going on here. Are there two definitions for spectral density? Thanks in advance.
The use of $\text{rad}/\sqrt{\text{Hz}}$ suggests that this is phase noise specifically (a spectral density due to phase fluctuations), and typically in my use this has been described as a power spectral density (units of $\text{rad}^2/\text{Hz}$), so this is just the square root of that quantity.
The reason the DFT (of which the FFT computes) is divided by $N$ is to normalize the FFT to be the same units of the time domain signal, specifically using the following normalized form of the DFT:
$$X_1(k) = \frac{1}{N}\sum_{n=0}^{N-1}x[n]W_N^{nk}$$
Versus the typical version, which is not normalized and is what the FFT returns:
$$X(k) = \sum_{n=0}^{N-1}x[n]W_N^{nk}$$
With such a normalization, the magnitude of $x[n]$ at any specific frequency will match the magnitude of $X(k)$ for that frequency. For example, if we had a time domain waveform of a sinusoidal phase error versus time given as:
$$\phi[n] = A\cos(\omega n) = \frac{A}{2}e^{j\omega n} + \frac{A}{2}e^{-j\omega n} \space \text{rad}$$
Then assuming $\pm\omega$ were exactly on a bin centers (for the DFT due to its circular nature $-\omega = N-\omega$), the resulting two bins in $X_1(k)$ would have a magnitude of $\frac{A}{2}$, matching the magnitudes of the time domain waveform.
As a power spectral density (meaning we are interested in the power over a given frequency range) the normalized power of each frequency index in the DFT (aka bin) is then:
$$|X_1(k)|^2 = \frac{|X(k)|^2}{N^2} \space \frac{\text{rad}^2}{\text{bin}}$$
(Where the units of $\text{rad}^2$ for the power quantity $|X_1(k)|^2$ only make sense if x[n] was the phase noise in units of radians).
$\frac{\text{rad}^2}{\text{bin}}$ is a power quantity per bin. To make this the recognized form of the power spectral density in power/Hz we recognize that $Nd = f_s$ where $N$ is the number of samples in the DFT, $f_s$ is the sampling rate, and $d$ is the spacing of each frequency index (bin as the OP used) in Hz resulting in the spectral width of each bin in Hz:
$$d = \frac{f_s}{N} \space \frac{\text{Hz}}{\text{bin}}$$
$$ \frac{|X(k)|^2}{N^2} \frac{\text{rad}^2}{\text{bin}} \times d^{-1} \frac{\text{bin}}{\text{Hz}} = \frac{|X(k)|^2}{N^2}\frac{N}{f_s} \frac{\text{rad}^2}{\text{Hz}} = \frac{|X(k)|^2}{N f_s} \frac{\text{rad}^2}{\text{Hz}}$$
This result would specifically be what we typically notate as $\mathscr{L}_{\phi}(f)$, the two-sided power spectral density due to phase fluctuations (since the DFT contains both sides of the spectrum, in contrast to the one-sided PSD, which is $S_\phi(f) = 2\mathscr{L}_{\phi}(f)$).
Note we say "due to phase fluctuations" since the units here were phase. It is also interesting how the phase unit in radians, when squared, is the power unit relative to the carrier (often expressed as dBc/Hz). This is clear for small angles given the small-angle approximation $\sin(\theta) \approx \theta$; geometrically, the quadrature component is the noise and the in-phase component is the carrier, so for small angles the ratio of the two is the phase in radians. This is why, when phase noise is dominant, this computation will match the actual power measurement we see under test with a spectrum analyzer.
The OP clarified in his comments that his question is specific to the peak at 30 Hz offset as shown in this plot:
It isn't specified, but assuming this is a two-sided spectral density, the peak of a single tone would have a total power independent of density, so we would typically report its result as $\text{rad}^2$ and not $\text{rad}/\text{Hz}$ (or the magnitude quantity as the square root, $\text{rad}$, as used in this plot, meaning this plot is $\sqrt{\mathscr{L}_{\phi}(f)}$). The paper also incorporates a moving average of 5 and suggests in a footnote that the peak would be $\approx 1.6 \text{ nrad}/\sqrt{1\text{kHz}}/5$, and the plot was scaled (moved up or down) such that the level of the tone landed on this expectation.
I suggest that the peak would be at either $\approx 1.6 \text{nrad}/20$ or $\approx 1.6 \text{nrad} \sqrt{2}/20$ depending on if the spectrum is intended to be double-sided or single-sided which should be specified. The sampling rate does not change the value of the tone on the spectral density when the units are already in nrad, so there should also be no $\sqrt{1\text{kHz}}$ in that answer - The sine wave theoretically occupies zero bandwidth, or for practical reasons we can assume we integrated that power over a small bandwidth in order to measure the peak we see. Either way the density becomes a single figure for the tone independent of bandwidth. Any windowing applied in the time domain prior to the FFT (other than the rectangular window) will also shift the value of the tone differently from the values for the noise. Further details below.
To confirm that assumption, here is my prediction of where such a tone would be:
The 1.6 nrad oscillation is specified as the peak to peak value and thus is of the form:
$$\phi(t) = \frac{1.6}{2} \cos(2\pi f t) \space\space \text{nrad}$$
with $f = 30$ Hz
If the spectrum is two-sided (as $\sqrt{\mathscr{L}_\phi(f)}$ rather than one-sided as $\sqrt{S_{\phi}(f)}$), then the spectrum is only showing the upper half of this two-sided spectrum, with both sides given by:
$$\phi(t) = \frac{1.6}{2} \cos(2\pi f t) = \frac{1.6}{4}e^{j 2\pi f t} + \frac{1.6}{4}e^{-j 2\pi f t} \space\space\text{nrad}$$
Thus prior to the effect of the moving average filter (MAF), I would predict the tone shown on a double-sided spectrum to be at:
$$\frac{(1.6e-9)}{4} = (4e-10) \space \text{rad}$$
Notice the units are $\text{rad}$ and not $\text{rad}/\sqrt{\text{Hz}}$ as the standard deviation of the tone itself is not a density spread across frequency, unlike that of the noise.
I assume the moving average filter that is mentioned was applied to the frequency-domain samples. If it had been applied in the time domain there would be an additional loss of 0.963, but I don't see evidence of such a moving-average response in the plot. With a moving average of frequency samples, the tone is reduced by a factor of 5, as the author had done, resulting in $(4e-10)/5 = (8e-11)$.
If the plot was supposed to be a single-sided spectrum $\sqrt{S_{\phi}(f)}$, then the result would be $\sqrt{2}$ larger or $1.13e-10$, which is consistent with the standard deviation of $\phi(t)$ reduced by the MAF.
Neither of these results match the plot, but this is where I would expect a 30 Hz tone after a moving average of 5 samples when sampled at 1 KHz if the units of the spectral density are $\text{nrad}/\sqrt{\text{Hz}}$, for either case of a single-sided or double-sided spectral density. Also note that my computation was independent of the bin size or number of samples since as the author of the paper was intending to do (and perhaps did if I made an error in my prediction) was to predict the expected value of that tone and then scale the plot accordingly. My earlier answer shows how I would scale the result from the DFT directly in which case the bin size and number of samples would be involved.
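The arithmetic behind that prediction is simple enough to reproduce directly, using only the numbers quoted from the paper:

```python
peak_to_peak = 1.6e-9                   # 1.6 nrad peak-to-peak oscillation, in rad
amplitude = peak_to_peak / 2            # 0.8 nrad zero-to-peak
two_sided_line = amplitude / 2 / 5      # one of the two spectral lines, after the moving average of 5
one_sided_line = two_sided_line * 2 ** 0.5
print(two_sided_line, one_sided_line)   # 8e-11 rad and ~1.13e-10 rad
```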
As a further note since these spectrums are being derived from FFT's and since the OP is interested ultimately in assessing noise: We must also be careful to account for the equivalent noise bandwidth due the effect of windowing especially if we are normalizing the plot based on the power of a tone. (and other effects such as scalloping loss etc which have been minimized by choosing a tone at or near a bin center as was done). Any windowing done on the time domain signal other than the rectangular window will widen the bandwidth of each bin beyond the single bin as given by the rectangular window, which means that the noise measured will be larger than the actual noise! Further the window has a loss reducing the signal from the tone and the noise, but because of the effectively wider noise bandwidth of each bin the noise will go down less than the tone (the tone only occupies one bin)! The effect of the moving average in frequency on SNR is also affected by the window since the adjacent noise bins are no longer uncorrelated. I detail this further in this post: Find the Equivalent Noise Bandwidth
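As a small illustration of that last point, the equivalent noise bandwidth of a window follows directly from its samples (a generic textbook formula, not anything specific to the linked post); for a Hann window it comes out to roughly 1.5 bins:

```python
import numpy as np

N = 4096
w = np.hanning(N)                                 # Hann window
enbw_bins = N * np.sum(w ** 2) / np.sum(w) ** 2   # equivalent noise bandwidth, in bins
print(enbw_bins)                                  # ~1.5 for a Hann window
```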
Dan Boschen
Thanks for your reply. I think I understand it. I added an update to the question to clarify that the FFT I'm referring to is abs(X)/N. So when I talk about normalizing that to spectral density, I'm dividing by sqrt(Fs/N), which yields the square root of your result above. So it seems that we've arrived at the same result (yours is for power, mine for amplitude), which still leaves me puzzled about what they've done in the Ultrasensitive Inverse Weak-Value Tilt Meter paper.
– benbald
@user3308243 OK, that makes more sense, in which case the units would be rad/root-Hz following my explanation. I will look at the paper more closely with your question in mind.
– Dan Boschen
@user3308243 So I am confused about where your question is. They say the sampling rate is 1 kHz, and they say the peak of the PSD is 1.6 nrad / sqrt(1kHz), which is what we would expect as proper units for the square root of the PSD, as I explained. Not sure why you mention it should be 1.6 nrad? A density would be per Hz (power is per Hz, square root of power is per root-Hz). Maybe I didn't read enough of the paper, can you clarify?
Ultimately the plot in Figure 4 looks right to me in that the horizontal axis is in Hz and therefore the root-PSD would be rad/root-Hz.
Ah, yes. Thank you very much. I think I'm starting to see where my confusion is coming from. You are being very clear and helpful, and the amount of information you've provided above is invaluable.
Adam's Brain Dump
Cognitive biases as visual illusions
Jan 26, 2023 ⌘
I'm intrigued by Adam Mastroianni's framing of cognitive biases as being akin to visual illusions.
Often we look at all these "irrational" biases- everything from Anchoring bias to the Zeigarnik effect - as deficiencies in our brain, failures in reasoning. Even more so when we think that we see others falling prey to them whilst we ourselves are of course clever, unbiased, objective thinkers (see also the fundamental attribution error bias).
But we don't poke fun at people for being able to see visual illusions. We don't laugh at them for being bad at seeing. Often you can know an illusion is an illusion and still be unable to stop seeing the effect; it's not a matter of education or sophistication. Sure, you can program yourself to say "squares A and B are the same colour", but you can't necessarily viscerally sense that they are.
(image from Wikipedia)
Rather, we take these illusions as evidence of what your visual system does in order to process the unimaginably large input that is everyday reality into something you can understand and act on. Seeing the things that aren't there in visual illusions is evidence that your visual system is working well. They simply reveal what is being done behind the scenes by a healthy visual processing system.
Likewise, responding to cognitive illusions is not necessarily a matter of "You're stupid" or "Your brain is deficient". They're just the healthy and natural result of the incredible set of assumptions and processing shortcuts your brain has at its disposal to let you live a rich and full life in a world that's far too complex for you to fully comprehend.
All this of course doesn't mean we shouldn't work to reduce our tendency towards cognitive biases, most critically on the occasions where their end product is one that causes harm to ourselves or other people. But the fact that someone has the bias does not make them stupid.
Over here I wrote about 3 papers that investigate the effectiveness of rainforest carbon credit projects on the prevention of deforestation.
Although the findings are being contested by the owners of these schemes, the bad news for now is that they all point towards these schemes preventing far less carbon release than was claimed, with many projects having no impact at all.
Slightly astonished to learn that there's a 2022 series of UK Beauty and the Geek. It's on a channel I don't have, but I'd be a bit shocked if the original premise has aged well when considered against modern-day progressive sensibilities.
The FT has a fascinating look into life as a high-ranking female spy now that for the first time ever 3 out of the 4 of the Director Generals of Britain's Secret Intelligence Service, aka MI6, are women.
Such high-ranking positions have traditionally been the preserve of men, with women mostly being recruited as secretaries or for the purpose of honeytraps. Or "I like my girls to have good legs", as Vernon Kell, the founder of the British Security Service (MI5), apparently said.
But of course the very fact that people still don't readily imagine a spy as being female opens up potential avenues of exploitation that might be less available to the more traditional James Bond demographic.
Human Rights Watch is concerned that the UK is increasingly turning into an abuser, rather than protector, of human rights.
From the press release surrounding the release of their 2023 report:
The UK government introduced laws that stripped rights of asylum seekers and other vulnerable people, encouraged voter disenfranchisement, limited judicial oversight of government actions, and placed new restrictions on the right to peaceful protest.
In "products I'm really not sure need to exist" news, a company called Reviver will sell you an app-enabled digital number plate for your car, if you live in one of the very few US states where it's allowed.
It seems to be yet another way companies have found to add ongoing subscriptions to your driving experience - I may return to this topic soon. In return for your $20-25 a month, it enables such unmissable features as being able to switch your number plate into dark mode and displaying a tiny app-controlled banner under the plate; essentially a microtweet for anyone driving way too close to you I guess.
One feature that I can actually see some potential use for is that it contains enough tracking technology that you can see the location of your car by using the accompanying app.
The problem is that until recently it wasn't just you and Reviver that could see where you are. Security researchers managed to find a way to alter their own user account so that they could see the live location of every vehicle who had one of these number plates. And a lot more besides:
Track the physical GPS location and manage the license plate for all Reviver customers (e.g. changing the slogan at the bottom of the license plate to arbitrary text)
Update any vehicle status to "STOLEN" which updates the license plate and informs authorities
Access all user records, including what vehicles people owned, their physical address, phone number, and email address
Access the fleet management functionality for any company, locate and manage all vehicles in a fleet
That vulnerability has been fixed now. But it's events like this that make me wonder whether it's really necessary to put app connections and internet-connected surveillance technology in absolutely everything a product designer can dream up, even if it's possible to imagine the odd use for it.
Played Into the Breach 🎮.
This is a turn-based strategy game in which you control mech pilots to save the world from alien invasion. Not the most original storyline and by it's look I first assumed it was akin to the famous XCOM series. But it's not really, because here you know exactly what your enemies are going to do in advance, with no guessing or random luck involved. In some ways it felt more like a puzzle game.
If you lose you get sent back to the beginning to try again. Often I find that style of "roguelike" game frustratingly repetitive. But here the missions differ a bit each playthrough. You also have the ability to upgrade your mechs in different ways and unlock lots of new characters so it doesn't get boring. You get the chance to send one of your heroes back through time which gives me the sense of not having lost all my progress (when I remember to do it).
Each fight takes place within small land grid and only takes a few minutes to play, making it well suited to mobile devices, although it's available on all sorts of platforms. It won a lot of awards in past years, deservedly so. And if you're a Netflix subscriber you can play it for free.
Happy 10th anniversary to what unfortunately turned out to be the most evergreen meme of all time, This Is Fine.
Some thoughts from the creator of the comic it's from, K. C. Green.
Microsoft has created a language model called "Vall-E" that can simulate a person's voice saying whatever they choose with the only input needed from the person being 3 second clip of the actual person saying something. I guess a fraction of a TikTok video would do the job.
It can even preserve emotion - so if you have a 3 second clip of your friend angrily shouting about something then they could in theory make a clip of your friend angrily shouting about something completely different.
Right now I don't think you can play with the model yourself. Some people might feel that's in some ways for the best , until society has figured out some idea of how we're going to deal with the sheer amount of future "recordings" of things that never happened that everyone is going to be able to produce with minimal effort using with tools like this and the various other generative AI type tools that are already out there.
But I imagine someone else will release a more public tool all too soon. It didn't take long for folk to figure out how to get AI tools to generate images that are really rather against what most of the systems designers involved wanted them to do.
In the mean time you can hear some Vall-E samples on this page. Scroll down a bit and compare the "Speaker Prompts" - which are the 3 second actual recordings of someone's voice they fed it - with the "VALL-E" output, which is what the model produced based on it.
A few of us just started the Statistical Rethinking online course in order to learn more about using Bayesian data analysis in the field of causal inference, connecting scientific models to evidence.
Whilst we didn't get there in time to register for a place in the live class, instructor Richard McElreath is kindly providing recorded lectures, slides, homework and memes for all. It's based on his book Statistical Rethinking, although that's not necessarily required to follow along.
I'm unduly nerd-excited to have received a tote bag from the Office of National Statistics.
Last year Europe had its hottest ever summer. Almost half of its countries broke their previous monthly temperature records at least once.
Whilst we didn't see the same temperature rises everywhere, globally the last 8 years are the 8 warmest on record 😬.
Just realised that the Obsidian text editor supports MathJax.
So for the mathematically inclined, write your LaTeX equations either inline, by surrounding them with single $ signs, or as display equations on their own lines, wrapped in $$.
It'll transform e.g.: $$ p = \frac{(W+L+1)!}{W!L!} p^W(1-p)^L $$
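And an inline expression sits directly in the text; for example (a made-up note snippet, just to show both forms Obsidian accepts):

```
The likelihood term $p^W(1-p)^L$ can be written inline, or set on its own lines:

$$
\frac{(W+L+1)!}{W!\,L!}\, p^W (1-p)^L
$$
```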
Coming soon: the first mainstream UK TV show based on deepfakes
Prepare yourself for the ITV sketch show "Deep Fake Neighbour Wars", the first mainstream UK TV show based on deepfake technology (as far as I know).
Appearing on ITVX on the 26th January, it's a comedy sketch show seemingly showing people living their everyday mundane lives. Except these aren't either characters created for the show or actors doing impressions of other people. They are 'real' megastar celebrities - well, real in the sense that they look and and sound like them, even though the celebrities concerned haven't been anywhere near it.
Instead, the show was first acted out by less well-known actors. StudioNeural - "the world's first provider of synthetic media for long form TV" - then used deepfake technology to replace their faces with ultra-realistic visages of actual bigtime celebrities.
From a summary of the preview:
We meet loved up Nicki Minaj and Tom Holland who are not happy with Mark Zuckerberg next door, Idris Elba gets a shock when new neighbour Kim Kardashian starts making her presence known in their communal garden, Harry Kane's perfect patio is damaged by upstairs neighbour Stormzy and dental hygienist Billie Eilish clashes with neighbour Beyonce when she starts working from home.
Mainstreaming deepfake technology for entertainment like this does feel like a potential turning point for popularizing it. It'll allow things to be done that never otherwise could have been. Up until now deepfakes have usually been discussed in the context of nefarious use-cases - fake news, fraudulent political persuasion, personal attacks or revenge porn. But even for entirely benign uses, as the Guardian notes, there's not really much in the way of established etiquette or guidelines for the use of this very new technology yet.
What is fair in the name of comedy, vs what is some kind of unethical exploitation? Whether this be of the celebrities or the relatively nameless actors who are hidden behind the fakes (until the end of the show anyway in this particular case).
Unless a message is constantly superimposed over the broadcast that this isn't real, what are the implications in a world where it's extremely common for short clips to be taken out of videos and shared on social media?
To borrow an example from the same article, Spitting Image was an extremely funny satire (the original one at least), but how would it have come across if instead of using caricature puppets the show used representations of the people being satirised that were basically indistinguishable from their actual IRL selves?
Nadine Dorries, admittedly someone who one might argue has a strong agenda of her own, was worried that the This England TV drama, which purports to document a dramatised version of how the UK government dealt with Covid-19 (amongst other things), was dangerous:
Admittedly, the producers put a disclaimer before each episode, stating that the drama is fictional, based on true events. But the fact that scenes are interspersed with real news footage makes it very deceptive. Also, many scenes involving politicians and civil servants are eerily convincing.
How much more convincing would it have been if Boris Johnson was played by his digital doppelganger as opposed to Kenneth Branagh?
Jan 9, 2023 ⌘
Watched season 1 of The Traitors 📺.
A reality show where by day a group of strangers complete missions to build up a prize pot, and by night they viciously accuse each other of treachery in order to eject them from the show entirely.
Which is fair enough, because a few of them are in fact traitors. Claudia Winkelman secretly assigned them that role at the start. The viewer knows who they are, the participants do not. They have to figure it out by whatever means they can. Each night they must vote out the person they collectively believe is most traitorous. Of course, to maintain their subterfuge the traitors also have to pretend that's what they're doing too and put their public votes in too.
If the "faithful" vote out all the traitors then they share the prize pot. But if even one traitor remains by the end of the series then the surviving traitors get it all.
Each night whilst the others sleep the anonymous traitors also get to metaphorically kill one of the contestants, kicking them off the show without the ability to defend themselves.
Honestly, it feels a little grim and exploitative to watch. I really hope there's a pile of expert psychologists behind the scenes to help the players cope with the virulent suspicion, deception, mistrust, arguments, paranoia, confrontations and the rest of it.
But watching the participants - the genuinely legit and the traitors trying to appear as such - try to figure out who the traitors are provides incredible examples of all sorts of cognitive biases - confirmation bias, the halo effect, salience bias, a kind of pareidolia, overconfidence, the gambler's fallacy and a ton of herd instinct to name but a few. And all this at the same time as being put into an unfamiliar place with unfamiliar people.
It's such a fascinating example of all these psychological processes going on, some that we can probably recognise very well in ourselves if we stop and think about it for a minute in between screaming at the TV, that it turns out to be compulsive viewing. Even if it's probably not exactly great for most of the folk involved.
It's getting on for four months since Mahsa Amini, who had been arrested in Iran for not wearing her hijab in the style the authorities prefer, died when in the custody of the Iranian police after being beaten. The resulting protests are ongoing, at incredible personal risk to the brave souls who take part and their loved ones.
As of yesterday, Iran has now formally executed 4 protestors, the latest two likely having been tortured beforehand. But the overall death toll is far greater, with at least 516 demonstrators including 70 children known to have died. Almost 20,000 others have been arrested, hundreds of whom may face the death penalty.
Today's Observer reports that:
NHS trusts with record waiting lists are promoting "quick and easy" private healthcare services at their own hospitals, offering patients the chance to jump year-long queues
This, whilst not a new thing, does not feel good in the midst of a growing NHS crisis, particularly if it's on the rise. The private services that are provided - whether a £300 MRI scan or a £10,000 hip replacement - supposedly don't impact the services the NHS provides under its public funding. But they often take place in the same premises with the same staff that you would get if you made it to the top of the public waiting list.
I'm sure there's some technicality that allows them to say it's not sucking resources from our still much loved public health system. But, simplistically, at the end of the day it surely is potential life-or-death British medical capacity that is exclusively available to the rich rather than the person that needs it the most.
The ideal of course would be that NHS working conditions and funding were improved such that there was no incentive for hospitals or medical staff to operate a private practice at all. For now though, we'll have to decide whether to laugh or cry at the Shalbourne Private Health Care website's claim that "We believe quality healthcare should be readily accessible."
15th time lucky - the US elects a speaker of the House
After repeating the same vote 15 times over the last week, the US House of Representatives has finally managed to elect a speaker, Kevin McCarthy.
It's been the longest contest for the position since 1859, preventing any of the real business of the House taking place whilst the battle continued.
But if it felt like a long time, it was certainly no 1855. That contest stretched on for two whole months and required 133 ballots to take place before a speaker was selected.
In the end they temporarily changed the rules so as to accept a plurality of votes, as opposed to the standard absolute-majority rules that require the victor to have gotten the support of at least half of the voters. This allowed Nathaniel Banks to win the contest with 103 votes from a possible 214 electors.
The repeated election attempts over the past few days apparently became so tedious that the representatives elect started bringing in comics, books, iPad games and their own children in order to keep themselves entertained. Perhaps my favourite photo of the event was the below one, taken by Anna Moneymaker, showing U.S. representative Katie Porter's research into how to live the good life.
Finished reading The It Girl by Ruth Ware 📚.
A thriller set in the grounds of Oxford University, the kind of backdrop that usually appeals to me. A popular rich girl that everyone knows was murdered by a creepy college porter…or was she?
You can probably guess the general answer to that, but I didn't figure out the specific solution until very near the end. A compulsive read with satisfying twists and turns, if not particularly challenging. Just what I needed for the holidays.
Watched season 5 of The Crown 📺.
Starting to get into the years I actually remember something about now. I love this as a way of accidentally learning the country's history. The only problem is it seems a lot of it isn't true. John Major seems to be particularly annoyed at it.
The show does admit to being fiction. But not knowing which bits are true makes it hard to know what to take from it. Though an article in the Atlantic is probably right to conclude that "the show is so popular that its interpretation of history will become the definitive one for millions of viewers."
From the NYT:
Without a speaker, the United States House of Representatives essentially becomes a useless entity.
Entering day 3 of the US not really having much of a government.
Perhaps the rules on electing a speaker need revision for the future. The job is different in the UK, but here if there's no majority for the speaker we remove the candidate with the lowest number of votes plus any with minimal support before trying again.
The Financial Times looks into why the UK's NHS is in such a disastrous condition at present. It turns out it's not all that complicated to understand.
There are currently lots of ill people - a new wave of Covid-19 is once more sweeping the nation, and the flu (amongst other infectious diseases) is also surging. A real twindemic.
We don't have enough hospital beds. This is partly because we haven't built enough capacity in the first place. But it's also because there is nowhere to discharge patients who still need some amount of social care (but not hospital-level care), meaning thousands of people are unnecessarily stuck in hospital.
There are not enough staff. All parts of the workforce have staff shortages. Those that are there are exhausted, demoralized and in recent times are occasionally on strike or leaving for better opportunities elsewhere. To be clear, these problems started way before the current spate of strikes were on the agenda.
There has not been enough investment. This is nothing new, it's been going on for at least a decade. NHS demand is constantly rising at present, so funding needs to rise substantially beyond inflation just to maintain performance. This hasn't happened for at least a decade. The UK has amongst the lowest healthcare capital spending as % of GDP of it's peer countries, leaving us with fewer beds, MRI scanners, CT scanners and so on. The chart below, also from the FT, may provide a clue as to why.
Unlike previous generations, UK and US millennials are not becoming more conservative over time
There's a widely held belief that younger people tend to start off being politically left-wing (or liberal, socialist, whatever one wants to call the axis). Then, as time goes on and they start to age, they end up with more conservative views and political preferences.
This trend is encapsulated in a maxim whose origin and exact wording is much quibbled over but often turns up in this sort of form.
A man who is not a Liberal at sixteen has no heart; a man who is not a Conservative at sixty has no head.
Personally I hope and trust the implied value judgement isn't true, otherwise, sorry, looks like I'm just getting stupider as I get older.
But, treating it as purely descriptive of a trend, the idea that people tend to get more conservative when they get older does usually seem to be true. Of course it isn't necessarily their age that causes these changes; it may be to do with the fact that people's wealth, status, position in life, psychology and so on tends to alter in an on-average predictable way as they get older.
However, John Burn-Murdoch of the FT notes that it's just not happening that way for UK and US millennials.
They started off liberal as other generations did. But now, even though the older ones are now 40+ years old, they're still very liberal, compared to the rest of the population at least.
Why so? Burn-Murdoch hypothesises that this is a cohort effect due to British and US millennials entering adulthood during the aftermath of the financial crisis. They emerge into an economic environment where generating enough wealth to, for example, own a home is often a ludicrous pipe-dream.
The primary UK and US conservative party's respective fixation on culture war topics probably also doesn't help them much. The typical conservative side of the relevant arguments tends to be less attractive to more academically educated folk, and millennials are the best-educated generation at present.
Morten Støstad produced some follow-up work that showed that this trend also exists elsewhere, particularly in English-speaking countries like Australia, Canada and New Zealand. His first chart looks at English speaking countries and shows similar findings to Burn-Murdoch's original graph.
In the second though, Støstad finds that in many other non-English-speaking countries, for example Germany, France, Italy and Spain, the millennials do seem to be behaving "as normal" and becoming more conservative over time.
Dec 31, 2022 ⌘
Played The Pharaoh's Tomb from Exit: The Game 🎲.
The Exit advent calendar was so good we couldn't stop. This one, from the same makers, is a one-session game where you attempt to solve puzzles to free yourself from an ancient Egyptian tomb. It's rated as one of the harder ones, perhaps because you need to figure out what the puzzles even are and which order to solve them in as well as the puzzle solution itself - but we got there in the end.
Played The Mysterious Ice Cave advent calendar from Exit: The Game 🎲.
This is an advent calendar that gives you a puzzle to solve each day of advent. The answer to each one tells you which door to open next in your attempt to escape from a catastrophic mountain avalanche.
We actually ignored the entire premise and completed it over two lengthy sessions. The puzzles were fun and varied, some harder than others but most of them felt very fair.
Something happens towards the end that was one of my favourite ever twists in these kinds of games. Even more fun than a chocolate calendar for anyone who likes puzzle escape room kinds of things; highly recommended.
Where is the evidence that the electron is pointlike?
I'm writing a piece about the electron, and I'm having trouble finding evidence to back up the claim that the electron is pointlike.
People tend to say the observation of a single electron in a Penning trap shows the upper limit of the particle's radius to be $10^{-22}$ m. But when you look at Hans Dehmelt's Nobel lecture you read about an extrapolation from the measured g value, which relies upon "a plausible relation given by Brodsky and Drell (1980) for the simplest composite theoretical model of the electron". This extrapolation yields an electron radius of $R \approx 10^{-20}$ cm, but it isn't a measurement. Especially when "the electron forms a 1 μm long wave packet, 30 nm in diameter".
It's similar when you look at The anomalous magnetic moment and limits on fermion substructure by Brodsky and Drell. You can read things like this: "If the electron or muon is in fact a composite system, it is very different from the familiar picture of a bound state formed of elementary constituents since it must be simultaneously light in mass and small in spatial extension". The conclusion effectively says if an electron is composite it must be small. But there's no actual evidence for a pointlike electron.
Can anybody point me at some evidence that the electron is pointlike?
particle-physics electrons standard-model elementary-particles point-particles
John Duffield
physics.stackexchange.com/q/264676
– Constantine Black
Related: physics.stackexchange.com/q/24001/2451, physics.stackexchange.com/q/119732/2451 and links therein.
– Qmechanic ♦
One who is familiar with the history of particle physics, and physics in general, knows that physics is about observations fitted with mathematical models.
This review examines the limits on size we presently accept for the fundamental particles at the foundation of the present standard model of particle physics.
This analysis of what "point like" means is reasonable in my opinion.
The size of a particle is determined by how the particle responds to scattering experiments, and therefore is (like the size of a balloon) somewhat context-dependent. (The context is given by a wave function and determines the detailed state of the particle.)
On the other hand, the deviations from being a point are usually described by means of context-independent form factors that would be constant for a point particle but become momentum-dependent for particles in general. They characterize the particle as a state-independent entity. Together with a particle's state, the form factors contain everything that can be observed about single particles in an electromagnetic field.
The measurable quantities are the form factors:
For example, in electron scattering at low energies, the cross section for scattering from a point-like target is given by the Rutherford scattering formula. If the target has a finite spatial extent, the cross section can be divided into two factors, the Rutherford cross section and the form factor squared.
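To see concretely how a form factor modifies the point-like result, here is a small Python sketch; the dipole form factor, the beam energy and the parameter lam2 = 0.71 GeV² are illustrative assumptions, not anything specific to the electron:

```python
import numpy as np

alpha = 1 / 137.036            # fine-structure constant
hbar_c = 0.19733               # GeV*fm

def rutherford(E_GeV, theta, Z=1):
    """Point-like (Rutherford) differential cross section in fm^2/sr."""
    return (Z * alpha * hbar_c / (4 * E_GeV * np.sin(theta / 2) ** 2)) ** 2

def dipole_form_factor(q2_GeV2, lam2=0.71):
    """Assumed dipole form factor; lam2 (GeV^2) sets the target's size scale."""
    return 1.0 / (1.0 + q2_GeV2 / lam2) ** 2

E, theta = 0.5, np.radians(60.0)               # illustrative beam energy and angle
q2 = 4 * E**2 * np.sin(theta / 2) ** 2         # squared momentum transfer (GeV^2), recoil neglected
point_like = rutherford(E, theta)
extended = point_like * dipole_form_factor(q2) ** 2
print(point_like, extended)    # the extended target scatters less at this q^2
```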
From these measurable form factors one can get a limit for the size of the electron, no proof of real "point" nature can be given. Models are only validated or falsified, and the "point" nature of the electron is a model of the existing data involving electrons which has not been falsified.
The point nature works in the present standard model of physics because our experiments cannot probe smaller distances than these limits, and the SM which depends on these pointlike elementary blocks WORKS.
anna v
Regarding "SM which depends on these pointlike elementary blocks WORKS", remember that the perturbative QFT used has a few infinities (also the energy of the electric field of a perfect point charge), which are manually removed - especially the ultraviolet divergence: restricting minimal distance, and the divergence of the perturbative series: restricting the sizes of scenarios (Feynman diagrams) which could fit there. Assuming this is not just mathematical idealization, but fundamental particles are indeed perfect points, would we need these two restrictions - mathematical tricks to remove infinities?
– Jarek Duda
@JarekDuda This has to be answered by a theorist. Maybe if you ask it as a question. You could also try asking at physicsoverflow.org, a site where many theorists answer.
– anna v
I have asked this kind of question to many theoreticians and experimentalists, but did not get anything concrete. Usually the "evidence" for a point-like electron leads to the g-factor argument, which leads to Dehmelt's 1988 paper where he literally gets it from extrapolation by fitting a parabola to two points (no kidding!) - gathered materials: physics.stackexchange.com/questions/397022/…
Before addressing the pointlike nature of the electron, let's consider the proton. It was found that when the energy with which particles (such as electrons) scatter off a proton exceeds a certain level (about 1 GeV), it starts to resolve the proton. What we mean by that is that, below this energy the scattering cross-section seems to follow a scale invariant curve (a pure power law), while at this scale, the curve for the scattering cross-section as a function of energy changes its behaviour, indicating the presence of a specific scale. This scale (1 GeV) is called the QCD scale, because it turns out that quantum chromodynamics (QCD - the underlying theory that binds together the constituent in the proton) becomes confined at this scale (below the QCD scale the QCD interactions become invisible; above it one sees the effects of this interaction because one can peer inside the proton).
The energy with which a scattering experiment is performed determines the resolution of the experiment. What this means is that the energy translates to a distance [see clarification below]. For higher energies, one can observe smaller distances. Above the QCD scale the resolution is small enough that one can observe distances smaller than size of the proton.
One other very important thing to notice is that the mass of the proton is also roughly equal to the QCD scale (using Einstein's famous equation $E=mc^2$). This is important because it means that the same scale is responsible both for the size of the proton and for the proton's mass.
Now let's turn to the electron and the question of its pointlike nature. Obviously, we cannot do scattering experiments at infinite energies. Therefore, the resolution with which we can observe the electron is limited by the largest energy that we can produce in collider experiments. The cross-section that we observe from a pointlike particle is therefore determined by the resolution with which we observe it. With current experiments one sees that the scattering cross-section of the electron follows a scale invariant curve. Hence, no scale where the electron is resolved has yet been seen. An important observation, though, is that the energies at which the scattering has been done far exceed the mass of the electron. So if there does exist an energy scale where the electron would be resolved, such a scale would be very high above the scale set by the mass of the electron.
The thing about scales in physics is that they don't just fall out of the air. There are usually very specific dynamics involved that produce such scales. In the case of particles bound together, one would normally expect the mass of the bound particle to have roughly the same scale as that associated with the size of the bound particle. If the size of a particle is so much smaller than its mass, then there would have to be an amazingly powerful reason for that.
For this reason, although we cannot measure the electron's size to infinitely small distances, it is believed that the electron must be pointlike.
Clarification:
Just to address some of the comments. To resolve means that one observes something with a resolution that is smaller than the size of the object. The resolution of an observation refers to a physical distance. In particle physics, for instance, the resolution is directly related to the energy of the scattering process. (Energy gives the frequency via $\hbar$ and frequency gives wavelength via $c$. The wavelength is the physical distance that defines the resolution.) The notion of a resolution is widely applicable in observations. For example, in astronomy a telescope would be able to resolve a galaxy if the resolution of the telescope is smaller than the size of the galaxy in the image.
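To make the energy-to-resolution conversion concrete, here is a minimal sketch (my own numbers) using lambda ≈ ħc/E:

```python
# lambda ~ hbar*c / E : the distance scale a probe of energy E can resolve.
hbar_c_GeV_fm = 0.19733               # hbar*c in GeV*fm

def resolution_fm(energy_GeV):
    """Approximate resolvable distance (fm) for a given probe energy (GeV)."""
    return hbar_c_GeV_fm / energy_GeV

for E in (0.001, 1.0, 1000.0):        # 1 MeV, 1 GeV, 1 TeV
    print(f"{E:>8.3f} GeV -> {resolution_fm(E):.2e} fm")
# Around 1 GeV the resolution drops below the ~1 fm proton size,
# matching the QCD-scale discussion above.
```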
Some comments seem to suggest that the electron should have a finite size due to the electric field that is produced by its electric charge. Unfortunately this does not work either. The electric field decays as a power law away from the electron. Such a power law does not have a scale. It is scale invariant. As a result the field cannot give a scale that one can interpret as the size of the electron.
See here for Particle Data Group information about lepton (electron) compositeness.
flippiefanus
There's never any direct experimental proof that anything does not exist, including a nonzero electron radius. But we have a very good (you might even say "Standard") Model that describes the electron as a point particle and accurately explains all known experimental data (at least, data describing processes involving electrons). With no experimental reason to expect electrons to have a nonzero radius, Occam's Razor suggests that we should consider electrons to be pointlike until there is a concrete reason not to.
Of course, it's completely possible that one day, higher-energy experiments will discover that electrons are composite or extended structures, and if that happens then we'll need to revise our assumption. There's precedent for this in the history of particle physics: the neutron, proton, and pion, among others, were all once assumed to be pointlike elementary particles, until a better model came along that described them as quark-gluon bound states.
tparker
@Rococo I'm not sure if I agree that "pointlike" and "elementary" mean the same thing. In string theory, strings are elementary but extended, and their mathematical description differs from that of usual QFT. I think "pointlike" is a strictly stronger notion than "elementary," and roughly corresponds to the definition I gave in my last comment.
– tparker
@tparker Oh, that is certainly not something that had occurred to me- fair enough. But it still seems to me that it is worth emphasizing that the above statements are true within the standard model and equivalently at any energy we have ever accessed, which seems relevant given that the OP's question was about what the experimental evidence says.
– Rococo
In order to answer this question we have to agree on the meaning of point-like. (This is not so obvious since nature happens to be quantum rather than classical) In practice, one has to specify a framework where the definition can be operationally, at least in principle, tested against the experimental evidence.
The tentative definition that I will adopt in the following is: a particle is point-like if every physical process (say a scattering), at any energy scale (or kinematical configuration) above a certain threshold, agrees with the prediction made by a perturbative renormalizable quantum field theory where the particle is elementary. An equivalent definition could be that the action for such a particle is dominated by its free kinetic energy at all scales above a certain threshold (or, again, that the theory is always around a Gaussian fixed point). In practice I am trading point-like for elementary which is a (slightly) better defined concept.
I had to include the notion of perturbativity to speak of particles in the first place, that is of (presumably effective) field theories that are close to a gaussian fixed point in at least a finite energy range. This definition is not perfect, but it makes clear that a theory of particles strongly interacting at all scales isn't in fact a theory of particles after all.
The proton isn't elementary because its interactions at or above the confinement scale are strongly coupled and, moreover, the theory would require infinitely many terms in the lagrangian, making it non-renormalizable too. The pions, on the other hand, would seem to be elementary at small energy (essentially because of their Goldstone boson nature and Adler's theorem) but the interactions become strong again at $E\sim\Lambda_\textrm{QCD}$. The interactions are non-renormalizable too. In fact, the requirement of non-renormalizability and strong interactions usually go together in concrete realizations of compositeness.
Buying this tentative definition of point-like, we can ask whether the electron is so. The answer is yes: it is point-like, to the best of our present knowledge. In other words, up to energy scales of the order of a few tens of $\mathrm{TeV}$ that we have been able to explore experimentally (the precise number depends on various things that would take us very far), there is no sign that the electron isn't described by the renormalizable weakly coupled quantum field theory known as the Standard Model at all scales above $\Lambda_\mathrm{QCD}$. In such a theory the electron is an elementary field.
Various caveats are in order. First, I am neglecting gravity, which makes the SM non-renormalizable (and gravity may become strong at $M_\textrm{Planck}$). In the leading quantum theory of gravity that explains the dynamics at the Planck scale, string theory, the electron isn't quite a particle nor point-like. The Planck length is however so small that we can safely ignore this point for most questions. Second, the gauge coupling for the hypercharge in the Standard Model is believed to have a Landau pole that may break the theory at even larger energy scales than Planck. Hence, one can safely neglect the Landau pole too (quantum gravity effects kick in much earlier).
Say one day we discover a discrepancy between the predictions of the Standard Model (SM) concerning the electron and the experimental data. To be concrete, imagine one day we discover a 5~$\sigma$ discrepancy in the $g_{e}-2$ of the electron. Would that mean that the electron is composite? No, at least not necessarily. In fact, the extra corrections $\delta_\textrm{BSM}$ in $(g_{e}-2)=\delta_\textrm{SM}+\delta_\textrm{BSM}$ could be accounted for by a new weakly coupled renormalizable field theory valid above a new threshold (the mass of the new particles involved in producing $\delta_\textrm{BSM}$) in which the electron is still an elementary field. There exist several models beyond the SM where this is the case: they go beyond the SM by coupling new weakly interacting particles to the electron, changing some of its low-energy properties; however, above the mass of these new states the electron is still accounted an elementary particle coupled weakly to the old fields and a few new ones. On the other hand, the $\delta_\textrm{BSM}$ could be explained by the electron being composite, i.e. non point-like. This would be the correct explanation should the new weakly coupled renormalizable theory be expressed in terms of fields other than the electron. One could still insist on using the electron above the compositeness scale, but the theory would be strongly interacting and non-renormalizable in such a variable.
TwoBs
A very active area of research right now is in measuring the electron's electric dipole moment (EDM), which first caught my attention after this Science paper was published and one of the senior-most authors (John Doyle) told me that he wanted the title to be "How round is the electron?"
This followed a Nature paper with the title "Improved measurement of the shape of the electron".
Using the Standard Model, it has been predicted that the EDM is at most $10^{-38} ~e\cdot $cm, and many physicists have been aggressively trying to experimentally determine the EDM with better and better precision, knowing that if they find a lower bound larger than $10^{-38} ~e\cdot $cm it would constitute a violation of the Standard Model's prediction.
These are the results so far, and all they've been able to find is that the upper limit of the EDM is less than $1.1 \times 10^{-29} ~e\cdot $cm, which is very much compatible with the Standard Model prediction (you would need to find the lower limit to be larger than $10^{-38} ~e\cdot $cm to get a violation).
Here's a summary of the constant improvement in experimental lowering of the upper bound on the EDM in the last 2 decades:
Upper limit on EDM:
2002: $1.6 \times 10^{-27} ~e\cdot $cm (Physical Review Letters 88 (7): 071805)
2011: $1.1 \times 10^{-27} ~e\cdot $cm (Nature 473, pages 493–496)
2014: $8.7 \times 10^{-29} ~e\cdot $cm (Science, Vol. 343, Issue 6168, pp. 269-272)
Based on the above timeline it seems that it will take a long time for experiments to reach the Standard Model prediction of $10^{-38} ~e\cdot $cm.
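A quick bit of arithmetic on the figures quoted above shows how far experiments still have to go:

```python
import math

current_upper_limit = 1.1e-29   # e*cm, the bound quoted above
sm_prediction = 1e-38           # e*cm, the SM order-of-magnitude quoted above

gap = math.log10(current_upper_limit / sm_prediction)
print(f"about {gap:.1f} orders of magnitude between the bound and the prediction")
```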
Back to your question:
Regarding the "size", current experimental limitations prevent us from confidently saying much about the size of particles at the size scale of electrons, and even the size of the proton (which is expected to be much larger than the electron) is at the center of one of the biggest open problems in physics right now: The proton radius puzzle. I went into more detail about this here: Relative size of electrons and quarks .
Regarding the "shape", if the electron is not perfectly round, we at least know that the EDM is no larger than $1.1 \times 10^{-29} ~e\cdot $cm (provided that you trust this conclusion from the 2018 Nature paper).
A nice answer to the wrong question: the EDM is only vaguely related to the electron's size. Compare with the neutron EDM, where the neutron's intrinsic size (~ 1 fm) is quite firmly established, and where the CP-allowed electromagnetic moments relate to the neutron's size in a sensible order-of-magnitude way.
– rob ♦
The EDM part is more about the shape, which is also related to whether or not it's a "point" particle, but I clarified this at the end when I said that I talked more about "size" in my answer to the question "Relative size of electrons and quarks" and the EDM part is relevant to the "shape".
This question is about the energy of an electron. Since the energy stored in the electromagnetic field of an electron $$u_{EM}=\frac{\varepsilon}{2}|\mathbb E|^2+\frac{1}{2\mu}|\mathbb B|^2$$ must be a significant part of the energy of the electron, even the field must be regarded as a part of the electron, which is thus not a "point".
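For a rough sense of the scale this classical picture implies, one can compute the classical electron radius, the distance at which the integrated field energy of a point charge would account for the whole rest energy; a minimal sketch using standard constants:

```python
import math

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8           # speed of light, m/s

r_classical = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"classical electron radius ~ {r_classical:.3e} m")   # about 2.8e-15 m
# The experimental limits quoted elsewhere on this page sit many orders of
# magnitude below this scale, which is one reason the classical picture fails.
```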
But that was the classical model. In QED the electron is defined to be pointlike, and that works well, as far as it has been possible to calculate and measure. But astronomical calculations also give good results for pointlike stars and planets. Also, I think it is a disadvantage that a "single" electron is considered to be field-less. In reality, however, no electrons are totally single, so one might wonder how close two electrons have to be before they are not single.
Lehs
you are talking of this en.wikipedia.org/wiki/Classical_electron_radius . But the electron is a quantum mechanical entity and has to be treated with quantum mechanical mathematical tools.
the QED field creation operator has zero energy if there is no electron there, and one electron is generated by the electron creation operator if it is there, all at one point. That is what "point particles" means in the standard model.
I don't see how this answers the question. If you regard the electric field of an electron as "part" of it, the electron is infinitely extended, which is a patently useless notion of size.
– ACuriousMind ♦
@ACuriousMind: you cannot separate an electron from its electromagnetic field - there are no neutral electrons. The electron's electromagnetic field is part of what it is. In fact the electron's electromagnetic field is what it is.
– John Duffield
@dmckee : re electrons scatter as if they were point-like in experiments sensitive to sizes around an attometer. What experiments?
A Bonus-Malus Framework for Cyber Risk Insurance and Optimal Cybersecurity Provisioning
Qikun Xiang,Ariel Neufeld,Gareth W. Peters,Ido Nevat,Anwitaman Datta
The cyber risk insurance market is at a nascent stage of its development, even as the magnitude of cyber losses is significant and the rate of cyber risk events is increasing. Existing cyber risk insurance products as well as academic studies have been focusing on classifying cyber risk events and developing models of these events, but little attention has been paid to proposing insurance risk transfer strategies that incentivize mitigation of cyber loss through adjusting the premium of the risk transfer product. To address this important gap, we develop a Bonus-Malus model for cyber risk insurance. Specifically, we propose a mathematical model of cyber risk insurance and cybersecurity provisioning supported with an efficient numerical algorithm based on dynamic programming. Through a numerical experiment, we demonstrate how a properly designed cyber risk insurance contract with a Bonus-Malus system can resolve the issue of moral hazard and benefit the insurer.
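As a rough illustration of the general Bonus-Malus idea (the premium classes, transition rule and claim probability below are invented for the sketch and are not the paper's model), one can compute the long-run premium implied by a simple premium ladder:

```python
import numpy as np

# Invented premium ladder: factor applied to the base premium per class.
premiums = np.array([0.6, 0.8, 1.0, 1.3])
p_claim = 0.15                      # assumed annual cyber-claim probability
n = len(premiums)

# Transition rule (illustrative): a claim-free year moves the insured one
# class toward the discount end; a claim sends them to the top surcharge class.
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += 1 - p_claim
    P[i, n - 1] += p_claim

# Long-run distribution over classes via power iteration.
pi = np.full(n, 1.0 / n)
for _ in range(5000):
    pi = pi @ P
print("long-run average premium factor:", pi @ premiums)
```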
A New Approach to Risk Attribution and its Application in Credit Risk Analysis
Frei, Christoph
How can risk of a company be allocated to its divisions and attributed to risk factors? The Euler principle allows for an economically justified allocation of risk to different divisions. We introduce a method that generalizes the Euler principle to attribute risk to its driving factors when these factors affect losses in a nonlinear way. The method splits loss contributions over time and is straightforward to implement. We show in an example how this risk decomposition can be applied in the context of credit risk.
A Note of Caution on Quantifying Banks' Recapitalization Effects
Schmidt, Kirsten,Noth, Felix,Tonzer, Lena
Unconventional monetary policy measures like asset purchase programs aim to reduce certain securities' yield and alter financial institutions' investment behavior. These measures increase the institutions' market value of securities and add to their equity positions. We show that the extent of this recapitalization effect crucially depends on the securities' accounting and valuation methods, country-level regulation, and maturity structure. We argue that future research needs to consider these factors when quantifying banks' recapitalization effects and consequent changes in banks' lending decisions to the real sector.
An Empirical Investigation of the Volatility Spill-over and Asymmetries between Nifty Index and Rupee- Dollar Exchange Rate
Shahani, Rakesh,TOMAR, PRATEEK
The present study investigates the conditional volatility of returns of two major segments of the Indian financial markets, viz. the Re/$ exchange rate and the Nifty stock index, using GARCH (p,q) methodology. The period of the study is April 2007-March 2017, and the data are monthly closing prices of the two variables, namely the rupee-dollar exchange rate and the NSE Nifty. The analysis has been carried out on first-differenced (log-transformed) prices. To study the spill-over of volatility from one market to the other, squared residuals (after standardization) from the other market have been included as variance regressors. Further, to find out whether or not there was any asymmetry in the returns of the markets under study, a Threshold GARCH (T-GARCH) model has been employed. The results revealed the presence of conditional volatility of returns. The optimal model was identified as ARCH (1) when the Re/$ exchange rate was the dependent variable and GARCH (1,1) when the Nifty index was the dependent variable. Bi-directional (contemporaneous) volatility spill-over was clearly evident in the two models and was captured by the variance regressors, i.e. the standardized squared residuals. Further, the results showed no sign of any asymmetry in volatility, as reflected by the T-GARCH coefficients.
An alternative quality of life ranking on the basis of remittances
Dóra Gréta Petróczy
Remittances provide an essential connection between people working abroad and their home countries. This paper considers these transfers as a measure of preferences revealed by the workers, underlying a ranking of countries around the world. In particular, we use the World Bank bilateral remittances data of international salaries and interpersonal transfers between 2010 and 2015 to compare European countries. The suggested least squares method has favourable axiomatic properties. Our ranking reveals a crucial aspect of quality of life and may become an alternative to various composite indices.
An analysis of Uniswap markets
Guillermo Angeris,Hsien-Tang Kao,Rei Chiang,Charlie Noyes,Tarun Chitra
Uniswap -- and other constant product markets -- appear to work well in practice despite their simplicity. In this paper, we give a simple formal analysis of constant product markets and their generalizations, showing that, under some common conditions, these markets must closely track the reference market price. We also show that Uniswap satisfies many other desirable properties and numerically demonstrate, via a large-scale agent-based simulation, that Uniswap is stable under a wide range of market conditions.
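A minimal sketch of the constant-product mechanism analysed in the paper; the reserves, trade size and Uniswap-v2-style fee below are made-up illustrative values:

```python
def swap(x_reserve, y_reserve, dx, fee=0.003):
    """Sell dx of asset X into the pool; return Y received and new reserves."""
    k = x_reserve * y_reserve
    dx_after_fee = dx * (1 - fee)          # fee stays in the pool
    new_x = x_reserve + dx_after_fee
    new_y = k / new_x                      # invariant x*y = k on the fee-adjusted input
    dy = y_reserve - new_y
    return dy, new_x + (dx - dx_after_fee), new_y

x, y = 1_000.0, 1_000.0                    # made-up initial reserves
dy, x, y = swap(x, y, 10.0)
print(f"received {dy:.4f} Y; marginal price is now {x / y:.4f} X per Y")
```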
Asymptotic expansion for the Hartman-Watson distribution
Dan Pirjol
The Hartman-Watson distribution with density $f_r(t)$ is a probability distribution defined on $t \geq 0$ which appears in several problems of applied probability. The density of this distribution is expressed in terms of an integral $\theta(r,t)$ which is difficult to evaluate numerically for small $t\to 0$. Using saddle point methods, we obtain the first two terms of the $t\to 0$ expansion of $\theta(\rho/t,t)$ at fixed $\rho >0$. An error bound is obtained by numerical estimates of the integrand, which is furthermore uniform in $\rho$. As an application we obtain the leading asymptotics of the density of the time average of the geometric Brownian motion as $t\to 0$. This has the form $\mathbb{P}(\frac{1}{t} \int_0^t e^{2(B_s+\mu s)} ds \in da) = (2\pi t)^{-1/2} g(a,\mu) e^{-\frac{1}{t} J(a)} (1 + O(t))$, with an exponent $J(a)$ which reproduces the known result obtained previously using Large Deviations theory.
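A simple Monte Carlo check of the small-t concentration of the time average of geometric Brownian motion (my own sketch, with arbitrary discretisation choices, not the paper's asymptotic machinery):

```python
import numpy as np

rng = np.random.default_rng(0)

def time_average_gbm(t, mu, n_steps=1000, n_paths=5000):
    """Monte Carlo draws of A_t = (1/t) * int_0^t exp(2(B_s + mu*s)) ds."""
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.cumsum(dB, axis=1)
    s = np.arange(1, n_steps + 1) * dt
    integrand = np.exp(2.0 * (B + mu * s))
    return integrand.mean(axis=1)          # Riemann-sum approximation of (1/t)*integral

for t in (0.5, 0.1, 0.02):
    A = time_average_gbm(t, mu=0.0)
    print(t, A.mean(), A.std())            # the distribution concentrates near 1 as t -> 0
```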
Automated and Distributed Statistical Analysis of Economic Agent-Based Models
Andrea Vandin,Daniele Giachini,Francesco Lamperti,Francesca Chiaromonte
We propose a novel approach to the statistical analysis of simulation models and, especially, agent-based models (ABMs). Our main goal is to provide a fully automated and model-independent tool-kit to inspect simulations and perform counterfactual analysis. Our approach: (i) is easy-to-use by the modeller, (ii) improves reproducibility of results, (iii) optimizes running time given the modeller's machine, (iv) automatically chooses the number of required simulations and simulation steps to reach user-specified statistical confidence, and (v) automatically performs a variety of statistical tests. In particular, our framework is designed to distinguish the transient dynamics of the model from its steady-state behaviour (if any), estimate properties of the model in both "phases", and provide indications on the ergodic (or non-ergodic) nature of the simulated processes -- which, in turns allows one to gauge the reliability of a steady-state analysis. Estimates are equipped with statistical guarantees, allowing for robust comparisons across computational experiments. To demonstrate the effectiveness of our approach, we apply it to two models from the literature: a large scale macro-financial ABM and a small scale prediction market model. Compared to prior analyses of these models, we obtain new insights and we are able to identify and fix some erroneous conclusions.
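A toy version of the "keep simulating until the estimate is statistically tight" idea (the stand-in model, tolerance and batch size are placeholders, not the authors' tool-kit):

```python
import numpy as np

rng = np.random.default_rng(1)

def one_model_run():
    """Stand-in for a single simulation run returning a steady-state statistic."""
    return rng.normal(loc=10.0, scale=2.0)

def estimate_until_confident(tol=0.05, batch=50, max_runs=100_000):
    samples = []
    while len(samples) < max_runs:
        samples.extend(one_model_run() for _ in range(batch))
        x = np.asarray(samples)
        half_width = 1.96 * x.std(ddof=1) / np.sqrt(len(x))   # 95% CI half-width
        if half_width < tol:
            break
    return x.mean(), half_width, len(x)

mean, hw, n = estimate_until_confident()
print(f"mean = {mean:.3f} +/- {hw:.3f} after {n} runs")
```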
Canary in the Coal Mine: COVID-19 and Soybean Futures Market Liquidity
Peng, Kun,Hu, Zhepeng,Robe, Michel A.,Adjemian, Michael
We document the impact of the early stages of the COVID-19 pandemic on liquidity in U.S. agricultural markets. Soybean futures liquidity is affected the earliest, the most, and the longest. Soybean depth drops by half for outright futures and by over nine tenths for calendar spreads, and soybean bid-ask spreads increase significantly, starting on the night of February 12 to 13, 2020â€"a full two weeks before (i) liquidity evaporates in U.S. bond and equity markets and (ii) soybean prices start to fall sharply. The timing of the soybean liquidity drop coincides with overnight news of bleak COVID-19 developments in China (a key source of world demand for oilseeds). Following a series of emergency interventions by the U.S. Federal Reserve, liquidity recovers in the outright marketâ€"but depth remains abnormally low for calendar spreads. These patterns cannot be explained by other factors, such as changes in soybean futures trading volume or price volatility: the COVID-19 shock was novel, and it destroyed soybean-market liquidity in a way that foretold financial-market developments two weeks later. In contrast to soybeans, we find little evidence of a drop in corn or wheat futures liquidity until U.S. financial and crude oil markets sink in early March. Soybeans were truly the canary in the coal mine.
Combination of window-sliding and prediction range method based on LSTM model for predicting cryptocurrency
Yifan Yao,Lina Wang
The present study aims to establish a model of the cryptocurrency price trend based on financial theory, using the LSTM model with multiple combinations of window length and prediction horizon; the random walk model is also applied with different parameter settings.
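A sketch of the window-sliding data preparation step (the window length and prediction horizon here are arbitrary placeholders, not the combinations tested in the study):

```python
import numpy as np

def make_windows(prices, window=30, horizon=5):
    """Turn a price series into (window, target-after-horizon) training pairs."""
    X, y = [], []
    for start in range(len(prices) - window - horizon + 1):
        X.append(prices[start:start + window])
        y.append(prices[start + window + horizon - 1])
    return np.asarray(X)[..., None], np.asarray(y)   # (n, window, 1) for an LSTM

prices = 100.0 + np.cumsum(np.random.default_rng(2).normal(size=500))
X, y = make_windows(prices)
print(X.shape, y.shape)   # (466, 30, 1) (466,)
```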
Andrews, Spencer,Colacito, Ric,Croce, Mariano (Max) Massimiliano,Gavazzoni, Federico
The slope carry consists of taking a long (short) position in the long-term bonds of countries with steeper (flatter) yield curves. The traditional carry is a long (short) position in countries with high (low) short-term rates. We document that: (i) the slope carry risk premium is negative (positive) in the pre (post) 2008 period, whereas it is concealed over longer samples; (ii) the traditional carry risk premium is lower post-2008; and (iii) there has been a sharp decline in expected global growth and global inflation post-2008. We connect these empirical findings through an equilibrium model in which investors price news shocks, financial markets are complete, and countries feature heterogeneous exposure to news shocks about both global output expected growth and global inflation.
Do Credit Rating Agencies Care About Our International Tax Planning Strategy When Assigning Credit Ratings?
Ma, Zhiming,Stice, Derrald,Wang, Danye
International tax planning strategies, by their very nature, increase firms' free cash flows, which could improve companies' creditworthiness. However, these strategies also bring information and agency problems, which may reduce their creditworthiness. To understand which of these effects dominates, this study examines the effect of international tax planning on credit ratings. We find that credit analysts incorporate information related to international tax planning when analyzing a firm's credit risk and that high international tax planning is associated with less favorable credit ratings. We also find that this effect is mitigated by a higher conflict of interest for the bond rating agencies. Furthermore, we find that the effect of international tax planning operates through the channels of future cash flow effects, agency costs, and information risk. Our results are robust to a difference-in-differences research design using The American Jobs Creation Act of 2004 as an exogenous shock to the benefits from international tax planning, and we document that the effect of international tax planning is different from and incremental to overall tax avoidance.
Do the propensity and drivers of academics' engagement in research collaboration with industry vary over time?
Giovanni Abramo,Francesca Apponi,Ciriaco Andrea D'Angelo
This study is about public-private research collaboration. In particular, we want to measure how the propensity of academics to collaborate with their colleagues from private firms varies over time and whether the typical profile of such academics change. Furthermore, we investigate the change of the weights of main drivers underlying the academics' propensity to collaborate with industry. In order to achieve such goals, we apply an inferential model on a dataset of professors working in Italian universities in two subsequent periods, 2010-2013 and 2014-2017. Results can be useful for supporting the definition of policies aimed at fostering public-private research collaborations, and should be taken into account when assessing their effectiveness afterwards.
Does Bank Efficiency Affect the Bank Lending Channel in China?
Fungáčová, Zuzana,Kerola, Eeva,Weill, Laurent
This work examines the impact of bank efficiency on the bank lending channel in China. Using a sample of 175 Chinese banks over the period 2006â€"2017, we investigate how the reaction of the loan supply to monetary policy actions depends on a bank's efficiency. While bank efficiency does not exert an impact on the effectiveness of monetary policy transmission overall, it does favor the transmission of monetary policy for banks with low loan-to-deposit ratios. In addition, the expansion of shadow banking activities has been associated with a positive impact of bank efficiency on monetary policy transmission. These results suggest that bank efficiency may influence the bank lending channel in certain cases.
Dynamic Structural Impact of the COVID-19 Outbreak on the Stock Market and the Exchange Rate: A Cross-country Analysis Among BRICS Nations
Rupam Bhattacharyya,Sheo Rama,Atul Kumar,Indrajit Banerjee
COVID-19 has impacted the economy of almost every country in the world. Of particular interest are the responses of the economic indicators of developing nations (such as BRICS) to the COVID-19 shock. As an extension to our earlier work on the dynamic associations of pandemic growth, exchange rate, and stock market indices in the context of India, we look at the same question with respect to the BRICS nations. We use structural variable autoregression (SVAR) to identify the dynamic underlying associations across the normalized growth measurements of the COVID-19 cumulative case, recovery, and death counts, and those of the exchange rate, and stock market indices, using data over 203 days (March 12 - September 30, 2020). Using impulse response analyses, the COVID-19 shock to the growth of exchange rate was seen to persist for around 10+ days, and that for stock exchange was seen to be around 15 days. The models capture the contemporaneous nature of these shocks and the subsequent responses, potentially guiding to inform policy decisions at a national level. Further, causal inference-based analyses would allow us to infer relationships that are stronger than mere associations.
Empowering Patients Using Smart Mobile Health Platforms: Evidence From A Randomized Field Experiment
Anindya Ghose,Xitong Guo,Beibei Li,Yuanyuan Dang
With today's technological advancements, mobile phones and wearable devices have become extensions of an increasingly diffused and smart digital infrastructure. In this paper, we examine mobile health (mHealth) platforms and their health and economic impacts on the outcomes of chronic disease patients. To do so, we partnered with a major mHealth firm that provides one of the largest mobile health app platforms in Asia specializing in diabetes care. We designed and implemented a randomized field experiment based on detailed patient health activities (e.g., steps, exercises, sleep, food intake) and blood glucose values from 1,070 diabetes patients over several months. Our main findings show that the adoption of the mHealth app leads to an improvement in both short term metrics (such as reduction in patients' blood glucose and glycated hemoglobin levels) and longer-term metrics (such as hospital visits, and medical expenses). Patients who adopted the mHealth app undertook higher levels of exercise, consumed healthier food with lower calories, walked more steps and slept for longer times on a daily basis. A comparison of mobile vs. PC enabled version of the same app demonstrates that the mobile has a stronger effect than PCs in helping patients make behavioral modifications with respect to diet, exercise and life style, which ultimately leads to an improvement in their healthcare outcomes. We also compared outcomes when the platform facilitates personalized health reminders to patients vs. generic reminders. We found that personalized mobile message with patient-specific guidance can have an inadvertent effect on patient app engagement, life style changes, and health improvement. Overall, our findings indicate the potential value of mHealth technologies, as well as the importance of mHealth platform design in achieving better healthcare outcomes.
Emu Deepening and Sovereign Debt Spreads: Using Political Space to Achieve Policy Space
Kataryniuk, Iván,Mora-Bajén, Víctor,Pérez, Javier J.
Sovereign spreads within the European Monetary Union (EMU) arise because markets price-in heterogeneous country fundamentals, but also re-denomination risks, given the incomplete nature of EMU. This creates a permanent risk of financial fragmentation within the area. In this paper we claim that political decisions that signal commitment to safeguarding the adequate functioning of the euro area influence investors' valuations. We focus on decisions conducive to enhancing the institutional framework of the euro area ("EMU deepening"). To test our hypothesis we build a comprehensive narrative of events (decisions) from all documents and press releases issued by the Council of the EU and the European Council during the period January 2010 to March 2020. We categorize the events as dealing with: (i) economic and financial integration; (ii) fiscal policy; (iii) bailouts. With our extremely rich narrative at hand, we conduct event-study regressions with daily data to assess the impact of events on sovereign bond yields and find that indeed decisions on financial integration drive down periphery spreads. Moreover, while decisions on key subjects present a robust effect, this is not the case with prior discussions on those subjects at the Council level. Finally, we show that the impacts arise from reductions in peripheral sovereign spreads, and not by the opposite movement in core countries. We conclude that EU policy-makers have at their disposal significant "political space" to reduce fragmentation and gain "policy space".
FRM Financial Risk Meter for Emerging Markets
Souhir Ben Amor,Michael Althof,Wolfgang Karl Härdle
The fast-growing Emerging Market (EM) economies and their improved transparency and liquidity have attracted international investors. However, external price shocks can result in a higher level of volatility as well as domestic policy instability. Therefore, an efficient risk measure and hedging strategies are needed to help investors protect their investments against this risk. In this paper, a daily systemic risk measure, called FRM (Financial Risk Meter), is proposed. The FRM-EM is applied to capture systemic risk behavior embedded in the returns of the 25 largest EM FIs, covering the BRIMST (Brazil, Russia, India, Mexico, South Africa, and Turkey), and thereby reflects the financial linkages between these economies. Concerning the Macro factors, in addition to the Adrian and Brunnermeier (2016) Macro, we include the EM sovereign yield spread over respective US Treasuries and the above-mentioned countries' currencies. The results indicate that the FRM of EM FIs reached its maximum during the US financial crisis, followed by the COVID-19 crisis, and that the Macro factors explain the BRIMST FIs with various degrees of sensitivity. We then study the relationship between those factors and the tail event network behavior to build our policy recommendations to help investors choose the suitable market for investment and tail-event optimized portfolios. For that purpose, an overlapping region between portfolio optimization strategies and FRM network centrality is developed. We propose a robust and well-diversified tail-event and cluster risk-sensitive portfolio allocation model and compare it to more classical approaches.
Gendered impact of COVID-19 pandemic on research production: a cross-country analysis
Giovanni Abramo,Ciriaco Andrea D'Angelo,Ida Mele
The massive shock of the COVID-19 pandemic is already showing its negative effects on economies around the world, unprecedented in recent history. COVID-19 infections and containment measures have caused a general slowdown in research and new knowledge production. Because of the link between R&D spending and economic growth, it is to be expected that a slowdown in research activities will in turn slow the global recovery from the pandemic. Many recent studies also claim an uneven impact on scientific production across gender. In this paper, we investigate the phenomenon across countries, analysing preprint depositions. Differently from other works, which compare the number of preprint depositions before and after the pandemic outbreak, we analyse deposition trends across geographical areas and contrast after-pandemic depositions with expected ones. Contrary to common belief and initial evidence, in a few countries female scientists increased their scientific output while that of their male colleagues plunged.
Group Quantization of Quadratic Hamiltonians in Finance
Santiago Garcia
The Group Quantization formalism is a scheme for constructing a functional space that is an irreducible infinite dimensional representation of the Lie algebra belonging to a dynamical symmetry group. We apply this formalism to the construction of functional space and operators for quadratic potentials -- gaussian pricing kernels in finance. We describe the Black-Scholes theory, the Ho-Lee interest rate model and the Euclidean repulsive and attractive oscillators. The symmetry group used in this work has the structure of a principal bundle with base (dynamical) group a semi-direct extension of the Heisenberg-Weyl group by SL(2,R), and structure group (fiber) the positive real line.
Implied Equity Premium and Market Beta
Chow, Victor,Gu, Jiahao,Wang, Zhan
Martin (2017) shows that the arbitrage-free measure of return-volatility mimicked by a portfolio of options contracts is a close approximation of the ex-ante equity risk premium. We argue, nevertheless, that left-tail volatility-asymmetry biases his (symmetric) SVIX approach downward. This paper provides a simple procedure to correct this bias by adding a risk-neutral measure of volatility-asymmetry (AVIX2) to the SVIX2. The option-implied market beta of individual stocks is a weighted sum of that of SVIX and AVIX. Empirically, our findings suggest these implied betas possess significant predictability of return and hedging ability against bear/crashing markets.
Information Sharing Among Strategic Traders: The Role of Disagreement
Balasubramaniam, Swaminathan
In a duopoly model of informed speculation, I show that competing traders share information when they disagree enough. Traders can lose competitive rents by sharing private information, but with sufficient disagreement, they can engage in profitable belief arbitrage by trading against each other's signal. Traders, however, would gain by over-reporting their signals so that competitors make large opposing trades. When information is verifiable, truthful disclosure emerges due to an "unraveling" argument. Mediators (say, sell-side analysts or brokers) could facilitate partial information sharing by aggregating and distributing information in an incentive-compatible manner. Disagreement makes the market more liquid, but information sharing undermines the liquidity benefits.
Moving From 'Developmental' to 'Anti-Developmental' Local Financial Models in East Asia: Abandoning a Winning Formula
Bateman, Milford
One of the decisive but often overlooked factors in the creation of the East Asian 'economic miracle' was the part played by a variety of heterodox sub-national state, community and cooperatively owned and controlled financial systems, institutions and lending models. Beginning with Japan after 1945, local financial systems were (re)constructed across East Asia in a way that very efficiently operationalised key development policy goals through targeted local enterprise development. Yet in spite of marked success with this 'developmental' local financial model, from the 1980s onwards the international development community, led by the US government and the World Bank, began an effort to discredit and replace it with a new commercially-oriented private sector- led local financial model promoting mass individual entrepreneurship with the help of a for-profit microcredit sector. This article begins by briefly summarising why such 'developmental' local financial models were important to East Asia's economic miracle before I turn to examining why, how and what happened when after 1980 the international development community quietly set out to undermine and destroy them. I conclude from this analysis that the international development community's desire to begin to impose its own neoliberal ide- ology and narrow elite-driven enrichment goals in East Asia far outweighed the ongoing development successes registered by the 'developmental' local financial models that emerged after 1945.
POTENCIAIS APLICAÇÕES DE BLOCKCHAIN NO MERCADO DE CAPITAIS (Potential Applications of Blockchain in Capital Markets)
Schechtman, David
Portuguese abstract (translated): ABSTRACT: With the advancement of blockchain and smart contracts, as well as constant announcements of large-scale projects using these tools, it seems increasingly likely that several aspects of society and the economy will adopt these technologies and consequently be structurally altered. The securities market will also possibly adopt smart contracts and blockchain. After introducing aspects of blockchain and smart contracts in a manner accessible to legal professionals, this article seeks to analyse how these technologies may come to be adopted and benefit the capital market. English abstract: The constant advancement of blockchain and smart contracts, as well as regular announcements of large scale projects using these tools, makes it even more probable that several aspects of our society and economy will adopt these technologies and therefore change structurally. The capital market might also adopt smart contracts and blockchain. This paper aims to, after introducing in an accessible manner for legal professionals the concepts of blockchain and smart contracts, analyze how these technologies could be adopted to benefit the capital market.
Research Methods of Assessing Global Value Chains
Sourish Dutta
My study follows two phases of analysis: a first, preliminary phase developed through the widest range of available and applicable methodologies, followed by a second, in-depth phase that assesses and discusses the identified challenges, opportunities, and policy options.
Some results on the risk capital allocation rule induced by the Conditional Tail Expectation risk measure
Nawaf Mohammed,Edward Furman,Jianxi Su
Risk capital allocations (RCAs) are an important tool in quantitative risk management, where they are utilized to, e.g., gauge the profitability of distinct business units, determine the price of a new product, and conduct marginal economic capital analysis. Nevertheless, the notion of RCA has been living in the shadow of another, closely related notion, that of risk measure (RM), in the sense that the latter notion often shapes the fashion in which the former is implemented. In fact, as the majority of the RCAs known nowadays are induced by RMs, the popularity of the two is apparently very much correlated. As a result, it is the RCA induced by the Conditional Tail Expectation (CTE) RM that has arguably prevailed in scholarly literature and applications. Admittedly, the CTE RM is a sound mathematical object and an important regulatory RM, but its appropriateness is controversial in, e.g., profitability analysis and pricing. In this paper, we address the question as to whether or not the RCA induced by the CTE RM may concur with alternatives that arise from the context of profit maximization. More specifically, we provide an exhaustive description of all those probabilistic model settings in which the mathematical and regulatory CTE RM may also reflect the risk perception of a profit-maximizing insurer.
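For readers unfamiliar with the CTE-induced allocation, a small simulation sketch (the loss model and parameters are invented for illustration): the CTE is $\mathrm{CTE}_\alpha(S) = \mathbb{E}[S \mid S > \mathrm{VaR}_\alpha(S)]$, and the induced allocation to line $i$ is $\mathbb{E}[X_i \mid S > \mathrm{VaR}_\alpha(S)]$, which sums to the CTE across lines:

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha = 1_000_000, 0.99

# Two hypothetical, positively correlated lognormal loss lines.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
X = np.exp(z)                      # columns = business lines
S = X.sum(axis=1)                  # aggregate loss

var_alpha = np.quantile(S, alpha)  # VaR of the aggregate
tail = S > var_alpha
cte = S[tail].mean()               # CTE of the aggregate
allocations = X[tail].mean(axis=0) # CTE-induced (Euler) allocation per line

print(f"CTE_{alpha}(S) = {cte:.3f}")
print("allocations:", allocations, "sum:", allocations.sum())  # sum equals the CTE
```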
Strength and Weakness in Numbers? Unpacking the Role of Prevalence in the Diffusion of Reverse Mergers
Naumovska, Ivana,Zajac, Edward J.,Lee, Peggy M.
A common prediction in research on practice diffusion is a "strength in numbers" effect (i.e., that a growing number of past adopters will increase the number of future adopters). We advance and test a theoretical perspective to explain when and how practice prevalence may also generate a "weakness in numbers" effect. Specifically, in seeking to explain the diffusion of reverse mergers (RMs) â€" a controversial practice that allows a private firm to go public by merging with a publicly listed "shell company" â€" we suggest that prevalence affected their diffusion in a complex way, based on two divergent social influence pathways, creating: (1) a direct and positive effect of practice prevalence on potential adopters, who view prevalence as evidence of the practice's value, and (2) an indirect and negative effect, mediated through third-party evaluators (i.e., investors, and the media) who view prevalence as a cause for concern and skepticism. We also highlight the utility of this theoretical framework by analyzing how a decline in the status of past adopters exerts a negative effect on diffusion through both social influence pathways. Employing structural equation modeling techniques, we find support for the hypothesized relationships and we discuss the implications of the study for future research on practice diffusion.
The "Fake News" Effect: Experimentally Identifying Motivated Reasoning Using Trust in News
Michael Thaler
Motivated reasoning posits that people distort how they process new information in the direction of beliefs they find more attractive. This paper creates a novel experimental design to identify motivated reasoning from Bayesian updating when people enter into the experiment with endogenously different beliefs. It analyzes how subjects assess the veracity of information sources that tell them the median of their belief distribution is too high or too low. A Bayesian would infer nothing about the source veracity from this message, but a motivated reasoner would believe the source were more truthful when it reports the direction that he is more motivated to believe. Experimental results show novel evidence for politically-motivated reasoning about immigration, income mobility, crime, racial discrimination, gender, climate change, gun laws, and the performance of other subjects. Motivated reasoning from messages on these topics leads people's beliefs to become more polarized and less accurate, even though the messages are uninformative.
The Role of a Nation's Culture in the Country's Governance: Stochastic Frontier Analysis
Vladimír Holý,Tomáš Evan
What role does culture play in determining institutions in a country? This paper argues that the establishment of institutions is a process originating predominantly in a nation's culture and tries to discern the role of a cultural background in the governance of countries. We use the six Hofstede's Cultural Dimensions and the six Worldwide Governance Indicators to test the strength of the relationship on 94 countries between 1996 and 2019. We find that the strongest cultural characteristics are Power Distance with negative effect on governance and Long-Term Orientation with positive effect. We also determine how well countries transform their cultural characteristics into institutions using stochastic frontier analysis.
The effects of citation-based research evaluation schemes on self-citation behavior
Giovanni Abramo,Ciriaco Andrea D'Angelo,Leonardo Grilli
We investigate the changes in the self-citation behavior of Italian professors following the introduction of a citation-based incentive scheme, for national accreditation to academic appointments. Previous contributions on self-citation behavior have either focused on small samples or relied on simple models, not controlling for all confounding factors. The present work adopts a complex statistics model implemented on bibliometric individual data for over 15,000 Italian professors. Controlling for a number of covariates (number of citable papers published by the author; presence of international authors; number of co-authors; degree of the professor's specialization), the average increase in self-citation rates following introduction of the ASN is of 9.5%. The increase is common to all disciplines and academic ranks, albeit with diverse magnitude. Moreover, the increase is sensitive to the relative incentive, depending on the status of the scholar with respect to the scientific accreditation. A further analysis shows that there is much heterogeneity in the individual patterns of self-citing behavior, albeit with very few outliers.
The impact of social influence in Australian real-estate: market forecasting with a spatial agent-based model
Benjamin Patrick Evans, Kirill Glavatskiy, Michael S. Harré, Mikhail Prokopenko
Housing markets are inherently spatial, yet many existing models fail to capture this spatial dimension. Here we introduce a new graph-based approach for incorporating a spatial component in a large-scale urban housing agent-based model (ABM). The model explicitly captures several social and economic factors that influence the agents' decision-making behaviour (such as fear of missing out, their trend following aptitude, and the strength of their submarket outreach), and interprets these factors in spatial terms. The proposed model is calibrated and validated with the housing market data for the Greater Sydney region. The ABM simulation results not only include predictions for the overall market, but also produce area-specific forecasting at the level of local government areas within Sydney as arising from individual buy and sell decisions. In addition, the simulation results elucidate agent preferences in submarkets, highlighting differences in agent behaviour, for example, between first-time home buyers and investors, and between both local and overseas investors.
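As a toy illustration of how a graph can carry the spatial component of such a model (this is not the calibrated Sydney ABM), the sketch below lets each area's expected price growth mix its own recent trend with the trends of neighbouring submarkets. Area names, weights, and parameters are made up.

```python
# Toy sketch: graph-based spillover of price trends between submarkets.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("Inner West", "CBD"), ("CBD", "Eastern Suburbs"),
                  ("Inner West", "Parramatta")])
recent_growth = {"CBD": 0.04, "Inner West": 0.03,
                 "Eastern Suburbs": 0.05, "Parramatta": 0.02}

trend_following = 0.7   # weight on the local trend (illustrative)
outreach = 0.3          # weight spread over neighbouring submarkets (illustrative)

def expected_growth(area):
    neighbours = list(G.neighbors(area))
    spillover = sum(recent_growth[n] for n in neighbours) / len(neighbours)
    return trend_following * recent_growth[area] + outreach * spillover

for area in G.nodes:
    print(area, round(expected_growth(area), 4))
```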
Transaction Cost Analytics for Corporate Bonds
Xin Guo, Charles-Albert Lehalle, Renyuan Xu
Electronic platforms have become increasingly popular for the execution of large orders at asset managers' dealing desks. Properly monitoring each individual trade by the appropriate Transaction Cost Analysis (TCA) is the first key step towards this electronic automation. One of the challenges in TCA is to build a benchmark for the expected transaction cost and to characterize the price impact of each individual trade, with given bond characteristics and market conditions.
Taking the viewpoint of an investor, we provide an analytical methodology to conduct TCA in corporate bond trading. With limited liquidity of corporate bonds and patchy information available on existing trades, we manage to build a statistical model as a benchmark for effective cost and a non-parametric model for the price impact kernel. Our TCA analysis is conducted based on the TRACE Enhanced dataset and consists of four steps in two different time scales. The first step is to identify the initiator of a transaction and the riskless principal trades (RPTs). With the estimated initiator of each trade, the second step is to estimate the bid-ask spread and the mid-price movements. The third step is to estimate the expected average cost on a weekly basis via regularized regression analysis. The final step is to investigate each trade for the amplitude of its price impact and the price decay after the transaction for liquid corporate bonds. Here we apply a transient impact model (TIM) to estimate the price impact kernel via a non-parametric method.
Our benchmark model allows for identifying and improving best practices and for enhancing objective and quantitative counter-party selections. A key discovery of our study is the need to account for a price impact asymmetry between customer-buy orders and customer-sell orders.
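The transient impact model mentioned above is a propagator-style model. A minimal numerical sketch of the idea (on simulated data, not TRACE) fits a discretized kernel G(lag) by least squares, regressing the price change on past signed trade volumes.

```python
# Minimal sketch of fitting a transient-impact (propagator) kernel G(lag):
# the price change at time t is modelled as the sum of past signed volumes
# weighted by G at the corresponding lag. Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
T, max_lag = 2_000, 20
signed_volume = rng.choice([-1.0, 1.0], size=T) * rng.exponential(1.0, size=T)
true_kernel = 0.5 / np.sqrt(np.arange(1, max_lag + 1))   # decaying impact
price_change = np.array([
    sum(true_kernel[k] * signed_volume[t - 1 - k] for k in range(max_lag))
    for t in range(max_lag, T)
]) + rng.normal(scale=0.1, size=T - max_lag)

# Design matrix: column k holds the signed volume k+1 steps in the past.
X = np.column_stack([signed_volume[max_lag - 1 - k:T - 1 - k] for k in range(max_lag)])
kernel_hat, *_ = np.linalg.lstsq(X, price_change, rcond=None)
print(np.round(kernel_hat[:5], 3))  # should be close to true_kernel[:5]
```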
What We Do In The Shadows: Chinese Shadow Credit Growth and Monetary Policy
Shieh, Harrison
This paper evaluates the effect of Chinese monetary policy shocks on credit creation through the shadow banking sector in mainland China. I identify monetary policy shocks by constructing a measure of monetary policy surprises based on changes to the 1-Year Interest Rate Swaps on the 7-Day Repo Rate on monetary policy announcement dates. A two-stage local projection was then estimated, using the surprise measure as an instrument. The results yield two key findings: (1) shadow credit expands in response to contractionary monetary policy, and (2) I provide additional evidence of the transmission of monetary policy through the interest rate channel.
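A stripped-down version of the estimation strategy (my sketch on simulated placeholder series, not the paper's code) runs, for one horizon, a first-stage regression of the policy-rate change on the announcement-day surprise and then regresses the future outcome on the fitted values.

```python
# Sketch of a two-stage (instrumented) local projection for a single horizon h.
import numpy as np
import statsmodels.api as sm

def local_projection_iv(y, policy_change, surprise, h):
    """Horizon-h response of y to a policy change instrumented by the surprise."""
    y_lead = y[h:]
    d, z = policy_change[: len(y_lead)], surprise[: len(y_lead)]
    first = sm.OLS(d, sm.add_constant(z)).fit()                          # first stage
    second = sm.OLS(y_lead, sm.add_constant(first.fittedvalues)).fit()   # second stage
    return second.params[1]   # note: proper IV standard errors need a correction

# Simulated placeholder data: the outcome responds to the policy change with a lag.
rng = np.random.default_rng(3)
n = 240
surprise = rng.normal(size=n)
policy_change = 0.8 * surprise + rng.normal(scale=0.5, size=n)
outcome = 0.4 * np.r_[np.zeros(6), policy_change[:-6]] + rng.normal(size=n)
print(round(local_projection_iv(outcome, policy_change, surprise, h=6), 2))
```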
When Does it Pay Off to Learn a New Skill? Revealing the Complementary Benefit of Cross-Skilling
Fabian Stephany
This work examines the economic benefits of learning a new skill from a different domain: cross-skilling. To assess this, a network of skills from the job profiles of 14,790 online freelancers is constructed. Based on this skill network, relationships between 3,480 different skills are revealed and marginal effects of learning a new skill can be calculated via workers' wages. The results indicate that learning in-demand skills, such as popular programming languages, is beneficial in general, and that diverse skill sets tend to be profitable, too. However, the economic benefit of a new skill is individual, as it complements the existing skill bundle of each worker. As technological and social transformation is reshuffling jobs' task profiles at a fast pace, the findings of this study help to clarify skill sets required for designing individual re-skilling pathways. This can help to increase employability and reduce labour market shortages.
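The pipeline described above could be sketched in two steps: building a skill co-occurrence graph from worker profiles, then comparing wages with and without a target skill. The profiles, skills, and wages below are made-up placeholders, and the raw wage gap is a crude stand-in for the study's controlled estimates.

```python
# Sketch: skill co-occurrence network plus a naive wage comparison for one skill.
import itertools
import networkx as nx
import numpy as np

profiles = [
    {"skills": {"python", "sql", "machine-learning"}, "wage": 55},
    {"skills": {"python", "data-visualisation"}, "wage": 42},
    {"skills": {"copywriting", "seo"}, "wage": 30},
    {"skills": {"python", "sql"}, "wage": 48},
    {"skills": {"copywriting", "data-visualisation"}, "wage": 33},
]

# Skill network: edge weight counts how often two skills co-occur in a profile.
G = nx.Graph()
for p in profiles:
    for a, b in itertools.combinations(sorted(p["skills"]), 2):
        weight = G.edges[a, b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)

# Crude "complementary benefit" of one skill: wage gap between profiles that do
# and do not contain it (the study estimates this with proper controls).
skill = "sql"
with_s = np.mean([p["wage"] for p in profiles if skill in p["skills"]])
without_s = np.mean([p["wage"] for p in profiles if skill not in p["skills"]])
print(G.number_of_edges(), round(with_s - without_s, 1))
```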
'Size & Fit' of Piecemeal Liquidation Processes: Aggravating Circumstances and Side Effects
Cocozza, Rosa; Masera, Rainer
This paper investigates the actual impact of new accounting and regulatory requirements on banks' provisioning policies and earnings management in the context of the capital adequacy of Euro Area (EA) credit institutions. This paper also examines whether loan-loss provisions (LLPs) signal management's expectations concerning future bank profits to investors. Evidence drawn from the 2011-2019 period indicates that earnings management is an important determinant of LLPs for EA intermediaries. In recent years, small-bank managers have been much more concerned with their credit portfolio quality and have not used LLPs for discretionary purposes apart from income smoothing. The paper gives evidence of a lack of flexibility in the balance sheets of smaller banks and provides some policy refinement to avoid disorderly piecemeal liquidation.
Central European Institute for Cosmology and Fundamental Physics
Cosmo: Wednesday, 25/01/2023
Thomas Sotiriou (University of Nottingham, UK)
Strong Gravity and Fundamental Physics
Strings: Monday, 23/01/2023
Ivano Basile (LMU, Germany)
Infinite distances in multicritical CFTs and higher-spin holography
David Kubiznak (Charles University, Prague, Czechia)
Remarkable symmetries of rotating black holes
Strings: Monday, 12/12/2022, 14:00, room 226
Dieter van den Bleeken (Bogazici University, Turkey)
Bosonic supersymmetry
Cosmo: Thursday, 08/12/2022, 16:00, SOLID building lecture hall
Marco Chianese (Università degli Studi di Napoli, Italy)
Novel probes of sub-GeV dark matter
Cosmo: Thursday, 01/12/2022, 16:00, lecture hall
Luc Blanchet (Université Paris-Saclay, France)
Dark matter at galactic scales & MOND
Pranjal Nayak (CERN, Switzerland)
Random Matrices in the Quantum Mechanical Description of Black Holes
Andrew Miller (Université Catholique de Louvain, Belgium)
Probes of dark matter with gravitational-wave detectors
Strings: Monday, 07/11/2022, 14:00, room 226
Blagoje Oblak (l'Ecole Polytechnique, France)
Flat JT Gravity and the Schwarzian of BMS2
Giorgio Torrieri (Universidade Estadual de Campinas, Brazil)
The equivalence principle and inertial-gravitational decoherence
Strings: Thursday, 03/11/2022, 15:00, room 226
Jordan Francois (U. Mons, Belgium)
Presymplectic structure of gauge theories: boundaries, edge modes, variational connections and all that…
Eugenia Boffo (Charles University, Czech republic)
Spin field for the N=1 particle in the worldline
Cosmo: Thursday, 27/10/2022
Enrico Barausse (SISSA, Trieste, Italy)
Gravitational wave generation in effective field theories of dark energy
Swapnamay Mondal (Dublin Institute of Advanced Studies, Ireland)
Supersymmetric black holes and TTbar deformation
Katy Clough (Queen Mary University, London, UK)
Black holes in fundamental field environments - the impact of initial data
Chrysoula Markou (UMons, Belgium)
Advances in spin-2 physics
Rafael Porto (DESY, Hamburg, Germany)
Precision Gravity: From the LHC to LISA and ET
Yuichi Miyashita (Tokyo Institute of Technology, Japan)
Topological defects in nonlocal field theories
Shlomo Razamat (Technion, Israel)
On IR dualities across dimensions
Diana López Nacir (University of Buenos Aires, Argentina)
Cosmological perturbations for Self-Interacting Warm Dark Matter scenario, numerical implementation and some observational constraints
Shun-Pei Miao (National Cheng Kung University (NCKU), Tainan, Taiwan)
Gauge Independent Effective Field Equations
Richard P. Woodard (University of Florida at Gainesville, USA)
Summing Large Logarithms from Loops of Inflationary Gravitons
Alexander Zhuk (Odessa I. I. Mechnikov National University, Ukraine & Center for Advanced Systems Understanding (CASUS), Görlitz, Germany)
Relativistic approach to the large-scale structure formation: cosmic screening vs. gevolution
13/06/2022, 14:00, room 226
Souvik Banerjee (Wurzburg, Germany)
Wormholes, Berry phases and factorization
Matteo Fasiello (IFT, Madrid, Spain)
Probing the Early Universe with Gravitational Waves
Matthijs Hogervorst (EPFL Lausanne, France)
Cosmology meets Conformal Bootstrap
Gianmassimo Tasinato (Swansea University, UK)
Probing the Physics of Inflation with Gravitational Wave Experiments
Max Guillen (Upsalla, Sweden)
Pure Spinor Field Theory Description of 10D super-Yang-Mills: Scattering Amplitudes and Color-Kinematics Duality
Lavinia Heisenberg (Heidelberg University, Germany)
Tensions in LCDM and how to solve them
Claire Zukowski (UvA, Netherlands)
Virasoro Entanglement Berry Phases
Michal Artymowski (Wyszynski University, Warsaw, Poland)
New applications of unparticles: Inflation, dark energy, bouncing cosmologies, and the Hubble tension
Lorenz Eberhardt (IAS Princeton, USA)
Off-shell Partition Functions in 3d Gravity
Earl Bellinger (Max Planck Institute, Garching, Germany)
Asteroseismic probes of stellar evolution and fundamental physics
Caner Unal (Ben-Gurion University of the Negev, Israel)
Spin in Active BHs and properties of ultralight particles
Jose Beltran Jimenez (University of Salamanca, Spain)
Quicksand in Affinesia
Alex Belin (CERN, Switzerland)
Quantum chaos, OPE coefficients and wormholes
Marek Lewicki (University of Warsaw, Poland)
Search for new physics through primordial gravitational waves
Pavel K. Kovtun (University of Victoria, Canada)
Hydrodynamics beyond hydrodynamics
Dong-Gang Wang (University of Cambridge, United Kingdom)
Bootstrapping Inflation: cosmological correlators with broken boosts
Christos Charmousis (IJCLab Orsay, France)
Compact objects in scalar tensor theories
Sergey Ketov (Leibniz University Hannover, Germany & Tokyo Metropolitan University, Japan)
Formation of primordial black holes after Starobinsky inflation in supergravity
Daniel Green (UC, San Diego, USA)
A Tail of Eternal Inflation
Dmitri Semikoz (APC Paris, France)
Measurements of cosmological magnetic fields in the voids of large scale structure
24/02/2022, 16:00, lecture hall
Matthias Bartelmann (ITP, Heidelberg University, Germany)
Sk Jahanur Hoque (Charles University, Prague, Czechia)
Mass loss law for weak gravitational fields: With a positive cosmological constant
Ke-Pan Xie (Nebraska University, USA)
Primordial black holes from a cosmic phase transition: The collapse of Fermi-balls
Alexander Ganz (Jagiellonian University, Krakow, Poland)
Minimally modified gravity and its phenomenological properties
Adolfo Cisterna (TIFPA-INFN, Trento, Italy)
Taub-NUT spacetimes at the service of exact black bounces: Black holes, wormholes and bouncing cosmologies with a self-interacting scalar field
Valerie Domcke (CERN, Geneva, Switzerland)
Cosmology with axion-like particles
Marco Crisostomi (SISSA, Trieste, Italy)
Gravitational wave generation in dark energy theories
Ville Vaskonen (IFAE, Barcelona, Spain)
Probing dark matter through gravitational waves
Anamaria Hell (LMU, Munich, Germany)
Exploring the dualities of massive gauge theories: Aμ vs. Bμν
Graham White (University of Tokyo, Japan)
Archaeology on the origin of matter
Tomislav Prokopec (Utrecht University, Netherlands)
Quantum origin of dark energy and the Hubble tension
Camilo Garcia Cely (DESY, Germany)
The CMB as a detector of gravitational waves
Stian Hartman (University of Oslo, Norway)
Self-interacting Bose-Einstein condensed dark matter; cosmological constraints and simulations
Farbod Hassani (University of Oslo, Norway)
Characterizing the non-linear evolution of dark energy and modified gravity models
Roman Konoplya (Peoples' Friendship U., Moscow, Russia)
Traversable wormholes in General Relativity without exotic matter
Anton Baleato Lizancos (UC Berkeley, USA)
Fundamental Physics with CMB Lensing and Delensing
Jan Burger (University of Iceland, Reykjavik, Iceland)
Conservation of radial actions in time-dependent spherical potentials
17/06/2021 - 4pm, Marco Astorino (INFN, Milano, Italy), Multi-black holes at equilibrium in an external gravitational field
10/06/2021 - 4pm, Francis-Yan Cyr-Racine (University of New Mexico), A cosmological dark matter collider experiment
27/05/2021 - 4pm, Daniel Litim (University of Sussex, UK), Asymptotic safety - from particle physics to quantum gravity
20/05/2021 - 4pm, Macarena Lagos (Columbia University, NY, USA), Interacting Gravitational Waves
13/05/2021 - 4pm, Swetha Bhagwat (Sapienza, University of Rome, Italy), Merger-ringdown consistency test
06/05/2021 - 4pm, Viviana Niro (APC Paris, France), Neutrinos from galactic sources
29/04/2021 - 4pm, Emel Altas (Karamanoglu Mehmetbey University, Turkey), Nonstationary energy in general relativity and approximate analytical description of apparent horizons for initial data with momentum and spin
22/04/2021 - 4pm, Teodor Borislavov Vasilev (Universidad Complutense de Madrid, Spain), Classical and quantum f(R) cosmology: The big rip, the little rip and the little sibling of the big rip
15/04/2021 - 4pm, Famaey Benoit (Observatoire astronomique de Strasbourg, France), MOND: phenomenological review and constraints for model-building
25/03/2021 - 4pm, Johannes Noller (Cambridge University - DAMTP, UK), Testing gravity on all scales
18/03/2021 - 4pm, Kazuya Koyama (Portsmouth U., ICG, UK), General relativistic weak-field limit and Newtonian N-body simulations
11/03/2021 - 4pm, Raissa Mendes (Universidade Federal Fluminenense, Brazil), Probing general relativity with the most extreme neutron stars
04/03/2021 - 4pm, Ali Seraj (Universite Libre de Bruxelles, Belgium), Gravitational memory in modified gravity and two-form symmetries
18/02/2021 - 4pm, François Larrouturou (IAP Paris, France), Refocus on the essentials: introducing the Minimal Theory of Bigravity
11/02/2021 - 4pm, Ondrej Hulik (Charles University, Prague), Generalized and Exceptional geometry, A Unified Framework
04/02/2021 - 4pm, Giulia Cusin (Universite de Geneve, Switzerland), Catalogue and background description of a population of GW sources: features, complementarities and caveats.
28/01/2021 - 4pm, Paolo Benincasa (IFT Madrid, Spain), Towards a reformulation of QFT in expanding universes.
21/01/2021 - 4pm, Timothy Anson (Universite Paris-Saclay, France), Disforming the Kerr metric
14/01/2021 - 4pm, Siddharth Prabhu (Tata Institute of Fundamental Research, India), The holographic nature of quantum information storage in asymptotically flat spacetimes
17/12/2020 - 4pm, Oksana Iarygina (Leiden University, NL), The physical mass scales of multi-field preheating
10/12/2020 - 4pm, Horng Sheng Chia (Institute for Advanced Study, Princeton, USA), Tidal Deformation and Dissipation of Rotating Black Holes
03/12/2020 - 4pm, Antony Lewis (University of Sussex, UK), Concordance, tensions and gravitational lensing
26/11/2020 - 4pm, Eiichiro Komatsu (Max-Planck-Institut für Astrophysik, Germany), Hunting for parity-violating physics in polarisation of the cosmic microwave background
19/11/2020 - 4pm, Mikhail Shaposhnikov (EPFL), Einstein-Cartan gravity: Inflation, Dark Matter and Electroweak Symmetry Breaking
12/11/2020 - 4pm, William Barker (Kavli Institute for Cosmology, Cambridge, UK), Dark energy and radiation from the novel gauge gravity theories
05/11/2020 - 4pm, Fedor Bezrukov (University of Manchester, UK), What do we know about preheating in Higgs Inflation and its relatives?
22/10/2020 - 4pm, Eugene Lim (King's College London, UK), Challenging the Inflationary Paradigm
15/10/2020 - 4pm, Filippo Camilloni (University of Perugia, IT and Niels Bohr Institute, DK), Moving away from the Near-Horizon Attractor of the Extreme Kerr Force-Free Magnetospheres
08/10/2020 - 4pm, Anne Green (University of Nottingham, UK), Primordial Black Holes as a dark matter candidate
12/03/2020, 16:00 Keigo Shimada (Tokyo Institute of Technology), Metric-affine Geometry and Ghost-free structure of Scalar-tensor Theories
02/03/2020, 14:00 Tarek Anous (University of Amsterdam), Areas and entropies in BFSS/gravity duality
27/02/2020, 16:00 Elias Kiritsis (APC, Paris & Crete University), Emergent gravity from Hidden sectors
20/02/2020, 16:00 Georgios Loukes-Gerakopoulos (Astronomical Institute, Prague), Model agnostic approaches to cosmology
13/02/2020, 16:00 Paolo Creminelli (ICTP, Trieste), Initial Conditions for Inflation
30/01/2020, 16:00 Tomas Ledvinka (Charles University, Prague), Dynamic vacuum spacetimes in a computer
29/01/2020, 14:00 Harold Erbin (University of Turin), Machine learning for QFT
23/01/2020, 16:00 Chris Clarkson (Queen Mary University of London) - CANCELLED , General relativity in the era of large scale surveys
16/12/2019, 16:00 Pavel Motloch (CITA, Toronto), Galaxy spins as probes of fundamental physics
12/12/2019, 16:00 Filippo Vernizzi (Universite Paris Saclay), Dark energy after gravitational wave observations
28/11/2019, 16:00 Guillermo Ballesteros (Autonoma University, Madrid), Primordial black hole dark matter from inflation
21/11/2019, 16:00 Giovanni Acquaviva (Charles University, Prague, CZ), Emergent gravity, Bekenstein bound and unitarity
04/11/2019, 16:00 Dionysios Anninos, de Sitter horizons and sphere partition functions
17/10/2019, 16:00 Sk Jahanur Hoque, Conserved charges in asymptotically de Sitter spacetimes
10/10/2019, 16:00 Cosimo Bambi, Testing general relativity using X-ray reflection spectroscopy
03/10/2019, 14:00 Dalimil Mazac, Sphere packing, quantum gravity and extremal functionals
02/10/2019, 11:00 Thales Azevedo, (DF)^2 gauge theories and strings
01/10/2019, 14:00 Poulami Nandi, Field Theories with Conformal Carrollian Symmetry
30/09/2019, 14:00 Antoine Bourget, The Higgs Mechanism - Hasse Diagrams for Symplectic Singularities
19/09/2019, 16:00 Gizem Sengor, Unitarity at the Late Time Boundary of de Sitter
16/08/2019, 14:00 Tomas Prochazka, Thermodynamics Bethe Ansatz
12/08/2019, 14:00 Camilo Garcia-Cely, Self-interacting Spin-2 Dark Matter
01/08/2019, 16:00 Andrei Frolov, Cosmic dust is everywhere
22/07/2019, 14:00 Agnes Ferte, Probing The Accelerated Universe Through Weak Lensing
19/07/2019, 14:00 Xingang Chen, Probing the Origin of the Big Bang Cosmology
16/07/2019, 14:00 Leonardo Modesto, Nonlocal Quantum Gravity
15/07/2019, 11:00 Mirek Rapcak, On Miura transformation for $W_{n|m \times \infty}$ and related geometry
11/07/2019, 14:00 David Svoboda, Paracomplex Geometry and Twisted Supersymmetry
08/07/2019, 14:00 Adolfo Cisterna, Homogenous AdS black strings in General Relativity and Lovelock theories
04/07/2019, 14:00 Cesar Arias, Higher Spins and Topological Strings
24/06/2019, 14:00 Andrea Fontanella, Hidden relativistic symmetry in AdS/CFT
10/06/2019, 14:00 Dmitry Gorbunov, Higgs inflation in weak coupling regime
07/06/2019, 14:00 Mairi Sakellariadou, Stochastic gravitational waves background and its anisotropies
04/06/2019, 13:30 Tanmay Vachaspati, A Classical-Quantum Correspondence
21/05/2019, 16:00 Julian Adamek, Gevolution v1.2 - relativistic N-body simulations and light-cone analysis
13/05/2019, 14:00 Tomislav Prokopec, Quantum corrections during inflation
10/04/2019, 14:00 Ogan Ozsoy, Probing Early Universe on Small Scales
25/03/2019, 14:00 James Bonifacio, Shift Symmetries in (Anti) de Sitter Space
18/03/2019, 14:00 Masahide Yamaguchi, Higher derivative scalar-tensor theory through a non-dynamical scalar field
14/03/2019, 16:00 Stefano Camera, Synergic cosmology across the spectrum
28/02/2019, 16:00 Gizem Sengor, A look at Cosmological Perturbations during Preheating with Effective Field Theory Methods
27/02/2019, 14:00 Antonio Racioppi, The Palatini side of inflationary attractors
25/02/2019, 14:00 Jakub Vicha, Probing the Universe at the Highest Energies with the Pierre Auger Observatory
19/02/2019, 14:00 Yuji Satoh, TBA (room 226)
18/02/2019, 14:00 Vojtech Witzany, Spin-perturbed orbits near black holes
10/12/2018, 14:00 Shinji Mukohyama, Minimalism in modified gravity
26/11/2018, 14:00 Ondrej Pejcha, Explosive deaths of stars: core-collapse supernovae and stellar mergers
23/11/2018, 14:00 Rachel Houtz, Color Unified Dynamical Axion
22/11/2018, 16:00 Antonino Marciano, Non-dynamical torsion from fermions and CMBR phenomenology
20/11/2018, 14:00 Andrea Addazi, Gravitational waves from Dark bubbles in the early Universe
05/11/2018, 14:00 Subodh Patil, Tensor bounds on the hidden universe
25/10/2018, 16:00 Alexey Golovnev, Modified teleparallel gravity
16/10/2018, 14:00 Lasha Berezhiani, Superfluid Dark Matter
15/10/2018, 14:00 Hiroyuki Sagawa, Recent results of the Telescope Array experiment on ultra-high energy cosmic rays (Division Seminar)
15/10/2018, 11:00 Hidehiko Shimada, TBA
08/10/2018, 14:00 Katherine Freese, Inflationary Cosmology in Light of Cosmic Microwave Background Data
17/09/2018, 14:00 Elena de Paoli, A gauge-invariant symplectic potential for tetrad general relativity
30/07/2018, 13:30 Shun-Pei Miao, A Cosmological Coleman Weinberg Potentials and Inflation
27/07/2018, 14:00 Richard Woodard, A Nonlocal Metric Realization of MOND
26/07/2018, 14:00 Dam Thanh Son, From fractional quantum Hall effect to field-theoretical dualities
25/07/2018, 14:00 Oleg Teryaev, Gravitational form factors and pressure in elementary particles
24/07/2018, 14:00 Renato Costa, Singularity free Universe in double field theory
18/07/2018, 14:00 Massimiliano Rinaldi, Scale-invariant inflation
17/07/2018, 14:00 Alessandro Drago, What do mergers of neutron stars tell us about nuclear physics?
16/07/2018, 14:00 Miroslav Rapcak, Representation Theory of Vertex Operator Algebras and Gukov-Witten defects
12/07/2018, 14:00 Jarah Evslin, Cosmic Expansion Anomalies as Seen by Baryon Acoustic Oscillations
03/07/2018, 14:00 Yi-Zen Chu, Theoretical Explorations in Gravitational Physics
28/06/2018, 14:00 Andreas Albrecht, Perspectives on Cosmic Inflation
27/06/2018, 14:00 Andreas Albrecht, Einselection and Equilibrium
26/06/2018, 14:00 Eugeny Babichev, Hamiltonian vs stability and application to scalar-tensor theories
25/06/2018, 14:00 Wojciech Hellwing, How to falsify CDM (and test its alternatives)?
15/06/2018, 11:00 Emre Kahya, Loop Corrections to Primordial non-Gaussianities
14/06/2018, 16:00 Emre Kahya, GW170817 Falsifies Dark Matter Emulators
12/06/2018, 14:00 Dam Thanh Son, Quantum Hall effect and field-theoretic dualities
06/07/2018, 14:00 Sébastien Clesse, Primordial Black Holes as the Dark Matter
21/05/2018, 14:00 Maksym Ovchynnikov , New physics at the intensity frontier
17/05/2018, 16:00 Lorenzo Pizzuti, Modified gravity with galaxy cluster mass profiles: from data to simulations
15/05/2018, 15:00 Santiago Casas, Model-independent tests of gravity with present data and future surveys
07/05/2018, 14:00 Diego Blas, Probing dark matter properties with pulsar timing
04/05/2018, 14:00 Peter Tinyakov, Compact stars as dark matter probes
03/05/2018, 14:00 Jan Novák, Scalar perturbations of Galileon cosmologies in the mechanical approach in the late Universe
12/04/2018, 14:00 Tomi Koivisto, Symmetric Teleparallelism
10/04/2018, 14:00 Luca Marzola, The 21-cm Line
27/03/2018, 14:00 Julian Adamek, Evolving The Metric - N-body simulations for relativistic cosmology
20/03/2018, 14:00 Patrick Stengel, The Higgs boson can delay Reheating after Inflation
15/03/2018, 14:00 Ilidio Lopes, Impact of dark matter in stellar oscillations
20/02/2018, 14:00 Eric Bergshoeff, Gravity and the spin-2 planar Schroedinger equation
12/02/2018, 14:00 Roberto Oliveri, Gravitational multipole moments from Noether charges
01/02/2018, 16:00 Luca Visinelli, Axions in cosmology and astrophysics
30/01/2018, 14:00 Eleonora Villa, Theoretical systematics in galaxy clustering in LCDM and beyond
23/01/2018, 14:00 Petr Satunin, Constraints on violation of Lorentz invariance from atmospheric showers initiated by multi-TeV photons
12/12/2017, 14:00 David Svoboda, Twisted brackets, fluxes, and deformations of para-Kahler manifolds
11/12/2017, 16:00 Martin Roček, WZW models and generalized geometry
05/12/2017, 14:00 Dimitris Skliros, Coherent states in String Theory
04/12/2017, 14:00 Ed Copeland, Screening mechanisms and testing for them in cosmology and the laboratory
30/11/2017, 14:00 Marc Gillioz, Sum Rules for the "c" Anomaly in 4 Dimensions
28/11/2017, 14:00 Sugumi Kanno, Decoherence of Bubble Universes
27/11/2017, 14:00 Rachel Houtz, Little Conformal Symmetry and Neutral Naturalness
08/11/2017, 14:00 Pierre Fleury, Weak lensing with finite beams
07/11/2017, 14:00 Frederik Lauf, Classification of three-dimensional Chern-Simons-matter theories
06/11/2017, 14:00 Andrei Gruzinov, Particle production by real (astrophysical) black holes
23/10/2017, 14:00 George Pappas, Neutron stars as matter and gravity laboratories
16/10/2017, 14:00 Tessa Baker, Tests of Beyond-Einstein Gravity
02/10/2017, 14:00 Piotr Surowka, New developments in hydrodynamics
08/09/2017, 14:00 Dani Figueroa, Higgs Cosmology: implications of the Higgs for the early Universe
06/09/2017, 14:00 Sergey Ketov, Starobinsky inflation in supergravity
06/09/2017, 11:00 Dalimil Mazáč, Analytic conformal bootstrap and QFT in AdS2
29/06/2017, 14:00 Bruce Bassett, Rise of the Machine: AI and Fundamental Science
28/06/2017, 14:00 Dmitri Semikoz, Signatures of a two million year old nearby supernova in antimatter data
02/06/2017, 14:00 David Alonso, Science with future ground-based CMB experiments
22/05/2017, 14:00 Mathieu Langer, Magnetizing the intergalactic medium during reionization
16/05/2017, 16:00 Sergey Sibiryakov, Counts-in-cells statistics of cosmic structure
25/04/2017, 14:00 Ippocratis Saltas, What can unimodular gravity teach us about the cosmological constant?
12/04/2017, 14:00 Andrei Nomerotski, Status and Plans for Large Synoptic Survey Telescope
06/04/2017, 14:00 Alex Vikman, The Phantom of the Cosmological Time-Crystals
03/04/2017, 14:00 Jnan Maharana, Scattering of Stringy States and T-duality
27/03/2017, 14:00 Michal Bilek, Galaxy interactions in MOdified Newtonian Dynamics (MOND)
27/02/2017, 16:00 Misao Sasaki, Signatures from inflationary massive gravity
23/02/2017, 14:00 Misao Sasaki, Inflation and Beyond
14/12/2016, 14:00 Giovanni Acquaviva, Dark matter perturbations with causal bulk viscosity
09/12/2016, 14:00 David Pirtskhalava, Relaxing the Cosmological Constant
14/11/2016, 14:00 Glenn Barnich, Finite BMS transformations
18/10/2016 14:00 Eugeny Babichev, Gravitational origin of dark matter
Strong gravity observations offer a new way to search for new fundamental fields. Scalar fields have been studied extensively in this context. Using them as a case study, I will discuss the following questions: how can new fields leave an imprint on black holes? What can theory tell us about which observations would be more sensitive to this new physics? And are all black holes the same?
I will present the first study of the swampland in higher-spin gravity. In particular, holographic vector models offer a playground to study tensionless-like large-N limits from the point of view of the distance/duality conjecture. I will describe a notion of (information) distance in this discrete landscape and study the decay of higher-spin masses/anomalous dimensions along the limit. In striking contrast to the expected exponential decay, these models lead to a power-like decay. This suggests that stringy exponential decays are characteristic of matrix-like gauge theories, rather than vector models. Further evidence for this arises studying the information distance along coupling variations in Chern-Simons-matter CFTs, where matrix-like degrees of freedom dominate over vector-like ones and the decay is once again exponentially fast.
It is well known that the Kerr geometry admits a non-trivial Killing tensor and its `square root' known as the Killing-Yano tensor. These two objects stand behind Carter's constant of geodesic motion as well as allow for separability of test field equations in this background. The situation is even more remarkable in higher dimensions, where a single object -- the principal Killing-Yano tensor -- generates a tower of explicit and hidden symmetries responsible for integrability of geodesics and separability of test fields around higher-dimensional rotating black holes. Interestingly, a similar yet different structure is already present for the slowly rotating black holes described by the `magic square' version of the Lense-Thirring solution, giving rise to a geometrically preferred spacetime that can be cast in the Painleve-Gullstrand form and admits a tower of exact rank-2 and higher-rank Killing tensors whose number rapidly grows with the number of spacetime dimensions.
I will discuss how on a Poisson manifold with involution the space of functions is naturally equipped with a superalgebra structure. I will then illustrate that this explains the appearance of a superalgebra of conserved charges, previously observed in the literature, for simple purely bosonic (quantum/classical) mechanical systems such as the free particle and the harmonic oscillator.
The direct detection of sub-GeV dark matter interacting with nucleons and electrons is hampered by the low recoil energies induced by scatterings in the detectors. Novel ideas are therefore needed to circumvent this experimental limitation. For instance, higher recoil energies in the detector can be achieved in the case of boosted dark matter, where a component of dark matter particles is endowed with large kinetic energies. Furthermore, the scatterings with light dark matter can affect the cosmic-ray transport in astrophysical environments, altering the primary and secondary particle spectra observed at the Earth. In this talk, I will review the current status of light dark matter probes and present two interesting scenarios. Firstly, I will show that the current evaporation of primordial black holes (alternative dark matter candidates) with masses from 10^14 to 10^18 grams is an efficient source of boosted light dark matter. Then, I will investigate the effects of the DM-proton scatterings in star-forming and starburst galaxies, which are well-known cosmic-ray "reservoirs" and well-motivated astrophysical emitters of high-energy neutrinos and gamma-rays through hadronic collisions. For both scenarios, I will explore the phenomenological implications and discuss new constraints on the dark matter parameter space.
We review the phenomenology of dark matter at galactic scales and the intriguing MOND (MOdified Newtonian Dynamics) formula for the rotation curves of galaxies and the Tully-Fisher relation. We show that the MOND formula can be tested by the dynamics of planets in the Solar System. We present a particular model dubbed Dipolar Dark Matter which recovers the MOND formula at galactic scales and the standard cosmological model Lambda-CDM at cosmological scales.
In this talk I'll describe the emergence of random matrix theory-like behavior in physical quantum theories, often referred to as quantum ergodicity. I'll describe an effective field theory description of this phenomenon, and how we believe it can be applied to higher dimensions. Finally, in low dimensional examples of AdS/CFT I'll discuss what quantum ergodicity tells us about holographic CFTs, gravitational physics and black holes.
Gravitational-wave interferometers such as LIGO, Virgo and KAGRA can be used to test the existence of dark matter. While most efforts have focused on finding gravitational waves from heavy dark matter, e.g. primordial black hole mergers, or quasi-monochromatic signals from depleting ultralight boson clouds around black holes, the interferometers can also be used to directly detect dark matter that interacts with various components, e.g. the mirrors or the beam splitter. In this sense, the interferometers act like particle physics experiments: the ultralight dark matter particles, with masses of 10^-14 to 10^-11 eV, may interact with light, baryons or baryon-leptons in the mirrors and cause a quasi-sinusoidal force on them, or alter the values of the fundamental constants in the interferometer components. Even though these signals are not resulting from gravitational waves, both effects will manifest themselves as differential length changes, which can be precisely measured with LIGO, Virgo and KAGRA. We give an overview of the physics of such dark matter interaction signals, and present the results of recent searches for scalar and vector dark matter particles. While no signal has been found, the constraints that come from analyses of LIGO, Virgo and KAGRA data surpass those of other experiments that were designed to specifically search for dark matter (e.g. MICROSCOPE and the Eöt-Wash torsion balance), and represent a bridge between particle physics and gravitational-wave experiments.
This talk is devoted to Jackiw-Teitelboim (JT) gravity in Bondi gauge, with a vanishing cosmological constant. The asymptotic symmetries of the theory span an infinite-dimensional group commonly dubbed `BMS2' (for Bondi-Metzner-Sachs in two dimensions), but most of the existing literature reduces this group to its warped Virasoro subgroup. I shall argue that one can avoid this reduction and use the BMS2 group throughout. In particular, the boundary action of the system is a BMS-Schwarzian with an extra zero-mode, and its partition function is one-loop exact with respect to the Haar measure on (centrally extended) BMS2. The peculiarities of BMS2 are pointed out, including the fact that it has a single coadjoint orbit at fixed (real) central charges. Allowing for a natural complexification affects this feature sharply, suggesting that more work is required to fully understand the phase space of asymptotically flat data in JT gravity. [Based on arXiv:2112.14609.]
Since the earliest paper on the topic by Matvei Bronstein [1] it was clear that the equivalence principle is incompatible with the usual separation between a "quantum system" and a "classical detector", namely the fact that the charge/mass ratio is "small". A modern treatment, based on open quantum systems and path integrals, can however directly address this issue, and systematically calculate corrections both in the case of a light recoiling detector and in the case of a heavy gravitating one. We illustrate this for an interferometric setup of the type of [2] and show that for all parameters a "semiclassical limit", where one can measure a phase shift due to gravitational attraction between quantum objects, is unlikely.
Based on [3].
[1] M. Bronstein, Gen. Rel. Grav. 44 (2012) 267-283. Original: Matvei Bronstein, Quantentheorie schwacher Gravitationsfelder, Physikalische Zeitschrift der Sowjetunion, Band 9, Heft 2-3, pp. 140-157 (1936).
[2] S. Bose et al., arXiv:1707.06050 (PRL); C. Marletto, V. Vedral, arXiv:1707.06036 (PRL).
[3] G. Torrieri, arXiv:2210.08586.
The boundary problem is the failure to associate a symplectic structure to a gauge theory over a bounded region of spacetime. Two strategies to circumvent this problem have been much discussed in the recent literature: the edge mode approach by Donnelly & Freidel, and the connection approach by Gomes & Riello. Relying on the bundle geometry of field space, we attempt to make them more systematic, so as to facilitate comparison and to shed some light on conceptual aspects. We illustrate the general results with the standard examples of Yang-Mills theory, the Cartan formulation of General Relativity, and Chern-Simons theory -- thereby reproducing several results of the literature.
In this talk I will address the problem of Ramond-Ramond backgrounds in string theory, from the simplified viewpoint of the N=1 spinning particle. These fields arise as 2-particles excitations of the ground state. BRST cohomology of the worldline model leads to the right equations for the R-R fields. Deformations or twistings of the BRST differential by the latter can also be implemented consistently, yielding target space geometries that support the R-R fields. Based on joint work with Ivo Sachs, arXiv:2206.03243.
I will review how non-linearities can allow for screening solar-system scales from non-tensorial gravitational polarizations, focusing on the case of scalar-tensor theories with derivative self-interactions (K-essence). I will then present fully relativistic simulations in these theories in 1+1 dimensions (stellar oscillations and collapse) and 3+1 dimensions (binary neutron stars), showing how to avoid breakdowns of the Cauchy problem that have affected similar attempts in the past. I will show that screening tends to suppress the (subdominant) dipole scalar emission in binary neutron star systems, but that it fails to quench monopole scalar emission in gravitational collapse, and quadrupole scalar emission in binaries.
The entropy of supersymmetric black holes in string theory compactifications can be related to that of a D- or M-brane system, which in many cases can be further reduced to a two-dimensional conformal field theory (2d CFT). For black holes in M-theory, this relation involves a decoupling limit where the black hole mass diverges. We suggest that moving away from this limit corresponds to a specific irrelevant perturbation of the 2d CFT, namely the supersymmetric completion of the TTbar deformation. It is demonstrated that the black hole mass matches precisely with the TTbar deformed energy levels, upon identifying the TTbar deformation parameter with the inverse of the leading term of the black hole mass. I will discuss various implications of this novel realization of the TTbar deformation, including a Hagedorn temperature for wrapped M5-branes and a potential change of degeneracies in the deformed theory.
There are several well-motivated scenarios in which fundamental fields could be present around black holes at a sufficient level to impact on the gravitational waveform of a merger. However, developing templates for the impact of such fields is challenging - in particular one issue that requires more attention is how to select and impose appropriate initial conditions for the field that represent their state at the late, dynamical, strong field phase of the merger. A correct specification will be crucial in obtaining sufficiently accurate waveforms and avoiding degeneracies with other effects.
General relativity (GR) can be thought of as the unique theory of interacting massless spin-2 fields, the gravitons; states with similar properties are also present in string theory spectra. In this talk, we will be probing the previously largely unexplored interactions of the graviton with massive spin-2 string states. In particular, we will review recent results on their scattering and formulate a massive realisation of the double copy in string theory for the first time. We will further argue that, unlike open string spectra, closed string spectra may be able to accommodate the "dark graviton", namely the massive spin-2 state that appears in a GR extension known as ghost-free bimetric theory and which has been put forward as a viable dark matter candidate.
Cosmo: Thursday, 13/10/2022, 16:00, room 117
The era of gravitational wave science began in spectacular fashion with several detections already reported by the LIGO-Virgo-KAGRA collaboration, and many more yet to come with the future planned observatories such as LISA and the Einstein Telescope. Motivated by these initial experimental breakthroughs and the expected scientific output, a community effort has been established toward constructing high-accurate waveform models for the emission of gravitational waves from binary systems. In this talk I review how ideas and techniques from particle physics — such as effective field theory methods and modern integration techniques from collider physics — have impacted the state-of-the-art in our analytic understanding of the two-body problem in general relativity.
We consider the topological defects in the context of nonlocal field theories in which Lagrangians contain infinite-order differential operators. In particular, we analyze domain walls. We first determine the asymptotic behavior of the nonlocal domain wall close to the vacua. For the specific domain wall solution under investigation, we derive a theoretical constraint on the nonlocality energy scale, which must be larger than the corresponding symmetry-breaking scale. Subsequently, we find that nonlocality makes the width of the domain wall thinner and the energy per unit area smaller compared to the local case. This talk is based on arXiv:2203.04942.
Strings: Thursday, 03/10/2022, 14:00, room 226
In this talk we will overview recent progress on understanding some aspects of IR dualities between Lagrangian constructions of 4d SCFTs and their engineering starting from 6d. We will illustrate these understandings mainly with the compactifications of D-type conformal matter and higher rank E-string theories down to four dimensions.
The standard cosmological model assumes the existence of dark contributions to the energy content of the Universe. In particular, it assumes a mysterious component known as Dark Matter, which in the standard scenario is considered to be "Cold". In this talk I will consider Self-Interacting Warm Dark Matter (SI-WDM) models as alternative candidates. After introducing the main motivations and the basic phenomenology, I will present a general framework for computing the evolution of cosmological perturbations at linear level, which derives from a Boltzmann hierarchy based on a parametrization of the scattering amplitude that allows one to retain certain model independence on the particular interaction Lagrangian. I will show some results we obtained using a numerical implementation of the framework in an extended version of the CLASS code, and in particular some observational constraints we obtained using Milky Way satellite counts and the Lyman-alpha forest from a phenomenological approach. As a result, I will show that sufficient self-interaction could make the lower bounds on the mass of the WDM particles less restrictive than without self-interaction and in particular relax constraints on the traditional νMSM model.
I describe a technique for removing gauge dependence from graviton loop corrections to the effective field equations. I present explicit results on flat space background for a massless, minimally coupled scalar and for electromagnetism. I then describe how the procedure generalizes to de Sitter background. This talk is based on arXiv:1708.06239
Quantum gravitational corrections on flat space background do not affect particle kinematics at all, and only make fractional changes of order G/r^2 to long range forces. The situation during inflation is very different because (1) the Hubble parameter H allows fractional corrections of the form G H^2 and (2) the continuous production of inflationary gravitons introduces a secular element. As a result, corrections to both particle kinematics and long range forces typically grow like logarithms of the scale factor and/or the spatial separation. If inflation persists long enough, this growth must eventually cause perturbation theory to break down, begging the question of what happens next. I report on recent progress in summing the very similar large logarithms which occur in nonlinear sigma models by combining a variant of Starobinsky's stochastic formalism with a variant of the renormalization group. I discuss how this technique can be generalized to quantum gravity. This talk is based on arXiv:2110.08715 with Shun-Pei Miao and Nick Tsamis.
Thanks to modern telescopes, we have found that the Universe is filled with a cosmic web composed of interconnected filaments of galaxies separated by giant voids. The emergence of this large-scale structure is one of the major challenges of modern cosmology. We study this phenomenon with the help of relativistic N-body cosmological simulations based on General Relativity. It is well known that gravity is the main force responsible for structure formation in the Universe. In the first part of my talk, I demonstrate that in the cosmological setting the gravitational interaction undergoes an exponential cutoff at large cosmological scales. This effect is called cosmic screening. It arises due to the interaction of the gravitational field with the background matter. Then, I compare two competing relativistic approaches to the N-body simulation of the Universe's large-scale structure: "gevolution" vs. "screening". To this end, employing the corresponding alternative computer codes, I demonstrate that the corresponding power spectra are in very good agreement between the compared schemes. However, since the perturbed Einstein equations have a much simpler form in the "screening" approach, the simulation with this code consumes less computational time, saving almost 40% of CPU (central processing unit) hours.
I shall start by reviewing the connection between the wormholes and entanglement in the context of AdS/CFT. I shall then show how the Berry phase, a geometrical phase encoding information about the topology may be used to reveal similarities between the Hilbert space structure on both sides of the correspondence. This correspondence might open up an exciting new avenue to understanding the factorization puzzle in AdS/CFT. Furthermore, I shall argue how this concept unifies the role of entanglement in "creating" a generic quantum system, ranging from simple quantum mechanical models to entangled CFTs.
Some of our best ideas on early universe physics are about to be put to the test by an unprecedented array of cosmological probes. The data these will collect span a vast range of scales, from the CMB to large scale structure, from pulsar timing arrays all the way to laser interferometers. This combined wealth of new information holds the potential to transform not just our understanding of cosmology, but also particle physics. Probing the earliest accessible epoch, the accelerated expansion known as inflation, is crucial: inflation can provide a cosmological portal to otherwise inaccessible energy scales. The spectacular success of the inflationary paradigm in explaining the origin of cosmic structure demands that we tackle a number of compelling questions still in need of an answer: what is the energy scale of inflation? What fields were active during inflation? In this talk I will review recent progress on the inflationary field content. I will survey different approaches to address the most pressing challenges and provide specific examples. I will then focus on the key observables, starting with primordial gravitational waves, and discuss their prospects for detection.
Holography dictates that information about bulk physics is encoded by a conformally invariant quantum field theory that lives on the boundary of spacetime. It is tantalizing to apply this principle to de Sitter space, which roughly describes the cosmology of the actual universe. Even for the case of matter living on a stationary dS background, the above correspondence is confusing: the boundary theory, which describes cosmological correlators at late times, is a violently non-unitary CFT. This is unfortunate, as unitarity is a key ingredient in proving theorems about the landscape of possible theories. In this talk I will argue that unitarity is nonetheless recovered. In particular, I will give a first example that conformal bootstrap methods can be used to constrain the space of de Sitter theories.
Cosmological inflation predicts the existence of a stochastic background of gravitational waves (GW), whose features depend on the model of inflation under consideration. There exist well-motivated frameworks leading to an enhancement of the primordial GW spectrum at frequency scales testable with GW experiments, with specific features such as parity violation, anisotropies, and non-Gaussianity. I will explain the properties of such scenarios and their distinctive predictions for GW observables. I will then discuss perspectives for testing these predictions with future GW experiments.
In this talk I will review the basic ingredients which allows one to formulate 10D super-Yang-Mills on pure spinor superspace. The respective pure spinor master action in the gauge b_{0}V = QΞ, will then be used to show that tree-level scattering amplitudes calculated via perturbiner methods, match those obtained from pure spinor CFT techniques. I will also discuss how to compute pure spinor kinematic numerators through the use of standard Feynman rules, and show these are described by compact expressions involving the b-ghost operator. Remarkably, it will be shown how color-kinematics duality immediately emerges in this pure spinor framework after imposing the Siegel gauge condition b_{0}V = 0.
After introducing the standard model of cosmology and its parameters, I will discuss two important tensions between early and late-time measurements, namely the H0 tension and the sigma8 tension. Considering a small late-time deviation of the standard model, I will derive fully analytical conditions that any late-time dark energy model has to satisfy in order to solve both tensions simultaneously (see arxiv:2201.11623).
I will describe the parallel transport of modular Hamiltonians encoding entanglement properties of a state. The Berry curvature associated to state-changing parallel transport is the Kirillov-Kostant symplectic form on an associated coadjoint orbit, one which differs appreciably from known Virasoro orbits. I will show that the boundary parallel transport process computes a bulk symplectic form for a Euclidean geometry obtained from the backreaction of a cosmic brane, with Dirichlet boundary conditions at the location of the brane. This construction gives a definition for the symplectic form on an entanglement wedge.
Unparticles are a hypothetical new form of matter created from fermions in an SU(N) gauge theory. Unparticles provide a wide spectrum of new cosmological applications. In my talk (based on arxiv:2010.02998 and arxiv:1912.10532), I will show that they can display a cosmological-constant-like behavior, and hence they can be used to generate cosmic inflation or dark energy. I will show realistic bouncing and cyclic Universes filled with unparticles and perfect fluid. I will also discuss constraints on the unparticle energy density and their possible role in relaxing the Hubble tension.
I will discuss partition functions in three-dimensional quantum gravity with negative cosmological constant in canonical quantization. I will review the phase space and its quantization in detail, which leads to the computation of the gravity partition functions on 3d manifolds which do not support semiclassical saddles. It is often simpler to consider chiral gravity that only captures the left-movers, since ordinary gravity gives divergent answers. I finally explain a dual description in terms of topological recursion of a certain part of the partition function that is the uplift of the random matrix model for JT-gravity. Based on 2204.09789.
Asteroseismology allows us to determine the properties of stars through the observation of their global oscillations, giving information about the star's mass, radius and age. Beyond being interesting in their own right, these measurements are essential for a variety of endeavours throughout astrophysics, such as galactic archaeology and the characterization of exoplanets. For the stars with the very best observations, it is possible to additionally measure some aspects of the internal stellar structure, such as the density and sound speed profile throughout the stellar core. This in turn presents the exciting opportunity to test the physics of stellar evolution. These asteroseismic tests can range from assessing mixing mechanisms in stellar interiors, to measuring cosmological effects such as a time-variable gravitational constant. In this seminar, I will give an overview of the asteroseismology of low-mass stars, and highlight the progress that is being made toward mapping out their interior structures.
I will discuss the spin modifications to the fundamental plane of BH activity, an empirical correlation between the X-ray luminosity, the radio luminosity and the mass of active BHs. I will further focus on how to extract spin information from those multiwavelength signals and its implications and bounds for ultralight particle properties such as mass, self-interaction and energy density, and the possibility that such particles constitute some fraction of dark matter.
The equivalence principle naturally provides gravity with a geometrical character. However, the precise geometry we employ to describe it admits a certain flexibility. In particular, within a metric-affine framework, Einstein's gravity can be equivalently ascribed to the three independent objects that characterise a connection, i.e., curvature, torsion and non-metricity. After reviewing these three alternative descriptions of gravity, I will uncover a general teleparallel description of GR and how pathologies generally arise beyond the GR equivalents in the landscape of metric-affine theories.
In this talk, I will discuss the statistical distribution of OPE coefficients in chaotic conformal field theories. I will present the OPE Randomness Hypothesis (ORH), a generalization of ETH to CFTs which treats any OPE coefficient involving a heavy operator as a pseudo-random variable with an approximate Gaussian distribution. I will then present some evidence for this conjecture, based on the size of the non-Gaussianities and on insights from random matrix theory. Turning to the bulk, I will argue that semi-classical gravity geometrizes these statistical correlations by wormhole geometries. I will show that the non-Gaussianities of the OPE coefficients predict a new connected wormhole geometry that dominates over the genus-2 wormhole.
We are currently witnessing the dawn of a new era in astrophysics and cosmology, started by the LIGO/Virgo observations of gravitational waves. These signals also open a new window into processes taking place in the first moments of our Universe. This is because GWs propagate freely from the moment of their production, unlike photon-based signals, which can only propagate freely since the Universe became transparent at recombination. I will discuss prospects for GW detection with the next generation of experiments, including the problems connected with observing a primordial signal in the presence of a foreground produced much more recently by astrophysical objects. The specific early Universe sources I will focus on are cosmological first-order phase transitions and cosmic string networks. I will also discuss to what extent we can probe the expansion of the Universe using these primordial GW signals.
In this talk, I will discuss two questions. First, do the equations of relativistic hydrodynamics make sense? And second, how universal are the long-distance, late-time predictions of classical hydrodynamics?
Correlation functions of primordial fluctuations provide us an exciting avenue into the physics with extremely high energy in the very early Universe. Recently the bootstrap approach has offered new perspectives and powerful tools to study these cosmological correlators. In this talk, by incorporating the latest developments, I will "bootstrap" two types of correlators generated by boost-breaking interactions during inflation, which are most relevant for the next-generation observations. The first one is the contact correlators arising from higher-derivative self-interactions of the inflaton. The second is the cosmological collider physics, where the masses and spins of heavy particles leave unique imprints in the scalar bispectra. Since the boost symmetries are broken in our consideration, the signals of non-Gaussianity are boosted to be detectable for near-future experiments. Furthermore through the bootstrap approach, we derive for the first time not only a complete set of these correlators systematically, but also their full shape information analytically.
We will review some of the black hole solutions in scalar-tensor theories, focusing on the different symmetries of the underlying theories and their principal properties. We will discuss one particular theory whose origin lies in higher-dimensional Lovelock theory and construct a regular traversable wormhole as well as neutron star metrics.
The Starobinsky model of cosmological inflation is reviewed as the theoretical probe of a more fundamental theory of gravity for the very early Universe. The modified Starobinsky supergravity is introduced, and its observational predictions are derived and compared to current astrophysical and cosmological observations. A specific mechanism of primordial black hole production in supergravity is proposed, and its physical predictions for dark matter and induced gravitational waves are discussed in detail.
10/03/2022, 17:00, online
Recent developments in our understanding of quantum field theory in de Sitter space have revealed how to derive the equations of Stochastic Inflation and how to include corrections to them systematically. In this talk, I will review the Soft de Sitter Effective Theory and how it enables us to calculate these equations. I will then apply the results to massless λΦ^4 theory in de Sitter and calculate the next-to-next-to-leading order corrections to the equations and relaxation eigenvalues. We will then apply the same techniques to primordial non-Gaussianity in single-field inflation, where we will find that the onset of eternal inflation becomes incalculable in some surprising circumstances.
In this talk I'll review recent developments in the measurements of intergalactic magnetic fields in the voids of large-scale structure with gamma-ray telescopes and ultra-high-energy cosmic ray detectors. In particular, I'll show that the gamma-ray measurement method can be used to detect a primordial magnetic field with a strength of up to 10^{-11} G, values interesting for reducing the H0 tension. The same magnetic field, if produced at the QCD phase transition, could be responsible for the NANOGrav gravitational wave signal. I'll also discuss how one can distinguish a magnetic field produced during inflation by simultaneous measurements of extended emission around several nearby TeV blazars. Finally, I'll show the first upper limit on the magnetic field in a void of large-scale structure from ultra-high-energy cosmic ray measurements in the direction of the Perseus-Pisces supercluster.
On the track of universality in cosmic structures
Our cosmic neighbourhood is richly structured by galaxies, clusters, and larger objects which are mainly composed of dark matter. Gravitationally bound objects dominated by dark matter exhibit density profiles which are self-similar over many orders of magnitude in mass. Why is this so? I will use kinetic field theory to address this question, and show that cosmic structures develop universal correlations for wide classes of initial conditions.
Bondi's celebrated mass loss formula measures the rate of change of energy carried away from an isolated system (in asymptotically flat space-time) by gravitational radiation. In this talk, we generalize the Bondi-Sachs formalism to de Sitter space-time. We also discuss the mass loss formula for linearized gravitational fields in the de Sitter setting.
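For reference, in the asymptotically flat case the mass loss formula reads, in one common convention (normalisations differ between references), with $N_{AB}=\partial_u C_{AB}$ the news tensor,
\[ \frac{dM_B}{du} \;=\; -\frac{1}{32\pi G}\oint_{S^2} N_{AB}\,N^{AB}\, d^2\Omega \;\le\; 0 . \]
The talk concerns the analogue of this statement when the asymptotics are de Sitter rather than flat.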
10/02/2022, 16:00, online seminar
In this talk I will introduce a novel mechanism of forming primordial black holes (PBHs) via a first-order phase transition (FOPT). If a fermion species gains a huge mass in the true vacuum, the corresponding particles get trapped in the false vacuum as they do not have sufficient energy to penetrate the bubble wall. After the FOPT, the fermions are compressed into the false vacuum remnants to form non-topological solitons called Fermi-balls, and then collapse to PBHs due to the interior Yukawa attractive force. After describing the general mechanism, I will discuss a concrete application to the electroweak phase transition and demonstrate that a PBH dark matter scenario is possible.
Minimally modified gravity models are a class of modified gravity models with just two local degrees of freedom, as in general relativity. In this talk I will discuss their general properties, such as the existence of a preferred foliation, and their phenomenology in the case of inflation and the late universe.
We present a new family of exact four-dimensional Taub-NUT spacetimes in Einstein-Λ theory supplemented with a conformally coupled scalar field exhibiting a power-counting super-renormalizable potential. Our configurations are constructed in the following manner: a solution of a conformally coupled theory with a conformal potential, henceforth the seed (gμν, Φ), is transformed by the action of a specific change of frame together with a simultaneous shift of the scalar seed. The conformal factor of the transformation and the shift are both affine functions of the original scalar Φ. The new configuration, (g'μν, Φ'), then solves the field equations of a conformally coupled theory with the extended renormalizable potential mentioned above, in the presence of an effective cosmological constant. The new solution spectrum is notably enhanced with respect to the original seed, containing regular black holes, wormholes and bouncing cosmologies. For a non-vanishing cosmological constant, exact black-hole-to-wormhole and black-hole-to-bouncing-cosmology transitions are observable, both smoothly controlled by the mass parameter.
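Schematically, and with $a,b,c,d$ standing for illustrative constants rather than the authors' notation, the generating transformation described above acts as
\[ g'_{\mu\nu} = \Omega^2(\Phi)\, g_{\mu\nu}, \qquad \Omega(\Phi) = a\,\Phi + b, \qquad \Phi' = c\,\Phi + d , \]
i.e. a change of frame whose conformal factor, together with the shift of the scalar, is affine in the seed scalar $\Phi$.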
20/01/2022, 16:00, Zoom seminar
Axion-like particles may play a key role in early universe cosmology. They are naturally equipped with the right properties to explain cosmic inflation, can dynamically explain the smallness of the electroweak scale, may be involved in the generation of the matter-antimatter asymmetry and are promising dark matter candidates. In this talk I discuss a generic but previously overlooked particle production mechanism, resulting in the dual production of gauge fields and fermions induced by axion-like particles. I will discuss how this crucially impacts all of the cosmological scenarios mentioned above and may be probed with upcoming gravitational wave detectors.
The big challenge in describing dark energy as a dynamical field is that we do not see any sign of it in local tests of gravity. Moreover, all gravitational wave events detected so far are in very good agreement with General Relativity predictions. In this talk I will introduce "kinetic screening" as a way to overcome this dichotomy and I will present our recent results in testing it with black hole collapse, and the late inspiral and merger of binary neutron stars.
The LIGO-Virgo observations have already demonstrated the power of gravitational wave astronomy. In near future various experiments will probe gravitational wave signals across a broad range of frequencies providing invaluable insight into astrophysics, cosmology and fundamental physics. In this talk I will discuss how we can use these observations to test dark matter properties. I will focus on two signatures of compact dark matter objects: gravitational waves produced by their mergers, and lensing of gravitational waves. I will show that the LIGO-Virgo observations already provide constraints on compact dark matter and I will discuss the future prospects of gravitational wave probes of dark matter.
We compare the massive Kalb-Ramond and Proca fields with a quartic self-interaction and show that the same strong coupling scale is present in both theories. In the Proca theory, the longitudinal mode enters the strongly coupled regime beyond this scale, while the two transverse modes propagate further and survive in the massless limit. In contrast, in the case of the massive Kalb-Ramond field, the two transverse modes become strongly coupled beyond the Vainshtein scale, while the pseudo-scalar mode remains in the weak coupling regime and survives in the massless limit. This contradicts numerous claims in the literature that these theories are dual to each other. We show that the difference between the theories can be traced back already to the free theories without a self-interaction by studying the behavior of quantum fluctuations of the different modes.
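As a point of reference, the two theories being compared are, schematically (with $\lambda$ the quartic coupling; the signs and normalisations below follow a common convention and are not taken from the talk),
\[ \mathcal{L}_{\rm Proca} = -\tfrac{1}{4} F_{\mu\nu}F^{\mu\nu} - \tfrac{1}{2} m^2 A_\mu A^\mu - \tfrac{\lambda}{4}\,(A_\mu A^\mu)^2 , \]
\[ \mathcal{L}_{\rm KR} = -\tfrac{1}{12} H_{\mu\nu\rho}H^{\mu\nu\rho} - \tfrac{1}{4} m^2 B_{\mu\nu}B^{\mu\nu} - \tfrac{\lambda}{4}\,(B_{\mu\nu}B^{\mu\nu})^2 , \qquad H_{\mu\nu\rho} = 3\,\partial_{[\mu}B_{\nu\rho]} . \]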
18/11/2021, 16:00, Zoom
One of the most convincing reasons to expect physics beyond the Standard Model is the imbalance between matter and antimatter. Some fantastic paradigms exist that can be probed at a low scale, including electroweak baryogenesis, mesogenesis and resonant leptogenesis. While these paradigms are worthy of dedicated attention, the elephant in the room is that there are two paradigms that are very minimal and involve physics at scales we cannot possibly reach with Earth-based colliders in our lifetime. I will first discuss the nightmare scenario of thermal leptogenesis implemented with no BSM particle content beyond sterile neutrinos and an inflaton. In this case, measurements of the top and Higgs masses along with inflationary observables can shed some light on the plausibility, or lack thereof, of vanilla leptogenesis. I will then discuss GUT leptogenesis and Affleck-Dine baryogenesis. I will argue that in both cases there are generic predictions of a primordial gravitational wave background that can be measured today. The presence of such a signal would lend plausibility to one of these scenarios. Finally, I will discuss the discriminating power of GWs in discerning the symmetry breaking path through the variable signals of hybrid defects.
Local measurements of the Hubble parameter obtained from the distance ladder at low redshift are in tension with global values inferred from cosmological standard rulers. A key role in the tension is played by the assumptions on the cosmological history, in particular on the origin of dark energy. Here we consider a scenario where dark energy originates from the amplification of quantum fluctuations of a light field in inflation. We show that spatial correlations inherited from inflationary quantum fluctuations can reduce the Hubble tension down to one standard deviation, thus relieving the problem with respect to the standard cosmological model. Upcoming missions, like Euclid, will be able to test the predictions of models in this class.
In complete analogy with axion dark matter, gravitational waves are converted into electromagnetic radiation when they propagate in magnetic fields. I will explain how this effect can be used to detect gravitational waves. With this in mind, I will examine gamma-ray observations by Fermi-LAT and HESS strongly suggesting the existence of a non-vanishing cosmic magnetic field. Then, I will show how the consequent conversion of gravitational waves into radio waves might distort the CMB, leading to bounds that exceed those from current terrestrial experiments. I will discuss prospects from the future gamma-ray observatory CTA and argue that future advances in 21 cm astronomy might push these bounds below the Neff constraint on the radiation density present during the CMB formation.
In this seminar I will talk about a particular kind of light scalar field dark matter, namely those with self-interactions, which I will call self-interacting Bose-Einstein condensed (SIBEC) dark matter. These have been found to possibly solve some of the small-scale issues of LCDM, such as the core-cusp problem, by providing an interaction pressure that supports hydrostatic halo cores on the order of a kpc. Unlike their non-interacting ultra-light counterparts (fuzzy dark matter), there is not yet a large body of work dedicated to constraining the strength of the SIBEC-DM self-interaction using large-scale observables. I will present such constraints, which weakly rule out the self-interactions generally thought to be needed to solve the cusp-core problem in the simplest scenario of SIBEC-DM. I will also talk about ongoing efforts to study structure formation in a SIBEC-DM universe using cosmological simulations.
Understanding the reason behind the observed accelerating expansion of the Universe is one of the most notable puzzles in modern cosmology, and conceivably in fundamental physics. In the upcoming years, near future surveys will probe structure formation with unprecedented precision and will put firm constraints on the cosmological parameters, including those that describe properties of dark energy. In light of this, in the first part of my talk, I'm going to show a systematic extension of the Effective Field Theory of Dark Energy framework to non-linear clustering. As a first step, we have studied the k-essence model and have developed a relativistic N-body code, k-evolution. I'm going to talk about the k-evolution results, including the effect of k-essence perturbations on the matter and gravitational potential power spectra and the k-essence structures formed around the dark matter halos. In the second part of my talk, I'm going to show that for some choice of parameters the k-essence non-linearities suffer from a new instability and blow up in finite time. This talk will be based on the following publications and an ongoing work: arXiv:2107.14215, arXiv:2007.04968, arXiv:1910.01105, arXiv:1910.01104, arXiv:1906.04748.
I will start by briefly reviewing wormholes, exotic compact objects in which interest has recently been revived. In [J. Blazquez-Salcedo, C. Knoll, E. Radu, Phys. Rev. Lett. 126 (2021) no.10, 101102] asymptotically flat traversable wormhole solutions were obtained in Einstein-Dirac-Maxwell theory without using exotic matter. The normalizable numerical solutions found in that work require a peculiar behavior at the throat: the mirror symmetry relative to the throat leads to non-smoothness of the gravitational and matter fields. In particular, one must postulate a change of sign of the fermionic charge density at the throat, requiring the coexistence of particles and antiparticles without annihilation and placing a membrane of matter with specific properties at the throat. It appears that this kind of configuration could not exist in nature. We show that there are wormhole solutions which are asymmetric relative to the throat and endowed with smooth gravitational and matter fields, and are thereby free from all the above problems. This indicates that such wormhole configurations could also be supported in a realistic scenario.
Gravitational lensing of CMB photons by the matter distribution of the Universe can be both a blessing and a nuisance. It's a blessing because of the way it can be harnessed to map the structures responsible for the deflections, and from this, constrain any physics affecting the growth of cosmic structure, such as the sum of the neutrino masses or dark matter. But lensing is also a nuisance because it generates B-mode polarization which obscures the highly-sought-after primordial signal associated with gravitational waves generated during cosmic inflation, our most accessible portal to physics near the GUT scale. In this talk, I will focus on key systematic effects that need to be controlled in order to harness the full potential of the Simons Observatory (SO), CMB-S4, and other upcoming experiments to make progress in these exciting areas. In the first part of my talk, I will briefly review the ways in which emission from galaxies and clusters can bias power spectra and cross-correlations of CMB lensing reconstructions, and describe our ongoing efforts to understand these biases analytically. Then, in the second part, I will explain how the lensing contamination to CMB B-modes can be removed — what is known as delensing — and discuss our recent findings regarding the performance of different delensing methods. I will also summarize preparatory work to delens SO data, and highlight biases to watch out for (and how to mitigate them) when the matter proxy used for delensing is either the cosmic infrared background or a lensing reconstruction derived from the CMB itself.
The Hamiltonian of a particle orbiting in a static, spherically symmetric potential can be written as a function of only two actions, the angular momentum and the radial action, which is a conserved quantity in adiabatically evolving potentials (a so-called adiabatic invariant). For that reason, canonical action-angle coordinates are frequently used as a basis for perturbative calculations, for instance in Hamiltonian perturbation theory, when considering the long-term evolution of near-equilibrium systems. In impulsively (fast) evolving potentials, conservation of radial actions breaks down and actions change discontinuously by a non-deterministic amount. Here, I focus on the transition between adiabatic and impulsive evolution. I show that the evolution of radial actions in mildly time-dependent potentials is an oscillation around a constant value, whose amplitude is set by the rate at which the potential changes. As a consequence, the evolution of a distribution of radial actions is governed by a diffusion equation. Based on the derived drift and diffusion coefficients, I qualitatively discuss the non-linear regime and demonstrate that the non-linear evolution of radial action distributions is an asymmetric drift towards lower actions. I illustrate the relevance of these results with two astrophysical examples: accretion onto a cold dark matter (CDM) halo and the cusp-core transformation in a dwarf-sized self-interacting dark matter (SIDM) halo.
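In this mildly time-dependent regime, the statement that the distribution of radial actions obeys a diffusion equation can be written schematically as a one-dimensional Fokker-Planck equation, with $D_1$ and $D_2$ the drift and diffusion coefficients referred to above (generic notation, not the author's):
\[ \frac{\partial f(J_r,t)}{\partial t} \;=\; -\frac{\partial}{\partial J_r}\Big[ D_1(J_r)\, f \Big] + \frac{1}{2}\,\frac{\partial^2}{\partial J_r^2}\Big[ D_2(J_r)\, f \Big] . \]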
17/06/2021 -- Marco Astorino (INFN, Milano, Italy)
Place: Zoom seminar
Analytical and regular solutions in four-dimensional General Relativity representing multi-black-hole systems immersed in external gravitational fields are discussed. The external field background is composed of an infinite multipolar expansion, which allows one to regularise the conical singularities of an array of collinear static black holes. Charged, rotating, NUT and accelerating generalisations are presented. Limits to the binary Majumdar–Papapetrou, Bonnor–Swaminarayan and Bičák–Hoenselaers–Schmidt metrics are recovered.
10/06/2021 -- Francis-Yan Cyr-Racine (University of New Mexico)
Observations of dark matter structure at the smallest scales can tell us about physical processes taking place in the dark sector at very early times. Here, we point out that the presence of light degrees of freedom coupling to dark matter in the early Universe introduces a localized feature in the halo mass function. This leads to a mass function that is distinct in shape from either warm dark matter or cold dark matter, hence distinguishing these models from other leading classes of dark matter theories. We present analytical calculations of these mass functions and show that they closely match N-body simulation results. We also discuss the impact of these mass functions at high redshift on the 21-cm signal from cosmic dawn. We briefly discuss how current constraints on the abundance of small-scale dark matter structure do not directly apply to these models due to the multi-scale nature of their mass function.
27/05/2021 -- Daniel Litim (University of Sussex, UK)
Fixed points under the renormalisation group are key for a fundamental definition of quantum field theory. They can be free such as in asymptotic freedom of QCD, or interacting, such as in asymptotic safety. In this talk, I provide rigorous results for interacting UV and IR fixed points in general 4d QFTs with or without supersymmetry. I will also discuss the state of the art for fixed points in 4d quantum gravity with or without matter, including an overview of results and open challenges.
20/05/2021 -- Macarena Lagos (Columbia University, NY, USA)
Gravitational waves (GWs) allow us to probe the content of the Universe and the behaviour of gravity on cosmological scales, through information contained in their propagation. For instance, the presence of dynamical fields interacting non-minimally with gravity may induce a non-trivial propagation of GWs, changing their propagation speed, dispersion relation, or detected amplitude, among others. In this talk, I will discuss particular cosmological scenarios where GWs interact with another tensor field, such as in the theory of massive bigravity. I will illustrate explicitly how the GW signal from a coalescence of black holes gets distorted during propagation, generating specific features such as echoes of the GW signal emitted. These strong features suggest that stringent constraints on interacting GWs can be placed with current and future GW detectors.
13/05/2021 -- Swetha Bhagwat (Sapienza, University of Rome, Italy)
In this talk I will present recent work in which we propose a new test of GR. The gravitational waves emitted during the coalescence of binary black holes offer an excellent probe of the behaviour of strong gravity at different length scales. In this work, we propose a test called the merger-ringdown consistency test that focuses on probing the horizon-scale dynamics of strong gravity using binary black hole ringdowns. This test is a modification of the more traditional inspiral-merger-ringdown consistency test. I will present a proof-of-concept study of this test using simulated binary black hole ringdowns embedded in Einstein Telescope-like noise. Furthermore, we use a deep learning framework, setting a precedent for performing precision tests of gravity with neural networks.
06/05/2021 -- Viviana Niro (APC Paris, France)
The HAWC telescopes have recently revealed new spectra for gamma-ray sources in the Galactic plane. In this talk I will review the possibility of detecting these sources with KM3 detectors. I will consider, with particular emphasis, the 2HWC J1825-134 source. Amongst the HAWC sources, it is indeed the most luminous in the multi-TeV domain and therefore one of the first that should be searched for with a neutrino telescope in the northern hemisphere. I will show the prospects for detecting this source with the KM3NeT detector and comment on the possibilities for other neutrino telescopes. I will moreover consider the gamma-ray sources eHWC J1907+063, eHWC J2019+368 and 2HWC J1857+027. For these sources, I will show the predictions for neutrinos at the IceCube detector, presenting the calculation of the statistical significance for 10 and 20 years of running time, and I will comment on the current results reported by the collaboration.
29/04/2021 -- Emel Altas (Karamanoglu Mehmetbey University, Turkey)
Using the time evolution equations of (cosmological) general relativity in the first order Fischer-Marsden form, we construct an integral that measures the amount of nonstationary energy on a given spacelike hypersurface in D dimensions. We also construct analytical initial data for a slowly moving and rotating black hole for generic orientations of the linear momentum and the spin. We solve the Hamiltonian constraint approximately and work out the properties of the apparent horizon and show the dependence of its shape on the angle between the spin and the linear momentum. In particular, a dimple, whose location depends on the mentioned angle, arises on the two-sphere geometry of the apparent horizon. We exclusively work in the case of conformally flat initial metrics.
22/04/2021 -- Teodor Borislavov Vasilev (Universidad Complutense de Madrid, Spain)
The big rip, the little rip and the little sibling of the big rip are cosmological doomsdays predicted by some phantom dark energy models that could describe the future evolution of our own Universe. When the Universe evolves towards any of these future cosmic events, all bounded structures and, ultimately, space-time itself are ripped apart. Nevertheless, it is commonly believed that quantum gravity effects may smooth or avoid these (classical) singularities. In this talk I will review the occurrence of these rip-like events in the scheme of alternative metric $f(R)$ theories of gravity from both classical and quantum points of view. The quantum analysis will be performed in the framework of $f(R)$ quantum geometrodynamics. This is a canonical quantization procedure based on the Wheeler-DeWitt equation for the case of $f(R)$ theories of gravity. In this context, I will discuss the avoidance of these (classical) singularities by means of the DeWitt criterion.
15/04/2021 -- Famaey Benoit (Observatoire astronomique de Strasbourg, France)
In this talk I will summarize the intriguing phenomenology associated with galaxy scaling relations, a phenomenology which is still challenging to understand in the standard context. I will show how the MOND paradigm of Milgrom naturally solves most of these puzzles. I will also show, however, that it comes with new puzzles. I will focus the end of the talk on some of the main constraints that should be taken into account for MONDian model-building, especially on galaxy cluster scales.
25/03/2021 -- Johannes Noller (Cambridge University, DAMTP, UK)
Recent years have seen great progress in probing gravitational physics on a vast range of scales, from the very largest cosmological scales to the microscopic ones associated with high energy particle physics. In this talk I will give a whistle stop tour of some of the different physical systems we can use to learn more about gravity in this way, with a focus on how we can use them synoptically to learn more about dark energy. Stops will include gravitational waves emitted by binary systems, the cosmic microwave background, large scale structure formation, and (theoretical) bounds on the behaviour of gravity on scales inaccessible to current experiments.
18/03/2021 -- Kazuya Koyama (Portsmouth U., ICG, UK)
Future galaxy surveys such as Euclid, LSST and SKA will cover larger and larger scales where general relativistic effects become important. On the other hand, our study of large scale structure still relies on Newtonian N-body simulations. I show how standard Newtonian N-body simulations can be interpreted in terms of the weak-field limit of general relativity. Our framework allows the inclusion of radiation perturbations and the non-linear evolution of matter. I show how to construct the weak-field metric by combining Newtonian simulations with results from Einstein-Boltzmann codes. I discuss observational effects on weak lensing and ray tracing, identifying important relativistic corrections. Finally, I show that this framework can be extended to gravitational theories beyond general relativity.
11/03/2021 -- Raissa Mendes (Universidade Federal Fluminense, Brazil)
In this talk, I will discuss how extreme properties that may be present in the interior of some neutron stars can turn them into unique laboratories for tests of modified theories of gravity. In particular, I will focus on the case of scalar-tensor theories with screening mechanisms. These theories offer an interesting framework for cosmology, since the scalar degree of freedom could help to drive the accelerated expansion of the universe, while screening off its effects in solar system scales, where general relativity is very well tested. Although it is typically understood that screening becomes more effective in high density environments, we will show in a few interesting models how it can actually fail in the densest places in nature - the core of some neutron stars.
04/03/2021 -- Ali Seraj (Universite Libre de Bruxelles)
In this talk, I will review the gravitational wave memory effect in Einstein general relativity and discuss its relation to BMS symmetries. Then I will consider Brans-Dicke theory, which contains an additional gravitational degree of freedom. This mode is associated with novel memory effects. However, being a scalar, it is not obvious which symmetry this memory corresponds to. I will show that the memories associated with the breathing mode correspond to the asymptotic symmetries of a dual 2-form representation of the scalar field.
18/02/2021 -- François Larrouturou (IAP Paris, France)
In 2011, F. Hassan and R. Rosen achieved the construction of the first ghost-free theory of two interacting spin-2 fields. But despite its great elegance and interesting phenomenological implications, this theory of "bigravity" suffers from a gradient-type instability. Moreover, it postulates the existence of vectorial and scalar gravitational modes, the latter being severely constrained by observations of binary pulsars. This stimulated us to construct a "minimal" theory of bigravity, i.e. a theory of two interacting spin-2 fields that propagates only four tensorial degrees of freedom. This talk will first introduce the motivations that led to this "minimal" theory of bigravity and review its construction by a Hamiltonian procedure. I will then present its cosmological phenomenology and show that it provides a stable nonlinear completion of the cosmology of Hassan-Rosen bigravity. We will end by discussing interesting phenomenological features and possible ways to test the theory.
11/02/2021 -- Ondrej Hulik (Charles University, Prague)
I will review the formulation of generalized and exceptional geometry as the underlying geometry of supergravity and M-theory. Within a suitable generalized framework one can treat these distinct types of geometries as special cases of a single object, the "G-algebroid". I will sketch the main underlying idea and explain how this formulation is useful in treating T/U-duality.
04/02/2021 -- Giulia Cusin (Universite de Geneve, Switzerland)
There are two possible approaches to describe a population of astrophysical gravitational wave (GW) sources: one can focus on high signal-to-noise sources that can be detected individually, and build a catalogue. Alternatively, one can take a background-approach and study the incoherent superposition of GW signals emitted by the entire population (both resolved and unresolved sources) from the onset of stellar activity until today. A detailed description of signals from resolvable sources, and of the properties of a stochastic background, including propagation effects, is crucial to extract accurate information on the underlying source population. Moreover, these two observables contain complementary astrophysical information and, once combined, they can provide insight on the properties of a faint and distant sub-population that cannot be accessed with any other means of observation. In my talk I will outline the differences and the complementarity of these two approaches, from the point of view of observations and of theoretical modeling, and stress a few caveats to be kept in mind when deriving predictions to be compared with (present and future) datasets.
28/01/2021 -- Paolo Benincasa (IFT Madrid, Spain)
QFT in nearly dS space-times and, more generally, in FRW backgrounds allows us to describe correlations at the end of inflation. However, how to extract fundamental physics out of them is still a challenge: we do not even know how fundamental pillars such as causality and unitarity of the time evolution constrain them. In this talk I will present a recent program which aims to construct the wavefunction of the universe, which generates these correlations, directly from first principles without making any reference to time evolution: these observables naturally live at the boundary of the nearly dS/FRW space-times and the time evolution is integrated out. I will discuss two approaches: the first aims to construct the wavefunction from the knowledge of its general analytic properties, in a similar fashion to scattering amplitudes in flat space-time (which can be formulated directly from on-shell data with no reference to fields whatsoever); the second aims to find new mathematical objects, with their own first-principle definition, which have the very same properties we ascribe to the wavefunction of the universe, with all the basic physical principles such as causality and unitarity emerging from their intrinsic definition.
21/01/2021 -- Timothy Anson (Universite Paris-Saclay, France)
Starting from a recently constructed stealth Kerr solution of higher order scalar tensor theory, I will discuss disformal versions of the Kerr spacetime with a constant disformal factor and a regular scalar field. While the disformed metric has only a ring singularity and is asymptotically quite similar to Kerr, it is neither Ricci flat nor circular. Non-circularity has far-reaching consequences for the structure of the solution. In particular, I will discuss the properties of important hypersurfaces in the disformed spacetime: the ergosphere, the stationary limit and the event horizon, and highlight the differences with the Kerr metric.
14/01/2021 -- Siddharth Prabhu (Tata Institute of Fundamental Research, India)
In the last couple of decades, we have learnt a great deal about quantum gravity and its holographic nature in asymptotically AdS spacetimes. Here, we explore this idea in asymptotically flat spacetimes with the following question: Can an observer on the boundary of the spacetime distinguish between two states that are deemed distinguishable by an observer in the bulk? We argue that semiclassical gravity is an effective tool to answer this question, and extrapolate its results to make a few reasonable assumptions regarding the low energy structure of any complete theory of quantum gravity. Using these, we argue that an asymptotic observer indeed has access to all the information in a theory with only massless bulk excitations, provided they are at the past boundary of future null infinity. We also show that information available in any cut of future null infinity is also available in any later cut, but the converse doesn't hold. Similar results hold for past null infinity. We will comment on several interesting questions that this line of investigation sheds light on.
17/12/2020 -- Oksana Iarygina (Leiden University, NL)
The reheating era in the early universe, which connects inflation and big-bang nucleosynthesis, is still very weakly constrained. However, inefficient preheating can lead to a prolonged matter-dominated phase after inflation, changing the time during inflation when the Cosmic Microwave Background (CMB) modes exit the horizon. This shifts CMB predictions and thus can break the degeneracy of otherwise indistinguishable inflation models. Typically, models that allow a UV completion include many distinct fields, often with curved field-space manifolds. Based on arXiv:2005.00528, arXiv:1810.02804, the present talk focuses on the physical mass scales that control the dynamics and observable predictions of all multi-field models with a non-zero field-space curvature: the Hessian of the potential, the turning rates of the trajectory and the field-space curvatures. We analyse how their interplay affects reheating and shifts inflationary predictions. We also demonstrate the existence of a region in parameter space where the symmetric and asymmetric multi-field alpha-attractors, which are known for the universality of their single-field inflationary predictions, are explicitly not the same: one preheats and one does not. This leads to a different cosmic history for the two models, with one possibly exhibiting a long matter-dominated phase, and a shift in the observational predictions for ns and r.
10/12/2020 -- Horng Sheng Chia (Institute for Advanced Study, Princeton, USA)
Black holes are never isolated in realistic astrophysical environments; instead, they are often perturbed by complicated external tidal fields. How does a black hole respond to these tidal perturbations? In this talk, I will discuss both the conservative and dissipative responses of the Kerr black hole to a weak and adiabatic gravitational field. The former describes how the black hole would change its shape due to these tidal interactions, and is quantified by the so-called "Love numbers". On the other hand, the latter describes how energy and angular momentum are exchanged between the black hole and its tidal environment due to the absorptive nature of the event horizon. I will show that the Love numbers of the Kerr black hole vanish identically — in other words, you cannot stretch a black hole. I will also describe how the Kerr black hole's dissipative response implies that energy and angular momentum can either be lost to or extracted from the black hole, with the latter process commonly known as the black hole superradiance. I will end by discussing how these tidal responses leave distinct imprints on the gravitational waves emitted by binary black holes.
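For context, in the static, Newtonian-limit parametrisation the conservative tidal response is usually encoded in a tidal deformability $\lambda$, or equivalently a dimensionless Love number $k_2$,
\[ Q_{ij} = -\lambda\, \mathcal{E}_{ij}, \qquad \lambda = \frac{2}{3}\,\frac{k_2 R^5}{G} , \]
where $Q_{ij}$ is the induced quadrupole and $\mathcal{E}_{ij}$ the external tidal field. This is the standard textbook definition quoted for orientation; the statement in the talk is that the corresponding response coefficients vanish identically for Kerr.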
03/12/2020 -- Antony Lewis (University of Sussex, UK)
Cosmological measurements from the cosmic microwave background, large-scale structure, lensing, supernovae and other data are now able to constrain multiple cosmological parameters to percent-level precision within the context of the standard Lambda-CDM cosmology. Disagreements between these measurements assuming Lambda-CDM could provide strong evidence for beyond-Lambda-CDM physics. I review the status of current measurements and their agreement (or otherwise) within the standard cosmological model. I'll mention some possible types of model extensions that could help to resolve H0 tensions and how new physics might be pinned down by forthcoming data. Galaxy and CMB lensing provide interesting complementary constraints; I'll show the state-of-the-art results from CMB lensing, compare with galaxy lensing and discuss possible implications.
26/11/2020 -- Eiichiro Komatsu (Max-Planck-Institute for Astrophysics, Germany)
Polarised light of the cosmic microwave background, the remnant light of the Big Bang, is sensitive to parity-violating physics. In this presentation we report on a new measurement of parity violation from polarisation data of the European Space Agency (ESA)'s Planck satellite. The statistical significance of the measured signal is 2.4 sigma. If confirmed with higher statistical significance in future, it would have important implications for the elusive nature of dark matter and dark energy.
19/11/2020 -- Mikhail Shaposhnikov (EPFL)
It has been well known since the works of Utiyama and Kibble that the gravitational force can be obtained by gauging the Lorentz group, which puts gravity on the same footing as the Standard Model fields. The resulting theory - Einstein-Cartan gravity - happens to be very interesting. First, it incorporates Higgs inflation at energies below the onset of strong coupling of the theory. Second, it contains a four-fermion interaction that originates from torsion associated with spin degrees of freedom. This interaction leads to a novel universal mechanism for producing singlet fermions in the Early Universe. These fermions can play the role of dark matter particles. Finally, it may generate the electroweak symmetry breaking by a non-perturbative gravitational effect.
12/11/2020 -- William Barker (Kavli Institute for Cosmology, Cambridge, UK)
Several novel Poincare gauge theories of gravity (curvature and torsion) were recently found to be unitary/power-counting renormalizable in the weak regime, and to pass solar system tests [1,2]. We show these theories contain LCDM as an attractor state, despite neither the Einstein-Hilbert nor the cosmological constant term appearing in the action: the only extra parameter (xLCDM) adds effective dark radiation to relieve the Hubble tension [3]. We show that the phenomenology of the general ten-parameter theory, including the novel theories, can be easily understood through a non-canonical bi-scalar-tensor analogue [4]. We discuss the ongoing Dirac-Bergmann analysis of the Hamiltonian in the strong regime. ([1] arXiv:1812.02675, [2] arXiv:1910.14197, [3] arXiv:2003.02690, [4] arXiv:2006.03581)
05/11/2020 -- Fedor Bezrukov (University of Manchester, UK)
I'll review the attempts to understand the seemingly simple process of reheating in Higgs inflation. Although reheating can readily be expected to happen at rather high temperatures, its details leave an imprint on the number of inflationary e-foldings and, thus, on the predictions for CMB parameters. The quest to understand reheating took some time, starting from a simple but incomplete approach and evolving into the realisation that a careful study of the strongly coupled regime is inevitable. The approach of perturbative UV completion of the model by $R^2$ inflation was hoped to provide an immediate answer to the preheating dynamics in a weakly coupled theory, but turned into an ongoing study of evolution in non-linear potentials. I will also mention that in some cases even pure non-regularised Higgs inflation can allow for calculable predictions for preheating.
22/10/2020 -- Eugene Lim (King's College London, UK)
Inflation is now the paradigmatic theory of the Big Bang. But is this status deserved? I will describe the conceptual and theoretical challenges that Inflation still faces, and argue that we should keep an open mind. In particular, I will argue that while inflation claims to be a theory of the initial conditions of the Universe, successful inflation actually depends on an intimate interplay between its own initial conditions and the inflationary model. I will show how one might go about probing this interplay by testing whether inflation can begin if its own initial conditions are not homogeneous.
15/10/2020 -- Filippo Camilloni (University of Perugia, IT and Niels Bohr Institute, DK)
Force-free electrodynamics is a non-linear regime of Maxwell's equations often employed to provide a minimal non-trivial level of description for pulsar and black hole magnetospheres. For a solution of this system to be physically meaningful the field has to be magnetically dominated, F^2 = B^2 - E^2 > 0; however, no analytic solution is known to respect this requirement in the background of a highly spinning black hole. In this talk I will show how the Near-Horizon Extreme Kerr (NHEK) region might play a crucial role in the construction of sensible models of extreme Kerr magnetospheres. Any stationary and axisymmetric force-free solution in the extreme Kerr background is observed to converge to an attractor in the NHEK region. We used this attractor as a universal starting point to develop a new perturbative approach, showing that at second order in perturbation theory it is possible to find magnetically dominated force-free fields. A similar attractor mechanism occurs in the Near-Horizon Near-Extreme Kerr (near-NHEK) region of a nearly extreme Kerr black hole, thus providing a way to extend this formalism outside extremality.
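For reference, the force-free system referred to here consists of Maxwell's equations supplemented by the vanishing of the Lorentz force density, and magnetic domination is the positivity of the quadratic invariant:
\[ \nabla_\mu F^{\mu\nu} = J^\nu, \qquad F_{\mu\nu}J^{\nu} = 0, \qquad \tfrac{1}{2}F_{\mu\nu}F^{\mu\nu} = B^2 - E^2 > 0 . \]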
08/10/2020 -- Anne Green (University of Nottingham, UK)
Place: Lecture Hall
Primordial Black Holes (PBHs) are black holes formed in the early Universe, for instance from the collapse of large density perturbations generated by inflation. The discovery of gravitational waves from mergers of ~10 Solar mass black hole binaries has led to increased interest in PBHs as a dark matter candidate. I will review the formation of PBHs and the limits on their abundance, with particular emphasis on microlensing constraints in the Solar mass region. I will also emphasise key open questions in the field (e.g. clustering, and methods for constraining asteroid mass PBHs).
12/03/2020 -- Keigo Shimada (Tokyo Institute of Technology)
Place: 226
Scalar-tensor theories in metric-affine geometry are formulated. General Relativity is currently the most successful gravitational theory, having passed countless observational tests. However, in recent years it has been noticed that GR cannot explain some cosmological phenomena such as inflation, dark energy and dark matter. To address this, countless alternative theories of gravity beyond General Relativity have been proposed. However, most require the geometry to be Riemannian, just as in GR. In this talk, it will be shown how one can extend theories of gravity by 'deforming' Riemannian geometry into what is called metric-affine geometry, in which not only the metric but also the connection is an independent variable determined by the gravitational action. By applying the metric-affine formalism to scalar-tensor theories, one notices different and fruitful characteristics compared to the Riemannian counterpart. In particular, through a novel symmetry of the connection called 'projective symmetry', one may find natural ways to eliminate ghosts caused by higher derivatives. Finally, some possible applications will be discussed. References: Phys. Rev. D 98 (2018) no.4, 044038; Phys. Rev. D 100, 044037 (2019).
27/02/2020 -- Elias Kiritsis (APC, Paris & Crete University)
I will discuss ideas on how gravity can be an emergent interaction in QFT, what guarantees the emergent diffeomorphism invariance, what are its general features and properties and what could be the possible implications for realistic gravitational physics.
02/03/2020 -- Tarek Anous (Amsterdam University)
The BFSS matrix model provides an example of gauge-theory / gravity duality where the gauge theory is a model of ordinary quantum mechanics with no spatial subsystems. If there exists a connection between areas and entropies in this model similar to the Ryu-Takayanagi formula, the entropies must be more general than the usual subsystem entanglement entropies. I will give a brief overview of the BFSS/D0 brane geometry duality and describe general features of the extremal surfaces in the bulk. I will then discuss the possible entropic quantities in the matrix model that could be dual to the 'regulated areas' (which I will define) of these extremal surfaces.
20/02/2020 -- Georgios Loukes-Gerakopoulos (Astronomical Institute, Prague)
In this talk three different studies we have undertaken to address cosmological issues, like dark energy, will be presented and their results will be discussed. In these studies we have tried to remain as model-agnostic as possible. In particular, our first study (arXiv:1902.11051) concerns a cosmic fluid obeying rest-mass conservation with an unspecified equation of state (EoS) in an unspecified background, assuming only that the fluid's speed of sound is positive and less than the speed of light. Our second study (arXiv:2001.00825) performs a dynamical analysis of a barotropic fluid of unspecified EoS with positive energy density in spatially curved Friedmann-Robertson-Walker (FRW) spacetimes, while the third study (arXiv:1905.08512) performs a dynamical analysis of a broad class of non-minimally coupled real scalar fields in spatially curved FRW spacetime with an unspecified positive potential.
13/02/2020 -- Paolo Creminelli (ICTP, Trieste)
Cosmic inflation makes the universe flat and homogeneous, but under which conditions will inflation start? I will discuss some analytical results showing, under very weak assumptions, that inflation starts somewhere, and some (partial) results about a de Sitter no-hair theorem.
30/01/2020 -- Tomas Ledvinka (Charles University, Prague)
Despite the tremendous success of mathematical general relativity, which revealed among other things surprising features of the geometry of rotating (Kerr) black holes and developed approximation techniques to study the early stages of their inspiral, the need to describe the merger of two black holes completely led to substantial progress in numerical relativity. This field necessarily uses techniques of modern computer science to amass and command the number-crunching capabilities of current computers, as well as numerical methods for partial differential equations, but successful computer simulations also required new kinds of answers to the questions "what is a black hole" and "what kind of equations are the Einstein ones". From this perspective I will also mention some results on the hyperbolicity analysis of 3+1 reductions of the Einstein equations, the choice of coordinates, and horizon formation in the collapse of gravitational waves into a black hole.
29/01/2020 -- Harold Erbin (University of Turin)
Machine learning has revolutionized most fields it has penetrated, and the range of its applications is growing rapidly. Recent years have seen efforts towards bringing the tools of machine learning to lattice QFT. After giving a general idea of what machine learning is, I will present two recent results on lattice QFT: 1) computing the Casimir energy for a 3d QFT with arbitrary Dirichlet boundary conditions, 2) predicting the critical temperature of the confinement phase transition in 2+1 dimensional QED at different lattice sizes.
23/01/2020 -- Chris Clarkson (Queen Mary University of London)
Over the coming decade new surveys will map the cosmos over huge volumes. This will allow us to probe general relativity on unprecedented scales. I shall discuss some of the new relativistic effects that may be significant on these scales. Though corrections to the Newtonian picture of observations of structure formation are small, they should be detectable, and offer new insights into gravity on scales approaching the horizon.
16/12/2019 -- Pavel Motoloch (CITA, Toronto)
Measurements of galaxy angular momenta can, at least in principle, be used to probe fundamental physics such as primordial gravitational waves and non-Gaussianity. In my talk I explain how galaxy spins arise from the initial density perturbations, describe how they are sensitive to various physical parameters of interest and finally detail our related observational effort.
12/12/2019 -- Filippo Vernizzi (Universite Paris Saclay)
The observed accelerated expansion of the Universe opens up the possibility that general relativity is modified on cosmological scales. While this has motivated the theoretical study of many alternative theories that will be tested by the next generation of cosmic large-scale structure surveys, I will show that the recent observations of gravitational waves by LIGO/Virgo have dramatic consequences for these theories.
28/11/2019 -- Guillermo Ballesteros (Autonoma University, Madrid)
I will discuss the idea that black holes may constitute a large fraction of the Universe's dark matter, focusing mostly on their formation from large primordial fluctuations generated during inflation. I will summarize the ups and downs of this mechanism and explain some ideas that help to alleviate its main shortcomings.
21/11/2019 -- Giovanni Acquaviva (Charles University, Prague, CZ)
It is known that the entropy of a system contained in a certain volume is bounded from above by the entropy of a black hole with the corresponding surface area. We relate this universal bound to the existence of fundamental degrees of freedom and provide model-independent considerations about their features. In particular, both the geometry and the fields propagating on it are seen as phenomena emergent from more fundamental dynamics, in analogy with many examples in condensed matter physics. An immediate consequence is that, even though the fundamental evolution is considered unitary, the fields develop an entanglement with the spacetime geometry, leading to an effective non-unitary evolution at the emergent level. We exemplify some consequences of this scenario by providing a toy model of black hole evaporation: the entanglement between geometry and fields is interpreted at our low-energy scales as an effective loss of information in Hawking radiation. A question currently under scrutiny is how unitary, continuum quantum field theory can emerge from such a fundamental picture.
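The bound invoked at the start of the abstract is the holographic one: for a region whose boundary has area $A$, the entropy satisfies, in one common form (with $k_B = c = 1$),
\[ S \;\le\; S_{\rm BH} \;=\; \frac{A}{4 G \hbar} , \]
i.e. it is bounded by the Bekenstein-Hawking entropy of a black hole with the same horizon area.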
04/11/2019 -- Dionysios Anninos (King's College London, UK)
We discuss quantum fields on a Euclidean sphere and their relation to de Sitter space. Various cases are considered, including particles of different spins, and masses. Some emphasis will be placed on the three-dimensional case. Time permitting, we will consider higher spin theories.
17/10/2019 -- Sk Jahanur Hoque (Chennai Mathematical Institute, India)
We discuss different notions of charges for asymptotically de Sitter space-times. We present a covariant phase space construction of Hamiltonian generators of asymptotic symmetries with 'Dirichlet' boundary conditions in de Sitter spacetime, extending a previous study of Jäger. We show that the de Sitter charges so defined are identical to those of Ashtekar, Bonga, and Kesavan (ABK). We then present a comparison of ABK charges with other notions of de Sitter charges. We compare ABK charges with counterterm charges, showing that they differ only by a constant offset, which is determined in terms of the boundary metric alone. We also compare ABK charges with charges defined by Kelly and Marolf at spatial infinity of de Sitter spacetime. When the formalisms can be compared, we show that the two definitions agree.
10/10/2019 -- Cosimo Bambi (Fudan University)
Einstein's theory of general relativity was proposed over 100 years ago and has successfully passed a large number of observational tests in weak gravitational fields. However, the strong field regime is still largely unexplored, and there are many modified and alternative theories that have the same predictions as Einstein's gravity for weak fields and present deviations only when gravity becomes strong. X-ray reflection spectroscopy is potentially a powerful tool for testing the strong gravity region around astrophysical black holes with electromagnetic radiation. In this talk, I will present the reflection model RELXILL_NK designed for testing the metric around black holes and the current constraints on possible new physics from the analysis of a few sources.
03/10/2019 -- Dalimil Mazac (Simons Center for Geometry and Physics, Stony Brook University)
Ultraviolet consistency of quantum gravitational theories requires the presence of new states at or below the Planck scale. In the setting of AdS3/CFT2, this statement follows from the modular bootstrap. It has been a long-standing problem to improve the best upper bound on the mass of the lightest non-graviton state in this context. I will explain how this can be done using the "analytic extremal functionals", which were originally developed for the four-point bootstrap in 1D. The new analytic upper bound on the dimension of the lightest nontrivial primary is c/8.503... at large c (central charge) -- an improvement over the previous best bound c/6 due to Hellerman. I will also explain that the sphere packing problem of Euclidean geometry can be studied using a version of the modular bootstrap. The analytic functionals apply also in this context. They lead directly to the recent solution of the sphere-packing problem in 8 and 24 dimensions due to Viazovska and Cohn+Kumar+Miller+Radchenko+Viazovska.
The talk will be based on https://arxiv.org/pdf/1905.01319.pdf
02/10/2019 -- Thales Azevedo (Institute of Physics - UFRJ)
Recently, a gauge theory built out of dimension-six operators such as (DF)^2 appeared in the double-copy construction of conformal supergravity amplitudes. In this talk, I will show how theories of that kind are related to conventional, sectorized and ambitwistor string theories.
01/10/2019 -- Poulami Nandi (Indian Institute of Technology Kanpur, India)
Conformal Carrollian groups are known to be isomorphic to Bondi-Metzner-Sachs (BMS) groups, which arise as the asymptotic symmetries at the null boundary of Minkowski spacetime. The Carrollian algebra is obtained from the Poincare algebra by taking the speed of light to zero, and the conformal version follows similarly. In this work, we construct explicit examples of conformal Carrollian field theories as limits of relativistic conformal theories, which include Carrollian versions of scalars, fermions, electromagnetism, Yang-Mills theory and general gauge theories coupled to matter fields. Due to the isomorphism with BMS symmetries, these field theories form prototypical examples of holographic duals to gravitational theories in asymptotically flat spacetimes. The intricacies of the limiting procedure lead to a plethora of different Carrollian sectors in the gauge theories we consider. Concentrating on the equations of motion of these theories, we show that even in dimension d = 4 there is an infinite enhancement of the underlying symmetry structure. Our analysis is general enough to suggest that this infinite enhancement is a generic feature of the ultra-relativistic limit that we consider.
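As a minimal illustration of the contraction mentioned above (the textbook Carroll limit, not the authors' specific construction): keeping $b = v/c^2$ fixed while sending $c \to 0$, a Lorentz boost degenerates into a Carroll boost, under which space is untouched and only time shifts,
\[ x'^i = x^i, \qquad t' = t - b_i\, x^i , \]
so that light cones collapse onto the time axis, in contrast with the Galilean limit $c \to \infty$.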
30/09/2019 -- Antoine Bourget (Imperial College London, UK)
I will explore the geometrical structure of Higgs branches of quantum field theories with 8 supercharges in 3, 4, 5 and 6 dimensions. They are symplectic singularities, and as such admit a decomposition (or foliation) into so-called symplectic leaves, which are related to each other by transverse slices. We identify this foliation with the pattern of partial Higgs mechanism of the theory and, using brane systems and recently introduced notions of magnetic quivers and quiver subtraction, we formalise the rules to obtain the Hasse diagram which encodes the structure of the foliation.
19/09/2019 -- Gizem Sengor (Czech Academy of Sciences, CZ)
The symmetry group of de Sitter can accommodate fields of various mass and spin among its unitary irreducible representations. These unitary representations are labeled by the spin and scaling dimension. The scaling dimension depends on the mass and spin of the field and can take purely imaginary values. This talk focuses on scalar fields on de Sitter and aims to show that even the purely imaginary weights correspond to unitary operators on de Sitter, in contrast to the case in Anti-de Sitter.
By studying the late time limit of scalar field solutions with different masses (conformally coupled, heavy and light fields), we identify the unitary representations they correspond to with respect to their scaling dimension and recognize them as late time boundary operators. The definition of a positive definite inner product on de Sitter is subtle. For operators with real scaling dimension it involves a so-called intertwining operator. By carefully accounting for the presence or absence of the intertwining operator, we show that all of the identified boundary operators have positive definite norm and are thus unitary representations.
12/08/2019 -- Camilo Garcia-Cely (DESY, Hamburg)
In this talk, I will discuss MeV spin-2 dark matter. In particular, I will show that such a particle typically self-interacts and undergoes self-annihilations via 3-to-2 processes. I will discuss its production mechanisms and also identify the regions of the parameter space where self-interactions can alleviate the discrepancies at small scales between the predictions of the collisionless dark matter paradigm and cosmological N-body simulations.
01/08/2019 -- Andrei Frolov (Simon Fraser University)
The simple story of primordial gravitational waves produced by inflation sourcing B-modes of Cosmic Microwave Background polarization is in reality complicated by the fact that we are looking through the coloured glass of astrophysical foregrounds originating much closer to home. I will talk about the amplitude of primordial B-modes we expect from inflation, the characterization of polarized dust foregrounds from Planck Legacy data, and where we go from here. In particular, I will show a new reconstruction of the large-scale galactic magnetic field responsible for the patterns we see in the dust polarization, and explain how it allows accurate modelling of the polarized dust emission for the design and analysis of future CMB experiments.
22/07/2019 -- Agnes Ferte (Jet Propulsion Laboratory, Pasadena, USA)
The universe has been going through a phase of accelerated expansion for the last 6 billion years. Understanding the origin of this cosmic acceleration is one of the main goals of observational cosmology: is it caused by a cosmological constant or by dynamical dark energy? Or is it a sign that we don't understand the laws of gravity on cosmological scales? In this talk I will first describe weak lensing, a powerful observable that helps address these fundamental questions. I will then give an overview of the current experimental context for weak lensing. In the main part of the talk, I will present my results on tests of gravity on large scales through weak lensing, and end by presenting the Precision Projector Laboratory, whose goal is to characterize the new generation of detectors that will be used in future galaxy surveys.
19/07/2019 -- Xingang Chen (Harvard University)
How to model-independently distinguish the inflation scenario from alternatives to inflation, as the origin of the Big Bang Cosmology, is an important challenge in modern cosmology. In this talk, we show that massive fields in the primordial universe function as standard clocks and imprint clock signals in the density perturbations, which directly record the scale factor of the universe as a function of time, a(t). This function is the defining property of any primordial universe scenario, so can be used to identify the inflation scenario, or one of its alternatives, in a model-independent fashion. The signals also encode the mass and spin spectra of the particle physics at the energy scale of the primordial universe.
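For orientation, a minimal illustration of what the function $a(t)$ distinguishes (standard textbook forms, added here for context rather than taken from the talk): inflation corresponds to quasi-exponential expansion, $a(t) \propto e^{Ht}$ with $H$ nearly constant, while, for example, a matter-like contracting alternative has $a(t) \propto (-t)^{2/3}$ with $t < 0$. The clock signals imprinted in the density perturbations record which such behaviour actually occurred.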
16/07/2019 -- Leonardo Modesto (SUSTech, Shenzhen, China )
In order to have a unitary and finite quantum gravity, we propose a weakly nonlocal completion of the Einstein-Hilbert action compatible with causality (a Shapiro time advance never occurs in Nonlocal Gravity). As a consequence of finiteness, there is no Weyl anomaly and the theory turns out to be conformally invariant at the classical as well as the quantum level. Therefore, finite nonlocal quantum gravity is a conformally invariant theory in the spontaneously broken phase of the Weyl symmetry. The coupling to matter enjoys the same properties with and without supersymmetry. As an application, Weyl conformal symmetry solves the black hole singularity issue and the cosmological singularity problem, otherwise unavoidable in a generally covariant local or non-local gravitational theory. Following and extending the seminal paper by Narlikar and Kembhavi, we are able to provide explicit examples of singularity-free exact black hole solutions. The absence of divergences is based on the finiteness of the curvature invariants and on geodesic completeness. Indeed, no massive or massless particles can reach the former singularity in a finite amount of proper time or of affine parameter.
15/07/2019 -- Mirek Rapcak (Perimeter Institute, Canada)
I will discuss generalizations of the $\mathcal{W}_{1+\infty}$ algebra denoted as $\mathcal{W}_{m|n\times \infty}$, generated by a super-matrix of fields for each integral spin $i=1,2,3,\dots$. Truncations of the algebra are in correspondence with holomorphic functions on singular Calabi-Yau three-folds given by the zero locus of $xy=z^mw^n$. I propose a free-field realization of such truncations generalizing the Miura transformation for $\mathcal{W}_N$ algebras. Relations in the ring of holomorphic functions lead to bosonisation-like relations between different free-field realizations. The algebras are expected to be AGT dual to gauge theories supported on divisors corresponding to the zero loci of such holomorphic functions. The discussion uncovers many non-trivial relations between vertex operator algebras, algebraic geometry and gauge theory.
11/07/2019 -- David Svoboda (Perimeter Institute, Canada)
We present a para-complex analogue of Generalized Kähler (GK) geometry, generalized para-Kähler (GpK) geometry. We show that, just as GK geometry describes the targets of 2D (2,2) supersymmetric sigma models, GpK geometry describes the targets of (2,2) twisted supersymmetric sigma models. We then discuss topological twists of such sigma models. Because the involved geometries are para-complex, they provide new examples -- in particular of topological theories -- on manifolds that are not complex, contrary to the usual (2,2) case.
08/07/2019 -- Adolfo Cisterna (Chile University)
In this talk a new method for the construction of homogeneous black strings is presented. The method, which is based on a particular scalar-dressing of the extra dimensions of the spacetime under consideration, allows us to construct the black string generalization of the AdS Schwarzschild black hole in any dimension in General Relativity. Furthermore, the method can be generalized to provide the black string extension of the Boulware-Deser black hole, or the black string extension of any black hole contained in Lovelock theory. It will also be discussed how to construct black strings with non-trivial matter fields.
04/07/2019 -- Cesar Arias (Riemann Center for Geometry and Physics (Leibniz University of Hannover))
It has been proposed that Vasiliev's nonlinear equations can be extracted from a cubic action principle of the Chern–Simons type, built up from a set of differential forms, a trace operation and a star product inherited from the associative algebra, and a nilpotent differential containing the (gauged) de Rham differential. In this talk, we argue that all of these algebraic structures can be naturally modeled by a class of two-dimensional topological models, referred to as differential Poisson sigma models, which we analyse in some detail.
24/06/2019 -- Andrea Fontanella (Instituto de Fisica Teorica UAM/CSIC, Madrid)
I will present how we found the hidden relativistic symmetry in the context of AdS2 and AdS3 integrable superstring theories (arXiv:1903.10759). Then I shall discuss how such symmetry can be used in AdS3 to write down the Thermodynamic Bethe Ansatz for massless non-relativistic modes from the one available in the literature for massless relativistic modes.
10/06/2019 -- Dmitry Gorbunov (Institute for Nuclear Research, Moscow)
Inflation can explain why the Universe is flat and homogeneous at large scales. However, it is not falsifiable unless it is also responsible for the matter perturbations sourcing cosmic structure formation and the anisotropy of the cosmic microwave background. Moreover, even in that case, different models often give (almost) the same predictions for the cosmological spectra, and it would be nice to test these inflationary models in other ways. Higgs inflation is one example that naturally provides such independent tests. A recently suggested modification with an $R^2$-term solves the strong coupling problem of the original Higgs inflation, allowing for perturbative matching of high-energy and low-energy model coupling constants, which is required to perform such direct tests. A remarkable feature of the model is instant preheating due to tachyonic instabilities in the Higgs and vector boson sectors, which calls for a special study.
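For context, a schematic form of Higgs inflation extended by an $R^2$-term, as commonly written in the literature (the normalizations and the symbols $h$, $\xi$, $\beta$, $\lambda$ are illustrative here, not taken verbatim from the talk):
$S \simeq \int d^4x \sqrt{-g}\left[\frac{M_P^2 + \xi h^2}{2}\,R + \beta R^2 - \frac{1}{2}(\partial h)^2 - \frac{\lambda}{4}h^4\right]$,
where $h$ is the Higgs field, $\xi$ its non-minimal coupling to gravity, and $\beta$ the coefficient of the new $R^2$-term.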
07/06/2019 -- Mairi Sakellariadou (King's College London)
The direct detections of Gravitational Waves (GWs) by the Advanced LIGO and Advanced Virgo interferometers have opened a new era of astronomy. Aside from the current detections associated with individual loud events, one expects a superposition of coincident unresolved events leading to a stochastic GW background (SGWB). After briefly reviewing the SGWB, I will discuss how the anisotropic distribution of sources and the inhomogeneous geometry of the intervening spacetime can induce anisotropies. I will consider a SGWB produced by (1) cosmic strings and (2) compact binary coalescences. I will show that while the SGWB monopole is sensitive to the particular model one uses, the anisotropic angular power spectrum is basically insensitive to the cosmic string model or to the nature of the binary black hole population. I will then discuss the noise in the anisotropies of the astrophysical GW background sourced by the finite sampling of both the galaxy distribution and the compact binary coalescence event rate.
04/06/2019 -- Tanmay Vachaspati (Arizona State University)
Place: FZU Lecture Hall
We show that certain quantum systems in non-trivial classical backgrounds can be mapped into entirely classical systems in higher dimensions. The evolution of the classical system can be used to obtain the particle production rate as well as the quantum backreaction on the classical background. The technique has many potential applications, including breather/oscillon dynamics, Hawking radiation and black hole evaporation, and particle production during inflation.
21/05/2019 -- Julian Adamek (Queen Mary University of London)
I will give a brief overview of the latest code release v1.2 of gevolution, with particular attention to the new features that facilitate the analysis on observers' past light cones. This provides a framework to include all interesting relativistic contributions in the prediction of large-scale structure observables.
13/05/2019 -- Tomislav Prokopec (Utrecht University)
In this talk I will give an introduction on how to compute quantum corrections in inflation. I will review quantum effects in interacting scalar theories and scalar quantum electrodynamics, and summarize the quantum one-loop corrections to dynamical gravitons and scalar gravitational potentials. Understanding these types of corrections is of crucial importance for our understanding of how large the quantum corrections to cosmological perturbations during inflation can be.
10/04/2019 -- Ogan Ozsoy (Swansea University)
Observations of Cosmic Microwave Background (CMB) radiation appear to be consistent with the simplest realizations of the inflationary paradigm: single field slow-roll inflation. However, in practice, CMB probes can provide us with information about the inflationary dynamics only for a limited range of scales, corresponding to a small portion of the dynamics compared to the time span of inflation required to solve the standard problems of Hot Big Bang cosmology. This leaves us with a large portion of the dynamics, together with a vast range of scales, that is largely uncharted and yet to be explored. In this talk, I will focus on two possible observational windows, together with a simple primordial mechanism, that can provide us the opportunity to probe the inflationary dynamics on small scales compared to the CMB. In this context, I will show two exemplary scenarios that have the potential to accomplish this goal through enhanced scalar and tensor fluctuations during inflation.
25/03/2019 -- James Bonifacio (Case Western Reserve University)
A free massless scalar in flat space has an infinite number of shift symmetries. In (A)dS, each of these symmetries is preserved only for particles with particular discrete masses. I will show how these shift symmetries generalize to massive higher-spin particles and explain how these are related to partially massless symmetries. For the case of scalar fields, I discuss deformations of the underlying symmetry algebras and whether there exist invariant interactions. This leads to a ghost-free theory in (A)dS that is invariant under a deformed quadratic shift symmetry and which reduces in flat space to the special Galileon. This theory has a rich structure of interactions that are completely fixed by the nonlinear symmetry, including a nontrivial potential. Lastly, I will speculate on possible generalizations to interacting massive higher-spin particles.
18/03/2019 -- Masahide Yamaguchi (Tokyo Institute of Technology)
We propose a new class of higher derivative scalar-tensor theories without Ostrogradsky ghost instabilities. The construction of our theory is originally motivated by a scalar field with a spacelike gradient, which enables us to fix a gauge in which the scalar field appears to be non-dynamical. We dub such a gauge the spatial gauge. Though the scalar field loses its dynamics, the spatial gauge fixing breaks time diffeomorphism invariance and thus excites a scalar mode in the gravity sector. We generalize this idea and construct a general class of scalar-tensor theories through a non-dynamical scalar field, which preserves only spatial covariance. We perform a Hamiltonian analysis and confirm that there are at most three (two tensor and one scalar) dynamical degrees of freedom, which ensures the absence of a degree of freedom due to higher derivatives. Our construction opens a new branch of scalar-tensor theories with higher derivatives.
14/03/2019 -- Stefano Camera (University of Turin)
'Synergy' means 'the interaction of two or more agents to obtain a combined effect greater than the sum of their separate effects'. With this in mind, in this talk I shall present my current lines of research, all focussed on developing novel combinations of astrophysical and cosmological observables with the aim of testing the foundations of the concordance cosmological model. Specifically, I shall discuss how innovative cross-correlations can mitigate the impact of systematic effects, noise and cosmic variance, to the end of studying dark energy and modified gravity models, detecting particle dark matter signatures, and testing gravity and inflation on the largest cosmic scales. All, with a view on the current and oncoming generation of cosmological experiments and large-scale surveys.
28/02/2019 -- Gizem Sengor (FZU)
Cosmological backgrounds in general possess time dependence. On these backgrounds, scalar degrees of freedom that transform nonlinearly under time diffeomorphisms arise to guarantee the time diffeomorphism invariance of the action. In the early universe these time dependent backgrounds can be attributed to the presence of time dependent scalar fields that dominate the energy momentum density of the universe. The species of scalar degree of freedom that transforms nonlinearly under time diffeomorphisms then corresponds to perturbations of the scalar field that gives rise to the time dependence of the cosmological background at a given era. Effective field theories (EFT) of cosmological perturbations generalize the interactions between cosmological perturbations of different species based on their transformation properties under diffeomorphisms. Preheating refers to the stage at the end of inflation where the inflaton field continues to dominate the energy momentum density but transfers its energy to other fields through resonance, as opposed to perturbative decays. The aim of this talk is to consider general interactions between the perturbations of the inflaton and a second scalar field during preheating, to understand the scales these interactions introduce and to explore which species propagate as effective degrees of freedom at different scales.
27/02/2019 -- Antonio Racioppi (NICPB, Tallinn, Estonia)
We study models of chaotic inflation where the inflaton field $\phi$ is coupled non-minimally to gravity via $\xi \phi^n R$, a.k.a. $\xi$-attractors. We focus on the Palatini formulation of gravity and show that in this case Starobinsky inflation is no longer a universal attractor. On the other hand, we prove that, once quantum corrections are taken into account, the strong coupling limit of (a certain class of) $\xi$-attractor models moves into linear inflation regardless of the adopted gravity formulation (metric or Palatini).
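For orientation, a schematic form of the non-minimally coupled models under discussion (an illustrative sketch with simplified normalizations; $V(\phi)$ and the prefactors are not taken verbatim from the talk):
$S = \int d^4x \sqrt{-g}\left[\frac{M_P^2 + \xi\phi^n}{2}\,R - \frac{1}{2}(\partial\phi)^2 - V(\phi)\right]$,
where in the metric formulation $R$ is built from the Levi-Civita connection, while in the Palatini formulation it is built from an independent connection whose equation of motion is solved and substituted back.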
25/02/2019 -- Jakub Vicha (NICPB, Tallinn, Estonia)
The Pierre Auger Observatory is currently the largest cosmic-ray detector, covering ultra-high energies from 10^18 eV to 10^20 eV. The exposure accumulated since 2004 has enabled measurements of unprecedented precision of the energy spectrum and mass composition, as well as anisotropy searches. These measurements are slowly guiding us to the sources of ultra-high energy cosmic rays, which remain a tantalizing mystery of physics. A brief introduction to the field of ultra-high energy cosmic rays will be given, together with a description of the Pierre Auger Observatory and its detection techniques. Then, an overview of the most interesting results will follow.
18/02/2019 -- Vojtech Witzany (Astronomical Institute of the Czech Academy of Sciences)
A compact stellar mass object inspiralling onto a massive black hole deviates from geodesic motion due to radiation-reaction forces as well as finite-size effects. Such deviations need to be included with sufficient precision in waveform models for the upcoming space-based gravitational-wave detector LISA. I will present the formulation and solution of the Hamilton-Jacobi equation of a generic geodesic in Kerr space-time perturbed by the spin-curvature coupling, the leading-order finite-size effect. In turn, this solution allows one to compute a number of observables, such as the turning points of the orbits as well as the fundamental frequencies of the motion. These results essentially solve the question of conservative finite-size effects in extreme mass ratio inspirals.
10/12/2018 -- Shinji Mukohyama (Kyoto University and Tokyo University)
It is generally believed that modifications of general relativity inevitably introduce extra physical degree(s) of freedom. In this talk I argue that this is not the case, by constructing modified gravity theories with two local physical degrees of freedom. After classifying such theories into two types, I show explicit examples and discuss their cosmology and phenomenology.
26/11/2018 -- Ondrej Pejcha (Charles University, Prague)
Interest in the transient astronomical sky has increased tremendously thanks to modern time-domain surveys, which have discovered unexpected diversity in previously known phenomena and identified many new classes of transients. I will focus on two types of transients that are important for the nucleosynthesis in the Universe and the evolution of gravitational wave sources. I will argue that the deaths of massive stars marked by core-collapse supernovae are highly sensitive to initial conditions, which leads to a complex pattern of neutron star and black hole formation. Many stars are members of binary systems and their evolution can be significantly affected by a catastrophic interaction, which results in the rapid loss of mass, energy and angular momentum, and sometimes even merger of the binary star. This phase was recently connected to a newly identified group of red transients. I will present surprising findings from the theoretical interpretation of observations of these red transients.
23/11/2018 -- Rachel Houtz (IFT Madrid)
In this talk I present a model with an enlarged color sector which solves the strong CP problem via new massless fermions. QCD color is unified with another non-Abelian group with a large confinement scale. The spontaneous breaking of the unified color group provides a source of naturally large axion mass due to small-size instantons, and as a result no very light axions are present in the low-energy spectrum. The axion scale may be around a few TeV, which translates into observable signals at colliders. This model naturally enlarges the parameter space for axions which solve the strong CP problem well beyond that of invisible axion models.
22/11/2018 -- Antonino Marciano (Fudan University, Shanghai)
We discuss how inflation and bounce cosmology can emerge from a four-fermion interaction induced by torsion. Inflation can arise from coupling torsion to Standard Model fermions, without any need to introduce new scalar particles beyond the Standard Model. Within this picture, the inflaton field can be a composite field of the SM particles and arises from a Nambu-Jona-Lasinio mechanism in curved space-time, non-minimally coupled with the Ricci scalar. The model we specify predicts a small value of the r-parameter, namely r ~ 10^-3 - 10^-2, which would nonetheless be detectable by the next generation of experiments, including BICEP 3 and the ALI-CMB projects. On the other hand, bouncing cosmology can also be accounted for in terms of fermion condensates, with the remarkable appearance of an ekpyrotic phenomenon, which is solely due to the quantum corrections to the fermion potential. We finally comment on the richness of the phenomenological perspectives encoded in both schemes.
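For reference, the r-parameter quoted above is the standard tensor-to-scalar ratio of the primordial power spectra (this definition is added here for convenience and is not part of the original abstract):
$r \equiv \dfrac{\mathcal{P}_T(k_*)}{\mathcal{P}_\zeta(k_*)}$,
evaluated at a chosen pivot scale $k_*$.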
20/11/2018 -- Andrea Addazi (Fudan University, Shanghai)
Place: 117 or 226
We will discuss dark matter models which are related to dark first order phase transitions (D.F.O.P.T.) in the early Universe. A D.F.O.P.T. sources an efficient materialization of dark bubbles. Dark bubbles can scatter off each other, producing a gravitational radiation background detectable in the next generation of experiments. We will discuss the specific framework of Majoron dark matter, as a model for neutrino mass and Warm Dark Matter genesis. Majorons may also be detected in upcoming laboratory experiments: i) electron-positron high luminosity colliders; ii) neutrino-less double-beta decay processes; iii) experiments searching for baryon-violating neutron-antineutron transitions.
05/11/2018 -- Subodh Patil (Niels Bohr Institute, Copenhagen)
In this talk, we present an amusing observation that primordial gravitational waves, if ever observed, can be used to bound the hidden field content of the universe. This is because a large number of hidden fields can resum to potentially observable logarithmic runnings for the graviton two-point function in the context of single field inflation, courtesy of a `large N' expansion. This allows one to translate ever more precise bounds on the tensor to scalar consistency relation into bounds on the hidden field content of the universe, with potential implications for phenomenological constructions that address naturalness with a large number of species. Along the way, we'll review how the cutoff for an EFT that includes gravity changes as we incorporate matter, identifying two distinct scales for gravity. We'll also need to address certain subtleties regarding loop corrections on cosmological backgrounds, especially with regards to the correct implementation of dimensional regularization.
25/10/2018 -- Alexey Golovnev (St.-Petersburg State University)
I will review the basic construction of teleparallel gravity and its modifications, with a special emphasis on local Lorentz transformations in the tetrad space. One of these modified models, the f(T), is widely used for cosmological model building. I will explain how (linear) cosmological perturbations should be treated in f(T) and in similar models. Finally I will discuss the problem of dynamical structure and Hamiltonian analysis for modified teleparallel gravity.
16/10/2018 -- Lasha Berezhiani (Max Planck Institute of Physics, Munich)
After a brief review of some of the empirical correlations between the dark and baryonic sectors within galaxies, I will discuss a novel theory of dark matter superfluidity as a potential explanation of these observations. I will argue that, depending on the mass and self-interaction cross section of dark matter particles, a superfluid may in principle be formed in the central regions of galactic halos. After this, I will discuss the criteria that need to be met by the superfluid properties in order to account for the above-mentioned empirical correlations.
8/10/2018 -- Katherine Freese (Nordita, Sweden)
Inflation, a period of accelerated expansion at the beginning of the Universe, seeks to explain the (otherwise mysterious) large scale smoothness, isotropy, and "oldness" of the Universe. An important product of this inflationary epoch is the origin of density perturbations that are the seeds of galaxies and other large structures today. The density perturbations and gravitational waves produced by inflation provide sensitive tests of both the inflationary paradigm and of individual inflationary models. In the past decade predictions of inflation have been tested by Cosmic Microwave Background data, most recently with the Planck satellite observations. The basic idea of inflation matches the data and sensitive tests have been made of individual models. Planck data have ruled out most inflation models. I will discuss the status of Natural Inflation, a model that my collaborators and I originally proposed in 1990, as well as modern variants. Natural inflation uses "axions" as the inflaton, where the term "axion" is used generically for a field with a flat potential as a result of a shift symmetry. The successes of inflation as well as the potential discoveries in upcoming data will be emphasized.
17/09/2018 -- Elena De Paoli (Marseille, CPT)
We identify a symplectic potential for general relativity in tetrad and connection variables that is fully gauge-invariant, using the freedom to add surface terms. When torsion vanishes, it does not lead to surface charges associated with the internal Lorentz transformations, and reduces exactly to the symplectic potential given by the Einstein-Hilbert action. In particular, it reproduces the Komar form when the variation is a Lie derivative, and the geometric expression in terms of extrinsic curvature and 2d corner data for a general variation. As a direct application of this analysis we prove that the first law of black hole mechanics follows from the Noether identity associated with the covariant Lie derivative, and that it is independent of the ambiguities in the symplectic potential provided one takes into account the presence of non-trivial Lorentz charges that these ambiguities can introduce.
30/07/2018 -- Shun-Pei Miao (National Cheng Kung University, Taiwan)
We consider an additional fine-tuning problem which afflicts scalar-driven models of inflation. The problem is that successful reheating requires the inflaton to be coupled to ordinary matter, and quantum fluctuations of this matter induce Coleman-Weinberg potentials which are not Planck-suppressed. Unlike the flat space case, these potentials depend upon a still-unknown nonlocal functional of the metric which agrees with the Hubble parameter for de Sitter. Such a potential cannot be completely subtracted off by any local action. We numerically consider the effect of subtracting it off at the beginning of inflation in a simple model. For fermions the effect is to prevent inflation from ending unless the Yukawa coupling to the inflaton is so small as to endanger reheating. For gauge bosons the effect is to make inflation end almost instantly, again unless the gauge charge is unacceptably small.
27/07/2018 -- Richard Woodard (University of Florida, USA)
MOND is a phenomenological model which modifies the extreme weak field regime of Newtonian gravity so as to explain galactic rotation curves without dark matter. If correct, it must be the non-relativistic, static limit of some relativistic modified gravity theory. I show how the only possible metric-based modification of gravity is nonlocal, and I construct the action using the Tully-Fisher relation and weak lensing. Then I explore the consequences of this model for cosmology. This talk is based on four arXiv papers: 1106.4984, 1405.0393, 1608.07858 and 1804.01669.
26/07/2018 -- Dam Thanh Son (Kadanoff Center for Theoretical Physics, University of Chicago, USA)
25/07/2018 -- Oleg Teryaev (Joint Institute for Nuclear Research, Dubna, Russia)
The energy-momentum tensor matrix elements describe the coupling of a particle to the gravitational field. They are responsible for the action of gravity on particle spin, which may result, in particular, in neutrino spin-flip in an anisotropic Universe. One of the proton's form factors, related to the pressure of quarks, was recently extracted experimentally from data obtained at Jefferson Lab (Nature 557, 396, May 17, 2018). The pressure is extremely large, with a distribution analogous to that in a macroscopic stable object, like a star.
24/07/2018 -- Renato Costa (University of Cape Town, South Africa)
The singularity problem is one of the hints that the $\Lambda$CDM model has to be extended at very high energies. We use the guiding principle of symmetries to extend the FLRW background to an explicitly T-dual one which is well described by double field theory (DFT). We show that, at the level of the background, one can have a singularity-free cosmology once the dual time coordinate introduced by DFT is inversely related to the standard time coordinate of general relativity. We also show that introducing matter in DFT cosmology naturally leads to the correct equation of state for the winding modes and to a clearer interpretation of the connection between the two time coordinates.
18/07/2018 -- Massimiliano Rinaldi (Trento University, Italy)
In this talk I will present a scalar-tensor model of modified gravity that is globally scale-invariant. Such a symmetry spontaneously breaks to give rise to a mass scale, and an inflationary scenario naturally emerges. The same model will be presented both in the Jordan and in the Einstein frame and the compatibility with current observations will be discussed.
17/07/2018 -- Alessandro Drago (Ferrara University, Italy)
Place: room 117
I will discuss what we have learnt from the first merger of two neutron stars observed in gravitational waves and in electromagnetic waves. My discussion will include information coming from new theoretical analyses and also from X-ray data collected by satellites.
12/07/2018 -- Jarah Evslin (Institute of Modern Physics, Lanzhou, China)
There are at least two 3-sigma anomalies in the cosmic expansion rate. One is the discrepancy between the local-Universe measurements of the Hubble constant, by Riess et al. and also using strong lensing time delays, and the best-fit Planck result assuming LCDM. The other is the Lyman-alpha forest Baryon Acoustic Oscillation (BAO) measurement, which disagrees with LCDM when combined with other BAO measurements or Planck. We note that unanchored BAO provides a robust geometric probe, free of all but the most basic cosmological assumptions. Using it, we find that if these anomalies are confirmed, the first necessarily implies a change in pre-recombination cosmology, while the second implies dynamical dark energy between the redshifts z=2 and z=0.6.
03/07/2018 -- Yi-Zen Chu (National Central University, Taiwan)
Despite being associated with massless particles, electromagnetic and gravitational waves do not propagate strictly on the null cone in curved spacetimes. They also develop tails, traveling inside the light cone. This tail effect, in particular, provides a contribution to the self-force of compact bodies orbiting super-massive black holes, which in turn are believed to be important sources of gravitational waves for future space-based detectors like LISA, TianQin and Taiji. For the first portion of my talk I will describe my efforts to explore novel methods to understand the tail effect in curved geometries -- primarily in cosmological spacetimes. Some of the spin-offs include the (small) discovery of a new type of gravitational wave memory effect induced by tails. If time permits, for the second part of my talk, I will address a seemingly basic aspect of gravitational wave theory that -- as far as I am aware -- has not received proper clarification in the literature to date. Specifically, the "transverse-traceless" gravitational wave (GW) is usually touted as the gauge-invariant observable, while practical computations do not actually strictly yield this "TT" GW. Furthermore, the gauge-invariant TT GW is actually acausally related to its matter source, as can be seen by simply computing its associated Green's function. I will clarify the situation for the spin-1 photon, as an analogy to the gravitational case.
28/06/2018 -- Andreas Albrecht (University of California at Davis, USA)
I review the current status of cosmic inflation, including successes and open questions. I also scrutinize the question of the famous cosmological "tuning puzzles" and analyze the extent to which inflation does and does not resolve these. I explain why I think the open questions about inflation are deeply scientifically exciting. They should not be regarded as "failures" of inflation, nor should they be swept under the rug.
Decoherence and "einselection" have important roles in quantum physics, and are understood to be important in the emergence of classical behavior. Traditional discussions of einselection all assume an arrow of time. The extent to which einselection (and thus the emergence of classicality) is tied to an arrow of time has possibly deep implications for cosmology. In this talk I present some early results on this topic based on calculations in a toy model related to the classic Caldeira Leggett model, which I solve unitarily in all regimes. This talk will include introductory material, and will not assume prior familiarity with decoherence, einselection or cosmology.
26/06/2018 -- Eugeny Babichev (LPT, Orsay, France)
A Hamiltonian density bounded from below implies that the lowest-energy state is stable. I will argue that, contrary to common lore, an unbounded Hamiltonian density does not necessarily imply an instability: this is a coordinate-dependent statement. I will give the correct stability criterion, using the relative orientation of the causal cones for all propagating degrees of freedom. I will then apply this criterion to an exact Schwarzschild-de Sitter solution of a beyond-Horndeski theory, while taking into account the recent experimental constraint regarding the speed of gravitational waves coming from GW170817.
25/06/2018 -- Wojciech Hellwing (Warsaw, Poland)
While Earth-based laboratories keep trying very hard to elucidate the nature of the elusive dark matter particles, another very promising avenue to test and/or falsify potential dark matter candidates resides in astrophysical observations. In this context our own Galaxy - the Milky Way - with its unique set of satellites shows potential to serve as an extraterrestrial laboratory for dark matter. The physical nature of dark matter particles, and especially the differences between the main candidate, the neutralino of Cold Dark Matter (CDM), and its currently strongest competitor, the sterile neutrino of Warm Dark Matter, may lead to significant differences in the properties of dwarf galaxies. Such objects are dominated (by mass) by their host DM haloes and therefore provide a unique view on the physical properties of DM. I shall discuss our recent efforts to use the state-of-the-art galaxy formation hydrodynamical simulation scheme of the EAGLE project, as well as high-resolution Copernicus Complexio N-body simulations, to study the formation of Milky Way-like systems in CDM and WDM scenarios. Our results provide new insights into potential ways to use astronomical observations to falsify the CDM paradigm and test its competitors.
15/06/2018 -- Emre Kahya (Istanbul)
I will discuss quantum gravitational loop effects on observable quantities such as the curvature power spectrum and the primordial non-Gaussianity of Cosmic Microwave Background (CMB) radiation. We first review the previously shown case where one gets a time dependence for the zeta-zeta correlator due to loop corrections. Then we investigate the effect of these loop corrections on the primordial non-Gaussianity of the CMB.
The gravitational wave (GW) signal (GW170817) from the coalescence of binary neutron stars was seen simultaneously throughout the electromagnetic (EM) spectrum, from radio waves to gamma rays. We point out that this simultaneous detection rules out a class of modified gravity theories and provides further indirect evidence for the existence of dark matter.
06/06/2018 -- Sébastien Clesse (University of Namur)
I will present the current status of primordial black holes as a Dark Matter candidate, a scenario that has recently seen a strong revival of interest. Formation models, astrophysical and cosmological constraints, as well as observations pointing towards the possible existence of primordial black holes, with abundances comparable to that of dark matter, will be reviewed and discussed, including the gravitational waves from massive black hole mergers detected by LIGO/VIRGO. Finding evidence of even a single primordial black hole could have groundbreaking consequences for our understanding of the early Universe and of High Energy physics.
21/05/2018 -- Maksym Ovchynnikov (Leiden University)
It is well-known that the Standard Model of particle physics does not explain dark matter, neutrino masses or the matter-antimatter asymmetry of the Universe, and therefore has to be extended. This means that there should exist some new particles that are either too heavy to have been found so far (the "energy frontier") or interact too feebly (the "intensity frontier"). In the absence of a good guiding principle predicting where we should look for new physics, we consider the so-called "portals" — renormalizable interactions between new particles and the Standard Model. We review these portals and their phenomenology at the "intensity frontier" (in particular at SHiP). We pay special attention to searches for dark matter particles through these portals and discuss the cosmological status of "light dark matter".
17/05/2018 -- Lorenzo Pizzuti (Trieste)
I will provide a brief overview of my work concerning constraints on modified gravity models obtained using galaxy cluster mass profile determinations. In particular, I will present the results of a paper in which we combined the information given by the kinematics of galaxies in clusters with the information provided by lensing analyses for 2 galaxy clusters of the CLASH/CLASH-VLT collaboration, to get constraints on f(R) models. In order to discuss the applicability of the proposed method in view of future imaging and spectroscopic surveys, I will further introduce my current study of cosmological simulations, aiming at estimating and calibrating the impact of systematics.
15/05/2018 -- Santiago Casas (CEA Paris-Saclay)
The large freedom in the free functions affecting linear perturbations in theories of modified gravity and dark energy leads to the burden of parametrization: the observational constraints depend strongly on the way these free functions are parametrized. Using a model-independent test of gravity alleviates this problem and even frees us from assumptions about initial conditions, galaxy bias or the nature of dark matter. In this talk I will present the first model-independent reconstruction of the gravitational slip as a function of redshift, using present data on large scale structure and the Hubble function. For future data I will show how we can use these tests to rule out entire classes of modified gravity models, and how we have to handle, in a Bayesian way, the constraints from models which are very close to LCDM and might not even be clearly distinguishable with next generation surveys.
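For reference, a common definition of the gravitational slip mentioned above (conventions vary between authors; the choice here is illustrative and not necessarily the one used in the talk): in the Newtonian gauge with line element
$ds^2 = -(1+2\Psi)\,dt^2 + a^2(t)\,(1-2\Phi)\,d\mathbf{x}^2$,
the slip is the ratio of the two metric potentials, $\eta \equiv \Phi/\Psi$, which equals unity in general relativity in the absence of anisotropic stress.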
07/05/2018 -- Diego Blas (King's College London)
The high quality of pulsar timing data makes it a fantastic resource for understanding gravitational phenomena. Traditionally this has been used to test general relativity. In this talk I will describe a less explored possibility: using pulsar timing to understand dark matter properties. I will focus on (possibly) detectable modifications of binary orbits due to the interaction with dark matter in different scenarios.
04/05/2018 -- Peter Tinyakov (University Of Brussels)
Compact stars - neutron stars and white dwarfs - can capture and accumulate dark matter. Even though only a tiny fraction of the star's mass can be accumulated in realistic conditions, this may lead to dramatic consequences such as the collapse of the star into a black hole. Thus, the mere existence of neutron stars and white dwarfs sets constraints on DM models where this phenomenon occurs. Alternatively, if only a fraction of NSs are converted into black holes, these may be identified with gravitational wave detectors: the masses of such BHs are around one solar mass, while stellar evolution does not produce BHs lighter than ~2 solar masses. We will discuss in detail two examples: DM composed of primordial black holes, and asymmetric DM with self-interactions.
03/05/2018 -- Jan Novák (Technical University of Liberec)
We investigate the Universe at the late stage of its evolution and inside the cell of uniformity, 150 - 370 Mpc. We consider the Universe to be filled at these scales with dust-like matter, a minimally coupled Galileon field and radiation as matter sources. We use the mechanical approach, in which the peculiar velocities of the inhomogeneities as well as the fluctuations of the other perfect fluids are nonrelativistic. Such fluids are said to be coupled, because they are concentrated around the inhomogeneities. We investigate the conditions under which the Galileon field can become coupled. We know from previous work that at the background level a coupled scalar field behaves as a two-component perfect fluid: a network of frustrated cosmic strings and a cosmological constant. We find a correction for the Galileon field which behaves like matter. We investigate a similar task for K-essence models, and we try to find the conditions under which a K-essence scalar field with the most general form of its action can become coupled. We investigate at the background level three particular examples of K-essence models: (1) the pure kinetic K-essence field, (2) a K-essence with a constant speed of sound and (3) the K-essence model with the Lagrangian $bX + cX^2 - V(\phi)$. We demonstrate that if the K-essence is coupled, all these K-essence models take the form of multicomponent perfect fluids where one of the components is the cosmological constant. Therefore, they can provide the late-time cosmic acceleration and be simultaneously compatible with the mechanical approach.
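For orientation, the generic K-essence setup referred to above can be written schematically as (sign conventions vary; this sketch is for illustration and is not taken verbatim from the talk):
$S_\phi = \int d^4x \sqrt{-g}\; K(\phi, X), \qquad X = -\tfrac{1}{2}\, g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi$,
with the pure kinetic case corresponding to $K = K(X)$ and the third example above to $K = bX + cX^2 - V(\phi)$.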
10/04/2018 -- Luca Marzola (National Institute of Chemical Physics and Biophysics, Tallinn)
In this talk I review the origin of the 21-cm line and explain why the particle physics community is making such a big deal out of it. I will show two possible ways to use the new results of the EDGES experiment and try to convince you that, maybe, you should have a look into the matter too.
12/04/2018 -- Tomi Koivisto (NORDITA, Stockholm)
Teleparallel gravity is formulated in terms of a flat spacetime affine connection. In the symmetric teleparallelism, the affine connection is further torsion-free. These simplifications may improve the theory of gravity both technically (only first derivatives and no boundary term in the action) and conceptually (resolution of the gravitational energy, separation of the inertial effects). In the talk we will review these formulations and discuss some recent developments in the symmetric teleparallel geometry that were reported in the pre-print arXiv:1803.10185.
27/03/2018 -- Julian Adamek (Queen Mary University London)
I present a general (relativistic) framework for numerical simulations of cosmic large-scale structure in the context of generic metric theories of gravity. The full spacetime metric is evolved within a weak-field description, while cold dark matter is represented as an N-body ensemble that follows timelike geodesics. The framework allows one to study phenomena that lead to generic modifications of the metric perturbations, either by introducing new relativistic sources or by modifying the theory of gravity.
20/03/2018 -- Pat Stengel (University of Stockholm)
The Standard Model Higgs boson, which has previously been shown to develop an effective vacuum expectation value during inflation, can give rise to large particle masses during inflation and reheating, leading to temporary blocking of the reheating process and a lower reheat temperature after inflation. We study the effects on the multiple stages of reheating: resonant particle production (preheating) as well as perturbative decays from coherent oscillations of the inflaton field. Specifically, we study both the cases of the inflaton coupling to Standard Model fermions through Yukawa interactions as well as to Abelian gauge fields through a Chern-Simons term. We find that, in the case of perturbative inflaton decay to SM fermions, reheating can be delayed due to Higgs blocking and the reheat temperature can decrease by up to an order of magnitude. In the case of gauge-reheating, Higgs-generated masses of the gauge fields can suppress preheating even for large inflaton-gauge couplings. In extreme cases, preheating can be shut down completely and must be substituted by perturbative decay as the dominant reheating channel. Finally, we discuss the distribution of reheating temperatures in different Hubble patches, arising from the stochastic nature of the Higgs VEV during inflation and its implications for the generation of both adiabatic and isocurvature fluctuations.
15/03/2018 -- Ilidio Lopes (University of Lisbon)
For the past decade asteroseismology has opened a new window into studying the physics inside stars. Today, it is well known that more than ten thousand stars have been found to exhibit solar-like oscillations. This large amount of high-quality data for stars of different masses and sizes is having a profound impact on our understanding of the structure of stars in the main and post-main sequence, and on the formation and evolution of stellar clusters in our Galaxy. Moreover, it can be used to test new fundamental laws of nature, including the existence of dark matter. While many particle candidates have been proposed as the main constituents of dark matter, the impact of such candidates on the evolution of stars has been sparsely addressed. In this talk, I will focus on the impact that dark matter has on the evolution of stars, and how stellar oscillations have been used to constrain the properties of dark matter. I will discuss the potential of the next generation of asteroseismic missions to help us address this problem.
20/02/2018 -- Eric A. Bergshoeff (University of Groningen)
A Schroedinger equation proposed for the GMP gapped spin-2 mode of fractional Quantum Hall states is found from a novel non-relativistic limit, applicable only in 2+1 dimensions, of the massive spin-2 Fierz-Pauli field equations. It is also found from a novel null reduction of the linearized Einstein field equations in 3+1 dimensions, and in this context a uniform distribution of spin-2 particles implies, via a Brinkmann-wave solution of the non-linear Einstein equations, a confining harmonic oscillator potential for the individual particles.
11/12/2017 -- Martin Roček (SUNY, Stony Brook)
Place: auditorium
WZW models and generalized geometry
I'll review (2,2) superspace and explore how to describe the generalized Kähler structure of (2,2) supersymmetric WZW models, presenting surprising new results for SU(3).
8/11/2017 -- Pierre Fleury (University of Geneva)
Weak lensing with finite beams
The standard theory of weak gravitational lensing relies on the infinitesimal light beam approximation. In this context, images are distorted by convergence and shear, the respective sources of which unphysically depend on the resolution of the distribution of matter—the so-called Ricci-Weyl problem. In this talk, I will discuss a strong-lensing-inspired formalism designed to deal with finite light beams. I will show that it solves the Ricci-Weyl problem. Furthermore, finite-size effects systematically enhance the beam's distortions, which could affect the interpretation of cosmic shear data.
6/11/2017 -- Andrei Gruzinov (New York University)
Particle production by real (astrophysical) black holes
The rate of production of light bosons (if they exist) by astrophysical black holes is calculated. Observability of this effect is discussed.
23/10/2017 -- George Pappas (Lisbon Centre for Astrophysics)
Neutron stars as matter and gravity laboratories
Compact objects in general and neutron stars (NSs) in particular open a window to some of the most extreme physics we can find in nature. On the one hand, in the interior of NSs we can find matter at very extreme densities, exceeding nuclear densities and anything we can probe in the laboratory, while on the other hand NSs are related to the strongest gravitational fields next only to those found in black holes. Therefore studying NSs gives us access to both supranuclear densities and strong gravity, and can be used to get information on and test our theories of matter (the equation of state) and gravity. The relevant properties of the structure of NSs are encoded in the spacetime around them, and by studying the astrophysical processes that take place around NSs we can map that spacetime and extract these properties (i.e., the multipole moments, the equation of state, etc.). In this talk we will discuss these properties of NSs and how they are related to the properties of the spacetime around them, both in GR and in one of the proposed alternative theories of gravity. We will also talk about the relation of these properties to astrophysical observables and how one could tell these theories apart.
16/10/2017 -- Tessa Baker (Oxford University)
Tests of Beyond-Einstein Gravity
Corrections to General Relativity on large distance scales are under consideration as an explanation of cosmic acceleration. However, studying extended gravity models on an individual basis is a labour-intensive way of testing these ideas. I will explain how instead EFT-inspired parameterised methods can be used as a powerful and efficient way of testing for deviations from GR. I will outline the theoretical foundations of these techniques, and describe the current status of their observational constraints.
8/9/2017 -- Dani Figueroa (CERN)
Higgs Cosmology: implications of the Higgs for the early Universe.
I will discuss some of the consequences arising when we take into account the existence, and hence the presence, of the Standard Model Higgs during Inflation. In particular, I will derive stringent constraints on the couplings of the Higgs to the inflationary and gravitational sectors. I will also discuss the circumstances under which the Higgs can be responsible for the origin of the Standard Model species required by the 'hot Big Bang' paradigm. If there is enough time, I will also discuss the implications of all this for primordial gravitational waves.
6/9/2017 -- Sergey Ketov (Tokyo Metropolitan University)
Starobinsky inflation in supergravity
I begin with an introduction to Starobinsky inflation based on (R+R^2) gravity, in light of Planck data about CMB. Next, I introduce the supergravity extensions of Starobinsky inflation, review their problems and possible solutions. I conclude with a discussion of reheating after Starobinsky inflation in the context of supergravity.
29/6/2017 -- Bruce Bassett (University of Cape Town)
Rise of the Machine: AI and Fundamental Science
With the recent spectacular advances in machine learning we are naturally confronted with the question of the limits of Artificial Intelligence (AI). Here we will review how AI is being used in astronomy, discuss the future role of AI in fundamental science and finally discuss whether AI will ever be able to undertake its own original research.
28/6/2017 -- Dmitry Semikoz (APC, Paris)
Signatures of a two million year old nearby supernova in antimatter data.
In this talk I will show how one can explain multiple anomalies in the cosmic ray data by adding the effects of a 2 million year old nearby supernova to a static model of galactic cosmic rays. In particular, this supernova can explain the excess of positrons and antiprotons above 20 GeV found by PAMELA and AMS-02, the discrepancy in the slopes of the spectra of cosmic ray protons and heavier nuclei in the TeV-PeV energy range, and the plateau in the cosmic ray dipole anisotropy in the 2-50 TeV energy range. The same supernova was responsible for the Fe60 measured in the ocean crust.
2/6/2017 -- David Alonso (University of Oxford)
Science with future ground-based CMB experiments.
After the findings of Planck, the immediate future of CMB observations lies with the next generation of ground-based experiments. In this talk I will first introduce the most compelling science objectives for these experiments in combination with future large-scale-structure surveys. Then I will describe a number of novel observational methods to tackle these objectives enabled by the enhanced angular resolution and reduced noise levels of Stage-3 and Stage-4 observatories, as well as the main challenges they will face.
22/5/2017 -- Mathieu Langer (Université Paris-Sud)
Magnetizing the intergalactic medium during reionization.
An increasing amount of evidence indicates that cosmological sheets, filaments and voids may be substantially magnetised. The origin of magnetic fields in the Intergalactic Medium is currently uncertain. It is now well known that non-standard extensions of the physics of the Standard Model are capable of providing mechanisms that could magnetise the Universe at large. Much less well known is the fact that the standard, classical physics of matter-radiation interactions actually possesses the same potential. After briefly reviewing our current knowledge about magnetic fields on the largest scales, I will discuss a magnetogenesis mechanism based on the exchange of momentum between hard photons and electrons in an inhomogeneous Intergalactic Medium. Operating in the neighbourhood of ionising sources during the Epoch of Reionization, this mechanism is capable of generating magnetic seeds of relevant strengths on scales comparable to the distance between ionising sources. In addition, summing up the contributions of all ionising sources and taking into account the distribution of gas inhomogeneities, I will show that this mechanism leaves the IGM, at the end of Reionization, with a level of magnetization that might account for the current magnetic field strengths in the cosmic web.
--based on Durrive & Langer, MNRAS, 2015, and Durrive et al. MNRAS 2017 (submitted)--
16/5/2017 -- Sergey Sibiryakov (CERN, EPFL, INR RAS)
Counts-in-cells statistics of cosmic structure and non-perturbative methods of quantum field theory
I will show how the probability distribution for matter over/under-densities in spherical patches of the universe can be derived from first principles using the instanton technique borrowed from quantum field theory. The spherical collapse solution plays the role of an instanton, whereas deviations from sphericity are consistently accounted for by a Gaussian integral over small perturbations around the instanton. The method is valid even for large — by a factor of ten — deviations from the mean density and provides a way to probe the dynamics of dark matter and the statistics of initial fluctuations in the regime where perturbative treatment does not apply.
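For orientation, the structure of such a saddle-point (instanton) calculation can be summarized schematically as (an illustrative sketch, not the talk's precise result):
$\mathcal{P}(\delta_W) \;\propto\; e^{-S[\delta_{\rm inst}]} \times \big(\text{Gaussian fluctuation determinant}\big)$,
where $S[\delta_{\rm inst}]$ is the weight of the spherically symmetric saddle (the spherical collapse instanton) producing the observed smoothed density contrast $\delta_W$, and the prefactor accounts for the aspherical perturbations around it.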
25/4/2017 -- Ippocratis Saltas (University of Lisbon)
What can unimodular gravity teach us about the cosmological constant?
Unimodular gravity has become very popular over the last few years as a theory that could shed light on the cosmological constant problem. In this talk, I will explain the idea behind unimodular gravity, and discuss its (in)ability to bring a new perspective to the problem of the cosmological vacuum.
12/4/2017 -- Andrei Nomerotski (Brookhaven National Lab, USA)
Place: Lecture Theatre
Status and Plans for Large Synoptic Survey Telescope
Investigation of Dark Energy remains one of the most compelling tasks for modern cosmology. It can be studied with several probes which are accessible through precise and deep surveys of the Universe. In the talk I will review the status and plans for the Large Synoptic Survey Telescope, which will precisely measure the positions and shapes of billions of galaxies along with estimates of their distances, providing an order-of-magnitude improvement relative to current experiments. The LSST camera employs thick, fully depleted CCDs with extended infrared sensitivity. The talk will provide more detail on the camera design and will discuss limitations on the achievable precision coming from the instrumentation.
6/4/2017 -- Alex Vikman (CEICO, Institute of Physics)
The Phantom of the Cosmological Time-Crystals
I will discuss a recently proposed new cosmological phase where a scalar field moves exactly periodically in an expanding spatially-flat Friedmann universe. On average this phase has a vacuum or de Sitter equation of state and can be interesting for modelling Inflation and Dark Energy in a novel way. This phase corresponds to a limiting cycle of the equations of motion and can be considered as a cosmological realization of the general idea of a "time-crystal" introduced by Wilczek et al. in 2012. Recently we showed that this cosmological phase is only possible provided the Null Energy Condition is violated and the so-called Phantom divide is crossed. Using methods from dynamical systems theory, we proved that in a rather general class of single scalar field models called k-essence: i) this crossing causes infinite growth of quantum perturbations on short length-scales, and ii) exactly periodic solutions are only possible provided the limiting cycle encircles a singularity in the phase plane. The configurations neighbouring this singular curve in the phase space are linearly unstable on one side of the curve and superluminal on the other side. Moreover, the growth rate of the instability diverges for each mode as the singularity is approached, while for the configurations on the other side the sound speed grows without limit. We illustrated our general results by analytical and numerical studies of particular models proposed by Wilczek and collaborators. Finally I will briefly discuss systems where this idea of time-crystals may be realized.
3/4/2017 -- Jnan Maharana (Institute of Physics, Sachivalaya Marg, India)
Place: Room 226.
Scattering of Stringy States and T-duality
First a brief overview of target space duality will be presented. Compactification of a closed bosonic string in its massless backgrounds on a d-dimensional torus will then be considered. The vertex operators associated with the moduli of the compactified closed string will be constructed. The Kawai-Llewellyn-Tye factorization technique will be utilized to show the T-duality transformation properties of the S-matrix for the moduli.
27/3/2017 -- Michal Bilek (Astronomical Institute of the Czech Academy of Sciences)
Galaxy interactions in MOdified Newtonian Dynamics (MOND)
MOdified Newtonian Dynamics (MOND) is a promising attempt to solve the missing mass problem by changing the standard laws of physics rather than by postulating dark matter. MOND has been inspired by observations of isolated disk galaxies. It is thus important to test it in other objects as well. First, I will give a short introduction to MOND and review the published simulations of interacting galaxies. I will then present my work on testing MOND in elliptical galaxies using remnants of accreted satellites and my simulation of the past close encounter of the Milky Way and the Andromeda galaxy that MOND predicts. I will discuss observational evidence for this encounter.
27/2/2017 -- Misao Sasaki (Yukawa Institute for Theoretical Physics, Kyoto -- director)
Signatures from inflationary massive gravity.
Inflation is a natural platform for modified gravity. In this talk, we consider a theory that spontaneously violates the local SO(3) symmetry, which gives rise to a preferred spatial frame during inflation. As a result, the tensor modes become massive. We argue that this theory leads to several interesting observational signatures.
23/2/2017 -- Misao Sasaki
Place: Lecture hall.
Colloquium: Inflation and Beyond.
There is strong observational evidence now that the Universe has experienced an almost exponential expansion at its very early stage, called inflation. In this talk I first review the inflationary universe and its observational predictions. Then I discuss possible future directions beyond and behind the theory of inflation, and their observational signatures.
14/12/2016 -- Giovanni Acquaviva (Charles University, Prague)
Dark matter perturbations with causal bulk viscosity
We analyse the evolution of perturbations of cold dark matter endowed with bulk viscosity. Focusing on structure formation well within the Hubble radius, the perturbative analysis is carried out in the Newtonian approximation while the bulk viscosity is described by Israel-Stewart's causal theory of dissipation. In contrast to previous analyses based on non-causal theories, we obtain a density contrast evolution governed by a third order equation. This framework can be employed to address some of the current inconsistencies in the observed clustering of galaxies.
9/12/2016 -- David Pirtskhalava (EPFL, Lausanne, Switzerland)
Place: Room 117.
Relaxing the Cosmological Constant
14/11/2016 -- Glenn Barnich (Université Libre de Bruxelles & International Solvay Institutes)
Finite BMS transformations
18/10/2016 -- Eugeny Babichev (Laboratoire de Physique Théorique d'Orsay, Orsay, France)
Gravitational origin of dark matter
2015 CAP Congress / Congrès de l'ACP 2015
13 Jun 2015, 06:00 → 19 Jun 2015, 17:00 America/Edmonton
The 2015 CAP Congress is being hosted by the University of Alberta, June 15-19, 2015. This Congress is an opportunity to showcase and celebrate the achievements of physicists in Canada and abroad. Mark your calendars and bookmark the main Congress web site (http://www.cap.ca/en/congress/2015) for easy access to updates and program information.
BEST STUDENT PAPER COMPETITION RESULTS
PRINTED CONGRESS PROGRAM (including maps, directions, etc.) now available by clicking here!
CHANGES TO PROGRAM AS OF JUNE 12, 2015
GENERAL INFORMATION: PLEASE READ
Link to online registration: https://www.cap.ca/CAP_Meetings/default.aspx
2015 CAP Congress Poster
Saturday, 13 June
CINP Town Hall (Sat) / Consultation publique du ICPN CCIS L1-047
CCIS L1-047
Convener: Prof. Garth Huber (University of Regina)
Welcome & Introduction 10m
Speaker: Garth Huber (University of Regina)
Coffee and Discussion 33m
Discussion on Hadron Structure/QCD (21) 21m
Discussion on HQP Issues (40) 40m
Speaker: Juliette Mammei
CINP Town Hall (Sat) / Consultation publique du ICPN: FUNDAMENTAL SYMMETRIES I
Convener: Gerald Gwinner
ALPHA Antihydrogen Symmetry Test (20+5) 25m
MOLLER Parity Violation Experiment at JLab (20+5) 25m
Cold Neutrons at SNS (10+2) 12m
UltraCold Neutrons at TRIUMF (20+5) 25m
CINP Town Hall (Sat) / Consultation publique du ICPN: Hadron Structure/QCD
Convener: Adam Garnsworthy
Compton Scattering Measurements at MAMI (15+3) 18m
Speaker: Prof. David Hornidge (Mount Allison University)
Nucleon electromagnetic form factor measurements at JLab (15+3) 18m
Speaker: Adam Sarty (Saint Mary's University)
The GlueX and Exclusive Meson Production Programs at JLab (20+5) 25m
Extreme QCD: Characterizing the Quark-Gluon Plasma (15+3) 18m
Speaker: Prof. Charles Gale (McGill University)
Sunday, 14 June
CAP Board Meeting (Old and New) / Réunion du CA de l'ACP (ancien et nouveau) CCIS 3-003
CCIS 3-003
Long Range Planning Committee Kick-off Meeting / Reunion du comite de planification a long terme CCIS 4-003
IPP Town Hall - AGM / Consultation publique et AGA de l'IPP NINT Taylor Room
NINT Taylor Room
Convener: Michael Roney (University of Victoria)
Introduction-IPP Director's Report 20m
Speaker: Michael Roney (University of Victoria)
Particle Astrophysics CFREF 10m
Speaker: Prof. Tony Noble (Queen's University)
DEAP 20m
Speaker: Mark Boulay (Q)
SNO+ 20m
Speaker: Christine Kraus
SuperCDMS 20m
Speaker: Prof. Gilles Gerbier (Queen's University)
PICO 20m
Speaker: Tony Noble (Queen's University)
EXO 20m
Speaker: Prof. David Sinclair (Carleton University)
NEWS Experiment 20m
Speaker: Prof. Gilles Gerbier (Queens' University)
LUNCH 30m
IceCube 20m
Speaker: Prof. Darren Grant (University of Alberta)
Theory review-Higgs, EW, BSM 40m
Speakers: Dr David Morrissey (TRIUMF) , Prof. Heather Logan (Carleton University)
Moller Experiment at JLAB 10m
Speaker: Michael Gericke (University of Manitoba)
ALPHA 10m
Speaker: Makoto Fujiwara (TRIUMF (CA))
UCN 20m
Speaker: Prof. Jeffery Martin (University of Winnipeg)
Belle II 20m
Speaker: Dr Christopher Hearty (IPP / UBC)
HEPNET/Computing in HEP 10m
Speaker: Randy Sobie (University of Victoria (CA))
MRS - Alberta/Toronto 5m
Speaker: James Pinfold (University of Alberta (CA))
MRS - Carleton/Victoria/Queens 10m
Speaker: Prof. Kevin Graham (Carleton University)
NA62 20m
Speaker: Dr Toshio Numao
Halo+upgrade at LNGS 15m
Speaker: Prof. Clarence Virtue (Laurentian University)
VERITAS 10m
Speaker: Prof. David Hanna (McGill University)
g-2 at JPARC 10m
Speaker: Dr Glen Marshall (TRIUMF)
Theory Review - QCD 15m
Speaker: Prof. Randy Lewis (York University)
ATLAS 30m
Speaker: Prof. Alison Lister (UBC)
Technical support for experiment development and construction 10m
Speaker: Dr Fabrice Retiere (TRIUMF)
CINP Town Hall (Sun) / Consultation publique du ICPN CCIS L1-029
Introductory Comments 10m
Lunch (provided) 42m
Discussion on Nuclear Astrophysics (19) 19m
Speaker: Iris Dillmann
Discussion on Nuclear Structure (20) 20m
Speaker: Adam Garnsworthy
Coffee 20m
Discussion on Fundamental Symmetries (20) 20m
General Discussion 49m
CINP Town Hall (Sun) / Consultation publique du ICPN: NUCLEAR STRUCTURE & ASTROPHYSICS I
Convener: Garth Huber (University of Regina)
Nuclear Structure aspects of the Gamma-Ray program (20+5) 25m
Nuclear Astrophysics aspects of the Gamma-Ray program (20+5) 25m
Fundamental Symmetries aspects of the Gamma-Ray program (20+5) 25m
Speaker: Carl Svensson
Nuclear Astrophysics with DRAGON/TUDA/EMMA (15+3) 18m
Speaker: Chris Ruiz (TRIUMF)
TITAN Ion Trap Program at ISAC (20+5) 25m
CAP Advisory Council (Old and New) / Conseil consultatif de l'ACP (ancien et nouveau) CCIS L1-029
CINP Town Hall (Sun) / Consultation publique du ICPN: NUCLEAR STRUCTURE & ASTROPHYSICS II
Convener: Charles Gale
Canadian Penning Trap & Related Ion-Trap Expts @ ANL (15+3) 18m
Reaction spectroscopy of rare isotopes with low and high-energy beams (15+3) 18m
Speaker: Rituparna Kanungo (TRIUMF)
Electroweak measurements of nuclear neutron densities via PREX and CREX at JLab (15+3) 18m
Speaker: Juliette Mammei (University of Manitoba)
Ab initio nuclear theory for structure and reactions (20+5) 25m
Speaker: Francesco Raimondi
From nuclear forces to structure and astrophysics (10+2) 12m
Speaker: Alexandros Gezerlis
CINP Town Hall (Sun) / Consultation publique du ICPN: FUNDAMENTAL SYMMETRIES II
Convener: Iris Dillmann
TRIUMF's Neutral Atom Trap for Beta Decay (15+3) 18m
Francium Trap Project (15+3) 18m
Neutrinoless Double Beta Decay (25+5) 25m
Speaker: David Sinclair
Electroweak Physics (15+3) 20m
CINP Town Hall (Sun) / Consultation publique du ICPN: New Facilities
Convener: Juliette Mammei
Science Opportunities of ARIEL (20+5) 25m
Resources for Detector Development in the Canadian Subatomic Physics Community (10+2) 12m
CINP Board Meeting / Réunion du conseil de l'ICPN CCIS L1-047
IPP Inst. Members and Board of Trustees Meetings / Réunions des membres inst. et du conseil de l'IPP NINT Taylor Room
Monday, 15 June
Joint CINP-IPP Meeting / Réunion conjointe de l'ICPN et de l'IPP (DPN-PPD) CCIS L1-140
Conveners: Prof. Garth Huber (University of Regina) , Michael Roney (University of Victoria)
Report from NSERC SAP ES 35m
Speaker: John Martin (York University (CA))
Canada Foundation for Innovation and Subatomic Physics 20m
Speaker: Olivier Gagnon (Fondation canadienne pour l'innovation)
Report from TRIUMF Director 35m
Speaker: Jonathan Bagger (Johns Hopkins University)
Report from SNOLAB Director 25m
Speaker: Nigel Smith (SNOLab)
Report from Subatomic Physics Long Range Plan Committee Chair 20m
Speaker: Dean Karlen (University of Victoria (CA))
PiC Editorial Board Meeting / Réunion du Comité de rédaction de La Physique au Canada CCIS L1-047
Convener: Bela Joos (University of Ottawa)
IPP / CINP Health Break / Pause santé IPP / ICPN CCIS L2 Foyer
CCIS L2 Foyer
CINP Annual General Meeting / Assemblée générale annuelle de l'ICPN CCIS L1-029
IPP Town Hall - AGM / Consultation publique et AGA de l'IPP CCIS L1-140
Status and future plan of KEK and J-PARC 35m
Speaker: Yasuhiro Okada (KEK)
T2K+HyperK 30m
Speaker: Hirohisa A. Tanaka (University of British Columbia)
ILC 20m
Speaker: Alain Bellerive (Carleton University (CA))
Long Range Plan: Next Steps for IPP 20m
Lunch / Diner
M-PLEN Plenary Session - Start of Conference - Sara Seager, MIT / Session plénière - Ouverture du Congrès - Sara Seager, MIT CCIS 1-430
Convener: Robert Fedosejevs (University of Alberta)
Exoplanets and the Search for Habitable Worlds 45m
Thousands of exoplanets are known to orbit nearby stars with the statistical inference that every star in our Milky Way Galaxy should have at least one planet. Beyond their discovery, a new era of "exoplanet characterization" is underway with an astonishing diversity of exoplanets driving the fields of planet formation and evolution, interior structure, atmospheric science, and orbital dynamics to new depths. The push to find smaller and smaller planets down to Earth size is succeeding and motivating the next generation of space telescopes to have the capability to find and identify planets that may have suitable conditions for life or even signs of life by way of atmospheric biosignature gases.
Speaker: Prof. Sara Seager (Massachusetts Institute of Technology)
M1-1 Topological States of Matter (DCMMP) / États topologiques de la matière (DPMCM) NINT Taylor room
Convener: Kaori Tanaka (University of Saskatchewan)
Nematic and non-Fermi liquid phases of systems with quadratic band crossing 30m
I will review recent work on the phases and quantum phase transitions in electronic systems that feature parabolic band touching at the Fermi level, the celebrated and well-studied example of which is bilayer graphene. In particular, it will be argued that such three-dimensional systems are in principle unstable towards the spontaneous formation of a (topological) Mott insulator at weak long-range Coulomb interaction. The mechanism of the instability can be understood as the collision of the non-Fermi liquid fixed point, discovered by Abrikosov in the '70s, with another, critical, fixed point, which approaches it in the coupling space as the system's dimensionality reaches a certain "critical dimension" from above. Some universal characteristics of this scenario, the width of the non-Fermi liquid crossover regime, and the observability of the nematic Mott phase in common gapless semiconductors such as gray tin or mercury telluride will be discussed.
Speaker: Prof. Igor Herbut (Simon Fraser University)
Collective modes and interacting Majorana fermions in topological superfluids 30m
Topological phases of matter are characterized by the absence of low-energy bulk excitations and the presence of robust gapless surface states. A prime example is the three-dimensional (3D) topological band insulator, which exhibits a bulk insulating gap but supports gapless 2D Dirac fermions on its surface. This physics is ultimately a consequence of spin-orbit coupling, a single-particle effect within the reach of the band theory of solids. The phenomenology of topological superfluids (and superconductors, which are charged superfluids) is rather similar, with a bulk pairing gap and gapless 2D surface Majorana fermions. The standard theory of topological superfluids exploits this analogy and can be thought of as a band theory of Bogoliubov quasiparticles. In particular, this theory predicts that Majorana fermions should be noninteracting particles. Band insulators and superfluids are, however, fundamentally different: While the former exist in the absence of interparticle interactions, the latter are broken-symmetry states that owe their very existence to such interactions. In particular, unlike the static energy gap of a band insulator, the gap in a superfluid is due to a dynamical order parameter that is subject to both thermal and quantum fluctuations. In this talk, I will argue that order parameter fluctuations in a topological superfluid can induce effective interactions among surface Majorana fermions. Possible consequences of these interactions will be discussed.
Speaker: Joseph Maciejko (University of Alberta)
Dilute limit of an interacting spin-orbit coupled two-dimensional electron gas 15m
The combination of many-body interactions and Rashba spin-orbit coupling in a two-dimensional fermion system gives rise to an exotic array of phases in the ground state. In previous analyses, it has been found that in the low fermion density limit, these are nematic, ferromagnetic nematic, and spin-density wave phases. At ultra-low densities, the ground state favours the ferromagnetic nematic phase if the interactions are short range (contact), and the nematic phase if the interactions are long range (dipolar). In this talk, we examine interacting two-fermion systems with spin-orbit coupling. These systems retain the physics of the dilute limit of the many-body system, while allowing us to solve the ground state exactly for each type of interaction. We determine the symmetries of the ground state, which uniquely determine the phase of the system. These phases could potentially be observed in two-dimensional GaAs heterostructures with quantum wells that lack inversion symmetry.
Speaker: Mr Joel Hutchinson (University of Alberta)
Andreev and Josephson transport in InAs nanowire-based quantum dots 15m
Superconducting proximity effects are of fundamental interest and underlie recent proposals for experimental realization of topological states. Here we study superconductor-quantum dot-superconductor (S-QD-S) junctions formed by contacting short-channel InAs nanowire transistors with Nb leads. When the carrier density is low, one or more quantum dots form in the nanowire due to spatial potential fluctuations. Low-temperature electrical transport shows clear signatures of proximity superconductivity, such as regions of negative differential conductance, Multiple Andreev Reflections (MAR) and spectroscopic features hinting at the formation of Andreev Bound States (ABS). These features can coexist with the Coulomb diamond structure resulting from the dot charging energy. The theory of Andreev and Josephson transport in S-QD-S structures is invoked in order to elucidate the experimental data. Particular attention is devoted to an intermediate coupling regime, wherein the superconducting energy gap $\Delta$ is on the same order of magnitude as the tunnel coupling strength $\Gamma$, but smaller than the Coulomb charging energy of the dot $U$. In this model, a rich interplay exists between $U$, which favours a spin-doublet ground state for the quantum dot, $\Delta$, which favours a BCS-like singlet ground state, and Kondo correlations in the dot, which favour a Yu-Shiba-Rusinov-like singlet ground state. A quantum phase transition can occur from the doublet to the BCS-like singlet ground state, marking a $0$-$\pi$ transition in the Josephson current of the junction. The significance of these results to the search for topological states in semiconductor nanowire junctions is discussed.
Speaker: Kaveh Gharavi (University of Waterloo)
M1-10 NSERC's Partnership Program: Panel Discussion and Q&A / Programmes de partenariats du CRSNG : Table ronde et Q&R CAB 235
CAB 235
Convener: Bill Whelan (University of Prince Edward Island)
NSERC's Partnership Program: Panel Discussion and Q&A / Programme de partenariats du CRSNG : Table ronde et Q&R 1h 30m
This session is a panel discussion and Q&A on researcher-industry collaborations and NSERC funding opportunities. Panelists include: Irene Mikawoz (NSERC Prairies Regional Office), Donna Strickland (U. of Waterloo), Wayne Hocking (Western U.), Kristin Poduska (Memorial U.), Chijin Xiao (U. of Saskatchewan) and Andranik Sarkissian (Plasmionique Inc). -- Cette séance consistera en un débat d'experts et en une période de questions sur les collaborations entre les chercheurs et l'industrie et sur les possibilités de financement du CRSNG. Au nombre des experts figurent Irene Mikawoz (Bur. régional des Prairies du CRSNG), Donna Strickland (U. de Waterloo), Wayne Hocking (U. Western), Kristin Poduska (U. Memorial), Chijin Xiao (U. de Saskatchewan) et Andranik Sarkissian (Plasmionique Inc).
M1-2 Organic and Molecular Electronics (DCMMP-DMBP-DSS) / Électronique organique et moléculaire (DPMCM-DPMB-DSS) CCIS L2-190
Convener: Doug Bonn (Univ. of British Columbia)
Principles and methods enabling atom scale electronic circuitry 30m
Quantum dots are small entities, typically consisting of just a few thousands atoms, that in some ways act like a single atom. The constituent atoms in a dot coalesce their electronic properties to exhibit fairly simple and potentially very useful properties. It turns out that collectives of dots exhibit joint electronic properties of yet more interest. Unfortunately, though extremely small, the still considerable size of typical quantum dots puts a limit on how close multiple dots can be placed, and that in turn limits how strong the coupling between dots can be. Because inter-dot coupling is weak, properties of interest are only manifest at very low temperatures (milliKelvin). In this work the ultimate small quantum dot is described – we replace an "artificial atom" with a true atom - with great benefit. It is demonstrated that the zero-dimensional character of the silicon atom dangling bond (DB) state allows controlled formation and occupation of a new form of quantum dot assemblies - at room temperature. It is shown that fabrication geometry determines net electron occupation and tunnel-coupling strength within multi-DB ensembles and moreover that electrostatic separation of degenerate states allows controlled electron occupation within an ensemble. Single electron, single DB transport dynamics will be described as will conduction among collectives of DBs. Some results and speculation on the viability of a new "atomic electronics" based upon these results will be offered. As new technologies require new fabrication and analytical tools, a few words about robust, readily repairable, single atom tips will be offered too. This tip may be an ideal scanned probe fabrication tool.
Speaker: Robert Wolkow (University of Alberta)
Polarization induced energy level shifts at organic semiconductor interfaces probed on the molecular scale by scanning tunnelling microscopy 30m
The inter- and intra- molecular energy transfer that underlies transport, charge separation for photovoltaics, and catalysis are influenced by both the spatial distribution of electronic states and their energy level alignment at interfaces. In organic materials, the relevant length scales are often on the order of a single molecular unit. Scanning tunneling microscopy (STM) and spectroscopy (STS) stands as one of few techniques with the ability to resolve both the spatial structure of these interfaces while probing energy levels on the nanometer scale. Here, we have used STM/STS in a spectroscopic mapping mode to investigate the spatial shifts in energy levels across well-defined 2-dimensional nanoscale clusters of 3,4,9,10-perylene tetracarboxylic dianhydride (PTCDA) decoupled from an Ag(111) substrate by a bilayer of NaCl. We find a striking difference between the HOMO and LUMO states of molecules residing at the edges of these clusters and those in the centre. Edge molecules exhibit a gap that is up to 0.5eV larger than observed for inner molecules. Most of this difference is accounted for by the shift of the occupied states, strongly influencing level alignment for a boundary region of single molecular width. As STS is a single-particle spectroscopy – adding or removing a charge – the energy levels measured are influenced by the local polarization environment. The shifts observed for several different geometries of islands correspond well with calculations of the stabilization of this transient charge via the polarization of the other molecules in the cluster. These effects are expected to influence organic semiconductors that exhibit hopping-like transport, and processes such as charge separation occurring at interfaces in organic photovoltaic devices. As the polarizability of most molecular semiconductors is anisotropic, the structure and orientation of molecules at interfaces will play a significant role in the resulting energy level alignment.
Speaker: Sarah Burke (University of British Columbia)
On the Road to Low Power Circuitry: Analysis of Si Dangling Bond Charging Dynamics 15m
Undesired circuit heating results from the billions of electrons flowing through our devices every second. Heating wastes energy (leading to shorter battery life), and also puts a limit on computational speeds. The solution to excess heat generation is of huge commercial interest and has led to a large push towards nanoscale electronics which are smaller and more energy efficient. Hybrid atom-scale schemes have already been proposed to reduce the power consumption of Complementary Metal Oxide Semiconductor (CMOS) chips commonly used in many consumer electronics, including digital cameras and computers. At the heart of these schemes are atomic silicon dangling bonds (DBs) which can theoretically be used to form ultra-low power nanowires. In order to move towards the realization of these practical schemes, however, fundamental physical properties of DBs must first be characterized and studied. One of the properties inherent to DBs is their ability to store electrons. They can exist in a positive, neutral, or negative charge state when storing zero, one, or two electrons respectively. When imaging a DB with a scanning tunneling microscope (STM), fluctuations of the DB charge state can be observed that are driven by the influence of the STM tip. A correlation analysis method adapted from biophysics was utilized to study these fluctuations in charge state to help uncover intrinsic transition rates between states for a given DB. Analysis such as this also opens the door to studying more complex systems of interacting DBs as well, which is another important step towards making practical devices.
Speaker: Roshan Achal (University of Alberta)
Set Point Effects in Fourier Transform Scanning Tunneling Spectroscopy 15m
Fourier Transform Scanning Tunneling Spectroscopy (FT-STS) has become an important experimental tool for the study of electronic structure. By combining the local real space picture of the electronic density of states provided by scanning tunneling microscopy with the energy and momentum resolution of FT-STS one can extract information about the band structure and dispersion. This has been thoroughly demonstrated in studies of the superconducting cuprates, the iron arsenides, and heavy fermion compounds. FT-STS relies on the Fourier transform of the dI/dV, the derivative of the tunneling current with respect to the applied bias. Under the approximations of zero temperature, a flat tip density of states, and an energy independent tunneling matrix element it can be shown that the dI/dV signal is proportional to the local density of states of the sample. Under real experimental conditions, however, these approximations are not strictly valid, leading to additional functional dependencies of the dI/dV. A variety of artifacts can result when one considers the three most common measurement modes: constant current maps, constant height maps, and spectroscopic grids. We illustrate the different artifacts that can appear in FT-STS using data taken from the well understood surface state of an Ag(111) single crystal at 4.2 K and under ultra-high vacuum conditions. We find that constant current dI/dV maps taken with a lock-in amplifier lead to a feature in the FT-STS dispersion that disperses as a function of energy below the Fermi level (E$_F$) and becomes constant above E$_F$. This result shows the importance of distinguishing dispersing features caused by quasiparticles in the sample from those caused by the measurement. We compare the set point artifacts in all three modes of measurement to scattering model simulations based on the T-matrix formalism. Finally we propose a guide to help identify and isolate these set point artifacts for future studies in systems where the band structure and correlations create a complex scattering space.
Speaker: Mr Andrew Macdonald (University of British Columbia)
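As a minimal illustration of the FT-STS idea described in the preceding abstract (a synthetic example, not the measured Ag(111) data): quasiparticle interference around a point scatterer produces standing waves in the dI/dV map whose Fourier transform peaks at the scattering wavevector q = 2 k_F. All parameters below are arbitrary placeholders.

    import numpy as np

    # Synthetic FT-STS sketch (not the measured Ag(111) data): Friedel-like
    # ripples around a single point scatterer, whose 2D Fourier transform
    # shows the q = 2*k_F scattering feature exploited in FT-STS.
    N, a = 256, 1.0                                # grid size and spacing (arb. units)
    k_F = 0.6                                      # hypothetical Fermi wavevector
    x = (np.arange(N) - N / 2) * a
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y) + 1e-6                      # avoid the singularity at the impurity
    didv = np.cos(2 * k_F * r) / r                 # standing-wave pattern in the LDOS
    didv -= didv.mean()                            # remove the DC component

    ft = np.abs(np.fft.fftshift(np.fft.fft2(didv)))
    q = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=a))
    cut = ft[N // 2, N // 2 + 1:]                  # line cut along q_x > 0
    q_peak = q[N // 2 + 1 + np.argmax(cut)]
    print("recovered scattering wavevector:", q_peak, " expected 2*k_F =", 2 * k_F)

In a real measurement the dI/dV map would of course come from the spectroscopic grid itself, and the set-point artifacts discussed in the abstract would modify this idealised picture.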
M1-3 Theory, modelling and space weather I (DASP) / Théorie, modélisation et climat spatial I (DPAE) CAB 243
Convener: David Knudsen (University of Calgary)
Examples of exact solutions of charged particle motion in magnetic fields and their applications. 30m
There are very few exact solutions for the motion of a charged particle in a specified magnetic field. These solutions have considerable theoretical as well as pedagogical value. In this talk I will briefly describe several known analytical solutions, such as motion in the equatorial plane of a dipole and in a constant-gradient field. Particular attention will be given to a relatively unknown solution corresponding to a magnetic field inversely proportional to the radius. This case leads to relatively simple expressions involving only elementary functions. I will discuss applications of this solution to the validation of numerical methods of particle tracing, such as symplectic integration. Another interesting use of this solution is comparison with the adiabatic drift theory. Finally, this solution can be used as a building block for developing new numerical integration schemes for particle tracing.
Speaker: Konstantin Kabin (RMC)
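For context, the kind of validation exercise mentioned in the abstract above can be sketched in a few lines; the following toy script (illustrative only, not code from the talk) pushes a particle with the standard Boris scheme in a uniform magnetic field, where the exact solution is circular gyration with Larmor radius v_perp/(q B/m), and checks that the numerical orbit reproduces it.

    import numpy as np

    # Toy validation of a particle-tracing scheme against an exact solution:
    # Boris push in a uniform magnetic field, where the orbit is exactly a
    # circle of radius v_perp / (q_m * |B|). All quantities are in arbitrary units.
    q_m = 1.0                                # charge-to-mass ratio
    B = np.array([0.0, 0.0, 1.0])            # uniform field along z
    v = np.array([1.0, 0.0, 0.0])            # initial perpendicular velocity
    x = np.zeros(3)
    dt, steps = 0.01, 20000
    ys = []

    for _ in range(steps):
        t = 0.5 * dt * q_m * B               # half-step rotation vector (E = 0)
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v + np.cross(v, t)
        v = v + np.cross(v_prime, s)
        x = x + v * dt
        ys.append(x[1])

    print("speed conservation error:", abs(np.linalg.norm(v) - 1.0))
    print("orbit diameter (exact value 2):", max(ys) - min(ys))

The same structure, with the field replaced by one of the exactly solvable configurations discussed in the talk, is how an integrator such as a symplectic scheme would be benchmarked.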
Energetic Electron Precipitation Model 15m
Energetic electron precipitation causes atmospheric ionization - a complicated process which depends on many parameters. We present our energetic particle precipitation model, which consists of three main parts: energetic electron sources; coupled electron/photon transport in the Earth's atmosphere; and the response of RIOMETERs and Very Low Frequency (VLF) receivers to energetic electron precipitation. The primary source of energetic electrons, the Van Allen radiation belts, occupies a vast region of space and accumulates an immense amount of energy. In the Northern hemisphere the belts map to a broad ring crossing Canada. Under certain conditions trapped electrons can penetrate even deeper into the atmosphere, causing modulations of the free electron densities of the D-layer. The model implements coupled electron/photon transport based on the MCNP 6 general transport code. Model verification uses data from the Medium Energy Proton and Electron Detector instrument on NOAA's POES satellite. Calculated electron fluxes and estimated electron density altitude profiles are used to construct and validate a realistic transport model that maps energetic electron fluxes incident on the upper atmosphere to GO CANADA (RIOMETER and VLF receiver) instrument responses.
Speaker: Alexei Kouznetsov (University of Calgary)
Properties of the lunar wake inferred from hybrid-kinetic simulations and an analytic model 15m
There is renewed interest in the Moon as a potential base for scientific experiments and space exploration. Earth's nearest neighbour is exposed directly to the solar wind and solar radiation, both of which present hazards to successful operations on the lunar surface. In this paper we present lunar wake simulation and analytic results and discuss them in the context of observations from the ARTEMIS mission. The simulation results are based on hybrid-kinetic simulations while the analytic model is based on the formalism developed by [Hutchinson, 2008]. The latter makes assumptions of cylindrical geometry, a strong and constant magnetic field, and fixed transverse velocity and temperature. Under these approximations the ion fluid equations (with massless electrons) can be solved analytically by the method of characteristics. In this paper the formalism presented by Hutchinson is applied by including plasma density variations and flow within the lunar wake. The approach is valid for arbitrary angles between the interplanetary magnetic field and solar wind velocity, and accounts for plasma entering the wake region from two tangent points around the Moon. Under this condition, two angle-dependent equations for ion fluid flow are obtained, which can be solved using the method of characteristics to provide the density inside the wake region. It is shown in Figs. 1 and 2 that the model provides excellent agreement with observations from the ARTEMIS mission [Angelopoulos, 2011], and with large-scale hybrid-kinetic plasma simulations [Paral and Rankin, 2012]. It will be shown that the analytic model provides a practical alternative to large-scale kinetic simulations, and that it is generally useful for determining properties of the lunar wake under different solar wind conditions. It will be useful as well for predicting properties of the plasma environment in regions around the Moon that have not yet been visited by spacecraft. Acknowledgments: This work was partially supported by grants from the Canadian Space Agency and the Natural Sciences and Engineering Research Council of Canada (NSERC). The simulations also benefited from access to the Westgrid Compute Canada facilities. The ARTEMIS data for this paper are available at NASA's Space Physics Data Facility (SPDF) (http://spdf.gsfc.nasa.gov/). Hossna Gharaee extends thanks to THEMIS software manager Jim Lewis for his help in using ARTEMIS satellite data. References: Hutchinson, I. (2008), Oblique ion collection in the drift approximation: How magnetized Mach probes really work, Physics of Plasmas, 15, 123503, doi:10.1063/1.3028314; Angelopoulos, V. (2011), The ARTEMIS mission, Space Sci. Rev. (Netherlands), 165(1-4), 3-25; Paral and Rankin (2012), Dawn-dusk asymmetry in the Kelvin-Helmholtz instability at Mercury, Nature Communications, 4, 1645, doi:10.1038/ncomms2676.
Speaker: Hossna Gharaee (university of Alberta, Departement of Physics)
Explaining the Newly Discovered Third Radiation Belt 15m
Accurate specification of the global distribution of ultra-low frequency (ULF) wave power in space is critical for determining the dynamics and acceleration of outer radiation belt electrons. Current radiation belt models use ULF wave radial diffusion coefficients which are analytic functions of Kp based on ULF wave statistics. In this presentation we show that these statistics-based analytic models for the radial diffusion coefficients can produce electron flux values in surprising agreement with the observations during geomagnetically quiet intervals. However, during some storm intervals the radial diffusion rates derived directly from ULF wave observations can become orders of magnitude higher than those given by the analytic expressions based on ULF wave statistics. During these storm intervals only the radiation belt models driven by the radial diffusion coefficients derived directly from ULF wave measurements produce electron flux values in agreement with the observations. Utilizing Van Allen Probe data and CARISMA magnetometer data, results will be presented for the electron flux obtained using the diffusion coefficients derived directly from the ULF wave measurements, which shed new light on some interesting observations made by the Van Allen Probes.
Speaker: Dr Louis Ozeke (University of Alberta)
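To make the modelling step above concrete, here is a minimal sketch of the radial diffusion equation that such radiation-belt models solve; the Kp-dependent diffusion coefficient below is a purely illustrative placeholder, not the statistical or ULF-wave-derived coefficients discussed in the talk.

    import numpy as np

    # Minimal radial diffusion sketch (illustrative only):
    #   df/dt = L^2 d/dL [ (D_LL / L^2) df/dL ]
    # with a placeholder Kp-dependent diffusion coefficient D_LL.
    def D_LL(L, Kp):                               # per day; NOT a published model
        return 1e-6 * 10.0 ** (0.5 * Kp) * L ** 6

    L = np.linspace(2.0, 7.0, 101)
    dL = L[1] - L[0]
    f = np.exp(-0.5 * ((L - 4.0) / 0.5) ** 2)      # initial phase-space density
    Kp, dt, n_steps = 3.0, 1e-4, 10000             # one day of evolution (dt in days)

    for _ in range(n_steps):
        D_face = 0.5 * (D_LL(L[:-1], Kp) + D_LL(L[1:], Kp))   # faces between cells
        L_face = 0.5 * (L[:-1] + L[1:])
        flux = D_face / L_face ** 2 * np.diff(f) / dL
        f[1:-1] += dt * L[1:-1] ** 2 * np.diff(flux) / dL
        f[0], f[-1] = 0.0, f[-2]                   # loss at inner edge, free outer edge

    print("phase-space density at L = 5 after 1 day:", np.interp(5.0, L, f))

Swapping the placeholder D_LL for a coefficient derived directly from measured ULF wave power is, in essence, the change whose impact the abstract describes.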
Fast damping of Alfven waves: Observations and modeling 15m
Results of analysis of Cluster spacecraft data will be presented that show that intense ultra-low frequency (ULF) waves in the inner magnetosphere can be excited by the impact of interplanetary shocks and solar wind dynamic pressure variations. The observations reveal that such waves can be damped away rapidly in a few tens of minutes. We examine mechanisms of ULF wave damping for two interplanetary shocks observed by Cluster on 7 November 2004, and 30 August 2001. The mechanisms considered are ionospheric joule heating, Landau damping, and waveguide energy propagation. It is shown that Landau damping provides the dominant ULF wave damping for the shock events of interest. It is further demonstrated that damping is caused by drift-bounce resonance with ions in the energy range of a few keV. Landau damping is shown to be more effective in the plasmasphere boundary layer due to the relatively higher proportion of Landau resonant ions that exist in that region. Moreover, multiple energy dispersion signatures of ions were found in the parallel and anti-parallel direction to the magnetic field immediately after the interplanetary shock impact in the November 2004 event. These dispersion signatures can be explained by flux modulations of local ions (rather than the ions from the Earth's ionosphere) by ULF waves. Test particle simulations will be used to simulate the energy dispersions of particles caused by ULF waves. In our study, particles will be traced backward in time until they reach a region with known distribution function. Liouville's theorem is then used to reconstruct the distribution function at the location of Cluster in a model magnetosphere.
Speaker: Chengrui Wang (University of Alberta)
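The backward-tracing and Liouville-mapping step described at the end of the abstract above can be sketched as follows; the one-dimensional wave field and Maxwellian source distribution are toy assumptions, not the Cluster event model.

    import numpy as np

    # Illustrative Liouville mapping (toy model, not the authors' code):
    # particles are traced backward in time through a prescribed ULF-like
    # wave field until the wave is "off", where the distribution is a known
    # Maxwellian; by Liouville's theorem the phase-space density is carried
    # unchanged along each trajectory.
    q_m   = 1.0
    omega = 2 * np.pi / 100.0                 # hypothetical wave frequency
    E0    = 0.05                              # hypothetical wave amplitude

    def E_field(t):                           # wave switched on at t = 0
        return E0 * np.sin(omega * t) if t > 0 else 0.0

    def f_known(v):                           # unperturbed Maxwellian (1D toy)
        return np.exp(-v ** 2 / 2.0) / np.sqrt(2 * np.pi)

    def f_reconstructed(v_obs, t_obs, dt=0.1):
        v, t = v_obs, t_obs
        while t > 0:                          # integrate backward to t = 0
            v -= q_m * E_field(t) * dt
            t -= dt
        return f_known(v)                     # Liouville: f conserved along orbit

    v_grid = np.linspace(-3, 3, 7)
    print([round(f_reconstructed(v, t_obs=250.0), 4) for v in v_grid])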
M1-4 Theoretical Astrophysics (DTP) / Astrophysique théorique (DPT) CAB 239
Convener: Arundhati Dasgupta (University of Lethbridge)
Probing Physics with Observations of Neutron Stars and White Dwarfs 30m
White dwarfs and neutron stars are two of the densest objects in the Universe. Discovered 105 and 45 years ago, these objects are two of the best astrophysical laboratories of fundamental physics. The simple existence of white dwarfs is a stellar-size manifestation of quantum physics. I will describe how we use these objects today to study quantum-chromodynamics, quantum-electrodynamics, neutrino and axion physics and even thermodynamics in realms inaccessible to Earth-bound laboratories. In the process we also discover the detailed fate of our own Earth and Sun.
Speaker: Jeremy Heyl (UBC)
Observations and Theory of Supernova Explosions and their Remnants 30m
A supernova explosion ends the life of a massive star (with a mass of more than 8-10 times that of the Sun). These explosions create and eject the elements that make up everything around us, including the Earth. The life of a massive star will be outlined, along with its sudden death in a supernova event. Following the explosion, the ejected material and energy interact with the surrounding interstellar medium to produce a supernova remnant. Supernova remnants provide mass and kinetic energy to the interstellar medium, and accelerate most of the cosmic rays we observe. The observational aspects of supernova remnants will be reviewed and related to theoretical models.
Speaker: Denis Leahy
No "End of Greatness": Superlarge Structures and the Dawn of Brane Astronomy 15m
Several groups have recently reported observation of large scale structures which exceed the size limits expected from standard structure formation in a 13.8 billion years old LambdaCDM universe. On the other hand, the concept of crosstalk between overlapping 3-branes carrying gauge theories was recently introduced in arXiv:1502.03754[hep-th]. Crosstalk impacts the redshift of signals from brane overlap regions by making signals with the redshift z of the overlap region appear to have lower or higher redshift, depending on the electromagnetic crosstalk couplings. This leads to brane induced appearance of structure in redshift observations. The Lyman-alpha forest is a natural candidate to look for brane overlap at redshift z<6.
Speaker: Dr Rainer Dick (University of Saskatchewan)
M1-5 Nuclear Techniques in Medicine and Safety (DNP-DIAP) / Techniques nucléaires en médecine et en sécurité (DPN-DPIA) CCIS L1-047
Convener: Zisis Papandreou (University of Regina)
Evaluation of SiPM Arrays and Use for Radioactivity Detection and Monitoring 30m
Silicon photomultipliers (SiPMs) are novel photosensors that are needed for many applications in a broad range of fields. The advantages of such detectors are that they feature low bias ($<$100V) operation, high gain (10$^5$ to 10$^6$), insensitivity to magnetic fields, excellent photon detection efficiency (PDE), and the ability to operate in field conditions over a range of temperatures; they are compact, easy to use, require simple electronics and can be produced commercially in various formats. To evaluate and operate SiPM arrays, we developed novel techniques for measurement of the PDE, the cross-talk probability and the breakdown voltage for SiPM arrays with summed output, which is the most popular type of SiPM on the market; these techniques allow one to make the required measurements when the separation of individual photopeaks in the output spectrum (which was crucial for the "conventional" techniques used before) is not available. I will also present our study of prototypes of gross counting gamma and neutron detectors for first responders that use SiPMs coupled to appropriate scintillators.
Speaker: Andrei Semenov
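One standard way to extract a breakdown voltage, shown below purely for illustration (this is not the summed-output technique developed by the authors), is to exploit the near-linear dependence of SiPM gain on overvoltage and extrapolate a gain-versus-bias fit to zero gain; the bias and gain values are hypothetical.

    import numpy as np

    # Illustrative only: SiPM gain is approximately linear in overvoltage,
    # so a linear fit of gain versus bias, extrapolated to zero gain,
    # gives an estimate of the breakdown voltage V_bd.
    bias = np.array([27.5, 28.0, 28.5, 29.0, 29.5])      # hypothetical bias points (V)
    gain = np.array([1.9, 3.1, 4.0, 5.1, 6.0]) * 1e5     # hypothetical measured gains

    slope, intercept = np.polyfit(bias, gain, 1)
    v_bd = -intercept / slope
    print(f"estimated breakdown voltage: {v_bd:.2f} V")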
Neutron Generator Facility at SFU - GEANT4 Dose Prediction and Verification 15m
A neutron generator facility under development at Simon Fraser University (SFU) utilizes a commercial deuterium-tritium neutron generator (Thermo Scientific P 385) to produce 14.2 MeV neutrons at a nominal rate of $3\times10^8$ neutrons/s. The facility will be used to produce radioisotopes to support a research program including nuclear structure studies and neutron activation analysis. As a prerequisite for regular operation of the facility and as a personnel safety consideration, dose rate predictions for the facility were implemented via the GEANT4 Monte-Carlo framework. Dose rate predictions were compared at two low neutron energy cutoffs: 5 keV and 1 meV, with the latter accounting for low energy thermal neutrons but requiring significantly more computation time. As the SFU facility geometry contains various openings through which thermal neutrons may penetrate, it was necessary to study their contribution to the overall dose rate. A radiation survey of the facility was performed as part of the commissioning process, consisting of a neutron flux measurement via copper foil activation and dose rate measurements throughout the facility via a $^3$He gas-filled neutron detector (Thermo Scientific WENDI-2). When using the 1 meV low neutron energy cutoff to account for thermal neutrons in the dose rate predictions, the predictions and survey measurements agree to within a factor of 2 or better in most survey locations.
Speaker: Mr Jonathan Williams (Simon Fraser University)
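For orientation, a copper-foil activation measurement like the one mentioned above inverts the standard activation equation A = phi * sigma * N * (1 - exp(-lambda * t_irr)) for the flux phi; the numerical values in the sketch below are illustrative placeholders, not the SFU commissioning values.

    import numpy as np

    # Foil-activation flux sketch (illustrative numbers only):
    # invert the saturation-activity relation for the neutron flux phi.
    N_A       = 6.022e23
    half_life = 9.7 * 60                        # s, activation product (assumed)
    lam       = np.log(2) / half_life
    sigma     = 0.5e-24                         # cm^2, assumed activation cross section
    mass, molar_mass, abundance = 1.0, 63.5, 0.69   # g, g/mol, isotopic fraction (assumed)
    N_target  = mass / molar_mass * abundance * N_A

    t_irr  = 600.0                              # s of irradiation (assumed)
    A_meas = 2.0e4                              # Bq at end of irradiation (assumed)

    phi = A_meas / (sigma * N_target * (1 - np.exp(-lam * t_irr)))
    print(f"inferred neutron flux: {phi:.3e} n/cm^2/s")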
Rapid Elemental Analysis of Human Finger Nails Using Laser-Induced Breakdown Spectroscopy 15m
Zinc is a crucial element needed for many processes in the human body. It is essential for enzymatic activity and many cellular processes, such as cell division. A zinc deficiency can lead to problems with the immune system, birth defects, and blindness. This problem is especially important to address in developing countries where nutrition is limited. Supplements can be taken to increase the zinc intake, however it is difficult to determine who is zinc deficient and requires these supplements. The gold standard tests for determining the zinc concentration in the human body are both expensive and time-consuming. Zinc in human fingernails can be shown to represent the overall zinc concentration in the body. Laser-induced breakdown spectroscopy (LIBS) provides a quick analysis of the zinc concentration in a human fingernail with minimal sample preparation, thus LIBS could serve as a real-time biomedical assay for zinc deficiency. LIBS was performed on a collection of healthy human finger nails in an argon environment. The intensities of the zinc ion lines observed in the plasma were proportional to the zinc concentrations of each nail as measured by SIDMS. The variance of the measured zinc intensities between fingers of a given hand and between left and right hands for a single person was studied. Normalization of the zinc lines to other emission lines in the spectrum to reduce shot to shot variation was investigated. Studies were also performed to determine the spatial distribution of zinc within the nail. The influence of nail preparation prior to LIBS testing is an ongoing area of study.
Speaker: Ms Vlora Riberdy (University of Windsor)
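The line-normalisation step mentioned in the abstract above can be illustrated with synthetic spectra (these are not measured nail data): dividing the Zn line intensity by a reference emission line removes most of the common-mode shot-to-shot variation.

    import numpy as np

    # Synthetic LIBS normalisation sketch: shot-to-shot laser fluctuations
    # scale all lines together, so the ratio Zn/reference is much more stable
    # than the raw Zn line intensity. All numbers are placeholders.
    rng = np.random.default_rng(0)
    n_shots = 50
    laser_factor = 1.0 + 0.2 * rng.standard_normal(n_shots)   # shot-to-shot coupling
    zn_line  = 100.0 * laser_factor * (1 + 0.03 * rng.standard_normal(n_shots))
    ref_line = 400.0 * laser_factor * (1 + 0.03 * rng.standard_normal(n_shots))

    rsd = lambda x: x.std() / x.mean()
    print(f"raw Zn line RSD:       {rsd(zn_line):.3f}")
    print(f"normalised Zn/ref RSD: {rsd(zn_line / ref_line):.3f}")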
The 2018 Shutdown of the NRU Reactor 30m
The federal government recently announced its decision to shut down the NRU reactor in 2018. The National Research Universal (NRU) reactor commenced operation in 1957, to provide neutrons for several missions simultaneously, including the production of neutron beams to support fundamental experimental research on solids and liquids, advancing knowledge of condensed matter physics. Today, the Canadian Neutron Beam Centre manages six thermal neutron beam lines at the NRU reactor, and sustains a team of scientific and technical experts who enable collaborative research projects to be performed effectively by students and scientists from over 30 Canadian universities, as well as over 100 foreign institutions from about 20 countries. The Canadian Institute for Neutron Scattering has organized a meeting for Canada's physics community to consider whether and how the imminent loss of this unique Canadian resource should be addressed. This presentation will provide historical context and details of the current situation, as background for an informed conversation about options and actions over the next few years.
Speaker: Dr John Root (Canadian Neutron Beam Centre)
M1-6 Neutrinoless Double-beta Decay I (PPD-DNP) / Double désintégration beta sans neutrino I (PPD-DPN) CCIS 1-140
Convener: Rudiger Picker (TRIUMF)
Neutrino in the Standard Model and beyond 30m
The Standard Model teaches us that in the framework of such general principles as local gauge symmetry, unification of weak and electromagnetic interactions, and Brout-Englert-Higgs spontaneous breaking of the electroweak symmetry, nature chooses the simplest possibilities. Two-component left-handed massless neutrino fields play a crucial role in the determination of the charged current structure of the Standard Model. The absence of the right-handed neutrino fields in the Standard Model is the simplest, most economical possibility. In such a scenario the Majorana mass term is the only possibility for neutrinos to be massive and mixed. Such a mass term is generated by the lepton-number violating Weinberg effective Lagrangian. In this approach three Majorana neutrino masses are suppressed with respect to the masses of other fundamental fermions by the ratio of the electroweak scale and a scale of lepton-number violating physics. The discovery of neutrinoless double beta-decay and the absence of transitions of flavor neutrinos into sterile states would be evidence in favor of the minimal scenario we advocate here.
Speaker: Prof. Samoil Bilenky (JINR (Dubna))
Status of the SNO+ Experiment 30m
The SNO+ experiment, at the SNOLAB underground laboratory, consists of 780 Mg of linear alkylbenzene scintillator contained in the 12 m diameter SNO acrylic sphere and observed by the SNO photomultiplier tubes. SNO+ will be loaded with tellurium, at approximately the 0.3% level, to enable a sensitive search for neutrinoless double beta decay. This talk will detail the experiment, the sensitivity and the status of the detector.
Speaker: Prof. Aksel Hallin (University of Alberta)
Extraction of optical parameters in SNO+ with an in-situ optical calibration system 15m
SNO+ is a multi-purpose neutrino physics experiment investigating neutrinoless double beta decay and neutrino oscillations. The SNO+ detector consists of a 12m diameter acrylic vessel (AV), surrounded by ultra-pure water and approximately 9500 photomultiplier tubes (PMTs) which are positioned on a stainless steel PMT support structure (PSUP). The acrylic vessel will be filled with liquid scintillator. An in-situ optical calibration system based on LEDs and laser sources has been deployed. These optical sources feed light into the detector via optical fibres mounted on the PSUP, resulting in various beams of light. A collimated source will be used to measure the scattering in the liquid scintillator. Data have been taken while the AV was empty to understand the optical properties of the detector. We have analyzed the data to establish properties of the calibration system and to quantify the surface parameters, reflectivity and surface roughness responsible for scattering, as well as various parameters of the optical calibration system. These parameters will be a valuable input to the position and energy reconstruction algorithms, as well as the simulation, of SNO+.
Speaker: Dr Kalpana Singh Singh (Department of Physics, University of Alberta)
Double-beta decay half-life of 96Zr – nuclear physics meets geochemistry 15m
Double-beta (\beta\beta) decay measurements are a class of nuclear studies with the objective of detecting the neutrinoless (0\nu) decay variants. Detection of a 0\nu\beta\beta decay would prove the neutrino to be massive and to be its own anti-particle (i.e., a Majorana particle). A key parameter in the detection of the 0\nu\beta\beta decay is the energy, or Q-value, of the decay. ^{96}Zr is of particular interest as a double-beta decay candidate. A geochemical measurement of its \beta\beta decay half-life by measuring an isotopic anomaly of the ^{96}Mo daughter in ancient zircon samples yielded a value of 0.94(32)x10^{19} yr [1]. More recently, the NEMO collaboration measured the half-life directly to be 2.4(3)x10^{19} yr [2], twice as long as the geochemical measurement. As the geochemical result could be contaminated by a sequence of two single \beta-decays, the first being a 4-fold unique forbidden \beta-decay of ^{96}Zr to the 44 keV J^{\pi}=5^+ excited state in ^{96}Nb, followed by the 23 h \beta-decay of ^{96}Nb to ^{96}Mo, further study is mandated. Depending on the Q-value for the first decay, the estimated half-life could be of the same order as the one for the \beta\beta-decay [3]. However, the key parameter is the Q-value for the single \beta-decay, which enters in leading order as Q^{13} into the phase-space factor of the decay. Such a study is being carried out at the TRIUMF TITAN experiment and at the University of Calgary Isotope Science Lab. At TITAN we are measuring the Q-values for the ^{96}Zr to ^{96}Mo \beta\beta-decay and for the ^{96}Zr to ^{96}Nb single \beta-decay, with the goal of reaching a precision near 0.1 keV. At the UCalgary ISL, we are repeating the measurement of the ^{96}Mo isotopic anomaly using modern equipment and techniques. Combined, these measurements will remove a long-standing discrepancy of the two independent ^{96}Zr \beta\beta-decay half-life measurements. [1] M. E. Wieser and J. R. De Laeter, Phys. Rev. C 64, 024308 (2001). [2] NEMO-3 Collaboration, Nucl. Phys. A 847, 168-179 (2010). [3] J. Suhonen, Univ. Jyväskylä, private communication.
Speaker: Adam Mayer (University of Calgary)
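To see why the targeted ~0.1 keV precision matters, note the error propagation implied by the Q^{13} scaling quoted above (the 100 keV Q-value in the final step is only an illustrative order of magnitude, not the measured value):

    \[
    f \propto Q^{13}
    \quad\Rightarrow\quad
    \frac{\delta T_{1/2}}{T_{1/2}} \simeq \frac{\delta f}{f} \simeq 13\,\frac{\delta Q}{Q}
    \approx 13 \times \frac{0.1\ \mathrm{keV}}{100\ \mathrm{keV}} \approx 1\%,
    \]

so even a sub-keV uncertainty in the single beta-decay Q-value propagates into a percent-level uncertainty on the estimated half-life of the competing decay branch.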
M1-7 Advances in Nuclear Physics and Particle Physics Theory (DNP-PPD-DTP) / Progrès en physique nucléaire et en physique des particules théoriques (DPN-PPD-DPT) CCIS 1-160
Convener: Pierre Ouimet (University of Regina)
Ab initio calculations of nuclear structure and reactions 30m
The description of nuclei starting from the constituent nucleons and the realistic interactions among them has been a long-standing goal in nuclear physics. In recent years, a significant progress has been made in developing ab initio many-body approaches capable of describing both bound and scattering states in light and medium mass nuclei based on input from QCD employing Hamiltonians constructed within chiral effective field theory. We will discuss recent breakthroughs that allow for ab initio calculations for ground states, spectroscopy and reactions of nuclei and even hypernuclei throughout the p- and sd-shell and beyond with two- and three-nucleon interactions. We will also present results for nuclear reactions important for astrophysics, such as 7Be(p,γ)8B and 3He(α,γ)7Be radiative capture, and for 3H(d,n)4He fusion.
Speaker: Petr Navratil (TRIUMF)
New horizons for MCAS: heavier masses and alpha-particle scattering. 15m
The Multi-Channel Algebraic-Scattering (MCAS) method, developed in 2003 for the analysis of low-energy nuclear spectra and of resonant scattering, continues to be effectively used for nuclear-structure studies. The MCAS approach allows the construction of the nucleon-core model Hamiltonian which can be defined in detail (coupling to the collective modes, rotational or vibrational, diverse components of the interaction operators, nonlocal effects due to Pauli exclusion). As reported at previous CAP congresses, MCAS analyses have given good descriptions of bound states and low-lying resonant spectra of medium-light nuclei, including nuclei well off the line of stability. This presentation deals with new directions for MCAS, specifically, moving to heavier target nuclei (mass A = 18-23) and new projectiles in the scattering process, recently, the α particle. New results will be shown for n+18O and p+18O, n+22Ne, and α scattering on targets from mass A=3 to A=16, the last of these yielding structure information for 20Ne.
Speaker: Dr Juris P. Svenne (University of Manitoba, Dept. of Physics and Astronomy)
The coefficient of restitution of inflatable balls 15m
The bouncing of sports balls is often characterized in terms of the coefficient of restitution, which represents the ratio of the after-impact velocity to the before-impact velocity. While the behaviour of the coefficient of restitution as a function of the internal pressure of the ball has been studied, no theoretical justification has been given for any parametric curve fitted to the data. In this talk, we present a mechanistic model of the ball, leading to a simple two-parameter fit. The model will be compared to several commonly available sports balls.
Speaker: Mr Gaëtan Landry (Dalhousie University)
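As a sketch of what a "simple two-parameter fit" could look like in practice (the saturating functional form and the data points below are placeholders, not the mechanistic model or the measurements from the talk):

    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative two-parameter fit of the coefficient of restitution e
    # versus internal gauge pressure p. Functional form and data are assumed.
    def e_model(p, e_inf, p0):
        return e_inf * p / (p + p0)          # assumed saturating form

    p_data = np.array([20, 40, 60, 80, 100, 120], dtype=float)   # kPa (synthetic)
    e_data = np.array([0.55, 0.66, 0.71, 0.74, 0.76, 0.77])      # synthetic

    popt, pcov = curve_fit(e_model, p_data, e_data, p0=(0.8, 20.0))
    perr = np.sqrt(np.diag(pcov))
    print("e_inf = %.3f +/- %.3f,  p0 = %.1f +/- %.1f kPa"
          % (popt[0], perr[0], popt[1], perr[1]))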
M1-8 Energy frontier: Standard Model and Higgs Boson I (PPD) / Frontière d'énergie: modèle standard et boson de Higgs I (PPD) CCIS L1-140
Convener: Bhubanjyoti Bhattacharya (University of Montreal)
Summary of ATLAS Standard Model measurements (including top quark) 30m
While the LHC is ramping up for its second run at yet higher centre-of-mass energy, the experimental Collaborations are not only preparing for this run, but also ensuring that the maximum amount of information is extracted from the data taken in 2011 and 2012 at centre-of-mass energies of 7 and 8 TeV, respectively. Many precision standard model measurements have been carried out, spanning some 14 orders of magnitude in production cross section. Some striking examples of achievements by the ATLAS Collaboration in measurements of production and properties of standard model particles will be presented. Particular focus will be placed on the observation of new processes or novel experimental techniques used to improve the precision of the analyses.
Speaker: Alison Lister (University of British Columbia (CA))
Measurement of the Higgs-boson properties with the ATLAS detector at the LHC 30m
A detailed review on the properties of the Higgs boson, as measured with the ATLAS experiment at the LHC, will be given. The results shown here use approximately 25 fb-1 of pp collision data, collected at 7 TeV and 8 TeV in 2011 and 2012. The measurements of the mass, couplings properties and main quantum numbers will be presented. Prospects for the upcoming Run2, starting in May 2015, will be reviewed.
Speaker: Manuela Venturi (University of Victoria (CA))
Measurement of the γγ → WW cross-section and searches for anomalous quartic gauge couplings WWγγ at the ATLAS experiment 15m
Searches for the anomalous quartic gauge coupling of two photons to two W bosons (WWγγ) were made at LEP and the Tevatron. More recently, many searches have been performed by the CMS and ATLAS collaborations at the Large Hadron Collider (LHC). Among the processes sensitive to these couplings are Wγ and γγ → WW production. In hadron colliders, γγ → WW events where the W bosons decay into leptons (electrons, muons or taus that subsequently decay into electrons or muons) have a clean signature. The two charged leptons originate from a vertex devoid of other outgoing particles, because they are produced by an electroweak interaction. Isolating the lepton vertex from other tracks suppresses strong interactions that produce many extra charged particles, including higher cross-section processes such as Drell-Yan and top production. In this talk, I will present the measurement of the γγ → WW cross-section and searches for the WWγγ anomalous quartic gauge couplings using the data collected by the ATLAS experiment during 2012.
Speaker: Chav Chhiv Chau (University of Toronto (CA))
M1-9 Ultrafast and Time-resolved Processes (DAMOPC) / (DPAMPC) CCIS L2-200
Convener: Chitra Rangan (University of Windsor)
Following an Auger Decay by Attosecond Pump-Probe Measurements 30m
Attosecond Physics is an emerging field at the international level which now provides tabletop attosecond (1 as = 10^-18 s) light sources extending from the extreme ultraviolet (XUV, 10-100 eV) to X-rays (keV) [1]. This feat opens new avenues in atomic and molecular spectroscopies [2], especially to perform time-resolved experiments of ultrafast electron dynamics on the unexplored attosecond timescale [3]. I will present the first attosecond pump-probe measurement where an XUV attosecond pulse initiates an Auger decay and where an attosecond broadband optical pulse probes this ultrafast process. Supported by our model, we suggest that the optical probe acts as a gate of the Auger transition, in analogy with the FROG (frequency-resolved optical gating) technique commonly used for measuring femtosecond laser pulses [4]. We believe this is a universal idea that will prevail in attosecond measurements [5]: I will show how our pump-probe scheme and modeling can reveal few-femtosecond (atomic) to sub-femtosecond (condensed matter) Auger lifetimes. [1] T. Popmintchev *et al.*, Nature Photonics 4, 822 (2010). [2] J. B. Bertrand *et al.*, Nature Physics 9, 174 (2013). [3] S. R. Leone *et al.*, Nature Photonics 8, 162 (2014). [4] R. Trebino, FROG, Kluwer Academic Publishers, Boston (2002). [5] A. Moulet, J.B. Bertrand *et al.*, Okinawa, Ultrafast Phenomena (2014).
Speaker: Prof. Julien Beaudoin Bertrand (Université Laval)
Ultrafast imaging of nonlinear terahertz pulse transmission in semiconductors 15m
Terahertz pulse spectroscopy has been widely used for probing the optical properties and ultrafast carrier dynamics of materials in the far-infrared region of the spectrum. Recently, sources of intense terahertz (THz) pulses with peak fields higher than 100 kV/cm have allowed researchers to explore ultrafast nonlinear THz dynamics in materials, such as THz-pulse-induced intervalley scattering in semiconductors. Here, we use a gated intensified CCD camera and full-field electro-optic imaging with femtosecond laser pulses to directly observe dipole electric fields arising from shift currents induced by intense THz pulses in n-doped InGaAs. Voltage pulses generated by the THz-pulse-induced shift currents are also measured directly on a high speed oscilloscope. The polarization of the shift current with respect to that of the THz pump beam is determined. The simultaneous measurement of both the induced dipole and transmitted THz pulse allows for sub-picosecond resolution imaging of nonlinear THz dynamics in semiconductors.
Speaker: Haille Sharum (University of Alberta)
Energy transfer dynamics in blue emitting functionalized silicon nanocrystals 15m
We use time-resolved photoluminescence (TRPL) spectroscopy to study the effects of surface passivation and nanocrystal (NC) size on the ultrafast PL dynamics of colloidal SiNCs. The SiNCs were passivated by dodecylamine and ammonia, and exhibit blue emission centered at ~473 nm and ~495 nm, respectively. For both functionalizations, increasing the size of the NCs from ~3 nm to ~6 nm did not result in a PL red-shift, but instead showed an identical spectral profile. More interestingly, the nanosecond PL decay dynamics are size- and wavelength-independent with a radiative recombination rate on the order of ~10^8 s^-1, characteristic of PL from charge transfer states associated with silicon oxynitride bonds. Based on TRPL and fluence-dependent measurements, we hypothesize that electrons are first photoexcited within the SiNCs and then rapidly transferred to silicon oxynitride bonds at the surface, creating charge transfer states responsible for the nanosecond blue PL.
Speaker: Glenda De los Reyes (Physics Department, University of Alberta)
Molecular SuperRotors: Control and properties of molecules in extreme rotational states 30m
Extremely fast rotating molecules, known as "super-rotors", may exhibit a number of unique properties, from rotation-induced nano-scale magnetism to formation of macroscopic gas vortices. Orchestrating molecular spinning in a broad range of angular frequencies is appealing from the perspective of controlling molecular dynamics. Yet in sharp contrast to an optical excitation of molecular vibration, laser control of molecular rotation is rather challenging. I will report on our recent progress in generating and controlling molecular super-rotors (e.g. oxygen molecules occupying ultrahigh rotational states, J > 120, or carbon dioxide with J > 400) with specially designed intense laser pulses, known as an "optical centrifuge". I will discuss the results of our study of collisional, optical and magnetic properties of molecular super-rotors.
Speaker: Valery Milner (UBC)
Health Break / Pause santé CCIS L2 Foyer
CAP-NSERC Liaison Cttee Mtg / Réunion du comité de liaison ACP-CRSNG CCIS 4-285
M2-1 Computational methods in condensed matter physics (DCMMP) / Méthodes numériques en physique de la matière condensée (DPMCM) NINT Taylor room
Convener: Jesko Sirker (U Manitoba)
Extrinsic Spin Hall Effect in Graphene 30m
The intrinsic spin-orbit coupling in graphene is extremely weak, making it a promising spin conductor for spintronic devices. However, for many applications it is desirable to also be able to generate spin currents. Theoretical predictions and recent experimental results suggest one can engineer the spin Hall effect in graphene by greatly enhancing the spin-orbit coupling in the vicinity of an impurity. The extrinsic spin Hall effect then results from the spin-dependent scattering of carriers by impurities in the presence of spin-orbit interaction. This effect can be used to convert charge currents into spin currents efficiently. I will discuss recent experimental results on the spin Hall effect in graphene decorated with adatoms and metallic clusters [1,2] and show that a large spin Hall effect can appear in graphene in the presence of locally enhanced spin-orbit coupling. I will present results from single impurity scattering calculations [3], and also from a real-space implementation of the Kubo formalism [4] for tight-binding Hamiltonians with different forms of spin-orbit coupling. [1] J. Balakrishnan et al., Nat. Phys. 9, 284 (2013). [2] J. Balakrishnan et al., Nat. Commun. 5, 4748 (2014). [3] A. Ferreira, T. G. Rappoport, M. A. Cazalilla, A. H. Castro Neto, Phys. Rev. Lett. 112, 066601 (2014). [4] Jose H. Garcia, Lucian Covaci and Tatiana G. Rappoport, arXiv:1410.8140.
Speaker: Tatiana Rappoport (Federal University of Rio de Janeiro)
Klein Tunnelling in Graphene 15m
In 1929 Oskar Klein solved the Dirac equation for electrons scattering off of a barrier. He found that the transmission probability increased with potential height, unlike the non-relativistic case where it decreases exponentially. This phenomenon can also be seen in a graphene lattice, where the energy bands form a structure known as a Dirac cone around the points where they touch. In this project we analyze this phenomenon without substituting the graphene Hamiltonian for the Dirac Hamiltonian. First we analyse the propagation of Gaussian wave packets on the one-dimensional lattice, the two-dimensional square lattice, and the graphene lattice. Here we look at how the wave packet evolves in time as it propagates. We then study how the packet tunnels through barriers on the graphene lattice, focusing on the region where the Dirac cone is formed. We compare this tunnelling to the case of the non-relativistic and the relativistic free particle.
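As a minimal illustration of the first step described above (a sketch, not the authors' code), the snippet below propagates a Gaussian wave packet on a one-dimensional tight-binding lattice with a square barrier added on-site; the lattice size, barrier height and packet parameters are arbitrary choices.

```python
# Gaussian wave packet on a 1D tight-binding chain, evolved by eigendecomposition.
import numpy as np

N, t = 400, 1.0                      # sites, hopping amplitude
x = np.arange(N)
V = np.zeros(N)
V[250:270] = 0.5                     # a square barrier (illustrative height)

# tight-binding Hamiltonian: -t on nearest-neighbour bonds, V on the diagonal
H = np.diag(V) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
E, U = np.linalg.eigh(H)

# Gaussian wave packet with mean position x0, width sigma and momentum k0
x0, sigma, k0 = 100.0, 10.0, 0.8
psi0 = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2) + 1j * k0 * x)
psi0 /= np.linalg.norm(psi0)

def evolve(psi0, time):
    """|psi(t)> = U exp(-i E t) U^dagger |psi(0)>  (hbar = 1)."""
    c = U.conj().T @ psi0
    return U @ (np.exp(-1j * E * time) * c)

psi_t = evolve(psi0, 150.0)
transmission = np.sum(np.abs(psi_t[270:]) ** 2)   # probability weight past the barrier
print(f"transmitted probability ~ {transmission:.3f}")
```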
Speaker: Mr Kameron Palmer (University of Alberta)
Extensions of Kinetic Monte Carlo simulations to study thermally activated grain reversal in dual-layer Exchange Coupled Composite recording media. 15m
Thermal activation processes represent the biggest challenge to maintaining data on magnetic recording media, which are composed of uniformly magnetized nanometer-scale grains. These processes occur over long time scales, years or decades, and result in reversal of the magnetization of the media grains through rare events. Typically, rare events present a challenge if modelled by conventional micromagnetic techniques, as these are limited to time scales on the order of microseconds even with the best computer resources. A convenient approach that can access long time scales and simulate such rare-event processes is the kinetic Monte Carlo (KMC) method. The KMC method computes the time between successive grain reversals induced by an external magnetic field based on an Arrhenius-Néel approximation for thermally activated processes. The KMC method has recently been applied to model single-layer media [1], and we have now extended the method to study dual-layer Exchange Coupled Composite (ECC) media used in current generations of disc drives. A complication in using the KMC method for ECC media is the complex reversal process of coupled grains due to the existence of metastable states. The energy barrier separating the metastable states is obtained from the minimum energy path (MEP) using a variant of the nudged elastic band method [2], and the attempt frequency is calculated based on the Langer formalism [3]. To simplify carrying the KMC method from single-layer media to dual-layer media, we have performed a detailed study of only two coupled grains to help us understand and explore the energy landscape of ECC media and handle the complications associated with it [4]. Applications to the study of characteristic MH hysteresis loops for multi-grained dual-layer systems are presented. 1. M. L. Plumer, T. J. Fal, J. I. Mercer, J. P. Whitehead, J. van Ek, and A. Ajan, IEEE Trans. Mag. 50, 3100805 (2014). 2. R. Dittrich, T. Schrefl, D. Suess, W. Scholz, H. Forster and J. Fidler, J.M.M.M. 250, L12–L19 (2002). 3. J. S. Langer, Ann. Phys. (N.Y.) 54, 258 (1969). 4. A. M. Almudallal, J. I. Mercer, J. P. Whitehead, M. Plumer, J. van Ek and T. J. Fal (submitted).
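A schematic sketch of the Arrhenius-Néel ingredient of the KMC scheme described above: the reversal rate of a single grain is r = f0 exp(-E_b / kB T), and the waiting time to the next reversal is drawn from the corresponding exponential distribution. The attempt frequency, temperature and barrier below are illustrative values, not parameters of the cited media models.

```python
import numpy as np

kB = 1.380649e-23        # J/K
f0 = 1.0e9               # attempt frequency (Hz), illustrative
T = 300.0                # temperature (K)

def arrhenius_rate(E_b):
    """Thermally activated reversal rate for energy barrier E_b (J)."""
    return f0 * np.exp(-E_b / (kB * T))

def waiting_time(E_b, rng=np.random.default_rng(0)):
    """Sample the time to the next reversal event (seconds)."""
    return rng.exponential(1.0 / arrhenius_rate(E_b))

E_b = 45 * kB * T        # a 45 kBT barrier, a typical thermally stable scale
print(f"rate = {arrhenius_rate(E_b):.3e} 1/s, sampled waiting time = {waiting_time(E_b):.3e} s")
```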
Speaker: Dr Ahmad Almudallal (Memorial University of Newfoundland)
The Kronig-Penney model extended to arbitrary potentials via numerical matrix mechanics 15m
We present a general method using matrix mechanics to calculate the bandstructure for 1D periodic potential arrays, filling in a pedagogical gap between the analytic solutions to the Kronig-Penney model and more complicated methods like tight-binding. By embedding the potential for a unit cell of the array in a region with periodic boundary conditions, we can expand in complex exponential basis states to solve for the matrix elements. We show that Bloch's condition can be added in a potential-independent way, and so repeated diagonalizations of the unit cell matrix with different parameters of the crystal momentum will fill out the bandstructure. Comparisons with the analytic solutions to the Kronig-Penney model show excellent agreement. We then generate bands for two variants of the Kronig-Penney model, the periodic harmonic oscillator and its inverted form, and a symmetric linear well such that each has similarly-bounded electrons at the peak of the third energy band. We show how these different, more "realistic", potentials can be used to tune electron-hole effective mass asymmetries. Finally, preliminary results for the extension to 2D are demonstrated.
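One standard way to realize the strategy described above (a sketch, not the authors' code) is to expand the unit-cell potential in plane waves and diagonalize H(k) for each crystal momentum; here hbar = m = 1, the cell length is a, and the potential is a Kronig-Penney-like square barrier.

```python
import numpy as np

a, nG, nx = 1.0, 15, 512                     # cell length, basis cutoff, real-space grid
G = 2 * np.pi / a * np.arange(-nG, nG + 1)   # reciprocal lattice vectors
x = np.linspace(0.0, a, nx, endpoint=False)

V = np.where((x > 0.4 * a) & (x < 0.6 * a), 20.0, 0.0)   # square barrier in the unit cell

def V_fourier(dG):
    """Fourier component V(dG) of the unit-cell potential."""
    return np.mean(V * np.exp(-1j * dG * x))

def bands(k):
    """Eigenvalues of H(k) in the plane-wave basis: H = (k+G)^2/2 + V(G-G')."""
    H = np.diag(0.5 * (k + G) ** 2).astype(complex)
    for i, Gi in enumerate(G):
        for j, Gj in enumerate(G):
            H[i, j] += V_fourier(Gi - Gj)
    return np.linalg.eigvalsh(H)

ks = np.linspace(-np.pi / a, np.pi / a, 61)              # first Brillouin zone
bandstructure = np.array([bands(k)[:4] for k in ks])     # lowest four bands
print(bandstructure[len(ks) // 2, :])                    # band energies at k = 0
```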
Speaker: Mr Pavelich Robert (University of Alberta)
A Multiorbital DMFT Analysis of Electron-Hole Asymmetry in the Dynamic Hubbard Model 15m
The dynamic Hubbard model (DHM) improves on the description of strongly correlated electron systems provided by the conventional single-band Hubbard model through additional electronic degrees of freedom, namely a second, higher energy orbital and associated hybridization parameters for interorbital transitions. The additional orbital in the DHM provides a more realistic modeling of electronic orbital "relaxation" in real lattices. One result of orbital relaxation is a clear electron-hole asymmetry, absent in the single-band case. We have employed the computational technique of dynamical mean field theory, generalized to the two-orbital case, to study this asymmetry with respect to varying system parameters, including both intersite and intrasite orbital hybridization as well as the role played by Mott physics. Our results stand in good agreement with previous exact diagonalization studies of the DHM.
Speaker: Christopher Polachic
M2-10 Atomic and Molecular Spectroscopy: microwave to X-ray (DAMOPC) / Spectroscopie atomique et moléculaire: des micro-ondes aux rayons X (DPAMPC) CCIS L2-200
Convener: Steven Rehse (University of Windsor)
SPECTROSCOPIC LINE-SHAPE STUDIES FOR ENVIRONMENTAL AND METROLOGIC APPLICATIONS 30m
Our research group has investigated the spectra of several gases of environmental importance using our 3-channel laser spectrometer or the experimental facility at the far-infrared beamline at the Canadian Light Source. Our results have been used by others through our contributions to the HITRAN and GEISA databases relied on by the atmospheric community, and we have made our own contributions to the field. Our group has also performed accurate measurements of the fundamental Boltzmann constant based on a line-shape analysis of acetylene spectra recorded using a tunable diode laser. This study is of high importance since the accuracy of our laser-spectroscopy-based measurement is the second best in the world.
Speakers: Li-Hong Xu (University of New Brunswick) , Ronald Lees (University of New Brunswick)
CLS Synchrotron FIR Spectroscopy of High Torsional Levels of CD3OH: The Tau of Methanol 15m
Structure from high torsional levels of the CD$_3$OH isotopologue of methanol has been analyzed in Fourier transform spectra recorded at the Far-Infrared beamline of the Canadian Light Source synchrotron in Saskatoon. Energy term values for $A$ and $E$ torsional species of the third excited torsional state, v$_t$ = 3, are now almost complete up to rotational levels $K$ = 15, and thirteen substates have so far been identified for v$_t$ = 4. The spectra show interesting close groupings of strong high-v$_t$ sub-bands related by Dennison's torsional symmetry index $\tau$, rather than $A$ and $E$, that can be understood in terms of a simple and universal free-rotor "spectral predictor" chart. The energy curves for the v$_t$ = 3 and 4 ground-state torsional levels pass through several of the excited vibrational states, and a number of anharmonic and Coriolis interactions have been detected through perturbations to the spectra and appearance of forbidden sub-bands due to strong mixing and intensity borrowing.
Speaker: Dr Ronald Lees (Centre for Laser, Atomic and Molecular Sciences, Department of Physics, University of NB)
Analysis of Quantum Defects in high energy Helium P states 15m
Quantum defects are useful in interpreting high energy atomic states in terms of simple hydrogenic energy levels. We will find the energy levels for the 1snp singlet and triplet P states of helium from $n = 2$ to $n = 12$ with some of the most accurate helium atom calculations to date, using the exact non-relativistic Hamiltonian with wave functions expanded in a basis set of Hylleraas coordinates. The results will be used to determine accurate values for the coefficients in the quantum defect expansion: $\delta = \delta_0 + \delta_2/n^{*2} + \delta_4/n^{*4} + \cdots$, where $n^* = n - \delta$. We will also test the usual assumption that only the even powers of $1/n^*$ need be included [1]. In addition, we will study the effectiveness of a unitary transformation in reducing the numerical linear dependence of the basis set for large basis sets.
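Once quantum defects δ(n) are extracted from calculated energies, the quoted expansion becomes linear in the coefficients, because n* = n − δ is known for each data point. A hedged sketch of that fit (with made-up placeholder defects, not the Hylleraas results) is:

```python
import numpy as np

n = np.arange(2, 13)
# fake quantum defects, roughly following the expansion form (placeholders only)
delta = 0.012 + 0.8e-3 / (n - 0.012) ** 2 - 2.0e-4 / (n - 0.012) ** 4

n_star = n - delta                       # n* follows directly from each data point
A = np.column_stack([np.ones_like(n_star), n_star ** -2.0, n_star ** -4.0])
(delta0, delta2, delta4), *_ = np.linalg.lstsq(A, delta, rcond=None)
print(f"delta0 = {delta0:.6f}, delta2 = {delta2:.2e}, delta4 = {delta4:.2e}")
```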
Speaker: Ryan Peck
Precision Measurement of Lithium Hyperfine and Fine Structure Intervals 15m
A number of experiments have precisely measured fine and hyperfine structure splittings as well as isotope shifts for several transitions at optical frequencies for 6,7Li [1]. These data offer an important test of theoretical techniques developed by two groups to accurately calculate effects due to QED and the finite nuclear size in two- and three-electron atoms. The work by multiple groups studying several transitions in both Li+ and neutral Li permits a critical examination of the consistency of the experimental work and of the theory separately. Combining the measured isotope shifts with the calculated energy shifts passing these consistency tests permits the determination of the relative nuclear charge radius with an uncertainty approaching 1 x 10^-18 m, which is more than an order of magnitude better than obtained by electron scattering. Progress toward a precision measurement of the fine structure constant is also discussed. 1. W. A. van Wijngaarden & B. Jian, European Physical Journal D, 222, 2057-2066 (2013)
Speaker: Prof. William van Wijngaarden (Physics Department, York University)
Dual Co-Magnetometer using Xe129 for Measurement of the Neutron's Electric Dipole Moment 15m
A new high-density ultracold neutron source is being constructed and developed at TRIUMF in Vancouver, BC with collaborators from Japan and several Canadian research groups. One of the first goals of this collaboration is to measure the electric dipole moment (EDM) of the neutron to an uncertainty of <10$^{-27}$ e-cm. To measure the nEDM, a magnetic resonance (MR) experiment on polarized neutrons is performed and the uncertainty of these measurements is limited by how well the magnetic field surrounding the neutrons is known. Previous nEDM experiments relied on a precise in-situ measurement of the homogeneous magnetic field using a Ramsey fringe measurement of the spin precession of Hg$^{199}$ (co-habiting with the cold neutrons). Our efforts are to develop a co-magnetometer for nEDM measurements in which both Hg$^{199}$ and a second atomic species (Xe$^{129}$) are introduced into the same region as the neutrons and measured simultaneously to better characterize the geometric phase effects which dominate the systematic uncertainties in the magnetic field determination. Xe$^{129}$ was chosen, in part, due to its negligible interactions with the neutrons and the Hg$^{199}$. The spin precession of Xe$^{129}$ will be detected by measuring the fluorescence decay following a spin-selective 2-photon transition (driven by 252 nm light) from the ground 5p$^6$($^1$S$_0$) state to the 5p$^5$($^2$P$_{3/2}$)6p excited state. For this purpose, we have first developed a high power (~200 mW) continuous wave UV laser. In this talk we will discuss the next steps in our co-magnetometer development: our latest results on characterizing the precision of Xe$^{129}$ in the excited state using this laser and subsequently measuring the Larmor frequency of the polarized Xe$^{129}$ in a magnetic field.
Speaker: Joshua Wienands (University of British Columbia)
M2-2 Material growth and processing (DCMMP) / Croissance et traitement des matériaux (DPMCM) CCIS L2-190
Convener: David Broun (Simon Fraser University)
Field-tuned quantum criticality of heavy fermion systems 30m
Intensive study of strongly correlated electronic systems has revealed the existence of quantum phase transitions from ordered states to disordered states driven by non-thermal control parameters such as chemical doping, pressure, and magnetic field. In this presentation I will discuss recent progress on magnetic field-tuned quantum criticality with particular emphasis on the Fermi liquid instabilities of conduction electrons in heavy fermion metals and emergent phases around quantum critical points. In particular, a wide range of strange metallic behavior has been observed beyond the quantum critical point in Yb-based materials: YbAgGe, Ge-doped YbRh2Si2, YbPtBi. In the H-T phase diagram of YbPtBi, for example, three regimes of its low temperature states emerge: (I) an antiferromagnetic state, characterized by a spin-density-wave-like feature, which can be suppressed to T = 0 by the relatively small magnetic field of Hc ~ 4 kOe, (II) a field-induced anomalous state in which the electrical resistivity follows ρ(T) ~ T^1.5 between Hc and ~ 8 kOe, and (III) a Fermi liquid state in which ρ(T) ~ T^2 for H > 8 kOe. Regions I and II are separated at T = 0 by what appears to be a quantum critical point. Whereas region III appears to be a Fermi liquid associated with the hybridized 4f states of Yb, region II may be a manifestation of a spin liquid state. The observation of a separation between the antiferromagnetic phase boundary and the small to large Fermi surface transition in recent experiments has led to a new perspective on the mechanism for quantum criticality. In this new approach, the global phase diagram includes the effects of magnetic frustration, which is an important additional tuning parameter in the Kondo lattice model of heavy fermion materials. Frustration leads to enhanced quantum fluctuations, as the system tunnels between different competing magnetic states.
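As a purely illustrative aside (synthetic numbers, not YbPtBi data), the resistivity exponent that distinguishes the regimes quoted above can be extracted with a simple power-law fit of ρ(T) = ρ0 + A T^n:

```python
import numpy as np
from scipy.optimize import curve_fit

def rho(T, rho0, A, n):
    """Low-temperature resistivity model rho0 + A*T^n."""
    return rho0 + A * T ** n

T = np.linspace(0.1, 2.0, 40)                                           # K
rho_data = rho(T, 1.0, 0.5, 1.5) + 0.002 * np.random.default_rng(1).normal(size=T.size)

(rho0, A, n), _ = curve_fit(rho, T, rho_data, p0=(1.0, 0.5, 2.0))
print(f"fitted exponent n = {n:.2f}")   # ~1.5 signals the non-Fermi-liquid region II
```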
Speaker: Prof. Eundeok Mun (Simon Fraser University)
Ge:Mn Dilute Magnetic Semiconductor 15m
This work aims to develop a Ge:Mn dilute magnetic semiconductor and study the fundamental origin of ferromagnetism in this system. Using ion implantation at $77$ K, a single crystal Ge wafer was doped with magnetic Mn ions. The implantation was done at an ion energy of $4.76$ MeV with a fluence of 2 x 10$^{16}$ ions/cm$^2$. X-ray diffraction (XRD) of the as-implanted sample showed that the implanted layer was amorphous. Therefore, different samples were annealed at $200$°C, $330$°C and $600$°C in a tube furnace to achieve a solid phase epitaxial regrowth of the implanted layer. XRD of the sample annealed at $330$°C for $33$ hours showed a polycrystalline layer. The depth profile of Mn in the as-implanted sample and the post-annealed sample at $330$°C was determined using secondary ion mass spectroscopy (SIMS) and it was found that some Mn diffused to the surface during the annealing. XRD of the sample annealed at $600$°C for $35$ minutes showed peaks corresponding to an unknown phase in addition to peaks from amorphous and polycrystalline Ge. The sample annealed at $200$°C for $168$ hours showed no evidence of solid phase epitaxy. A SQUID was used to measure the magnetic properties of all samples. At low temperature, the as-implanted sample showed paramagnetic behaviour. Magnetic hysteresis at $5$ K and up to $200$ K was observed for the samples annealed at $330$°C and $200$°C. The $600$°C annealed sample showed no ferromagnetic response and a significant reduction in the paramagnetic response at low temperature compared to the as-implanted sample.
Speaker: Laila Obied (Brock University)
The nanostructure of (Ybx, Y1-x)2O3 thin films obtained by reactive crossed-beam laser ablation using bright-field and high-angle annular dark-field STEM imaging. 15m
Ytterbium-doped yttrium oxide thin films were obtained with a variant of pulsed laser deposition, called reactive crossed-beam laser deposition, wherein a cross-flow of oxygen, synchronized with the laser pulses, is used for oxidizing and entraining the ablation products of a Yb/Y alloy target towards a substrate placed inside a vacuum chamber [1]. The nanostructure of the films is examined using X-ray and electron diffraction, as well as Scanning Transmission Electron Microscopy (STEM). As-produced coatings are amorphous and become nanocrystalline cubic yttria after annealing. STEM images taken in the Bright-Field and in the High-Angle Annular Dark-Field modes reveal complementary aspects of the nanostructure of yttria, namely the presence of oxygen vacancies and the distortion of the cationic lattice, respectively. These peculiarities of the crystalline structure play an important role in the luminescence properties of the Yb3+ ions, since they lower the crystal-field symmetry of the Yb3+ substitution site, which directly affects the luminescence spectra. The ability to control the nanocrystal size through annealing thus provides control over the shape of the luminescence spectrum. Thin films made of luminescent materials are attractive for many applications such as coherent miniature optical sources [2] and optical conversion from the infrared to the visible range for in-vivo imaging [3], to name a few. References: [1] J.-F. Bisson, G. Patriarche, T. Marest, J. Thibodeau, Nanostructure and luminescence properties of amorphous and crystalline ytterbium-yttrium oxide thin films obtained with pulsed reactive crossed-beam deposition, J. Mater. Sci. 50(3), 1267-1276 (2015). [2] I. C. Robin, R. Kumaran, S. Penson, S. E. Webster, T. Tiedje and A. Oleinik, Structure and photoluminescence of Nd:Y2O3 grown by molecular beam epitaxy, Opt. Mat. 30, 835-838 (2008). [3] G. S. Yi, G. M. Chow, Synthesis of hexagonal-phase NaYF4:Yb,Er and NaYF4:Yb,Tm nanocrystals with efficient up-conversion fluorescence, Adv. Funct. Mater. 16(18), 2324-2329 (2006).
Speaker: Prof. Jean-François Bisson (Université de Moncton)
Investigation of the effect of growth condition on defects in MBE grown GaAs1-xBix 15m
Incorporation of bismuth into GaAs causes an anomalous bandgap reduction (88 meV/% for dilute alloys) with rather small lattice mismatch compared to ternary In or Sb alloys. The bandgap can be adjusted over a wide range of infrared wavelengths up to 2.5 μm by controlling the Bi content of the alloy, which is useful for laser, detector and solar cell applications. Semiconductor lasers are compact and efficient, so they are the preferred choice in many applications. GaAs1-xBix can be used as the light emitting material for the 1-1.3 μm communication wavelengths. Another application is vertical-external-cavity surface-emitting lasers (VECSELs) to generate high-power infrared output whose frequency is then doubled to achieve yellow laser light. The first step in making a laser is optimizing the GaAs1-xBix growth parameters to realize the best material quality, which cannot be achieved unless the defects in the crystal are understood. The three requirements for MBE growth of GaAs1-xBix are: low growth temperature (compared to standard GaAs), small As2:Ga ratio and controlled Bi flux. In this research, we tried to understand the relation between the growth conditions and the crystal defects using photoluminescence (PL) and deep level transient spectroscopy (DLTS). PL intensity is a good relative gauge for the number of defects as the defects are typically non-radiative recombination centres. Our results show that the reduction of growth temperature from 400°C to 300°C with all other growth conditions fixed causes the Bi concentration in the deposited films to increase from 1% to 5%, but the PL intensity decreases by more than a factor of 1000. Changes in the other two growth conditions, As2:Ga ratio and Bi flux, affect the Bi incorporation but they are not as important factors in the PL intensity as the growth temperature. Two samples were grown at different temperatures (330°C and 375°C) with approximately the same Bi concentration (~2%) at a stoichiometric As:Ga flux ratio. The temperature dependence of the PL shows that the sample grown at higher temperature has less photoluminescence emission from shallow defect states and a stronger temperature dependence of the bandgap. We interpret the shallow defects as intrinsic localized states close to the valence band edge associated with Bi next-nearest-neighbour clusters. DLTS measurements on GaAs and GaAsBi samples show that the density of deep levels increases at low growth temperature and that a Bi surfactant reduces the density of deep levels. DLTS measurements on dilute GaAsBi samples grown at different temperatures will be presented.
Speaker: Vahid Bahrami Yekta (University of Victoria)
Atomic Force Microscopy Characterization of Hydrogen Terminated Silicon (100) 2x1 Reconstruction 15m
Hydrogen terminated silicon (100) $2 \times 1$ (H:Si(100)) is examined using a novel non-contact atomic force microscopy (NC-AFM) approach. NC-AFM gives access to unique information on the surface such as unperturbed surface charge distributions, chemical bonding, and surface forces. H:Si(100) is an attractive surface for examination due to its potential for nano-electronics. Dangling bonds on the surface act as atomic silicon quantum dots and have application in quantum dot cellular automata-based nano-computing, which through geometrical arrangement can be used to create ultra-fast, ultra-low-power wires and logic gates. It also provides a promising platform for AFM examination of electronically decoupled adsorbed atoms, physisorbed molecules, and chemisorbed molecular structures. As part of this AFM analysis of H:Si(100), images were taken in the as yet unexplored constant height scanning mode. By incrementing the tip-sample distance above the surface, different force regimes were accessed. Attractive van der Waals forces were observed in the long range, and repulsive interactions indicative of Pauli-repulsive forces were seen at close range. An evolution of surface topography from attractive to repulsive surface forces is demonstrated, with the repulsive regime showing the first direct observation of the chemical bond structure of H:Si(100). Furthermore, site-specific force spectroscopy on key surface lattice points reveals unique force contributions. These location-specific profiles are compared to Density Functional Theory modeling for the surface, with catalogued site-specific differences having application in subtraction of background forces for the aforementioned deposited molecule or atom examination. NC-AFM contributes strongly to our understanding of forces at play in the surface structure of H:Si(100), opening the way for many future experiments.
Speaker: Ms Taleana Huff (University of Alberta)
M2-3 Theory, modelling and space weather II (DASP) / Théorie, modélisation et climat spatial II (DPAE) CAB 243
Development of Comprehensive Model of Earth Ionosphere and its Application for Studies of MI-coupling 30m
A comprehensive model of the Earth's ionosphere has been developed [Sydorenko and Rankin, 2012 and 2013]. The model is two-dimensional: it resolves the meridional direction and the direction along the geomagnetic field. Dipole coordinates are used and azimuthal symmetry is assumed. The model considers torsional Alfvén waves and includes the meridional convection electric field. The electric field along the geomagnetic field is calculated from the condition of quasineutrality. The ions (H+, N+, O+, N2+, NO+, and O2+) and the electrons are represented as conducting fluids. The neutrals (H, N, O, N2, NO, and O2) are treated as a stationary background; a meridional wind can be included but does not change the neutral parameters. Numerous heating and cooling processes, chemical reactions between ions and neutrals, recombination, and effects of energetic electron precipitation and EUV radiation are included. The main simulation area covers the altitude range from 100 km to a few thousand km, and the width of the main area at the bottom is up to a few hundred km. The model was applied to study oxygen ion upwelling caused by electron precipitation and Alfvén waves. Recently, the model was used to investigate plasma density and temperature oscillations observed by EISCAT during a period of intense magnetospheric activity and predicted that the event was accompanied by significant modification of the composition of neutrals in the thermosphere. Sydorenko, D., and R. Rankin (2012), Simulation of ionospheric disturbances created by Alfvén waves, J. Geophys. Res., 117, A09229, doi:10.1029/2012JA017693. Sydorenko, D., and R. Rankin (2013), Simulation of O+ upflows created by electron precipitation and Alfvén waves in the ionosphere, J. Geophys. Res. Space Physics, 118, 5562–5578, doi:10.1002/jgra.50531.
Speaker: Dr Dmytro Sydorenko (University of Alberta)
Solar wind modelling for operational forecasting 30m
Dark regions seen in extreme ultraviolet and X-ray images of the solar corona, called coronal holes (COHO), are known to be sources of fast solar wind streams. These streams often impact the Earth's magnetosphere and produce geomagnetic storms to which Canada is susceptible. COHO are associated with open coronal magnetic field lines along which fast solar wind streams emanate from the Sun. COHO can survive several solar rotations, especially near the solar minimum, giving rise to recurrent enhancements in the solar wind speed and geomagnetic activity. While solar wind forecasting can be based on COHO images by taking into account a statistical correlation between COHO area and solar wind parameters at the Earth, a more physics-based approach considers open magnetic field lines that extend from the photosphere to the corona. To forecast the solar wind, a numerical code based on the coronal field approach has been developed. To derive the global coronal magnetic field, potential-field source-surface and Schatten current-sheet models are used. Empirical relations, including Wang-Sheeley-Arge, are used to establish a link between the solar wind speed and properties of open magnetic field lines. Investigations of the solar wind speed and magnetic field polarity forecasts at the Earth for 2007-2014 show good agreement with observations, most notably around solar minimum. Disagreements, excluding those due to transient solar disturbances, are discussed. In particular, the role of COHO area size, their latitudinal location and proximity to active regions is discussed. Prospects of using the solar wind forecast in forecasting geomagnetic activity over Canada are examined.
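As a heavily simplified stand-in for the kind of empirical mapping mentioned above, the sketch below assigns a solar wind speed to an open field line from its flux-tube expansion factor; the functional form and coefficients are placeholders, not the operational Wang-Sheeley-Arge parameters.

```python
def wind_speed(f_s, v_slow=265.0, v_fast=410.0, alpha=0.4):
    """Empirical-style speed (km/s): fast wind from slowly expanding flux tubes.
    Coefficients here are illustrative placeholders only."""
    return v_slow + v_fast / f_s ** alpha

for f_s in (1.0, 10.0, 100.0):
    print(f"expansion factor f_s = {f_s:6.1f} -> v ~ {wind_speed(f_s):.0f} km/s")
```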
Speaker: Ljubomir Nikolic (Natural Resources Canada)
Using an information theory-based method for statistical detection of high-frequency climate signals in northern-hemisphere water supply variations 15m
Water scarcity is an acute global concern under population and economic growth, and understanding hydroclimatic variation is becoming commensurately more important for resource management. Climatic drivers of water availability vary complexly on many time- and space-scales, but serendipitously, the climate system tends to self-organize into coherent dynamical modes. Two of these are El Niño-Southern Oscillation (ENSO) and the Arctic Oscillation (AO), which have hemisphere- to planet-wide impacts on regional climate through intricate relationships called teleconnections. Traditionally, statistical studies assume such teleconnections are linear, or at least monotonic. Recent work instead suggests ENSO and AO impacts can be strongly nonlinear – specifically, parabolic. However, these phenomena remain incompletely understood, and river flows spatiotemporally integrate upstream climatic influences in complicated ways, sensitive to local terrestrial hydrologic characteristics. We therefore directly examine annual flow volume time series from 42 of the northern hemisphere's largest ocean-reaching rivers for highly nonlinear teleconnections. We apply a novel approach based on optimal polynomial selection using the Akaike information criterion, which combines the Kullback-Leibler information, quantifying how much information content is lost when approximating truth using a model, with maximum likelihood concepts. Unlike conventional null-hypothesis significance testing, the method provides a rigorously optimal balance between model performance and parsimony; explicitly accommodates no-effect, linear-effect, and strongly nonlinear-effect models; and estimates the probability that a model is true given the data. While we discover a rich diversity of responses, parabolic relationships are formally consistent with the data for almost half the rivers and are optimal for eight. Highly nonlinear teleconnections could radically alter the standard conceptual model of how water resources respond to climate variability. For example, the Sacramento River in drought-ridden California exhibits no significant linear ENSO teleconnection but a 92% probability of a quadratic relationship, improving simple mean predictive error by up to 65% and implying greater opportunity for climate-informed early-season water supply forecasting than previously appreciated.
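A minimal sketch of the model-selection step described above: compare no-effect, linear-effect and quadratic-effect models of an annual flow series against a climate index using the Akaike information criterion and Akaike weights (Gaussian errors assumed; the data below are synthetic).

```python
import numpy as np

rng = np.random.default_rng(42)
index = rng.normal(size=60)                                      # e.g. an ENSO index, synthetic
flow = 5.0 - 1.2 * index ** 2 + rng.normal(scale=1.0, size=60)   # a truly parabolic response

def aic_for_degree(deg):
    """AIC for a least-squares polynomial fit of the given degree (Gaussian errors)."""
    coeffs = np.polyfit(index, flow, deg)
    resid = flow - np.polyval(coeffs, index)
    n, k = flow.size, deg + 2                 # polynomial coefficients + error variance
    return n * np.log(np.mean(resid ** 2)) + 2 * k

aics = {deg: aic_for_degree(deg) for deg in (0, 1, 2)}
weights = np.exp(-0.5 * (np.array(list(aics.values())) - min(aics.values())))
weights /= weights.sum()                      # Akaike weights ~ relative model support
for deg, w in zip(aics, weights):
    print(f"degree {deg}: AIC = {aics[deg]:8.2f}, weight = {w:.3f}")
```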
Speaker: Dr Sean W. Fleming (Environment Canada, MSC Science Division)
M2-4 Cosmic Frontier: Cosmology I (DTP-PPD-DIMP) / Frontière cosmique: cosmologie I (DPT-PPD-DPIM) CCIS 1-140
Convener: James Pinfold (University of Alberta (CA))
Probing the Nature of Inflation 30m
The idea that the early universe included an era of accelerated expansion (Inflation) was proposed to explain very qualitative features of the first cosmological observations. Since then, our observations have improved dramatically and have led to high-precision agreement with the predictions of the first models of inflation, slow-roll inflation. At the same time, there has been significant growth in the number of mechanisms for inflation, many of which are qualitatively distinct from slow-roll. Nevertheless, most of these ideas are also consistent with current data. In this talk, I will review inflation and its current observational status. I will then discuss the important theoretical targets for the future and the prospects for achieving them.
Speaker: Daniel Green
Determining Power Spectra of High Energy Cosmics 15m
The angular power spectrum is a powerful observable for characterizing angular distributions, popularized by measurements of the cosmic microwave background (CMB). The power spectra of high energy cosmics ($\gamma$-rays, protons, neutrinos, etc.) contain information about their sources. Since these cosmics are observed on an event-by-event basis, the nature of the power spectrum measurement is fundamentally different from that of the CMB. We present new progress on the statistical properties of these power spectrum measurements and discuss the new information about the sources that can be gleaned from these observations.
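A toy illustration (not necessarily the estimator used by the authors) of how an angular power spectrum can be built from event directions: for a map that is a sum of delta functions at unit vectors n_i, the Legendre addition theorem gives a noise-subtracted estimator proportional to the sum of P_ell(n_i · n_j) over distinct pairs.

```python
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(3)
N = 500
v = rng.normal(size=(N, 3))
n_hat = v / np.linalg.norm(v, axis=1, keepdims=True)   # isotropic event directions

cos_ij = np.clip(n_hat @ n_hat.T, -1.0, 1.0)           # pairwise cos(theta_ij)
mask = ~np.eye(N, dtype=bool)                          # drop i = j (shot-noise) terms

def C_ell(ell):
    """Toy estimator: (1/4pi) * sum over distinct pairs of P_ell(cos theta_ij)."""
    return eval_legendre(ell, cos_ij[mask]).sum() / (4.0 * np.pi)

for ell in (1, 2, 3, 4):
    # expectation is zero for an isotropic sky; single realizations scatter
    # around zero at the shot-noise level
    print(f"ell = {ell}: C_ell ~ {C_ell(ell):+.3f}")
```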
Speaker: Sheldon Campbell (The Ohio State University)
Searching for the echoes of inflation from a balloon - The first SPIDER flight 15m
SPIDER is a balloon-borne polarimeter designed to detect B-modes in the CMB at degree angular scales. Such a signal is a characteristic of early universe gravitational waves, a cornerstone prediction of inflationary theory. Hanging from a balloon at an altitude of 36 km allows the instrument to bypass 99% of the atmosphere and get an unobstructed view of the sky at 90 and 150 GHz. The multi-band nature of the experiment will help characterize galactic foregrounds, which need to be well understood before a primordial polarization signal can be extracted from the data. During its first flight from Antarctica in January 2015, SPIDER probed 8% of the sky with 2000 polarization-sensitive bolometers. These were distributed amongst six cryogenically cooled telescopes housed in a 1300 liter liquid-helium cryostat. This massive cryostat was supported and steered by a light-weight carbon fibre structure, equipped with two sets of motors that controlled its pointing on the sky through real-time position feedback from a variety of sensors. I will discuss the performance of the instrument over the 16 day flight and what we might learn from the dataset. I will also give a glimpse into the capabilities of the upgraded instrument, scheduled to fly in 2018.
Speaker: Ivan Padilla (University of Toronto)
M2-5 Nuclear Astrophysics (DNP) / Astrophysique nucléaire (DPN) CCIS L1-140
Convener: Barry Davids (TRIUMF)
The turbulent hydrodynamics and nuclear astrophysics of anomalous stars from the early universe 30m
The anomalous abundances that can be found in the most metal-poor stars reflect the evidently large diversity of nuclear production sites in stars and stellar explosions, as well as the cosmological conditions for the formation and evolution of the first generations of stars. Significant progress in our predictive understanding of nuclear production in the early universe comes now within reach through advancing capabilities to perform large-scale 3D stellar hydrodynamic simulations of the violent outbursts of advanced nuclear burning. When complemented with comprehensive nucleosynthesis simulations we can characterize the chemical evolution of stellar populations. Nuclear production sites in the early universe involves unstable species on the p- and n-rich side of the valley of stability, and nuclear data in key cases is presently too uncertain to enable the required predictive simulation capability. These are the underpinnings to decipher the messages from the early universe hidden in the anomalous abundances of metal poor stars.
Speaker: Falk Herwig (University of Victoria)
Quark-Novae : Implications to High-Energy and Nuclear Astrophysics 30m
After a brief account of the physics of the Quark-Nova (explosive transition of a neutron star to a quark star), I will discuss its implications and applications to High Energy and Nuclear Astrophysics. The talk will focus on Quark-Novae in the context of Super-Luminous Supernovae and in the context of the origin of heavy elements (r-process nucleosynthesis). The Quark-Nova has the potential to provide new insight into explosive astrophysical phenomena and the origin of some elements in the periodic table, by naturally combining the might of researchers in nuclear physics, sub-nuclear physics and astrophysics. Rachid Ouyed (UofC)
Speaker: Prof. Rachid Ouyed (University of Calgary)
Hadronic-to-Quark-Matter Phase Transition: Effects of Strange Quark Seeding. 15m
When a massive star depletes its fuel it may undergo a spectacular explosion: the supernova. If the star is massive enough, it can undergo a second explosion: the Quark nova. The origin of this second explosion has been argued to be the transition from Hadronic-to-Quark-Matter (Ouyed et al. 2013). The Hadronic-to-Quark-Matter phase transition occurs when hadronic (nucleated) matter under high temperatures and/or densities deconfines into what is called a quark-gluon plasma (QGP). This talk will explore the required conditions for a star to undergo a Quark nova. In particular, under which conditions should the transition from Hadronic-to-Quark-Matter occur so that there is a second explosion for a massive star? The talk will be at an introductory level and will present the results of theoretical and computational calculations performed to estimate the production rate of strange quarks by self-annihilation of dark matter, determining whether or not dark matter self-annihilation can by itself ignite combustion in the core of a star and trigger a Quark nova.
Speaker: Mr Luis Welbanks (University of Calgary)
M2-6 Radiation Therapy (DMBP-DNP) / Thérapie par rayonnement (DPMB-DPN) CCIS 1-160
Convener: Melanie Martin (University of Winnipeg)
Medical linear accelerator mounted mini-beam collimator: transferability study 15m
Background: In place of the uniform dose distributions used in conventional radiotherapy, spatially-fractionated radiotherapy techniques employ a planar array of parallel high dose 'peaks' and low dose 'valleys' across the treatment area. A group at the Saskatchewan Cancer Agency has developed a mini-beam collimator for use with a medical linear accelerator operated at a nominal energy of 6 MV. Purpose: The goal of this work was to characterize various attributes of the mini-beam collimated dose distribution and assess consistency of those attributes across a set of medical linear accelerators. Materials and Methods: Three "beam matched" Varian iX accelerators were used in this study. All measurements were made using a PTW scanning water tank set with a 100 cm source to surface distance. Dose profiles perpendicular to the plane of the mini-beam collimator were measured at a depth of 10.0 cm for a square field of side 4.0 cm. Percentage depth dose (PDD) curves along the central peak dose were made for a square field of side 4.0 cm. Relative point dose measurements were made at a depth of 10.0 cm along the central peak dose using two different diode detectors (PTW TN60017 and IBA stereotactic field diode (SFD)). A collimator factor (CF), defined as the ratio of the collimated point dose to that of the open field point dose, was determined at a depth of 10 cm for each linac for square field sizes of side 2.0, 3.0, 4.0 and 5.0 cm. Results: When normalized to the central peak dose, the profile data revealed a variation in the relative valley dose across the three linacs. However, the PDD data was consistent, indicating no variation in beam energy across the three linacs. As previously determined, the measured CF did differ as a function of detector. This results from the active volume of the detectors being different. The measured CF also differed across the set of linacs. The PTW diode measurements showed an average difference of 2.65% across accelerators, and the SFD showed an average difference of 5.6% across accelerators. The difference in CF and valley dose is believed to result from differences in the electron source width incident on the Bremsstrahlung target for each of the accelerators. Conclusion: The dose profile and collimator factors of the mini-beam collimated dose were not found to be consistent across a set of medical linear accelerators.
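A trivial worked example of the collimator factor defined above, CF = D_collimated / D_open at the same point, and of its spread across machines; all dose values below are hypothetical.

```python
# Hypothetical point doses (arbitrary units) for three linacs
collimated_dose = {"linac_A": 0.212, "linac_B": 0.206, "linac_C": 0.218}
open_dose = {"linac_A": 0.671, "linac_B": 0.668, "linac_C": 0.673}

cf = {name: collimated_dose[name] / open_dose[name] for name in collimated_dose}
mean_cf = sum(cf.values()) / len(cf)
spread = 100.0 * (max(cf.values()) - min(cf.values())) / mean_cf   # percent spread

for name, value in cf.items():
    print(f"{name}: CF = {value:.3f}")
print(f"spread across linacs = {spread:.1f}%")
```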
Speaker: Mr William Davis (Department of Physics and Engineering Physics, University of Saskatchewan)
Cancer cell targeting gold nanoparticles for therapeutics 15m
Polyethylene glycol (PEG) has promoted the prospective cancer treatment applications of gold nanoparticles (GNPs). *In vivo* stealth of GNPs coated with PEG (PEG-GNPs) takes advantage of the enhanced permeability and retention effect in tumor environments, making them suitable for targeted treatment. Because PEG minimizes gold surface exposure, PEG-GNP interaction with ligands that mediate cancer cell uptake is lower than for uncoated GNPs. Hence, the cellular uptake of PEG-GNPs is significantly lower than that of uncoated GNPs *in vitro*. As intracellular localization of GNPs maximizes their therapeutic enhancement, there is a need to improve the uptake of PEG-GNPs. To enhance uptake, receptor-mediated endocytosis peptides were conjugated with PEG-GNPs of varying core sizes. Spherical GNPs of diameters 14 nm, 50 nm and 70 nm and PEG chain lengths of 2 and 5 kDa were used to determine a preferred core size and chain length for uptake *in vitro* in HeLa and MDA-MB-231 cells. Radiosensitization of HeLa cells to a 6 MVp clinical photon beam via GNP conjugates was observed to assess their therapeutic application.
Speaker: Charmainne Cruje (Ryerson University)
Development and Imaging of the World's first Whole-Body Linac-MRI Hybrid System 30m
**Purpose:** We designed and built the first whole-body clinical linac-MRI hybrid (linac-MR) system to provide real-time MR guided radiotherapy with concurrent imaging and treatment. Installation began in our clinic in 2013, and the world-first images from a linac-MR on a human volunteer were obtained in July 2014. **Methods:** The linac-MR consists of an isocentrically mounted 6 MV linac that rotates in unison with a biplanar 0.6 T MRI in the transverse plane. The Bo field and the central axis of the 6 MV beam are parallel to each other. The optimized fringe field results in an insignificant increase in entrance dose. The parallel configuration avoids large increases in dose at tissue/air interfaces and at beam exit due to the electron return effect that occurs in the perpendicular configuration. We were the first to demonstrate concurrent MR imaging and linac-irradiation of head-size phantoms in 2008, on a single gantry. The head prototype has been described in our 40 peer-reviewed articles (linac-MR.ca/publications.html). The current functional whole-body rotating linac-MR system is built on the engineering and physics obtained from the head prototype. **Results:** The current system is mechanically well balanced and rotates at 1 rpm. The 3D magnetic field mapping demonstrates minimal perturbation in magnetic field homogeneity with gantry rotation, which is easily and effectively shimmed by gradient coils. The Larmor frequency varies with gantry angle due to the Bo interaction with room shielding and to the changing direction of the Earth's magnetic field relative to the magnet, and closely follows our predictions calculated previously. Angle-dependent 3D magnetic field maps and Larmor frequency are used to automatically and optimally create image acquisition parameters for any gantry angle. Metrics obtained at different rotation angles show that the image quality is comparable to that of clinical MRI systems, and thus satisfies the requirements for real-time MR-guided radiotherapy. **Conclusions:** The system highlights are: 1) 6 MV linac, 2) high-quality MR images during irradiation, 3) simultaneous linac and MR rotation in parallel configuration to avoid strong angle-dependent shimming, and to avoid increased dose at beam exit and tissue/air interfaces, 4) installation through the maze of an existing vault, 5) cryogen-free superconducting magnet not requiring a helium vent, and 6) ability to turn magnet off or on in a few minutes for servicing.
Speaker: Prof. B. Gino Fallone (University of Alberta)
A SYSTEMATIC APPROACH TO STANDARDIZING SMALL FIELD DOSIMETRY IN RADIOTHERAPY APPLICATIONS 15m
Small field dosimetry is difficult, yet consistent data is necessary for the clinical implementation of advanced radiotherapy techniques. In this work we present improved experimental approaches required for standardizing measurement, Monte Carlo (MC) simulation based detector correction factors as well as methods for reporting experimental data. A range of measurements and MC modelling studies have been reported by our group. Based on these methods and results, recommendations are given as to: (1) commissioning/fine-tuning MC models for use in small field dosimetry, (2) correction factors for a range of shielded and unshielded diode detectors, (3) what constitutes a 'very small field size' - based on the different effects as field size gets smaller, (4) measurement methods necessary to control uncertainties at these very small field sizes and (5) reporting against an effective field size - taking into account measured dosimetric field size. The results of the work clearly show that measurement and modelling based methods can be standardized to improve the consistency in small field dosimetry. Through standardization the best accuracy possible can be achieved in these increasingly clinically-used conditions.
Speaker: Dr Gavin Cranmer-Sargison (Department of Medical Physics, Saskatchewan Cancer Agency)
Multifunctional perfluorocarbon nanoemulsions for cancer therapy and imaging 15m
There is interest in the use of nanoemulsions as therapeutic agents, particularly perfluorocarbon (PFC) droplets, whose amphiphilic shell protects drugs against physico-chemical and enzymatic degradation. When delivered to their target sites, these PFC droplets can vaporize upon laser excitation, efficiently releasing their drug payload and/or imaging tracers. Due to the optical properties of gold, coupling PFC droplets with gold nanoparticles significantly reduces the energy required for vaporization. In this work, nanoemulsions with a perfluorohexane core and Zonyl FSP surfactant shell were produced using an oil-in-water technique. Droplets were characterized in terms of size and morphology using high resolution fluorescence techniques (i.e. Total Internal Reflection Fluorescence Microscopy, TIRFM, and Fluorescence Correlation Spectroscopy, FCS), electron microscopy, and light scattering techniques (i.e. Dynamic Light Scattering, DLS). The ability of PFC droplets to vaporize is demonstrated using Optical Microscopy (OM). Our emulsion synthesis technique has given a reproducible, unimodal size distribution of PFC droplets corresponding to an average hydrodynamic diameter of 53.5 ± 3.8 nm, from DLS and FCS, with long-term stability under physiological conditions. Their size and stability make them cost-effective drug-delivery vehicles suitable for efficient internalization within cancer cell lines. To vaporize the nanoemulsions, silica-coated gold nanoparticles (scAuNPs) were used and excited with a 532 nm laser. Taken together, TIRFM, dual-colour FCS, and OM show that scAuNPs are within the same diffraction-limited spot as these PFC droplets before vaporization.
Speaker: Mr Donald A. Fernandes (Ryerson University)
M2-7 Cosmic frontier: Dark matter I (PPD-DTP) / Frontière cosmique: matière sombre I (PPD-DPT) CCIS L1-160
Convener: Kevin Graham (Carleton University)
Status of Dark Matter Theories 30m
The existence of dark matter is a prominent puzzle in modern physics, and it strongly motivates new particle physics beyond the standard model. I will review theoretical candidates for dark matter as proposed in the literature, and their status in light of recent experimental searches. I will also discuss new possibilities for dark matter theories and related research avenues.
Speaker: Yanou Cui (Perimeter Institute)
The DEAP-3600 Dark Matter Experiment -- Updates and First Commissioning Data Results 30m
The DEAP-3600 experiment uses 3.6 tons of liquid argon for a sensitive dark matter search, with a target sensitivity to the spin-independent WIMP-nucleon cross-section of 10^{-46} cm^2 at 100 GeV WIMP mass. This high sensitivity is achievable due to the large target mass and the very low backgrounds in the spherical acrylic detector design as well as at the unique SNOLAB facility. Scintillation light in liquid argon is collected with 255 high efficiency photomultiplier tubes. Pulse shape discrimination is used to reject electromagnetic backgrounds from the WIMP induced nuclear recoil signal. We have started taking commissioning data. In this talk we will present the status of the experiment and results from analysis of the first commissioning data.
Speaker: Dr Bei Cai (Queen's University)
Direct Detection Prospects for Higgs-portal Singlet Dark Matter 15m
There has recently been a renewed interest in minimal Higgs-portal dark matter models, which are some of the simplest and most phenomenologically interesting particle physics explanations of the observed dark matter abundance. In this talk, we present a brief overview of scalar and vector Higgs-portal singlet dark matter, and discuss the nuclear recoil cross sections of the models. We show that, given a reasonable range for the theoretical uncertainties in the calculation, the expected cross sections are found in the region of the parameter space that will be probed by next generation direct detection experiments. In particular, within two years of operation the XENON1T experiment should be able to make a strong statement about Higgs-portal singlets.
Speaker: Fred Sage (University of Saskatchewan)
Status of the PICO-60 Dark Matter Search Experiment 15m
The PICO collaboration (formerly PICASSO and COUPP) uses bubble chambers for the search for Weakly Interacting Massive Particle (WIMP) dark matter. Such bubble chambers are scalable, can have large target masses and can be operated in regimes where they are insensitive to backgrounds such as beta and gamma radiation. The PICO-60 experiment is a bubble chamber that has been developed and operated at SNOLAB with 37 kg of CF$_3$I as a target liquid. The experiment is currently being upgraded for use with 60 kg of ultra-clean C$_3$F$_8$ to focus on the search for spin-dependent dark matter. The PICO-60 detector is expected to have a world-leading sensitivity to spin-dependent dark matter interactions. In this talk an overview of the progress of the PICO-60 experiment, the results from the dark matter runs with existing data, and future plans are presented.
Speaker: Pitam Mitra (University of Alberta)
M2-8 Teaching Physics to a Wider Audience (DPE) / Enseigner la physique à un auditoire plus vaste (DEP) CAB 239
Convener: Adam Sarty (Saint Mary's University)
Asymmetric Wavefunctions from Tiny Perturbations 15m
We present an undergraduate-accessible analysis of a single quantum particle within a simple double well potential through matrix mechanics techniques. First exploring the behavior in a symmetric double well (and its peculiar wavefunctions), we then examine the effect that varying well asymmetry has on the probability density. We do this by embedding the potential within a larger infinite square well, expanding in this simple basis, and solving for the matrix elements. The resulting wavefunctions are drastically different than those of the unperturbed system. A relatively tiny drop in one of the well depths results in a nearly complete collapse (localization) of the wavefunction into one of the wells. This system can be accurately mapped to a much simpler two-state "toy model"; this makes it clear that this localization is also a property of a generic double well system.
Speaker: Tyler Dauphinee
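As a rough illustration of the basis-expansion approach described in the abstract above (it is not the speaker's code), the following Python sketch embeds a double well in an infinite square well, builds the Hamiltonian matrix in the square-well basis, and diagonalizes it numerically. All well positions, depths and the basis size are illustrative placeholders.

```python
# Minimal sketch of the matrix-mechanics approach: a double well embedded in an
# infinite square well of width L, expanded in the square-well basis and
# diagonalized numerically (hbar = m = 1). All parameters are illustrative.
import numpy as np

L = 1.0      # width of the embedding infinite square well
N = 200      # number of basis states
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def double_well(depth_left, depth_right):
    """Two square wells of equal width but (possibly) different depths."""
    V = np.zeros_like(x)
    V[(x > 0.15*L) & (x < 0.40*L)] = -depth_left
    V[(x > 0.60*L) & (x < 0.85*L)] = -depth_right
    return V

def ground_state(depth_left, depth_right):
    n = np.arange(1, N + 1)
    basis = np.sqrt(2.0/L) * np.sin(np.outer(n, x) * np.pi / L)   # phi_n(x)
    kinetic = (n * np.pi / L)**2 / 2.0                            # square-well energies
    V = double_well(depth_left, depth_right)
    # H_mn = E_n delta_mn + <phi_m|V|phi_n>, with the integral done on the grid
    H = np.diag(kinetic) + basis @ (V[:, None] * basis.T) * dx
    vals, vecs = np.linalg.eigh(H)
    psi0 = vecs[:, 0] @ basis                                     # ground-state wavefunction
    return psi0 / np.sqrt(np.sum(psi0**2) * dx)

sym  = ground_state(500.0, 500.0)   # symmetric wells: density shared between them
asym = ground_state(500.0, 505.0)   # slightly deeper right well
for label, psi in (("symmetric", sym), ("asymmetric", asym)):
    print(label, "probability in left well:",
          round(np.sum(psi[x < 0.5*L]**2) * dx, 3))
```

With symmetric wells the printed left-well probability is close to 0.5, while the tiny asymmetry pushes nearly all of the probability into the deeper well, mirroring the collapse described in the abstract.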
An online resource for teaching about energy 15m
Energy issues are important to Canada, and are a logical topic for Canadians to teach. The Energy Education group at the University of Calgary has built a free on-line resource suitable for teaching an 'energy for everyone' course from a physics department. This resource includes interactive data visualizations and real-world simulations to help students understand the role of energy in modern society.
Speaker: Prof. Jason Donev (University of Calgary)
Essential Psychology in Physics - MBTI and You 15m
According to the Wikipedia entry, psychology is an academic and applied discipline that involves the scientific study of mental functions and behaviours. Since learning involves mental functions, it only makes sense that psychology has a role in the classroom - including a post-secondary physics class. The Myers-Briggs Type Indicator (MBTI) is one model that provides a framework for identifying differences in how individuals perceive the world, make decisions, and communicate. By becoming aware of one's own type, individuals can understand why they may be perceived as 'different' from their colleagues, which is often more than just their gender. Furthermore, utilizing MBTI can help instructors become more effective in the classroom by maximizing their strengths and minimizing their vulnerabilities. In this talk, I will present an introduction to MBTI, and give some ideas on how to use this framework to improve working relationships both inside and outside the classroom.
Speaker: Dr Jo-Anne Brown (University of Calgary)
Essential Psychology in the Physics Classroom - Five Steps to Improve Classroom Effectiveness 15m
Teaching large physics classes - especially to non-physics majors that may have developed an extraordinary aversion to anything math-related - can be a challenge, even for the best instructors. However, there are a few techniques, drawn from psychology, that can help improve the experience for both the instructor and the students. In this talk, I will present a 'five-step program' I developed that works effectively for any level of class I teach. The result of this program has been high student satisfaction for the course, as well as a retention of my sanity.
Speaker: Jo-Anne Brown (University of Calgary)
M2-9 Advanced Instrumentation at Major Science Facilities: Accelerators (DIMP) / Instrumentation avancée dans des installations scientifiques majeures: accélérateurs (DPIM) CCIS L1-047
Convener: Kirk Michaelian (Natural Resources Canada)
CLS 2.0: The Next 10 Years 30m
The Canadian Light Source (CLS) is Canada's premier source of intense light for research, spanning from the far infrared to hard x-rays. The facility has been in operation for 10 years and in that time has hosted over 2,000 researchers from academic institutions, government, and industry, from 10 provinces and 2 territories, and provided a scientific service critical to over 1,000 scientific publications. As the CLS reaches this important milestone, a series of workshops at the Annual Users' Meeting (May 2015) will help define the scientific direction of the facility for the next 10 years, to address the Canadian research community's scientific challenges. This presentation will cover scientific and technical highlights from the CLS today and give an outlook on where photon science using light sources may go in the future.
Speaker: Dr Dean Chapman (Canadian Light Source Inc.)
Acquaman: Scientific Software as the Beamline Interface 15m
The Acquaman project (Acquisition and Data Management) was started in early 2010 at the Canadian Light Source. Over the past four years, the project has grown to support five beamlines by providing beamline control, data visualization, workflow, data organization, and analysis tools. Taking advantage of modular design and common components across beamlines, the Acquaman team has demonstrated that a framework dedicated to synchrotron beamlines can deliver high quality interfaces while also reducing overall development cost and production time. Acquaman supports scientific researchers by allowing them to focus on the scientific techniques they know while reducing the need to understand specific hardware, which changes from beamline to beamline. Focus will be given to this topic in the broader context of how to manage a modular, scalable, and flexible framework. Additionally, two small case studies – the IDEAS and SXRMB beamlines – will be used to demonstrate the ease of deployment on new beamlines.
Speaker: David Chevrier (Canadian Light Source)
A Phase Space Beam Position Monitor for Synchrotron Radiation 15m
Synchrotron radiation experiments critically depend on the stability of the photon beam position. The position of the photon beam at the experiment or optical element location is set by the electron beam source position and angle as it traverses the magnetic field of the bend magnet or insertion device. An ideal photon beam monitor would be able to measure the photon beam's position and angle, and thus infer the electron beam's position in phase space. Monochromatic x-ray beams at synchrotrons are typically prepared by x-ray diffraction from crystals usually in the form of a double crystal monochromator. Diffraction couples the photon wavelength or energy to the incident angle on the lattice planes within the crystal. The beam from such a monochromator will contain a spread of energies due to the vertical divergence of the photon beam from the source. This range of energies can easily cover the absorption edge of a filter element such as iodine at 33.17 keV. A vertical profile measurement with and without the filter can be used to determine the vertical angle and position of the photon beam. In these measurements an imaging detector measures these vertical profiles with an iodine filter that horizontally covers part of the monochromatic beam. The goal was to investigate the use of this combined monochromator, filter and detector as a phase space beam position monitor. The system was tested for sensitivity to position and angle under a number of synchrotron operating conditions, such as normal operations and special operating modes where the beam is intentionally altered in position and angle. The results are comparable to other methods of beam position measurements and indicate that such a system is feasible in situations where part of the white synchrotron beam can be used for the phase space measurement.
Speaker: Nazanin Samadi (University of Saskatchewan)
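The energy-angle coupling exploited by this monitor follows directly from Bragg's law, $E = hc/(2d\sin\theta_B)$. The short Python sketch below shows the size of the effect at the iodine K-edge; the Si(220)-like lattice spacing is an assumed example and need not match the monochromator crystal actually used.

```python
# Energy-angle coupling of a crystal monochromator near the iodine K-edge.
# The d-spacing is an assumed illustrative value (roughly Si(220)).
import math

HC_KEV_ANGSTROM = 12.398   # hc in keV*Angstrom
D_SPACING = 1.92           # assumed lattice-plane spacing, Angstrom

def bragg_angle(energy_kev, d=D_SPACING):
    """Bragg angle (rad) for a given photon energy."""
    return math.asin(HC_KEV_ANGSTROM / (2.0 * d * energy_kev))

def energy_shift(energy_kev, delta_theta, d=D_SPACING):
    """dE = -E * cot(theta_B) * dtheta for a small change in incidence angle."""
    return -energy_kev * delta_theta / math.tan(bragg_angle(energy_kev, d))

E_EDGE = 33.17  # keV, iodine K-edge
print(f"Bragg angle at the iodine edge: {math.degrees(bragg_angle(E_EDGE)):.2f} deg")
print(f"energy shift for 100 urad of vertical divergence: "
      f"{1000 * energy_shift(E_EDGE, 100e-6):.1f} eV")
```

Because a fraction of a milliradian of vertical divergence shifts the diffracted energy by tens of eV across the sharp absorption edge, the filtered and unfiltered vertical profiles encode the beam's angle and position, which is the basis of the phase space measurement described above.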
Observation of Wakefields in Coherent Synchrotron Radiation at the Canadian Light Source 15m
Synchrotron light sources routinely produce brilliant beams of light from the infrared to hard X-ray. Typically, the length of the electron bunch is much longer than the wavelength of the produced radiation, causing the electrons to radiate incoherently. Many synchrotron light sources, including the Canadian Light Source (CLS), can operate in special modes where the electron bunch, or structures in the electron bunch, are small enough that they radiate coherently, producing coherent synchrotron radiation (CSR). Using a Michelson interferometer and RF diodes at CLS, we observe structure in THz CSR which is due to the electromagnetic wake following the electron bunch. The RF diode measurements provide direct observations of the wakefields, and we compare against wakefield simulations. Given the complexity of the vacuum chamber geometry, the agreement between simulation and measurement is quite satisfactory.
Speaker: Ward Wurtz (Canadian Light Source Inc.)
Welcome BBQ Reception / Réception d'accueil avec BBQ CCIS Ground Level Foyer
CCIS Ground Level Foyer
Herzberg Memorial Public Lecture - Miguel Alcubierre, National Univ. of Mexico / Conférence commémorative publique Herzberg - Miguel Alcubierre, National Univ. of Mexico Myer Horowitz Theatre
Faster than the Speed of Light 1h
In this talk I will give a short introduction to some of the basic concepts of Einstein's special theory of relativity, which is at the basis of all of modern physics. In particular, I will concentrate on the concept of causality, and why causality implies that nothing can travel faster than the speed of light in vacuum. I will later discuss some of the basic ideas behind Einstein's other great theory, General Relativity, which is the modern theory of gravity and postulates that the geometry of space-time is dynamic and that the presence of large concentrations of mass and energy produces a "curvature" in space-time. I will then talk about how the curvature of space-time can be used in several ways to travel "faster than the speed of light" by distorting the geometry away from that of flat space. In particular, I will discuss the ideas behind the geometric model for a "warp drive".
Speaker: Prof. Miguel Alcubierre (National University of Mexico)
Post-talk Reception Dinwoodie Lounge
Dinwoodie Lounge
Tuesday, 16 June
CAP Foundation Annual General Meeting / Assemblée annuelle de la Fondation de l'ACP CCIS 4-285
Convener: Robert Mann (University of Waterloo)
Exhibit booths open 08:30-16:00 / Salle d'exposition ouverte de 08h30 à 16h00 CCIS L2 Foyer
Teachers' Day - Session I / Journée des enseignants - Atelier I CCIS L1-047
Opening and Welcome, Calvin Kalman from CAP 15m
Changing student's approach to learning physics, Calvin Kalman, Chair of CAP Division of Physics Education 30m
Metamaterials: Controlling light, heat, sound and electrons at the nanoscale, Zubin Jacob, Electrical and Computer Engineering, UofA 45m
T-PUB Commercial Publishers' Session: Resources to Enhance University Physics Teaching (DPE) / Session des éditeurs commerciaux : Ressources visant à améliorer l'enseignement de la physique à l'Université (DEP) CCIS L1-029
Convener: Don Mathewson (Division of Physics Education, CAP)
Pearson Education's digital resources for supporting Physics teaching: Mastering Physics 45m
This presentation will provide an overview of the online resources which Pearson Education can provide to help support your university physics teaching. We will begin with an overview of how one faculty member has implemented and used Pearson resources in his first-year physics course sequence, and the plans in place for including further tools in the coming year. The presentation will then review other available tools that can help enhance your teaching toolkit.
Speakers: Adam Sarty (Saint Mary's University) , Mrs Claire Varley (Customer Experience Manager – Higher Education, Pearson Canada)
A panel discussion of PER and Enhanced WebAssign in teaching physics 45m
Join Nelson Education and some of Canada's leading physics educators for a demonstration of Enhanced WebAssign and a discussion around physics education research in practice, including the use of digital learning tools to promote better learning outcomes.
Speakers: Ernie McFarlane (University of Guelph) , Marina Milner-Bolotin (The University of British Columbia) , Martin Williams (University of Guelph)
T1-1 Superconductivity (DCMMP) / Supraconductivité (DPMCM) NINT Taylor room
Convener: Tatiana Rappoport (Federal University of Rio de Janeiro)
Scanning Tunneling Spectroscopy of LiFeAs 30m
LiFeAs is one of several pnictide and chalcogenide superconductors that can be grown in single-crystal form with relatively few defects. Spectroscopy away from any native defects reveals a spatially uniform superconducting gap, with two distinct gap edges. Quasiparticle interference over the gap energy range provides evidence for an S+- pairing state. We further explore the spectroscopy of both native and deliberately introduced defects, and compare to theoretical calculations for defects in an S+- superconductor.
Speaker: Prof. D.A. Bonn (University of British Columbia)
Interplay of charge density waves and superconductivity 30m
We examine possible coexistence or competition between charge density waves (CDW) and superconductivity (SC) in terms of the extended Hubbard model. The effects of band structure, filling factor, and electron-phonon interactions on CDW are studied in detail. In particular, we show that van Hove singularities per se can lead to the formation of CDW, due to a substantial energy gain by electron-phonon coupling. While this is contrary to the conventional view that CDW are caused by nesting of Fermi surfaces, it is consistent with recent experimental findings.
Speaker: Kaori Tanaka (University of Saskatchewan)
Quantum oscillation studies of quantum criticality in PrOs$_4$Sb$_{12}$ 15m
PrOs$_4$Sb$_{12}$ is a cubic metal with an exotic superconducting ground state below 1.8 K. The crystal fields around the Pr site are such that it has a singlet ground state and a magnetic triplet just 8 K above the ground state. Under an applied magnetic field, the triplet splits, and the S$_z = +1$ state crosses the singlet state at easily accessible magnetic fields. In the region of the level crossing the ground state reconstructs, creating a so-called "antiferroquadrupolar" (AFQ) phase that exists at temperatures below 1 K and magnetic fields between about 4.5 and 12 tesla. This state offers a rare opportunity to observe the behaviour of quantum oscillations upon crossing a phase transition. In a recent paper [1] we argued that the lower boundary of the AFQ phase should have exotic behaviour as T $\rightarrow 0$ K due to mixing of hyperfine states with the AFQ order. We will describe our attempts to observe this behaviour via magnetic susceptibility and quantum oscillation measurements. [1] A. McCollam, B. Andraka and S. R. Julian, Physical Review B 88 (2013) 075102.
Speaker: Dr Stephen Julian (University of Toronto)
A Variational Wave Function for Electrons coupled to Acoustic Phonons 15m
We survey briefly the electron-phonon interactions in metals with an emphasis on applications in electron-phonon mediated superconductivity. While BCS theory and Eliashberg theory have significant predictive power, the microscopic Hamiltonians for the processes they describe are still an open area of study. We will examine the hitherto unsolved BLF-SSH model of electrons interacting with acoustical phonons and present a novel variational wave function for the solution of this model. We examine the validity of this variational wave function across applicable parameter regimes.
Speaker: Carl Chandler (University of Alberta)
T1-10 THz science and applications (DAMOPC) / Sciences et applications des THz (DPAMPC) CCIS L2-200
Convener: Matt Reid (University of northern british columbia)
Ultrafast dynamics of mobile charges and excitons in hybrid lead halide perovskites 30m
In this talk we discuss recent experiments using ultra-broadband time-resolved THz spectroscopy (uTRTS) studying charge and excitonic degrees of freedom in the novel photovoltaic material CH3NH3PbI3. This technique uses near single-cycle and phase stable bursts of light with an ultra-broad bandwidth spanning 1 - 125 meV to take snapshots of a material's dielectric function or optical conductivity on femtosecond time scales after photoexcitation. These transient spectra reveal free charge transport properties on unprecedented time scales, and at the same time can probe internal excitations of Coulombically bound excitons. It is therefore an ideal technique for studying materials related to solar energy conversion such as semiconducting polymers, quantum dots and even the new hybrid metal halide perovskites. We apply uTRTS to a single crystal of CH3NH3PbI3, temporally resolving the charge carrier generation dynamics, the screening of infrared active phonons and the dissociation of excitons. Our measurements reveal remarkably high charge carrier mobilities on ultrafast time scales, as well as the importance of screening at elevated carrier densities.
Speaker: David Cooke (McGill University)
Carrier dynamics in semiconductor nanowires studied using optical-pump terahertz-probe spectroscopy 30m
The advance of non-contact measurements involving pulsed terahertz radiation is of great interest for characterizing the electrical properties of large ensembles of nanowires. In this work, InP and Si nanowires grown by molecular beam epitaxy or by chemical vapor deposition on silicon substrates were characterized using optical-pump terahertz probe (OPTP) transmission experiments. The influence of various fabrication parameters (e.g. doping and NW diameter) on the carrier dynamics has been investigated. Photocarrier lifetimes and mobilities can be extracted from such OPTP measurements.
Speaker: Prof. Denis Morris (Département de physique, Université de Sherbrooke)
Towards quantum repeaters using frequency multiplexed entanglement 15m
Quantum communication is based on the possibility of transferring quantum states, generally encoded into so-called qubits, over long distances. Typically, qubits are realized using polarization or temporal modes of photons, which are sent through optical fibers. However, photons are subject to loss as they travel through optical fibers or free space, which sets a distance barrier of around 100 kilometers. In classical communications, this problem can be straightforwardly solved by amplification, but this is not an option in quantum mechanics because of the no-cloning theorem. Fortunately, photon loss can be overcome by implementing quantum repeaters [1], which create long-distance entanglement via entanglement swapping from shorter-distance entanglement links. Such protocols require the capacity to create entanglement in a heralded fashion, to store it in quantum memories, retrieve it after feed-forward information, and to swap it. A variety of architectures and protocols have been proposed for implementing quantum repeaters [2]. Ideally, a quantum repeater protocol should minimize the physical resources required to establish entanglement between two points. Our team is working on a specific quantum repeater scheme that explores frequency multiplexing. This will allow us to increase the probability of generating short-distance entanglement, with a success rate close to 100%, while taking maximum benefit of the quantum memories developed by other members of our group [3]. The proposed scheme requires quantum memories and entangled photon pair sources capable of working in the frequency multiplexing domain. This presentation will focus on the description of the general scheme and on the multiplexed entangled photon pair sources that we are developing.
Speaker: Mr Pascal Lefebvre (University of Calgary)
True Random Number Generation based on Interference between Two Independent Lasers 15m
Reliable true random number generation is essential for information theoretic security in a quantum cryptographic system based on quantum key distribution (QKD) and one-time pad encryption [1]. Various random number generation methods have already been proposed and demonstrated, such as schemes based on the detection of single photons [2], whose rate is limited by the dead time of single photon detectors. Alternative approaches are based on the chaotic light emission from a semiconductor laser [3, 4]. In this talk we propose and demonstrate a novel scheme to generate random numbers based on interference between two independent lasers, i.e. a continuous wave (CW) laser and a gain-switched pulsed laser, each emitting light at around 1550 nm wavelength. The physical basis of our random number generator is the randomness of the phase difference between light emitted from the two independent lasers. Using only off-the-shelf components, we achieve a random number generation rate of 250 MHz. The properties of the generated random numbers are tested using the National Institute of Standards and Technology (NIST) statistical test suite. We also discuss the extension of our methods from random bits to randomly selected symbols with more than two different values. References [1] N. Gisin, and R. Thew, "Quantum communication," Nature Photon. 1, 165 (2007). [2] A. Stefanov, N. Gisin, O. Guinnard, L. Guinnard, and H. Zbinden, "Optical quantum random number generator," J. Mod. Opt. 47, 595 (2000). [3] T. Symul, S. M. Assad, and P. K. Lam, "Real time demonstration of high bitrate quantum random number generation with coherent laser light," Appl. Phys. Lett. 98, 231103 (2011). [4] A. Uchida, K. Amano, M. Inoue, K. Hirano, S. Naito, H. Someya, I. Oowada, T. Kuashige, M. Shiki, S. Yoshimori, K. Yoshimura, and P. Davis, "Fast physical random bit generation with chaotic semiconductor lasers," Nature Photon. 2, 728 (2008).
Speaker: Caleb John (University of Calgary)
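A toy numerical model of the phase-randomness principle described above fits in a few lines; it is not the authors' implementation, and a pseudo-random generator simply stands in for the physical phase diffusion of the gain-switched laser.

```python
# Toy model: the pulse-by-pulse optical phase difference between a gain-switched
# pulse and a CW reference is uniform on [0, 2*pi); interfering the two and
# thresholding the detected intensity at its median yields one bit per pulse.
# All values are illustrative and the randomness source here is only a stand-in.
import numpy as np

rng = np.random.default_rng(0)           # stand-in for the physical phase randomness
n_pulses = 1_000_000
phase = rng.uniform(0.0, 2 * np.pi, n_pulses)
I1, I2 = 1.0, 1.0                        # equal (arbitrary) intensities
intensity = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(phase)
bits = (intensity > np.median(intensity)).astype(np.uint8)

# Crude health checks in the spirit of the NIST monobit and runs tests
print(f"fraction of ones: {bits.mean():.4f}, "
      f"number of runs: {1 + np.count_nonzero(np.diff(bits))}")
```

In the same spirit, binning the intensity into more than two levels would give multi-valued symbols, loosely paralleling the extension mentioned at the end of the abstract.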
T1-11 Medical Imaging (DMBP) / Imagerie médicale (DPMB) CAB 239
The Rate of Reduction of Defocus in the Chick Eye is Proportional to Retinal Blur 15m
PURPOSE. Calculations of retinal blur and eye power are developed and used to study blur on the retina of the growing chick eye. The decrease in defocus and optical blur during growth is known to be an active process. Here we show that the rate of defocus reduction is proportional to the amount of blur on the retina. METHODS. From literature values of chick eye parameters, the amounts of defocus and optical axial length were fitted as a function of age. Pupil size was used to calculate blur on the retina due to defocus. Eye length and calculated eye power were compared up to day 75 to examine their contributions to decreasing retinal blur. Novel equations were used to calculate eye power and a new definition of the end point of active defocus reduction is introduced. RESULTS. During initial growth, eye length increases while blur from defocus decreases. Eye power decreases exponentially reaching and closely matching eye length after day 35. This gives an almost stable value of defocus beyond day 50. Retinal blur decreases almost exponentially until between days 40 and day 50. After day 50, angular retinal blur changes in agreement with predictions of a uniformly expanding eye model and passive growth. CONCLUSIONS. Concurrent variations in eye power and length produce smaller changes in defocus. We define the time at which angular retinal blur becomes stable as the completion of active reduction of defocus. Prior to this time, the rate of defocus reduction is proportional to the amount of blur on the retina. After this time, the measured eye properties are consistent with uniform eye expansion and angular blur is close to the presumed resolution limit of the cone photoreceptors. The eye power calculation presented is accurate and simpler than other approaches without the need for additional dimensional data.
Speaker: Prof. Melanie Campbell (University of Waterloo)
Image Analysis and Quantification for PET Imaging 15m
Introduction: Positron emission tomography (PET) is a highly sensitive, quantitative and non-invasive detection method that provides 3D information on biological functions inside the body. There are several factors affecting the image data, including normalization, scattering, and attenuation. In this study we have quantified the effect of scattering and attenuation corrections on the PET data. Methods: The image quality phantom (approximating the size of a mouse) was modified to match the diameters of the rat and monkey count rate phantoms by creating high density polyethylene (HDPE) sleeves that fit over the standard phantom. The emission and transmission data from the phantom, filled with 18F, were acquired with a microPET P4 scanner. The data were histogrammed and reconstructed using various algorithms, with the required corrections applied, including normalization and the physical decay of 18F. The data were analyzed using volume of interest (VOI) analysis with and without attenuation or scattering corrections. Signal-to-noise ratio values were calculated and the results were correlated with the phantom size, correction methods and reconstruction algorithm. Results: The OSEM3D/MAP algorithm provided the highest signal-to-noise ratio values for all three phantoms, followed by OSEM2D, as both are iterative algorithms that reduce the noise in the images. Attenuation correction, along with scattering correction, had a significant impact on the quantitative results. Conclusion: Both attenuation and scattering corrections need to be included in image quantification for PET imaging. OSEM3D/MAP provides the images with the highest signal-to-noise ratio values.
Speaker: Dr Esmat Elhami (University of Winnipeg)
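For readers unfamiliar with the volume-of-interest (VOI) analysis mentioned above, the following sketch shows one conventional way to compute a signal-to-noise ratio from a reconstructed 3D image. The array, VOI geometry and numbers are synthetic placeholders, not the study's data or software.

```python
# Signal-to-noise ratio from a spherical VOI in a uniform region of a 3D image:
# SNR is taken here as mean / standard deviation of the voxels inside the VOI.
import numpy as np

def voi_snr(image, centre, radius):
    """Mean/std of voxels inside a spherical VOI of a 3D reconstructed image."""
    zz, yy, xx = np.indices(image.shape)
    mask = ((zz - centre[0])**2 + (yy - centre[1])**2
            + (xx - centre[2])**2) <= radius**2
    voxels = image[mask]
    return voxels.mean() / voxels.std()

# Synthetic "uniform phantom" with Gaussian noise, purely for illustration
phantom = np.random.default_rng(0).normal(100.0, 5.0, (64, 64, 64))
print(f"SNR in central VOI: {voi_snr(phantom, (32, 32, 32), 10):.1f}")
```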
Magnetic Susceptibility Mapping in Human Brain using High Field MRI 30m
Magnetic Resonance Imaging (MRI) is a powerful imaging method for examining hydrogen protons and their local environment. Inferences can be made about the local environment from the signal relaxation (decay or recovery) or phase evolution. For many years, phase images were largely discarded in favor of magnitude images only, which dominate clinical MRI. Although the sensitive nature of phase images to magnetic field perturbations can cause a high degree of artifact, phase images also provide a means to examine the underlying local susceptibility distribution. Extraction of the local susceptibility requires removing nonlocal field effects that arise from strong air-tissue susceptibility differences, then solving an ill-posed inverse problem on the local magnetic field to yield the susceptibility map. This emerging MRI research area named Quantitative Susceptibility Mapping (QSM) provides a means to discriminate between tissues such as myelin, calcium and iron. This talk will introduce QSM and explore its value in the human brain, particularly for measurement of iron accumulation in grey matter. These measures are further enhanced by using higher magnetic field strengths, greater than the clinical standards of 1.5 and 3.0 T. The value of these stronger magnetic fields will also be explored.
Speaker: Dr Alan Wilman (University of Alberta)
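The ill-posed dipole inversion at the heart of QSM can be illustrated on synthetic data with a simple truncated k-space division. The sketch below uses a common textbook convention for the k-space dipole kernel and an arbitrary truncation threshold; it is not the speaker's processing pipeline.

```python
# Synthetic QSM example: forward-model the field of a susceptibility sphere with
# the k-space dipole kernel, then invert it by truncated k-space division (TKD).
import numpy as np

def dipole_kernel(shape):
    """Unit dipole response in k-space: D(k) = 1/3 - kz^2/|k|^2 (B0 along z)."""
    kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0/3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0
    return D

def tkd_inversion(local_field, threshold=0.2):
    """Regularize the ill-posed division by clamping small |D(k)| values."""
    D = dipole_kernel(local_field.shape)
    D_reg = np.where(np.abs(D) < threshold, threshold * np.sign(D + 1e-12), D)
    return np.real(np.fft.ifftn(np.fft.fftn(local_field) / D_reg))

shape = (64, 64, 64)
zz, yy, xx = np.meshgrid(*(np.arange(n) - n // 2 for n in shape), indexing="ij")
chi_true = 0.1 * (xx**2 + yy**2 + zz**2 < 8**2)                  # 0.1 ppm sphere
field = np.real(np.fft.ifftn(dipole_kernel(shape) * np.fft.fftn(chi_true)))
chi_est = tkd_inversion(field)
print("susceptibility at sphere centre (true, recovered):",
      chi_true[32, 32, 32], round(float(chi_est[32, 32, 32]), 3))
```

In practice the local field must first be isolated from the much larger air-tissue background field, as the abstract notes, and more sophisticated regularization is normally used; the truncation step above is only the simplest way to make the inversion well behaved.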
Correlating quantitative MR changes with pathological changes in the white matter of the cuprizone mouse model of demyelination 15m
Mouse brain white matter (WM) damage following the administration of cuprizone was studied weekly using diffusion tensor imaging, quantitative magnetization transfer imaging, T2-weighted MRI (T2w), and electron microscopy (EM). A previous study examined correlations between MR metrics and EM measures after 6 weeks of feeding. The addition of weekly *ex vivo* tissue analysis allows for a more complete understanding of the correlations between MR metrics and EM measures of tissue pathology. Signal inversion is apparent in the T2w images as the number of weeks of cuprizone feeding increases. A decreased magnetization transfer ratio (MTR) is observed in the WM regions of the cuprizone mouse as cuprizone feeding continues. Many changes are observed in the *ex vivo* data, including directionality changes in the external capsule in the directional encoded map of diffusion tensor imaging from weeks 1 to 6. From the EM images, myelinated axons are apparent in both cuprizone and control mice. Cuprizone is associated with oligodendroglial swelling and apoptosis. The significant change between control and cuprizone mice in the corpus callosum peaks in T2w at week 3 whereas it peaks at week 4 in MTR. The first large change in T2w occurs between weeks 2 and 3 in the external capsule and between weeks 3 and 4 in the MTR. Radial diffusivity appears to be different between control and cuprizone mice even in week 1. The weekly changes in radial diffusivity follow a different time course than MTR and T2 in the cuprizone mouse. The different time courses of the MR metrics suggest that T2, MTR and diffusivity are sensitive to different pathological features in WM. ANOVA will be used to determine when significant changes occur in MRI metrics. EM analysis of the tissue is in progress for correlations with WM pathology. Visually it can be seen in the EM images at week 3 that the control and cuprizone corpus callosum show a similar amount of myelinated axons. Our results are consistent with EM from other studies suggesting MTR likely reflects demyelination.
Speaker: Prof. Melanie Martin (Physics, University of Winnipeg, Radiology, University of Manitoba)
Interstitial point radiance spectroscopy in turbid media 15m
Optical spectroscopy has become a valuable tool in biomedical diagnostics because of its ability to provide biochemical information on endogenous and exogenous chromophores present in tissues. In this work, point radiance spectroscopy using a white light source is investigated 1) to measure the optical properties of bulk tissues and 2) to detect localized gold nanoparticles in tissue mimicking Intralipid and porcine muscle phantoms. An angular sensitive detector made from a side-firing fiber was developed and used to measure the angular distribution of light (up to 180 degree rotation of the fiber) in selected locations in a phantom. Rotation provides angular optical data for analysis. An alternative approach is to use non-directional fluence data, but for optical property recovery, this requires translation of the fiber which is not desirable. In our radiance approach, the white light source also provides some spectroscopic information (focused in the 650-900 nm band) in addition to spatial information of a target (i.e. gold nanoparticles). We have measured the effective attenuation coefficient, diffusion coefficient, absorption coefficient and reduced scattering coefficient of Intralipid phantoms and thermally coagulated porcine muscle. Further, gold nanoparticle inclusions embedded in tissue mimicking media and ex vivo tissues were detectable via a novel spectro-angular analysis technique. This work is focused on the development of a new optical fiber based tool for disease detection. Funding: NSERC Discovery Grant, Atlantic Innovation Fund, Canada Foundation for Innovation and Canada Research Chairs Program
Speaker: Dr Bill Whelan (Dept of Physics, University of Prince Edward Island)
T1-2 Many body physics & Quantum Simulation (DAMOPC-DCMMP) / Physique des N corps et simulation quantique (DPAMPC-DPMCM) CAB 235
Convener: Shohini Ghose
Universal features of quantum dynamics: quantum catastrophes 30m
Tracking the quantum dynamics following a quench of a range of simple many-body systems (e.g. the two and three site Bose-Hubbard models, particles on a ring), we find certain common structures with characteristic geometric shapes that occur in all the wave functions over time. What are these structures and why do they appear again and again? I will argue that they are quantum versions of the catastrophes described by catastrophe theory [R. Thom (1975), V.I. Arnol'd (1975)]. Quantum catastrophes occur in quantum fields: they are singular in the mean-field limit and require second-quantization to be well behaved, i.e. the essential discreteness of the excitations of the quantum field needs to be taken into account for a quantum catastrophe to be regularized. They are second quantized versions of more familiar catastrophes such as rainbows and the bright lines on the bottom of swimming pools (although the latter are rarely described in these terms!). Their universality stems from the fact that they are generic (need no symmetry) and structurally stable (immune to perturbations) as guaranteed by catastrophe theory.
Speaker: Duncan O'Dell (McMaster University)
Hot and Cold Dynamics of Trapped Ion Crystals near a Structural Phase Transition 15m
Small arrays of laser-cooled trapped ions are widely used for quantum information research, but they are also a versatile mesoscopic system to investigate physics with a flavor reminiscent of familiar models in condensed matter. For example, in a linear rf Paul trap, laser-cooled trapped ions will organize into a linear array when the transverse confinement of the trap is strong enough; however, at a critical trap anisotropy the ions will undergo a symmetry–breaking structural transition to a two-dimensional zigzag configuration. We have studied what is effectively the melting behavior of the ion arrays near to the linear-zigzag transition. We have also investigated the classical non-equilibrium dynamics during rapid quenches of the transition in order to test the Kibble-Zurek mechanism of topological defect formation across a symmetry-breaking transition. In this talk I will present our current investigations of dynamics near the linear-zigzag transition at ultralow temperatures, corresponding to just a few quanta of thermal energy in the vibrations of the ion array. I will discuss our implementation of a new laser cooling technique for trapped Ytterbium ions and our progress towards experiments in the quantum regime. For example, we are interested in whether decoherence effects can be sufficiently suppressed to prepare superpositions of the symmetry-broken configurations.
Speaker: Paul C Haljan (Simon Fraser University)
Simulating Anderson localization via a quantum walk on a one-dimensional lattice of superconducting qubits. 15m
Quantum walk (QW) on a disordered lattice leads to a multitude of interesting phenomena, such as Anderson localization. While QW has been realized in various optical and atomic systems, its implementation with superconducting qubits still remains pending. The major challenge in simulating QW with superconducting qubits emerges from the fact that on-chip superconducting qubits cannot hop between two adjacent lattice sites. In this talk, I discuss how to overcome this barrier and develop a gate-based scheme to realize the discrete time QW by placing a pair of qubits on each site of a 1D lattice and treating an excitation as a walker. It is also shown that various lattice disorders can be introduced and fully controlled by tuning the qubit parameters in our quantum walk circuit. We observe a distinct signature of transition from the ballistic regime to a localized QW with an increasing strength of disorder. Finally, an eight-qubit experiment is proposed where the signatures of such localized and delocalized regimes can be detected with existing superconducting technology.
Speaker: Joydip Ghosh (University of Calgary)
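The ballistic-to-localized crossover described in the abstract can be reproduced with a generic coined discrete-time quantum walk with static phase disorder. The sketch below is such a toy simulation, not the proposed gate-based superconducting-qubit circuit; the lattice size, step count and disorder strengths are illustrative.

```python
# Discrete-time quantum walk on a 1D lattice with site-dependent random phases.
# With no disorder the walker spreads ballistically; strong phase disorder
# (Anderson-like) localizes it near the starting site.
import numpy as np

def walk_spread(steps, disorder, sites=401, seed=1):
    rng = np.random.default_rng(seed)
    phases = np.exp(1j * disorder * rng.uniform(-np.pi, np.pi, sites))
    psi = np.zeros((sites, 2), dtype=complex)
    psi[sites // 2, 0] = 1.0                       # walker starts at the centre
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
    for _ in range(steps):
        psi = psi @ H.T                            # coin toss on the internal state
        psi = psi * phases[:, None]                # static site disorder
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]               # coin state 0 moves left
        shifted[1:, 1] = psi[:-1, 1]               # coin state 1 moves right
        psi = shifted
    prob = np.sum(np.abs(psi)**2, axis=1)
    pos = np.arange(sites) - sites // 2
    return np.sqrt(np.sum(prob * pos**2) - np.sum(prob * pos)**2)

for W in (0.0, 0.5, 2.0):
    print(f"disorder {W:.1f}: position spread after 150 steps = "
          f"{walk_spread(150, W):.1f} sites")
```

The spread shrinks from tens of sites to a few sites as the disorder strength grows, which is the qualitative signature of the transition from the ballistic to the localized regime mentioned above.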
Cold Atom Metrology: Progress towards a New Absolute Pressure Standard 30m
Laser cooling and trapping of atoms has created a revolution in physics and technology. For example, cold atoms are now the standard for time keeping, which underpins the GPS network used for global navigation. In this talk, I will describe a research collaboration between BCIT, UBC (Kirk Madison) and NIST (Jim Fedchak - Sensor Science Division) with the goal of creating a cold atom (CA) based primary pressure standard for the high- and ultra-high vacuum regimes: A cold, trapped atom can act as a sensitive detector for a particle that passes through its collision cross-section and imparts momentum to it. The collision event is registered if the sensor atom's momentum gain is high enough to escape the trap. Thus, an ensemble of confined atoms measures the flux of particles via the observed loss rate of sensor atoms from the trap. In short, the particle flux (pressure) passing through the sensor atom volume transduces a timing signal (loss rate). The loss rate is sensitive to the type of collision and to the trap depth confining the atoms [1]. These factors afford an opportunity to study collision physics and the physics of the trap while working towards a new standard. The advantages of a CA standard include the fact that the sensor relies on immutable properties of atomic matter and their interactions, and that it will be a primary pressure standard, tied directly to the second, a base SI unit. This absolute standard would provide a valuable alternative to gas expansion/orifice flow transfer standards currently in use. In this talk I will review the basic ideas supporting the science and technology, along with an update on our progress. [1] D. Fagnan, J. Wang, C. Zhu, P. Djuricanin, B. G. Klappauf, J. L. Booth and K. W. Madison, Phys. Rev. A 80, 022712, 2009.
Speaker: Dr James Booth (British Columbia Institute of Technology)
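In the simplest picture, the transduction described above is just $\Gamma = n\langle\sigma_{\mathrm{loss}} v\rangle$ combined with the ideal-gas law $P = n k_B T$. A back-of-the-envelope version is sketched below; the loss rate and the velocity-averaged loss cross-section are illustrative numbers, not values from this collaboration.

```python
# Convert an observed trap-loss rate into a background-gas pressure.
# Gamma = n * <sigma_loss * v>  and  P = n * k_B * T  (ideal gas).
k_B = 1.380649e-23   # J/K

def pressure_from_loss_rate(gamma, sigma_v, temperature=295.0):
    """Pressure (Pa) from trap loss rate gamma (1/s) and <sigma*v> (m^3/s)."""
    n_background = gamma / sigma_v           # background gas number density
    return n_background * k_B * temperature

gamma = 0.01       # observed loss rate, 1/s (illustrative)
sigma_v = 3e-15    # assumed <sigma_loss * v>, m^3/s (illustrative)
print(f"inferred pressure: {pressure_from_loss_rate(gamma, sigma_v):.2e} Pa")
```

The accuracy of a real standard rests on knowing the loss-rate coefficient and its trap-depth dependence from the collision physics referred to in the abstract, rather than on the placeholder value used here.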
T1-3 Ground-based / in situ observations and studies of space environment I (DASP) / Observations et études de l'environnement spatial, sur terre et in situ I (DPAE) CAB 243
Convener: Konstantin Kabin (RMC)
Exoplanet Atmospheres: Triumphs and Tribulations 30m
From the first tentative discoveries to veritable spectra, the last 15 years has seen a triumphant success in observation and theory of exoplanet atmospheres. Yet the excitement of discovery has been mitigated by lessons learned from the dozens of exoplanet atmospheres studied, namely the difficulty in robustly identifying molecules, the possible interference of clouds, and the permanent limitations from a spectrum of spatially unresolved and globally mixed gases without direct surface observations. Nonetheless the promise and expectation is that the next generation of space telescopes will have the capability of detecting atmospheric biosignature gases if they exist on planets orbiting nearby stars, and the vision for the path to assess the presence of life beyond Earth is being established.
Speaker: Prof. Sara Seager (Massachusett institute of technology)
The Far-Infrared Universe: from the Universe's oldest light to the birth of its youngest stars 15m
Over half of the energy emitted by the Universe appears in the relatively unexplored Far-Infrared (FIR) spectral region, which is virtually opaque from the ground and must be observed by space-borne instrumentation. The European Space Agency (ESA) Planck and Herschel Space Observatories, launched together on 14 May 2009, have both provided pioneering observations in this spectral range from star and planet formation to the intensity and polarization of the cosmic microwave background. Herschel and Planck completed observations in April and October of 2013, respectively. Although data analysis efforts within the instrument teams are ongoing, both have provided data and analysis tools to ESA public archives, with more software updates and data releases expected to continue into 2015 and 2016, including the much anticipated Planck polarisation data and results. Recent Planck and Herschel results are presented with a discussion of the development of, and Canadian participation in, the future of FIR astrophysics.
Speaker: Jeremy Scott (University of Lethbridge)
Neutron Monitor Atmospheric Pressure Correction Method Based on Galactic Cosmic Rays tracing and MCNP Simulations of Cascade Showers 15m
Nuclear spallations accompanying Galactic Cosmic Ray (GCR) propagation through the atmosphere form a so-called "cascade shower" through the production of secondary protons, photons, neutrons, muons, pions and other energetic particles. A world-wide Neutron Monitor (NM) network has been deployed for ground-based monitoring of energetic proton and neutron precipitation. Real-time data from 36 NMs (including the Calgary NM) are collected in the Neutron Monitor Database (NMDB) [http://www.nmdb.eu]. However, each monitor has its own detection efficiency, which depends on NM location, design, operational and atmospheric parameters, so NM counting rates must be normalized. We developed and implemented a numerical technique which allows NM count rates to be estimated based on the spectrum of primary GCR, the NM location, its internal design, and atmospheric parameters. Primary GCR are traced to the top of the atmosphere using our in-house computational tool [Kouznetsov, 2013]. The background proton and neutron particle fluxes are computed at the Calgary NM location based on MCNP6 simulations and the MSIS-E-90 Atmosphere Model [http://omniweb.gsfc.nasa.gov/vitmo/msis_vitmo.html]. Results obtained for the Calgary NM improve the standard atmospheric pressure correction procedure and can be used to normalize counting rates for the world-wide NM network.
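For context, the conventional single-coefficient barometric correction that the Monte-Carlo-based method above refines can be written in one line; the barometric coefficient and pressures below are typical illustrative values, not results of this work.

```python
# Standard exponential barometric correction for a neutron monitor count rate:
# N_corr = N_obs * exp(beta * (P_obs - P_ref)). Values are illustrative only.
import math

def pressure_corrected_rate(rate, pressure_hpa, p_ref_hpa, beta=7.2e-3):
    """Correct an NM count rate (counts/s) to the station reference pressure."""
    return rate * math.exp(beta * (pressure_hpa - p_ref_hpa))

print(f"{pressure_corrected_rate(120.0, 892.0, 885.0):.1f} counts/s")
```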
Ionospheric Sounding Opportunities Using Signal Data From Pre-existing Amateur Radio And Operational Networks 15m
Amateur radio and other signals used for dedicated purposes, such as the Automatic Position Reporting System (APRS) and Automatic Dependent Surveillance Broadcast (ADS-B), are signals that exist for another reason, but can be used for ionospheric sounding. Whether mandated and government funded or voluntarily constructed and operated, these networks provide data that can be used for scientific and other operational purposes which rely on space weather data. Given the current state of the global economic environment and fiscal consequences to scientific research funding in Canada, these types of networks offer an innovative solution with pre-existing hardware for more real-time and archival space-weather data to supplement current methods, particularly for data assimilation, modelling and forecasting. Furthermore, the mobile ground-based transmitters offer more flexibility for deployment than stationary receivers. Numerical modeling has demonstrated that APRS and ADS-B signals are subject to Faraday rotation as they pass through the ionosphere. Ray tracing techniques were used to determine the characteristics of individual waves, including the wave path and the state of polarization at the satellite receiver. The modeled Faraday rotation was computed and converted to total electron content (TEC) along the ray paths. TEC data can be used as input for computerized ionospheric tomography (CIT) in order to reconstruct electron density maps of the ionosphere. The primary scientific interest of this study was to show that these signals can be used as a new source of data for CIT to image the ionosphere, possibly other data assimilation models, and to obtain a better understanding of magneto-ionic wave propagation.
Speaker: Alex Cushley
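In the standard quasi-longitudinal approximation (a simplification of the full ray-tracing treatment described above), the Faraday rotation of a trans-ionospheric VHF signal is proportional to the line-of-sight TEC, $\Omega \approx 2.36\times10^{4}\, B_{\parallel}\, \mathrm{TEC}/f^{2}$ in SI units. The sketch below shows the size of the effect and the inversion back to TEC; the frequency, magnetic field and TEC value are illustrative.

```python
# Faraday rotation <-> TEC in the quasi-longitudinal approximation (SI units).
# Omega [rad] = 2.36e4 * B_parallel [T] * TEC [el/m^2] / f^2 [Hz^2]
def faraday_rotation(tec, b_parallel, freq):
    return 2.36e4 * b_parallel * tec / freq**2

def tec_from_rotation(omega, b_parallel, freq):
    """Invert the relation to estimate TEC along the ray path."""
    return omega * freq**2 / (2.36e4 * b_parallel)

freq = 145.825e6      # VHF APRS-style downlink frequency (illustrative)
b_par = 5.0e-5        # line-of-sight geomagnetic field, tesla (illustrative)
omega = faraday_rotation(tec=1e17, b_parallel=b_par, freq=freq)   # ~10 TECU
print(f"rotation: {omega:.2f} rad, "
      f"recovered TEC: {tec_from_rotation(omega, b_par, freq):.2e} el/m^2")
```

A rotation of several radians for a modest 10 TECU ionosphere at VHF is what makes these signals of opportunity usable as TEC probes for the tomography described above.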
Self- and air-broadened line shape parameters of methane in the 2.3 micron range 15m
Methane is an important greenhouse gas in the terrestrial atmosphere and a trace gas constituent in planetary atmospheres. We report measurements of the self- and air-broadened Lorentz widths, shifts and line mixing coefficients, along with their temperature dependences, for methane absorption lines in the 2.22 to 2.44 micron spectral range. This set of highly accurate spectral line shape parameters is needed for radiative transfer calculations in terrestrial or planetary atmospheres. This research was performed in collaboration with colleagues from the College of William and Mary, Williamsburg, VA, the NASA Langley Research Center and the Jet Propulsion Laboratory.
Speaker: Adriana Predoi-Cross (University of Lethbridge)
T1-4 Mathematical Physics (DTP) / Physique mathématique (DPT) CCIS L1-160
Convener: Andrew Frey (University of Winnipeg)
Geometrization of N-Extended 1-Dimensional Supersymmetry Algebras 30m
The problem of classifying off-shell representations of the N-extended one-dimensional super Poincare algebra is closely related to the study of a class of decorated graphs known as Adinkras. We show that these combinatorial objects possess a form of emergent supergeometry: Adinkras are equivalent to very special super Riemann surfaces with divisors. The method of proof critically involves Grothendieck's theory of "dessins d'enfants", work of Cimasoni-Reshetikhin expressing spin structures on Riemann surfaces via dimer models, and an observation of Donagi-Witten on parabolic structure from ramified coverings of super Riemann surfaces.
Speaker: Charles Doran (University of Alberta)
Some discrete-flavoured approaches to Dyson-Schwinger equations 30m
I will discuss two recent ideas on how to better understand the underlying structure of Dyson-Schwinger equations in quantum field theory. These approaches use primarily combinatorial tools; classes of rooted trees in the first case and chord diagrams in the second case. The mathematics is explicit and approachable.
Speaker: Karen Yeats (Simon Fraser University)
Novel Charges in CFT's 15m
In this talk we construct two infinite sets of self-adjoint commuting charges for a quite general CFT. They come out naturally by considering an infinite embedding chain of Lie algebras, an underlying structure shared by all theories with gauge groups U(N), SO(N) and Sp(N). The generality of the construction allows us to treat all gauge groups at the same time in a unified framework, and so to understand the similarities among them. The eigenstates of these charges are restricted Schur polynomials and their eigenvalues encode the values of the correlators of two restricted Schurs. The existence of these charges singles out restricted Schur polynomials among the many bases of orthogonal gauge-invariant operators available in the literature.
Speaker: Dr Pablo Diaz Benito (University of Lethbridge)
Yang-Mills Flow in the Abelian Higgs Model 15m
The Yang-Mills flow equations are a parabolic system of partial differential equations determined by the gradient of the Yang-Mills functional, whose stationary points are given by solutions to the equations of motion. We consider the flow equations for a Yang-Mills-Higgs system, where the gauge field is coupled with a scalar field. In particular we consider the Abelian case with axial symmetry, which has vortex-type classical solutions corresponding to the Ginzburg-Landau model of superconductivity. The flow equations then reduce to two coupled partial differential equations in two variables, which we can solve numerically given initial conditions. Looking at the behaviour of the flow near the solutions in this model tells us about the stability of the solutions, and in the case of stable solutions allows us to approximate the solutions numerically. Study of the flow in the dimensionally reduced Abelian case provides a starting point for studying flows in more complicated cases, such as non-Abelian Higgs models, or full 3+1 dimensional theories. Through the AdS/CFT correspondence, which provides an equivalence between a field theory and a gravitational theory in one higher dimension, Yang-Mills flow could also be compared with better-known geometric flow equations such as Ricci flow.
Speaker: Paul Mikula (University of Manitoba)
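As a schematic illustration of the gradient-flow structure described above (conventions and normalizations here are chosen for readability and may differ from those used in the talk): for the two-dimensional Abelian Higgs energy functional $E[A,\phi]=\int d^2x\,\big[\tfrac{1}{4}F_{ij}F_{ij}+|D_i\phi|^2+\tfrac{\lambda}{4}(|\phi|^2-v^2)^2\big]$ with $D_i=\partial_i-ieA_i$, the flow equations read $\partial_t A_i=-\delta E/\delta A_i=\partial_j F_{ji}+2e\,\mathrm{Im}(\phi^* D_i\phi)$ and $\partial_t\phi=-\delta E/\delta\phi^*=D_jD_j\phi-\tfrac{\lambda}{2}(|\phi|^2-v^2)\phi$, whose stationary points are the static Ginzburg-Landau vortex equations referred to in the abstract; imposing axial symmetry reduces them to the two coupled equations in two variables that are solved numerically.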
T1-5 Energy Frontier: Susy & Exotics I (PPD-DTP) / Frontière d'énergie: supersymétrie et particules exotiques I (PPD-DPT) CCIS 1-160
Convener: Reyhaneh Rezvani (Université de Montréal)
Natural and unnatural SUSY 30m
After the first run of LHC, the parameter space of supersymmetric theories is under serious pressure. In this talk I will present attempts at natural SUSY model building and also discuss the consequences of relaxing the naturalness assumption of supersymmetric theories.
Speaker: Thomas Gregoire (Carleton University)
Hunt for Supersymmetry with the ATLAS detector at LHC 30m
Supersymmetry is one of the most motivated theories beyond the Standard Model of particle physics. It explains the mass of the observed Higgs boson and provides a Dark Matter candidate among other attractive features. A striking prediction of Supersymmetry is the existence of a new particle for each Standard Model one. I will highlight results of the extensive program of the ATLAS Collaboration searching for supersymmetric particles with the Run 1 data of 2012 and show the discovery potential of Run 2 starting in the summer of 2015.
Speaker: Zoltan Gecse (University of British Columbia (CA))
A search for heavy gluon and vector-like quark in the 4b final state in pp collisions at 8 TeV 15m
Searches for vector-like quarks are motivated by Composite Higgs models, which assume a new strong sector and predict the existence of new heavy resonances. A search for single production of vector-like quarks through the exchange of a heavy gluon is performed in the $p p \to G^* \to B\bar b/\bar B b \to H b \bar b \to b~\bar b~b~\bar b$ process, where $G^*$ is a heavy color octet vector resonance and $B$ a vector-like quark of charge -1/3. The largest background, QCD multi-jet, is estimated using a data-driven method. In case of no excess of events, upper limits on the production cross sections and lower limits in the 2D plane $(m_{G^*}, m_B)$ will be set.
Speaker: Frederick Dallaire (Universite de Montreal (CA))
Electroweak Baryogenesis and the LHC 15m
It is not known how to explain the excess of matter over antimatter with the Standard Model. This matter asymmetry can be accounted for in certain extensions of the Standard Model through the mechanism of electroweak baryogenesis (EWBG), in which the extra baryons are created in the early Universe during the electroweak phase transition. In this talk I will review EWBG, connect it to theories of new physics beyond the Standard Model, and show that in many cases the new particles and interactions required for efficient EWBG can be discovered using existing and expected data from the LHC.
Speaker: David Morrissey (TRIUMF)
T1-6 Cosmic Frontier: Cosmology II (PPD-DTP-DIMP) / Frontière cosmique: cosmologie II (PPD-DPT-DPIM) CCIS 1-140
Convener: Claudio Kopper (University of Alberta)
New results from Planck 30m
The Planck satellite has completed its mission to map the entire microwave sky at nine separate frequencies. A new data release was made in February 2015, based on the full mission, and including some polarization data for the first time. The Planck team has already produced more than 100 papers, covering many different aspects of the cosmic microwave background (CMB). We have been able to learn in detail about the physics of the interstellar medium in our Galaxy, and to remove this foreground emission in order to extract the cosmological information from the background radiation. Planck's measurements lead to an improved understanding of the basic model which describes the Universe on the very largest scales. In particular, a 6-parameter model fits the CMB data very well, with no strong evidence for extensions to that scenario. There are constraints on inflationary models, neutrino physics, dark energy and many other theoretical ideas. New cosmological probes include CMB lensing, CMB-extracted clusters of galaxies, the Cosmic Infrared Background and constraints on large-scale velocities. This talk will highlight some of the new results of the 2015 papers, including the improvements coming from the addition of the polarization data.
Speaker: Douglas Scott (UBC)
**WITHDRAWN** Planck, gravity waves, and cosmology in the 21st century 30m
In this talk I'll survey the current observational status in cosmology, highlighting recent developments such as results from the Planck satellite, and speculate on what we might achieve in the future. In the near future some important milestones will be exploration of the neutrino sector, and much better constraints on the physics of the early universe via B-mode polarization. In the far future we can hope to measure a variety of cosmological parameters to much higher precision than they are currently constrained.
Speaker: Kendrick Smith (Perimeter Institute for Theoretical Physics)
The CHIME Dark Energy Project 15m
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a novel radio telescope currently under construction at the Dominion Radio Astrophysical Observatory in Penticton, BC. Comprising four 20-m by 100-m parabolic cylinders, each equipped with 256 antennas along its focal line, CHIME is a `software telescope' with no moving parts. It will measure the 21-cm emission from neutral hydrogen to map the distribution of matter between redshifts 0.8 and 2.5, over most of the northern sky. By following the apparent size of the baryon acoustic oscillation (BAO) feature in the data, we can measure the expansion history of the Universe over an epoch where the effects of Dark Energy began to become important and thereby improve our understanding of this recently discovered phenomenon. The science goals, technical details, and current status of CHIME will be presented.
T1-7 Nuclear Structure I (DNP) / Structure nucléaire I (DPN) CCIS L1-140
Convener: Adam Garnsworthy (TRIUMF)
Light Exotic Nuclei Studied via Resonance Scattering 30m
Remarkable advances have been made toward achieving the long-sought-after dream of describing properties of nuclei starting from realistic nucleon-nucleon interactions in the last two decades. The ab initio models were very successful in pushing the limits of their applicability toward nuclear systems with ever more nucleons and exotic neutron to proton ratios. Predictions of these models often are very close to the experimental data, but sometimes deviate from experiment substantially. For example, the exotic isotope of helium, $^9$He, represents a curious case of stark disagreement between the predictions of modern theories and what is believed to be the experimental knowledge of this nucleus. In this talk I will present recent experimental results that shed light on structure of $^9$He and some other light exotic nuclei that were studied using resonance scattering approach and will discuss these findings in view of predictions of the ab initio models.
Speaker: Grigory Rogachev (Texas A&M University)
Low-energy, precision experiments with ion traps: mass measurements and decay spectroscopy 30m
The atomic mass is a unique identifier of each nuclide, akin to a human fingerprint, and manifests the sum of all interactions among its constituent particles. Hence it provides invaluable insights into many disciplines from forensics to metrology. At TRIUMF Ion Trap for Atomic and Nuclear science (TITAN), Penning trap mass spectrometry is performed on radioactive nuclides, particularly those with half-lives of as short as 9 ms. The TITAN mass values of Mg-20 and Mg-21 have been used to test the isobaric multiplet mass equation (IMME), revealing its dramatic breakdown. On the other side of the valley of stability, the increasingly detailed mass survey in the island of inversion on nuclides has exposed the lowest shell gap of any magic nucleus and the first crossover in the two-neutron separation energy. At heavier masses, neutron-rich Rb and Sr isotopes have been charge bred and their masses measured to probe the r-process, which is believed to be responsible for the production of roughly 50% of the abundance of elements heavier than Fe. A highlight of recent results and an overview of the advanced ion-manipulation techniques used will be presented.
Speaker: Anna Kwiatkowski (TRIUMF)
Electric Monopole Transition Strengths in $^{62}$Ni 15m
Excited states in $^{62}$Ni were populated with a (p, p') reaction using the 14UD Pelletron accelerator at the Australian National University. The proton beam had an energy of 5 MeV and was incident upon a self-supporting $^{62}$Ni target of 1.2 mg/cm$^2$. Electric monopole transition strengths were measured from simultaneous detections of the internal conversion electrons and $\gamma$-rays emitted from the de-exciting states, using the Super-e spectrometer coupled with a Germanium detector. The Super-e spectrometer has a superconducting solenoid magnet with its magnetic axis arranged perpendicular to the beam axis, which transports the electrons from the target to the 9 mm thick Si(Li) detector array situated 350 mm away from the target. The strength of the $0_2^+ \rightarrow 0_1^+$ transition has been measured to be 77$^{+23}_{-34} \times 10^{-3}$ and agrees with previously reported values. Upper limits have been placed on the $0^+_3 \rightarrow 0^+_1$ and $0^+_3 \rightarrow 0^+_2$ transitions. The $\rho^2(E0)$ value of the $2^+_2 \rightarrow 2^+_1$ transition in $^{62}$Ni has been measured for the first time and found to be the largest $\rho^2(E0)$ value measured to date in nuclei heavier than Ca. The low-lying states of $^{62}$Ni have previously been classified as one- and two-phonon vibrational states based on level energies. The measured electric quadrupole transition strengths are consistent with this interpretation. However, as electric monopole transitions are forbidden between states which differ by one phonon number, the simple harmonic quadrupole vibrational picture is not sufficient to explain the large $\rho^2(E0)$ value for the $2^+_2 \rightarrow 2^+_1$ transition. A discussion of the results and experimental technique will be presented, along with preliminary shell model calculations.
Speaker: Mr Lee J. Evitts (TRIUMF)
**WITHDRAWN** Fast-timing measurements in neutron-rich $^{65}$Co 15m
The region below $^{68}$Ni has recently attracted great attention, from both experimental and theoretical studies, due to the observation of a sub-shell closure at N=40 and Z=28. The collectivity in the region is revealed in the even-even Fe and Cr isotopes by the low energy of the first 2$^+$ states and the enhanced $B(E2;2^+\rightarrow0^+)$ reduced transition probabilities, which peak at 21(5) W.u. for $^{64}$Cr[1], $^{66}$Fe[2] and 22(3) W.u. for $^{68}$Fe[1]. These effects can only be reproduced by large-scale shell model calculations with the inclusion of the $\nu g_{9/2}$ and $\nu d_{5/2}$ orbitals. Precise experimental information on the Co isotopes is important for understanding the nuclear structure in this region, with particular interest in the transition rates, as they can be interpreted as originating from a $\pi f^{-1}_{7/2}$ proton hole coupled to its even-even Ni neighbor. With this aim, a fast-timing ATD $\beta\gamma\gamma$(t) [3] experiment was performed at ISOLDE in CERN, where the $\beta$-decay chains of exotic neutron-rich Mn isotopes were measured. In this work we report on the investigation of the low-energy structure of $^{65}$Co populated in the $\beta$-decay of $^{65}$Fe by means of $\gamma\gamma$ and fast-timing spectroscopy. Our $^{65}$Co level scheme confirms the transitions previously observed in [4] and expands it with several new gamma rays and levels up to $\sim$2.5 MeV. Employing the ATD $\beta\gamma\gamma$(t) method, the half-lives and lifetime limits of some of the low-lying states have been measured for the first time. Some of the deduced transition rates are significantly lower than expected from the systematics of the region, yet this remains to be explained by shell model calculations. Making use of the measured half-lives, tentative spin-parities are proposed for some of the lower levels. [1] H.L. Crawford et al., Phys. Rev. Lett. 110, 242701 (2013). [2] W. Rother et al., Phys. Rev. Lett. 106, 022502 (2011). [3] H. Mach et al., Nucl. Instrum. Meth. A280, 49 (1989). [4] D. Pauwels et al., Phys. Rev. C 79, 044309 (2009).
Speaker: Bruno Olaizola Mampaso (Nuclear Physics Group - University of Guelph)
T1-8 Special session to honor Dr. Akira Hirose I (DPP) / Session speciale en l'honneur de Dr. Akira Hirose I (DPP) CCIS 1-430
Convener: Luc Stafford (U.Montréal)
Overview of the Recent J-TEXT Results 30m
The experimental research of recent years on the J-TEXT tokamak is summarized. The most significant results include the observation of core magnetic and density perturbations associated with sawtooth events and tearing instabilities by a high-performance polarimeter-interferometer (POLARIS), investigation of the effect of a rotating helical magnetic field perturbation on tearing modes, studies of resonant magnetic perturbations (RMP) on plasma flows and fluctuations, and explorations of high density disruptions in ohmic heating and gas puffing discharges. The POLARIS system developed on J-TEXT has a time response up to 1 μs, phase resolution < 0.1° and spatial resolution ~3 cm (17 chords). Such high resolution permits investigations of fast equilibrium dynamics as well as magnetic and density perturbations associated with magnetohydrodynamic (MHD) instabilities. Based on the measurements, the temporal evolution of the safety factor profile, current density profile and electron density profile is obtained during sawtooth crash events as well as disruptions. In addition, core magnetic and density perturbations associated with MHD tearing instabilities are clearly detected. Particle transport due to the sawtooth crashes is analyzed. It is found that the sawteeth only partially flatten the core density profile, but enhanced particle diffusion on the time scale of the thermal crash occurs over much of the profile. The RMP system on J-TEXT can generate a rotating helical field perturbation with a maximum rotation frequency up to 10 kHz, and dominant resonant modes of m/n = 2/1, 3/1 or 1/1. It is found that tearing modes can be easily locked and then rotate together with a rotating RMP. During the mode locking and unlocking, instead of amplifying the island, the RMP can suppress the island width, especially when there is a small frequency gap between the island and the RMP. The effects of RMPs on plasma flows and fluctuations are studied with Langmuir probe arrays at the plasma edge. Both the toroidal rotation velocity and the radial electric field increase with RMP coil current when the RMP current is no more than 5 kA. When the RMP current reaches 6 kA, the toroidal velocity profile becomes flatter near the last closed flux surface. The absolute amplitude of Er also decreases significantly at IRMP = 6 kA. At the same time, the behavior of the poloidal and toroidal turbulent stresses from simultaneous probe measurements is consistent with the Er trends. Both LFZF and GAM are also damped by strong RMPs. Some interesting features of high density disruptions are identified by interpreting the measured POLARIS data and the radiation power measurements. In the density ramp-up phase of a high density disruption shot, an asymmetry of the density profile between the Low-Field-Side (LFS) edge (r>0.8a) and the High-Field-Side (HFS) edge (r<-0.8a) appears and increases gradually. At the same time, an asymmetry of the radiation power profile also arises as a result of the asymmetry of the density profile at the edge. When the density at the HFS edge increases to nearly twice the density at the LFS edge, a low-frequency (<1 kHz) density perturbation is suddenly excited at the HFS edge and gradually expands into the central region. The disruption takes place when the density perturbation reaches a location near the q=2 surface. All the details will be presented at the meeting.
Speaker: Ge Zhuang (Huazhong University of Science and Technology)
Magnetic Fluctuations Measurements in Magnetized Confinement Plasmas 30m
Both magnetic fluctuations and electron density fluctuations are important parameters for fusion-oriented plasma research, since fluctuation-driven transport dominates in high temperature magnetic confinement devices. Far-infrared laser systems are employed to measure both the Faraday rotation and the electron density simultaneously, with time response up to a few microseconds, in reversed field pinches and tokamaks. Fast time response combined with low phase noise also enables us to directly measure magnetic and density fluctuations. Various MHD activities such as sawtooth crashes, tearing reconnection and fast-particle modes have been observed in various magnetic confinement devices. The high temporal resolution of polarimetry provides an excellent platform to study internal magnetic fluctuations and magnetic-fluctuation-induced transport. The work is supported by the US Department of Energy.
Speaker: Dr Weixing Ding (UCLA)
Plasma Ion Implantation for Photonic and Electronic Device Applications 30m
Plasma Ion Implantation (PII) is a versatile ion implantation technique which allows very high fluence ion implantation into a range of targets. The technique is conformal to the surface of the implanted object, which makes it suitable for a wide range of applications. The ease with which high ion fluences can be delivered means that the technique can be used to change the stoichiometry (e.g. elemental composition) as well as the atomic-level structure of the target material in the implanted region. When combined with masking techniques and post-implant thermal processing, PII offers a powerful way to make new materials in-situ (e.g. within an existing solid-state matrix). The Plasma Physics Lab (PPL) at the University of Saskatchewan is home to a custom PII system with ion implantation energies ranging from 0-20 keV. This system is capable of delivering very high ion doses in short times (i.e. high ion fluences) and has been employed in a range of applications, primarily oriented toward photonics, to modify the properties of a variety of semiconductor materials. It has been used to fabricate luminescent silicon Schottky diodes based on silicon nanocrystals as well as SiC nanocrystallites. A more recent, low energy application of the system is N-doping of graphene, a technologically important new material for future electronic and photonic applications.
Speaker: Prof. Michael Bradley (Physics & Engineering Physics, University of Saskatchewan)
T1-9 Nanostructured Surfaces and Thin Films (DSS-DCMMP) / Surfaces et couches minces nanostructurées (DSS-DPMCM) CCIS L2-190
Convener: Steve Patitsas (University of Lethbridge)
**WITHDRAWN** Electrical and optical properties of electrochromic tungsten trioxide (WO3) thin films in the temperature range 300 to 500 K 15m
During the past decade, great interest has been shown in the study of tungsten trioxide (WO3) thin films. The reason is that this transition-metal oxide presents a number of interesting optical and electrical properties. While their optical properties are very well studied in view of their application in smart windows, not much study has focused on their electrical properties as a function of temperature. In this work we will present a detailed study of the electrical properties of lithium-intercalated WO3 thin films, in particular the temperature coefficient of resistance (TCR), as well as the electrochromic properties of these films. Using the variable range hopping model we calculated the density of states at the Fermi level of samples prepared by thermal evaporation and inserted with lithium by a dry process. The TCR measurements were performed in the temperature range 300 to 500 K. The understanding of this temperature dependent electrical behavior is expected to enhance our understanding of the electrochromic process in these films.
Speaker: Bassel Abdel Samad (Moncton University)
Characterization of the 2D percolation transition in ultrathin Fe/W(110) films using the magnetic susceptibility 15m
The growth of the first atomic layer of an ultrathin film begins with the deposition of isolated islands. Upon further deposition, the islands increase in size until, at some critical deposition, the merging of the islands creates at least one connected region of diverging size. This universal phenomenon describing connectivity is termed "percolation" and occurs at a "percolation transition" that can be described in renormalization group theory. In the context of a 2-dimensional ultrathin ferromagnetic film, geometric percolation can be monitored through the magnetic susceptibility, since as the island size diverges so does the correlation length of the ferromagnetic state. Although much work has been done studying films of known deposition as a function of temperature to detect percolation, very little work has characterized the transition as it occurs as a function of deposition at constant temperature. We report on measurements of the magnetic susceptibility, using the surface magneto-optic Kerr effect (SMOKE) under ultrahigh vacuum (UHV), as a function of deposition (at constant temperature) for the Fe/W(110) system as the first atomic layer is formed. Two regimes were detected: a high temperature regime with a broad susceptibility peak at larger depositions that represents a standard Curie transition from paramagnetism to ferromagnetism in a continuous film, and a low temperature regime with a much sharper peak in the susceptibility that occurs at the same deposition regardless of temperature. The low temperature regime is a good candidate for a geometric 2-dimensional percolation transition. Preliminary analysis gives a percolation critical exponent of $\gamma = 2.4 \pm 0.2$, in agreement with the result from the 2D Ising model.
Speaker: Randy Belanger (McMaster University)
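As a rough illustration of how an exponent like the one quoted above can be extracted, the sketch below fits the assumed power-law divergence $\chi \propto |\theta - \theta_p|^{-\gamma}$ on a log-log scale. The percolation coverage theta_p = 0.60 and the synthetic data are placeholders for illustration only, not the Fe/W(110) measurements or the authors' analysis.

```python
import numpy as np

def fit_gamma(theta, chi, theta_p):
    """Estimate gamma from chi ~ |theta - theta_p|^(-gamma) via a log-log fit.

    theta   : depositions (coverages) approaching the transition
    chi     : magnetic susceptibility measured at each deposition
    theta_p : assumed percolation coverage (a placeholder here)
    """
    x = np.log(np.abs(theta - theta_p))
    y = np.log(chi)
    slope, _ = np.polyfit(x, y, 1)
    return -slope

# Synthetic data generated with gamma = 2.4, the value reported in the talk;
# theta_p = 0.60 monolayer is purely illustrative.
theta_p = 0.60
theta = np.linspace(0.40, 0.58, 20)
chi = 3.0 * np.abs(theta - theta_p) ** (-2.4)
print(f"fitted gamma = {fit_gamma(theta, chi, theta_p):.2f}")
```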
You don't know what you've got 'till it's gone: ambient surface degradation of ZnO powders 15m
ZnO has rich electronic and optical properties that are influenced by surface structure and composition, which in turn are strongly affected by interactions with water and carbon dioxide. We correlated the effects of particle size, surface area, and crystal habit with data from X-ray photoelectron spectroscopy and zeta potential measurements to compare the degradation of ZnO powders prepared by several different synthesis methods. Neither surface polarity nor surface area, on their own, can account for the differences in the extent of carbonation among differently synthesized ZnO samples, and dissolution is very significant in some samples [1]. Furthermore, ambient surface carbonation appears to be self-limiting for some ZnO powders (solvothermal synthesis), while ZnO produced by other synthesis methods (solid-state metathesis) can be completely converted to hydrozincite, Zn$_5$(OH)$_6$(CO$_3$)$_2$, in a matter of weeks. We show how these differences in surface carbonation correlate with frequency-dependent electrical properties, emphasizing the impact of ambient humidity variations. [1] J. Cheng and K.M. Poduska, ECS Journal of Solid State Science and Technology, 3 (5) P133-P137 (2014).
Speaker: Kristin Poduska (Memorial University of Newfoundland)
**WITHDRAWN** Density Functional Theory Study of Hydrogen on Metal Oxide and Insulator Surfaces 15m
Molecular hydrogen is being promoted as an environmentally clean energy source of the future. In order to use hydrogen as a source of energy, infrastructure has to be built: efficient processes for hydrogen extraction, and efficient processes and materials for hydrogen storage. The major problem facing the use of hydrogen as a clean source of energy is the storage of liquid hydrogen. Hydrogen fuel can be concentrated into a small volume and stored in fuel tanks. The concentration of hydrogen can be done simply by cooling the hydrogen to an extremely low temperature or by compressing it under very high pressure as a liquid. The concentrated normal mixture consists of 25% *p*-H2 and 75% *o*-H2, and after hours of storage about 40% of the original content of the tank evaporates. The reason for this evaporation is the spontaneous conversion of orthohydrogen (*o*-H2) to parahydrogen (*p*-H2) over a period of time. This conversion releases enough heat to evaporate most of the liquid hydrogen and can lead to explosion of the storage tank. In order to overcome this problem and limit the boil-off to low levels, the tank must be filled with liquid hydrogen that has already been converted to a mixture close to 100% *p*-H2. Special procedures are needed to maintain the proportion of the two types of hydrogen molecules, orthohydrogen (*o*-H2) and parahydrogen (*p*-H2), close to 100% *p*-H2. In this presentation we will discuss the results of DFT calculations of hydrogen molecules physisorbed on SrTiO3, Fe(OH)3 and MgO(001) surfaces. Energies, orbitals, positions and vibration frequencies of the H2 molecule on these surfaces are calculated. Our results show that H2 molecules can physisorb on these surfaces and that these surfaces induce *o*–*p* conversion of H2. The effect of the molecular orientation and position of H2 molecules on the catalyst surface on the *o*–*p* H2 conversion yield will be presented.
Speaker: Prof. Abdulwahab Sallabi (Physics Department, Misurata University, Misurata , Libya)
CAP Foundation Board Meeting / Réunion du CA de la Fondation de l'ACP CCIS 4-285
T-MEDAL CAP Medal Talk - Chitra Rangan, U. Windsor (Teaching Undergraduate Physics / Enseignement de la physique au 1er cycle) CCIS 1-430
Generating Ideas for Active and Experiential Learning in Physics 30m
The Physics community has known the importance of Active Learning (AL) for the last twenty years (see [1,2]). A recent analysis of 225 studies on AL [3] has demonstrated that "active learning appears effective across all class sizes --- although the greatest effects are in small (n <= 50) classes." Physicists have innovated both technologies and techniques for AL [4,5]. Yet, most classes, particularly in institutions where research is conducted, are primarily delivered via lectures. Many research-active faculty members do not feel like they have the time or incentive to explore AL methodologies. At the University of Windsor, we have started a Faculty Network called "Promoters of Experiential, Active, and Research-based Learning" [6] to support our teacher-researcher colleagues in the Faculty of Science. Inspired by the activities of this network, in this session, I will lead a discussion on how very busy, teacher-researchers can adopt proven Active Learning strategies in their own classes. [1] Richard Hake, "Interactive-engagement vs. traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses" American Journal of Physics, v. 66, pp. 64-74 (1998). [2] Deslauriers, L., E. Schelew, and C. Wieman, "Improved Learning in a Large-Enrollment Physics Class" Science, v. 332, pp. 862-864 (2011). [3] Scott Freeman et al., "Active learning increases student performance in science, engineering, and mathematics" PNAS, v.111, pp. 8410–8415 (2014). [4] David E. Meltzer and Ronald K. Thornton, "Resource Letter ALIP–1: Active-Learning Instruction in Physics" Am. J. Phys. v. 80, pp. 478 -496 (2012). [5] Multimedia Educational Resource for Learning and Online Teaching, http://www.merlot.org, © 1997–2015 MERLOT. Retrieved May 2, 2015. [6] P.E.A.R.L. @ UWindsor, www.uwindsor.ca/pearl.
Speaker: Chitra Rangan (University of Windsor)
Health Break (with exhibitors) / Pause santé (avec exposants) CCIS L2 Foyer
Teachers' Day - Session II / Journée des enseignants - Atelier II CCIS L1-047
Quantum superposition and the uncertainty principle in the classroom; a hands-on experience, Martin Laforest, Senior Manager, Scientific Outreach, Institute for Quantum Computing, University of Waterloo 1h
NSERC Presentation by Elizabeth Boston / Présentation du CRSNG par Elizabeth Boston CCIS 1-430
NSERC EG Chair Report (L.-H. Xu) / Rapport de la présidente du GE (L.-H. Xu) CCIS 1-430
Convener: Donna Strickland (University of Waterloo)
CAP-NSERC Liaison Committee Report (W. Whelan) / Rapport du Comité de liaison ACP-CRSNG (W. Whelan) CCIS 1-430
CAP Past Presidents' Meeting / Réunion des anciens présidents de l'ACP CCIS 4-285
Convener: Ken Ragan (McGill)
DAMOPC Annual Meeting / Assemblée annuelle DPAMC CCIS L2-200
Convener: Chitra Rangan (U)
DMBP Annual Meeting / Assemblée annuelle DPMB CAB 239
Convener: Maikel Rheinstadter (McMaster University)
DNP Annual Meeting / Assemblée annuelle DPN CCIS L1-140
DPP Annual Meeting / Assemblée annuelle DPP CCIS L2-190
Convener: Chijin Xiao (Univ. of Saskatchewan)
IPP Scientific Council Meeting / Réunion du comité scientifique de l'IPP CCIS 2-122
Lunch / Dîner
New Faculty Lunch Meeting with NSERC / Dîner-rencontre des nouveaux professeurs avec le CRSNG CCIS L1-029
Teachers' Day - Lunch / Journée des enseignants - Diner CCIS L2 Teaching Labs
CCIS L2 Teaching Labs
Afternoon workshops: 5h
List of proposed workshops; teachers will be asked to sign up for three workshops at most. A separate registration form for workshops will be sent to the teachers by the local teachers' committee. - Cavendish experiment (measuring G) - Millikan oil drop (obtaining the basic electron charge) - e/m for electrons. - Electron diffraction (verifying particle-wave duality). - Video analysis of Galileo's ramp. - Frank-Hertz experiment (quantization of energy). - Faraday rotation (polarization, field induced polarization). - Visit to Dr. Jacob's lab.
T2-1 Materials characterization: microscopy, imaging, spectroscopy (DCMMP) / Caractérisation des matériaux: microscopie, imagerie, spectroscopie (DPMCM) NINT Taylor room
Convener: Eundeok Mun (Simon Fraser University)
Ultrafast Transmission Electron Microscopy and its Nanoplasmonic Applications 30m
Understanding matter at the dynamic and microscopic levels is fundamental for our ability to predict, control and ultimately design new functional properties for emerging technologies. Reaching such an understanding, however, has traditionally been difficult due to limited experimental methodologies that can simultaneously image both in space and time. Ultrafast transmission electron microscopy (UTEM), a newly emerging field, offers the means to overcome this limitation by merging the femtosecond domain of pulsed lasers with the nanoscale domain of transmission electron microscopes. With UTEM, it is possible to capture ultrafast events in real space, diffraction and even spectroscopy. In this particular contribution, we emphasize the plasmonic imaging capability of UTEM in space and time. Localized electric fields that are induced optically exhibit unique phenomena of fundamental importance to nanoplasmonics. UTEM enables direct visualization of these fields as they rise and fall within the duration of the excitation laser pulse (few hundreds of femtoseconds) with several nanometers of spatial resolution. This imaging approach is based on an inelastic photon-electron interaction process, where the probing electrons gain energy equal to the integer multiple of the photon quanta (2.4 eV in these experiments). This new phenomenon in electron energy loss spectroscopy and its fundamentals will be discussed. Furthermore, images, and movies, of plasmonic near-fields of particle dimers, nanoparticles with different sizes and shapes, particle ensembles and standing-wave plasmons at the step edges of layered-graphene strips are presented. These results establish UTEM as a tool with unique capabilities to approach nanoplasmonics.
Speaker: Prof. Aycan Yurtsever (INRS-EMT)
Means of mitigating the limits to characterization of radiation sensitive samples in an electron microscope. 30m
The scattering of the fast electrons by a sample in the transmission electron microscope (TEM) results in a measurable signal and also leads to sample damage. In an extreme case, the damage can be severe and can proceed faster than data can be collected. The fundamental limit on whether a measurement can be performed is set by the interaction cross section and collection efficiency for the desired signal and by the total damage cross section. Mitigation strategies involve selecting the strongest possible signal, modifying the microscope optics and hardware to maximize the collection efficiency, and preparing the sample in a way that maximizes the signal. A major recent breakthrough is the practical implementation of Zernike-like imaging in a TEM. Zernike-like imaging in a TEM increases the contrast by a factor of two to four compared to conventional bright field TEM. The corresponding decrease in the irradiation dose needed to obtain a desired signal-to-noise ratio translates either to higher resolution in the images or to less damage to the sample at the same resolution. The mechanism utilized in this case is the local charging of a uniform thin film placed in the back focal plane of the objective lens of a TEM. The applications of Zernike-like imaging in TEM range from imaging of magnetic fields in vacuum to imaging of DNA strands.
Speaker: Marek Malac
Terahertz Scanning Tunneling Microscopy in Ultrahigh Vacuum 15m
The terahertz scanning tunneling microscope (THz-STM) is a new imaging and spectroscopy tool that is capable of measuring picosecond electron dynamics at the nanoscale. Free-space THz pulses are commonly used for non-contact conductivity measurements, but they are diffraction limited to millimeter length scales. We can overcome this limit by coupling THz pulses to a sharp metal tip through propagating surface modes. At the STM junction, the THz pulse acts as a picosecond voltage transient which drives electron tunneling on an ultrafast timescale. This effect can be used to spatially and temporally probe the local conductivity of a surface after an excitation. Here we demonstrate THz-STM in an ultrahigh vacuum (UHV) environment for the first time. We have measured a THz-induced-tunnel-current over highly-oriented pyrolytic graphite (HOPG), and Si(111) in UHV. The experimental results agree well with our model, providing insight for the THz-STM mechanism. Recent progress towards atomic resolution and the nature of THz-induced-tunneling in an STM will be presented.
Speaker: Mr Vedran Jelic (University of Alberta)
Identifying differences in long-range structural disorder in solids using mid-infrared spectroscopy 15m
Structural disorder in calcium carbonate materials is a topic of intense current interest in the fields of biomineralization, archaeological science, and geochemistry. In these fields, Fourier transform infrared (FTIR) spectroscopy is a standard material characterization tool because it can clearly distinguish between amorphous calcium carbonate and calcite. Earlier theoretical work based on density functional theory (DFT) showed that calcite's in-plane bending mode in FTIR is very sensitive to changes in local (intra-unit-cell) disorder, which accounts for the near vanishing amplitude of this peak for amorphous calcium carbonate [1]. In a subsequent study of polycrystalline calcites, DFT investigations showed that local disorder was also qualitatively consistent with changes in the in-plane bending modes for these materials [2]. Here, we examine this assumption by presenting our study of the structural differences among several different sources of crystalline calcite, all of which show differences in the widths of their FTIR in-plane bending mode peaks. We used X-ray diffraction (XRD) to assess disruptions to long-range periodicity including lattice strain, microstrain fluctuations, and crystalline domain size (crystallinity). These quantities were then correlated with mid-FTIR (carbonate vibrational mode) peak positions, widths and relative intensities. Unlike the earlier studies [2], our results show that the in-plane bending mode can be strongly affected by the long-range disorder (based on XRD data) even when the local environments (based on Extended X-ray Absorption Fine Structure data) are identical. This apparent discrepancy between calculated and experimental models of structural disorder is, in fact, strong evidence for the near continuum of local and long-range structural differences that calcium carbonate materials can accommodate. Thus, we conclude that mid-FTIR spectra can be a powerful diagnostic for identifying differences in long-range structural disorder in carbonate-containing materials. References: [1] R. Gueta, A. Natan, L. Addadi, S. Weiner, K. Refson and L. Kronik, Angew. Chem., Int. Ed., 2007, 46, 291–294. [2] K. M. Poduska, L. Regev, E. Boaretto, L. Addadi, S. Weiner, L. Kronik and S. Curtarolo, Adv. Mater., 2011, 23, 550–554.
Speaker: Ben Xu (Memorial University)
T2-10 Cold and trapped atoms, molecules and ions (DAMOPC) / Atomes, molécules et ions froids et piégés (DPAMPC) CCIS L2-200
Project ALPHA: Applying AMO Physics to Antimatter and Using Antimatter to Study AMO Physics 30m
In 2010, the ALPHA Collaboration working at the AD Facility at CERN achieved the first capture and storage of atomic antimatter with our confinement of low temperature antihydrogen in an Ioffe-type magnetic minimum atom trap. [1] This achievement was only reached through the application of a range of tools and techniques from an interdisciplinary spectrum of fields, including AMO Physics. Examples of AMO Physics tools used in antihydrogen capture and storage include charged particle confinement and manipulation in a Penning-Malmberg trap, evaporative cooling [2], and sympathetic (i.e. charged particle collisional) processes. With the achievement of stable and long-term storage of antihydrogen, focus at ALPHA has now shifted to using antihydrogen as a system for carrying out a range of atomic physics studies, including completion of proof-of-principle microwave spectroscopy [3], charge neutrality, and gravitational force measurements. With the completion of commissioning of our 2nd-generation ALPHA-2 apparatus, we now aim to move into the field of high precision spectroscopy of antihydrogen. This invited talk will focus on discussing the AMO physics aspects of the ALPHA experiment, both the tools from AMO physics used for ALPHA and the AMO physics measurements undertaken and planned for ALPHA. This will include both work completed with the ALPHA-1 apparatus, and that undertaken and planned with the ALPHA-2 system. * Presented on behalf of the ALPHA Collaboration, CERN (http://alpha.web.cern.ch/). [1] G.B. Andresen et al. (ALPHA Collaboration), Nature Physics 7, 558 (2011). [2] G.B. Andresen et al. (ALPHA Collaboration), Phys. Rev. Lett. 105, 013003 (2010). [3] C. Amole et al. (ALPHA Collaboration), Nature 483, 439 (2012).
Speaker: Prof. Robert Thompson (University of Calgary)
Evaporative Cooling in Electromagnetic Radio Frequency Ion Traps 15m
In 2011, the ALPHA collaboration created and trapped neutral antihydrogen atoms for the first time in history [1]. Key to this achievement was the demonstration of evaporative cooling of charged particles in a Penning trap, a cooling method that had not previously been achieved with trapped low temperature ions [2]. Work is currently underway at the University of Calgary to computationally investigate the feasibility of, and optimum conditions for, employing evaporative cooling in Paul-type ion traps, a combination of cooling and trapping that has not been used in the past. Due to the complex ion-ion and ion-trapping field interactions, the system is modelled and the equations of motion of the particles are solved computationally using the RK4 method. This work explores the intrinsic challenges of cooling a system of charged particles constrained by an oscillating field, and shows that, depending on the precise system parameters, evaporation of particles from a trapped system may or may not reduce the temperature of the remainder of the ensemble. Therefore, an extensive range of simulations has been used to study the evolution of a system of ions trapped in an electromagnetic RF trap under a range of different initial conditions and plasma shapes. For each set of system parameters, the cooling parameters were varied using a Monte-Carlo method to find the optimum conditions to achieve evaporative cooling, i.e. achieving the highest temperature drop while minimizing the particle loss rate. This presentation will include the results of the work, and its future applications in fields such as spectroscopy and mass measurements will be discussed. [1] G. B. Andresen et al. (ALPHA Collaboration), Nat. Phys. 7, 558 (2011). [2] G. B. Andresen et al. (ALPHA Collaboration), Phys. Rev. Lett. 105, 013003 (2010).
Speaker: Lohrasp Seify (University of Calgary)
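For readers unfamiliar with the integration scheme mentioned above, the following is a minimal single-ion sketch of RK4 applied to the Mathieu-type equations of motion in an ideal linear Paul trap. It is not the Calgary simulation code: ion-ion Coulomb interactions, axial confinement, and the evaporation/Monte-Carlo machinery are all omitted, and every numerical parameter is an illustrative placeholder chosen only to give a stable trajectory.

```python
import numpy as np

# Single ion in an ideal linear Paul trap (radial plane only), integrated with
# classical fourth-order Runge-Kutta (RK4).  Placeholder values give a stable
# Mathieu q parameter of roughly 0.5.
Q_M   = 9.6e5              # charge-to-mass ratio (C/kg), roughly a 100 u singly charged ion
U_DC  = 0.0                # DC quadrupole voltage (V)
V_RF  = 100.0              # RF amplitude (V)
OMEGA = 2 * np.pi * 1.0e6  # RF drive angular frequency (rad/s)
R0    = 3.0e-3             # characteristic trap radius (m)

def deriv(t, s):
    """s = (x, y, vx, vy); Mathieu-type equations of motion in the radial plane."""
    x, y, vx, vy = s
    k = Q_M * (U_DC + V_RF * np.cos(OMEGA * t)) / R0**2
    return np.array([vx, vy, -k * x, +k * y])

def rk4_step(t, s, dt):
    k1 = deriv(t, s)
    k2 = deriv(t + dt / 2, s + dt / 2 * k1)
    k3 = deriv(t + dt / 2, s + dt / 2 * k2)
    k4 = deriv(t + dt, s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate for 100 RF cycles: the ion executes slow secular oscillations with a
# fast micromotion ripple superimposed.
dt = 2 * np.pi / OMEGA / 200              # 200 steps per RF period
s = np.array([1.0e-4, 0.5e-4, 0.0, 0.0])  # start 100/50 microns off axis, at rest
t = 0.0
for _ in range(100 * 200):
    s = rk4_step(t, s, dt)
    t += dt
print("final radial position (m):", s[:2])
```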
Demonstration of a Microtrap Array and manipulation of Array Elements 15m
A novel magnetic microtrap has been demonstrated for ultracold neutral atoms [1]. It consists of two concentric current loops having radii r1 and r2. A magnetic field minimum is generated along the axis of the loops if oppositely oriented currents flow through the loops. Selecting r2/r1 = 2.2 maximizes the restoring force to the trap center. The strength and position of the microtrap relative to the atom chip surface can be precisely adjusted by applying an external bias magnetic field. A microtrap array can be formed by linking individual microtraps in series. A linear array of 11 microtraps having r1 = 60 microns was loaded with more than 10$^5$ $^{87}$Rb atoms using three different methods: 1) from a transported quadrupole magnetic trap, 2) directly from a mirror MOT and 3) from an optical dipole trap. A proposal to manipulate atoms in adjacent microtraps will also be presented. 1. B. Jian & W. A. van Wijngaarden, Appl. Physics B: Lasers & Optics (2014).
Speaker: Dr Bin Jian (National Research Council)
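A quick way to see why two concentric, oppositely wound loops produce an on-axis field minimum is to superpose the textbook on-axis field of each loop. The sketch below does this for r1 = 60 microns and r2/r1 = 2.2 as quoted in the abstract; the equal 1 A currents and the absence of an external bias field are simplifying assumptions, so the location and depth of the real trap minimum will differ.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def loop_Bz(I, R, z):
    """On-axis magnetic field (T) of a circular loop of radius R carrying current I,
    evaluated a height z above the plane of the loop."""
    return MU0 * I * R**2 / (2.0 * (R**2 + z**2) ** 1.5)

# Illustrative parameters only: r1 = 60 microns as in the abstract, r2/r1 = 2.2,
# equal and opposite 1 A currents (the actual chip currents are not given).
r1, r2 = 60e-6, 2.2 * 60e-6
I = 1.0

z = np.linspace(1e-6, 300e-6, 3000)         # heights above the chip surface
Bz = loop_Bz(I, r1, z) - loop_Bz(I, r2, z)  # opposite senses of current

zmin = z[np.argmin(np.abs(Bz))]
print(f"on-axis field zero (trap centre) at z ~ {zmin * 1e6:.1f} microns above the chip")
```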
Engineered spin-orbit coupling in ultracold quantum gases 30m
Ultracold quantum gases are an ideal medium with which to explore the many-body behaviour of quantum systems. With a century of research in atomic physics at the foundation, a wide variety of techniques are available for manipulating the parameters that govern the behaviour of these systems, including tuning the interactions between particles and manipulating their potential energy landscapes. In recent years, the ability to generate "artificial gauge fields" has made it possible to simulate, experimentally, the effects of electromagnetic fields among these uncharged particles. Using the same techniques, which selectively transfer momentum from light to atoms, a correlation between the internal state and the motion of the atoms, known as "spin-orbit coupling," has also been realised in several quantum gas systems. The relationship between spin and motion is quite general: experiments in quantum gases have used it to mimic both the Dirac equation (with high-energy phenomena) and spintronics (with small power consumption). One promising avenue for quantum simulations with spin-orbit coupled systems is to study the competing effects of this coupling and interparticle interactions. To do this, potassium-39 systems are well-suited: they have widely tunable interactions and technically feasible spin-orbit coupling schemes. Unlike conventional solid state systems, both interactions and spin-orbit coupling are tunable, and predictions suggest that the character of the low-temperature ordered systems will depend strongly on these parameters, giving states that have both superfluid and magnetic character. Further, these experiments will allow for the study of non-equilibrium behaviour of the interacting, spin-orbit coupled system, including measuring behaviour at the condensation transition and low-temperature dynamics.
Speaker: Lindsay LeBlanc (University of Alberta)
T2-11 Laser, Laser-matter interactions, and plasma based applications (DPP) / Lasers, interactions laser-matière et applications basées sur les plasmas (DPP) CCIS L2-190
Convener: Andranik Sarkissian (PLASMIONIQUE Inc)
Modification of graphene films in the flowing afterglow of microwave plasmas at reduced-pressure 30m
Graphene films were exposed to the late afterglow of a reduced-pressure N2 plasma sustained by microwave electromagnetic fields. X-ray photoelectron spectroscopy (XPS) shows that plasma-generated N atoms are incorporated into both pyridinic and pyrrolic groups, without excessive reduction of sp2 bonding. Nitrogen incorporation was found to be preceded by N adsorption, where N adatom density increased linearly with treatment time while aromatic nitrogen saturated. This finding was confirmed by Raman spectra showing a linear increase of the D:G ratio attributed to constant surface flux of plasma generated species. Combined Density Functional Theory calculations with a Nudged Elastic Band (DFT-NEB) approach indicate that incorporation reactions taking place at point vacancies in the graphene lattice requires an activation energy in the 2-6 eV range, but the energy required for the reverse reaction exceeds 8 eV. Stable nitrogen incorporation is therefore judged to be defect-localized and dependent on the energy transfer (6 eV) provided by N2(A)-to-N2(X) metastable-to-ground de-excitation reactions occurring at the late-afterglow-graphene interface. This represents one of the first experimental evidence of the role of metastables during materials and nanomaterials processing in non-thermal plasmas.
Speaker: Luc Stafford (U.Montréal)
Pump-probe Studies of Warm Dense Matter 30m
Warm Dense Matter (WDM) is matter under extreme conditions which has near-solid density but a temperature of several electron volts. It is a state that lies between the condensed matter state and the plasma state. The study of materials under extreme conditions is currently a forefront area of study in material science and has generated enormous scientific interest. The understanding of WDM is important for laser material processing, which has many scientific and industrial applications, as well as for Inertial Fusion Energy, which is a safe energy source that has no carbon emission and an almost unlimited fuel supply. The understanding of WDM is also important for planetary science and astrophysics. Ultrafast pump-probe methods can be used to study the evolution of WDM on sub-picosecond time scales. When a high intensity ultrashort laser pulse is absorbed by a solid target, a non-equilibrium WDM state with an electron temperature of several electron volts, an ion temperature near room temperature and a density that remains at the solid value is formed initially in less than a picosecond. During the subsequent several picoseconds the electron temperature decreases, the ion temperature rises, and the target eventually disassembles into an expanding plasma. Ultrafast probing techniques based on optical, electron diffraction and x-ray diffraction measurements have been used to study the properties of laser-produced WDM, leading to a better understanding of WDM. An overview of our current understanding of laser-produced WDM will be presented in this talk.
Speaker: Prof. Ying Tsui (University of Alberta)
Deposition of functional coatings on glass substrates using a recently-developed atmospheric-pressure microwave plasma jet 15m
In recent years, atmospheric-pressure plasmas have gained a lot of interest in view of their potential for fast treatment of materials over large-area wafers. While such plasmas are typically based on corona or dielectric barrier discharges (DBDs) for processing of thin samples (for example roll-to-roll systems), a number of applications require the treatment of thicker samples and thus the use of plasma jet configurations. We have recently developed a new atmospheric-pressure plasma source using a surfaguide sustaining three tubular plasmas simultaneously, based on the propagation of an electromagnetic surface wave. Operated at 2.45 GHz, these tubular plasmas are characterized by much higher electron densities (10^13-10^14 cm^-3) than conventional DBDs (10^9-10^10 cm^-3), thus allowing very high fragmentation rates of precursors intended for PECVD, even in a jet configuration. In the waveguide system used in this study, since only the fundamental mode (a cosine maximum of the electric field on the axis of the large section of the rectangular waveguide) is propagating, the first two tubes were placed off-axis, while the last one was placed just after, on the axis. Such a configuration enabled significant power absorption by the latter tube even if a significant amount of power was already used by the first two. Through the displacement of a plunger located at the end of the transmission line, after the surfaguide, selective lengths of the first-row tubes and second-row tube can be achieved. This phenomenon is ascribed to the displacement of the maximum electric field intensity of an established stationary wave in the transmission line. For short tube lengths downstream of the surfaguide, a peculiar spatial structure was observed in which off-axis plasma filaments close to the wave launcher converged towards a single on-axis point near the exit, followed by a diffuse plasma plume.
Speaker: Mr Antoine Durocher-Jean (U. Montreal)
Measurements of Ionization States in Warm Dense Aluminum with Femtosecond Betatron Radiation from a Laser Wakefield Accelerator 15m
Study of the ionization state of material in the warm dense matter regime is a significant challenge at present. Recently, we have demonstrated that the femtosecond-duration Betatron x-ray radiation from the laser wakefield acceleration of electrons can be employed as a probe to directly measure the ionization states of warm dense aluminum via K-shell line absorption spectroscopy [1]. In order to apply the radiation for such an application, a Kirkpatrick-Baez Microscope is used to selectively focus the radiation around the 1.5 keV photon energy range onto a 50-nm free-standing aluminum foil that is heated by a synchronized 800 nm laser pump pulse. The transmitted x-ray spectrum is spectrally resolved by a flat Potassium Acid Phthalate (KAP) Bragg crystal spectrometer. Here we report the results of the first direct measurements of the ionization states of warm dense aluminum using this Betatron x-ray probe setup. Measurements of the ionization states were taken at two pump fluences and various time delays to observe the evolution of the warm dense matter state. Plasma spectroscopic modeling associated with 1D hydrodynamic simulation is being carried out to interpret the ionized charge distributions from the measured K-shell absorption lines. Details of the measurements and simulations will be presented. [1] M.Z. Mo, et al., Rev. Sci. Instrum. 84, 123106 (2013).
Speaker: Mianzhen Mo (University of Alberta)
T2-2 Condensed Matter Theory (DCMMP-DTP) / Théorie de la matière condensée (DPMCM-DPT) CAB 235
Convener: Joseph Maciejko (University of Alberta)
Many-body localization and potential realizations in cold atomic gases 30m
Disorder in a non-interacting quantum system can lead to Anderson localization, where single-particle wave functions become localized in some region of space. Recently, the study of interaction effects in systems which do exhibit Anderson localization has attracted renewed interest. In my talk I will present recent theoretical progress in understanding localization in many-body systems. I will, in particular, discuss one-dimensional lattice models with binary disorder which can potentially be realized in cold atomic gases using two species of atoms. A purification scheme can be used to perform an exact binary disorder average, making such models amenable to numerical studies directly in the thermodynamic limit.
Speaker: Jesko Sirker (U Manitoba)
Light-Trapping Architecture for Room Temperature Bose-Einstein Condensation of Exciton-Polaritons near Telecommunication Frequencies 15m
While normally quantum mechanical effects are observable at cryogenic temperatures and at very small length scales, our work brings these quantum phenomena to the macroscopic length scale and to room temperature. Our work focuses on the possibility of room-temperature thermal equilibrium Bose-Einstein condensation (BEC) of quantum well exciton-polaritons in micrometer scale cavities composed of photonic band gap materials. Using cavities composed of double slanted pore (SP2) photonic crystals embedded with InGaAs quantum wells, we predict the formation of a 10 $\mu$m to 1 cm sized thermal equilibrium Bose-Einstein condensate at room temperature that allows for the emission of light near the telecommunications band of $\sim$1300 nm. The three-dimensional photonic band gap of the SP2 crystal allows for light to be strongly confined to the quantum wells, resulting in strong light-matter coupling in the exciton-polaritons and vacuum Rabi splittings that are $\sim$2% of the bare exciton recombination energy. The photonic band gap also strongly inhibits the radiative decay of the exciton-polaritons and due to the slow non-radiative decay of excitons as well as fast exciton-phonon scattering in InGaAs at room temperature, the exciton-polaritons that form the BEC are able to reach thermal equilibrium with their host lattice. We consider three InGaAs quantum wells (of width 3 nm surrounded by 7 nm InP barriers) judiciously placed in a 33 nm cavity between SP2 crystals with a lattice constant of 471 nm and polaritons consisting of a superposition of excitons and photons that are tuned below the excitonic recombination energy. This detuning increases the polariton's dispersion depth and increases the number of available photon-like states to enhance the formation of a BEC. We predict the onset of a BEC at a temperature of 364 K in a box-trap of side length 10 $\mu$m at a polariton density of $1.6\times10^{11}$ cm$^{-2}$, indicating that a room temperature, thermal equilibrium BEC can be obtained with light emission near the telecommunications band.
Speaker: Mr Pranai Vasudev (University of Toronto)
Molecular-dynamics simulations of two-dimensional Si nanostructures 15m
Nanostructured materials make it possible to tailor the vibrational properties of a system for specific uses like thermoelectric applications or phononic waveguides. In this work, the vibrational properties of two-dimensional silicon nanostructures are studied. The nanostructures are built from arrays of nanowires that are arranged in such a manner that they form a periodic lattice. The method of molecular-dynamics simulations is used to calculate the vibrational properties. Results will be shown for the vibrational density of states as well as dispersion relations at long wavelengths.
Speaker: Dr Ralf Meyer (Laurentian University)
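One standard route to the vibrational density of states in MD studies of this kind is the Fourier transform of the velocity autocorrelation function; the abstract does not state which method the author uses, so the sketch below is offered only as a generic illustration of that common recipe.

```python
import numpy as np

def vdos_from_velocities(vel, dt):
    """Vibrational density of states (VDOS) from an MD velocity trajectory.

    vel : array of shape (n_steps, n_atoms, 3), velocities sampled every dt
    dt  : sampling interval (sets the frequency axis)

    Uses the Wiener-Khinchin shortcut: the power spectrum |FFT(v)|^2, summed
    over atoms and Cartesian components, equals the Fourier transform of the
    velocity autocorrelation function.  Mass weighting is omitted here because
    all atoms are identical (Si).
    """
    spectrum = np.abs(np.fft.rfft(vel, axis=0)) ** 2
    dos = spectrum.sum(axis=(1, 2))
    freq = np.fft.rfftfreq(vel.shape[0], d=dt)
    dos /= dos.sum()   # crude normalization; windowing would sharpen the peaks
    return freq, dos

# usage sketch: 'velocities.npy' is a hypothetical file written during the MD run
# vel = np.load("velocities.npy")
# freq, dos = vdos_from_velocities(vel, dt=1.0e-15)   # 1 fs sampling interval
```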
Inverse melting in a simple 2D liquid 15m
We employ several computer simulation techniques to study the phase behaviour of a simple, two dimensional monodisperse system of particles interacting through a core-softened potential comprising a repulsive shoulder and an attractive square well. This model was previously constructed and used to explore anomalous liquid behaviour in 2D and 3D, including liquid-liquid phase separation [1]. The calculated phase diagram includes six crystal phases in addition to the liquid and gas. Interestingly, we find that one of the melting curves exhibits inverse melting, for which the liquid freezes to a crystal upon isobaric heating over a very small range of pressure [2]. We find that the range of inverse melting can be enlarged by increasing the extent of the repulsive shoulder, and show that despite occurring in 2D, the melting transition is first order and to a liquid, rather than to a hexatic or quasicrystal phase [3]. As this range increases, the topology of the phase diagram changes systematically until it breaks, leading to even more crystal phases appearing. [1] A. Scala, M. R. Sadr-Lahijany, N. Giovambattista, S. V. Buldyrev, and H. E. Stanley, Phys. Rev. E 63, 041202 (2001). [2] A. M. Almudallal, S. V. Buldyrev, and I. Saika-Voivod, J. Chem. Phys. 137, 034507 (2012). [3] A. M. Almudallal, S. V. Buldyrev, and I. Saika-Voivod, J. Chem. Phys. 140, 144505 (2014).
Speaker: Ahmad Almudallal (Memorial University of Newfoundland)
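The core-softened interaction described above can be written down compactly; the sketch below encodes the generic hard-core / repulsive-shoulder / attractive-square-well shape only. The widths and energies are placeholders for illustration, not the parameters of the model of Scala et al. [1] actually used in the simulations.

```python
import numpy as np

def core_softened_potential(r, sigma=1.0, shoulder_width=0.4, well_width=0.3,
                            shoulder_height=0.5, well_depth=1.0):
    """Schematic core-softened pair potential: a hard core at r < sigma, a repulsive
    square shoulder out to sigma + shoulder_width, then an attractive square well
    out to sigma + shoulder_width + well_width, and zero beyond.  All widths and
    energies are placeholders, not the published model parameters."""
    r = np.asarray(r, dtype=float)
    u = np.zeros_like(r)
    u[r < sigma] = np.inf                                   # hard core
    shoulder = (r >= sigma) & (r < sigma + shoulder_width)
    u[shoulder] = shoulder_height                           # repulsive shoulder
    well = (r >= sigma + shoulder_width) & (r < sigma + shoulder_width + well_width)
    u[well] = -well_depth                                   # attractive square well
    return u

# quick check of the piecewise shape
print(core_softened_potential([0.9, 1.2, 1.6, 2.5]))
```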
Cellular Automaton with nonlinear Viscoelastic Stress Transfer to Model Earthquake Dynamics 15m
Earthquakes may be seen as an example of self-organized criticality. When the Gutenberg-Richter law for earthquake magnitude is transformed, the seismic moment, as a measure of the energy released, follows a power-law distribution indicating a self-similar pattern. Earthquake dynamics can be modelled by employing the spring-block system, which features a slow driving force, a failure threshold and interactions between elements, as in a complex system. In this approach the earthquake fault is modelled by an array of blocks coupling the loading plate and the lower plate. For computational simplicity, the spring-block model has been mapped to various cellular automata. However, the spring-block model (including the cellular automata version), with its underlying physics, is not sufficient to reproduce some of the empirical scaling laws for real seismicity. In particular, a robust power-law time-dependence of the aftershock rate function cannot be obtained, which indicates the need to introduce new physical mechanisms for aftershock triggering. Taking into account the rheology of the fault zone, we introduce nonlinear viscoelastic stress transfer into the interactions between blocks and the tectonic loading force in a basic spring-block model setting. The shear stress of the viscous component is a power-law function of the velocity gradient with an exponent between 0 and 1, showing a shear-weakening effect. As a result, the stress transfer function takes a power-law time-dependent form. It features an instantaneous stress transfer during an instantaneous avalanche triggered by the global loading, as well as a power-law relaxation term, which can trigger further aftershocks. In this nonlinear viscoelastic model, avalanches (earthquakes) triggered either by the global loading or by the relaxation exhibit a robust power-law frequency-size distribution. Maximum-likelihood fitting of the temporal rates of stacked sequences shows a power-law time decay, which agrees with the modified Omori law. Our results also show that the nonlinearity of the viscoelastic interactions plays a key role in determining the type of the stress transfer function. Our study suggests that nonlinear viscoelastic stress transfer might be a possible triggering mechanism for real aftershocks.
Speaker: Xiaoming Zhang (University of Western Ontario)
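Since both the frequency-size distribution and the Omori decay above are characterized by maximum-likelihood power-law fits, a minimal sketch of the standard continuous (Hill-type) estimator is given below. It is illustrative only: the synthetic avalanche sizes and the cutoff s_min are placeholders, not the output of the cellular automaton described in the talk.

```python
import numpy as np

def powerlaw_mle_exponent(sizes, s_min):
    """Maximum-likelihood estimate of alpha for a continuous power-law tail
    p(s) ~ s^(-alpha), s >= s_min, with its standard error (alpha-1)/sqrt(n)."""
    s = np.asarray(sizes, dtype=float)
    s = s[s >= s_min]
    n = s.size
    alpha = 1.0 + n / np.sum(np.log(s / s_min))
    stderr = (alpha - 1.0) / np.sqrt(n)
    return alpha, stderr

# Illustrative use: synthetic avalanche sizes drawn from p(s) ~ s^(-1.7)
# via inverse-CDF sampling; the fit should recover alpha close to 1.7.
rng = np.random.default_rng(0)
synthetic = (1.0 - rng.random(10000)) ** (-1.0 / 0.7)
print(powerlaw_mle_exponent(synthetic, s_min=1.0))
```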
T2-3 Ground-based / in situ observations and studies of space environment II (DASP) / Observations et études de l'environnement spatial, sur terre et in situ II (DPAE) CAB 243
Convener: Donald Danskin (Natural Resources Canada)
New View of Aurora from Space using the e-POP Fast Auroral Imager 30m
The Fast Auroral Imager (FAI) on the CASSIOPE Enhanced Outflow Probe (e-POP) consists of two CCD cameras, which measure the atomic oxygen emission at 630 nm and prompt auroral emissions in the 650 to 1100 nm range, respectively, using a fast lens system and high quantum-efficiency CCDs to achieve high sensitivity, and a common 26 degree field-of-view to provide nighttime images of about 650 km diameter from apogee (1500 km). The FAI is capable of operating in four viewing modes: nadir viewing, for imaging over a large latitude range; Earth-target viewing, for pointing at an emission target of fixed altitude, latitude and longitude; limb viewing, for measurement of altitude profiles; and inertial pointing, for imaging of an inertial target such as a star field. The near infrared camera provides one image of 0.1 sec exposure per second, and we restrict our examples to this camera. The four viewing modes make possible the observations of a variety of auroral and airglow phenomena, such as rapidly varying and small-scale structures in the auroral oval. The examples shown here illustrate some obvious features in the auroral phenomena that lead to new perspectives in the context of high-resolution studies of ionospheric processes.
Speaker: Prof. Leroy Cogger (University of Calgary)
CASSIOPE e-POP and coordinated ground-based studies of polar ion outflow, auroral dynamics, wave-particle interactions, and radio propagation 15m
The Enhanced Polar Outflow Probe (e-POP) is an 8-instrument scientific payload on the Canadian CASSIOPE small satellite, comprised of plasma, magnetic field, radio, and optical instruments designed for in-situ observations in the topside polar ionosphere at the highest-possible resolution. Its science objectives are to quantify the micro-scale characteristics of plasma outflow in the polar ionosphere and probe related micro- and meso-scale plasma processes at unprecedented resolution, and explore the occurrence morphology of neutral escape in the upper atmosphere. The e-POP mission comprises three important components for the investigation of atmospheric and plasma flows and related auroral and wave particle interaction processes in the topside polar ionosphere: a satellite, a ground-based and a theoretical component. We present an overview of the important, new observations and related results from these three interconnected mission components since the successful launch of CASSIOPE in September 2013.
Speaker: Andrew Yau (University of Calgary)
The nature of GPS receiver bias variabilities: An examination in the Polar Cap region and comparison to Incoherent Scatter Radar 15m
The problem of receiver Differential Code Biases (DCBs) in the use of GPS measurements of ionospheric Total Electron Content (TEC) has been a constant concern amongst network operators and data users since the advent of the use of GPS measurements for ionospheric monitoring. While modern methods have become highly refined, they still demonstrate unphysical bias behavior, namely notable solar cycle variability. Recent studies have highlighted the potential impact of temperature on these biases, resulting in small diurnal or seasonal behavior, but have not addressed the, far more dominant, solar cycle variability of estimated receiver biases. This study investigates the nature of solar cycle bias variability. We first identify the importance of the strongest candidate for these variabilities, namely shell height variability. It is shown that the Minimizations of Standard Deviations (MSD) bias estimation technique is linearly dependent on the user's choice of shell height, where the sensitivity of this dependence varies significantly from 1 TECU per 4000km of shell height error in solar minimum winter to in excess of 1 TECU per 90km of shell height error during solar maximum summer. To assess the importance of these sensitivities, we present true shell height derived at Resolute, Canada using the Resolute Incoherent Scatter Radar (R-ISR), operated by SRI International and a Canadian Advanced Digital Ionosonde (CADI) operated by the Canadian High Arctic Ionospheric Network (CHAIN). This investigation demonstrates significant shell height variability translating to bias variabilities of up to several TECU. These variabilities, however, are found to be insufficient to account for all of the observed bias solar cycle variability. To investigate these variabilities further, we next compare Total Electron Content (TEC) measurements made by a CHAIN GPS receiver at Resolute to integrated electron density profiles derived from the nearby Resolute ISR. Taking the ISR measurements as truth, we find that ISR-derived GPS receiver biases vary in the same manner as those derived using the MSD or other bias estimation approaches. Based on these results, we propose that standard receiver DCB estimation techniques may be interpreting a significant portion of plasmaspheric electron content as DCBs, resulting in apparent diurnal, seasonal, and solar cycle DCB variability.
Speaker: Mr David Themens (University of New Brunswick)
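The shell-height sensitivity discussed above enters through the standard single-layer (thin-shell) mapping function used to convert slant to vertical TEC. The sketch below is not the CHAIN processing chain and the elevation angle and TEC values are placeholders; it only illustrates how the assumed shell height changes the inferred vertical TEC, and hence the bias that an MSD-type estimator must absorb.

```python
import numpy as np

R_E = 6371.0  # mean Earth radius (km)

def slant_to_vertical(stec, elev_deg, shell_height_km):
    """Convert slant TEC (TECU) to vertical TEC with the standard single-layer
    mapping function; elev_deg is the satellite elevation angle at the receiver
    and shell_height_km is the assumed ionospheric shell height."""
    elev = np.radians(elev_deg)
    sin_chi = R_E * np.cos(elev) / (R_E + shell_height_km)
    return stec * np.sqrt(1.0 - sin_chi**2)

# Illustrative sensitivity check (placeholder numbers, not CHAIN data):
# the same 30 TECU slant measurement at 20 degrees elevation, mapped with two
# different assumed shell heights, yields noticeably different vertical TEC.
for h in (350.0, 450.0):
    print(f"shell height {h:.0f} km -> vTEC = {slant_to_vertical(30.0, 20.0, h):.2f} TECU")
```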
Transmission of Waves from a High-Frequency Ionospheric Heater to the Topside Ionosphere 15m
In the first year of operation of the ePOP instruments on the Canadian small satellite CASSIOPE, a number of passes were recorded during which the Radio Receiver Instrument (RRI) measured radiation from powerful high-frequency ground transmitters that act as ionospheric heaters. In the case of measurements of transionospheric propagation from the Sura heating facility in Russia, located at 56.15°N, 46.10°E, RRI reception of heater waves was accompanied by the operation of the trifrequency Coherent Electromagnetic Radio Tomography (CERTO) beacon on the satellite radiating at 150, 400 and 1067 MHz. CERTO waves, detected at three ground receivers near Sura, allowed total electron content to be measured continuously along the three different paths between CASSIOPE and the three ground sites. Subsequent tomographic processing provided the ionospheric electron density distribution as a function of latitude and altitude. With this density model tool in hand, ray-tracing was applied to the prediction at the spacecraft of various properties of the HF waves from the Sura heater. When compared with the observations, the predictions validate the relevance of geometric-optics principles in transionospheric propagation.
Speaker: Dr Gordon James (University of Calgary)
Dawn-dusk asymmetry in the intensity of polar cap flows as seen by SuperDARN 15m
Polar cap flow pattern and intensity depend on the IMF Bz and By components. For IMF Bz<0, the pattern is consistently two-celled, and previous studies indicate that flows are fastest near noon and midnight for By<0 and during afternoon-dusk hours for By>0. In this study, we investigate the polar cap flow intensity in two ways. First we consider highly-averaged (over each month of observations in 2007-2013) convection patterns inferred from all SuperDARN radar measurements and discuss typical configurations of the polar cap region with enhanced flows, depending on the IMF By, with a focus on the dusk-dawn asymmetry. We demonstrate seasonal and perhaps solar cycle changes in the asymmetry. We then consider 2 years of Clyde River radar data on the azimuthal component of the flow and show the asymmetry observed directly. We discuss the complexity of the phenomenon in contrast to the more firm conclusions of previous studies.
Speaker: Alexandre Koustov (U)
T2-4 Fields and Strings (DTP) / Champs et cordes (DPT) CCIS L1-047
Convener: Rainer Dick (University of Saskatchewan)
Scale and Conformal Invariance in Quantum Field Theory 30m
The behavior of coupling constants in quantum field theory under a change of energy scale is encoded in the renormalization group. At fixed points of the renormalization group flow, quantum field theories exhibit conformal invariance and are described as conformal field theories. The larger spacetime symmetry of conformal field theory is not the smallest possible extension of Poincare invariance. Indeed, scale invariance could occur without conformal invariance which would lead to scale field theories. We thus investigate the theoretical implications of scale invariance without conformal invariance in quantum field theory. We argue that renormalization group flows of such theories correspond to recurrent behaviors, i.e. limit cycles or ergodicity. We discuss the implications for the a-theorem, and use Weyl consistency conditions to show that scale invariance implies conformal invariance at weak coupling in four-dimensional quantum field theory. Finally, we clarify the necessary and sufficient conditions for conformality and present new types of conformal field theories.
Speaker: Prof. Jean-Francois Fortin (Laval University)
Dynamics of Gravitational Collapse in AdS Space-Time 30m
Gravitational collapse in asymptotically anti-de Sitter spacetime is dual to thermalization of energy injected to the ground state of a strongly coupled gauge theory. Following work by Bizon and Rostworowski, numerical studies of massless scalar fields in Einstein gravity indicate that generic initial states thermalize, given time, even for arbitrarily small energies. From the gravitational perspective, this appears due to a combination of a turbulent instability in the nonlinear local dynamics and the ability of matter to reflect from the conformal boundary. I will discuss recent work examining the effects of new length scales in the dynamics, including a scalar mass and higher-curvature corrections to the gravitational action.
Speaker: Andrew Frey (University of Winnipeg)
Thermodynamic and Transport Properties of a Holographic Quantum Hall System 15m
We apply the AdS/CFT correspondence to study a quantum Hall system at strong coupling. Fermions at finite density in an external magnetic field are put in via gauge fields living on a stack of D5 branes in Anti-deSitter space. Under the appropriate conditions, the D5 branes blow up to form a D7 brane which is capable of forming a charge-gapped state. We add finite temperature by including a black hole which allows us to compute the low temperature entropy of the quantum Hall system. Upon including an external electric field (again as a gauge field on the probe brane), the conductivity tensor is extracted from Ohm's law.
Speaker: Joel Hutchinson (University of Alberta)
Constraints and Bulk Physics in the AdS/MERA Correspondence 15m
It has been proposed that the Multi-scale Entanglement Renormalization Ansatz (MERA), which is efficient at reproducing CFT ground states, also captures certain aspects of the AdS/CFT correspondence. In particular, MERA reproduces a Ryu-Takayanagi-type formula, and the network structure is similar to a discretized AdS space where the renormalization direction gives rise to the additional bulk dimension. Such a discovery may enable us to study the important features of gravity/gauge duality in a more controlled setting. We will show that in order for MERA to recover bulk physics consistent with our current knowledge of holography, it has to satisfy certain consistency relations, and that it can only capture bulk physics on scales much larger than the AdS radius. A more specific framework for constructing a bulk-boundary dictionary, bulk states, and a Hilbert space from a boundary theory using MERA will also be discussed.
Speaker: ChunJun Cao (Caltech)
T2-5 Nuclear Structure II (DNP) / Structure nucléaire II (DPN) CCIS L1-140
Convener: Reiner Kruecken (TRIUMF)
Single particle structure in neutron-rich Sr isotopes approaching $N=60$ 30m
The shape coexistence and shape transition at $N=60$ in the Sr, Zr region is the subject of substantial current experimental and theoretical effort. An important aspect in this context is the evolution of single particle structure for $N<60$ leading up to the shape transition region, which can be calculated with modern large scale shell model calculations using a $^{78}$Ni core or Beyond Mean Field Models. One-neutron transfer reactions are a proven tool to study single-particle energies as well as occupation numbers. Here we report on the study of the single-particle structure in $^{95-97}$Sr via ($d,p$) one-neutron transfer reactions in inverse kinematics. The experiments presented were performed at TRIUMF's ISAC facility using the TIGRESS gamma-ray spectrometer in conjunction with the SHARC charged particle detector. Highly charged beams of $^{94,95,96}$Sr, produced in the ISAC UCx target and charge-bred by an ECR source, were accelerated to 5.5 MeV/$A$ in the superconducting ISAC-II linac before delivery to the experimental station. Beyond their clear scientific value, these measurements were landmarks, being the first high mass ($A>30$) post-accelerated radioactive beam experiments performed at TRIUMF. Recent advances within the facility that made these measurements possible will be highlighted, as well as initial results from the experiments, discussed in the context of evolving single-particle structure.
Speaker: Dr Peter Bender (TRIUMF)
Doppler shift lifetime measurements using the TIGRESS Integrated Plunger 15m
Along the $N=Z$ line, shell gaps open simultaneously for prolate and oblate deformations; the stability of these prolate and oblate configurations is enhanced by the coherent behaviour of protons and neutrons in $N=Z$ nuclei. Additionally, amplification of proton-neutron interactions along the $N=Z$ line may yield information on the isoscalar pairing interactions which have been predicted in many nuclear models but not yet experimentally observed. Electromagnetic transition rates measured via Doppler shift lifetime techniques are recognized as a sensitive probe of collective behavior and shape deformation and can be used to discriminate between model calculations. To take advantage of this opportunity, the TIGRESS Integrated Plunger (TIP) has been constructed at Simon Fraser University. The current TIP infrastructure [1] supports lifetime measurements via the Doppler Shift Attenuation Method (DSAM). One advantage of Doppler shift lifetime measurements is that lifetimes can be extracted independent of the reaction mechanism. TIP has been coupled to the TIGRESS segmented HPGe array at TRIUMF as part of the experimental program at ISAC-II. The initial studies using TIP employ fusion-evaporation reactions. Here, reaction channel selectivity can greatly enhance the sensitivity of the measurement. To enable channel selection, the 24-element TIP CsI wall was used for evaporated light charged-particle identification. Reaction channel selectivity has been demonstrated using the TIP infrastructure following the successful production of the $N=Z$ nucleus $^{68}$Se via the $^{36}$Ar + $^{40}$Ca fusion-evaporation reaction. A Geant4-based code for TIP is being developed as a tool to aid the analysis and for the optimization of future experiments. The device, experimental approach, analysis, and preliminary results will be presented and discussed. [1] P. Voss et al., Nucl. Inst. and Meth. A746, (2014) 87.
Speaker: Mr Aaron Chester (Simon Fraser University Department of Chemistry)
Isomeric decay spectroscopy of 96Cd 15m
Self-conjugate nuclei, where $N=Z$, exhibit a strong $pn$ interaction due to the large overlap of wavefunctions in identical orbitals. The heaviest $N=Z$ nucleus studied so far is $^{92}$Pd, which has demonstrated strong binding in the $T = 0$ interaction [1]. As the mass number increases, the nucleus approaches the doubly-magic $^{100}$Sn. To investigate the evolution of the $pn$ interaction strength near the shell closure $N = Z = 50$, experimental results on the next self-conjugate, even-even nucleus, $^{96}$Cd, are needed. Record quantities of $^{96}$Cd were produced at the RIKEN Radioactive Isotope Beam Factory, via fragmentation of an intense $^{124}$Xe beam on a thin $^{9}$Be target. Their decay products were measured with EURICA, consisting of HPGe/LaBr$_3$ detectors for gamma-rays, and WAS3ABI, a set of position-sensitive silicon detectors for positrons, protons and ions. A high-spin isomeric state in $^{96}$Cd was found, along with gamma-ray transitions that populate both the ground state and the 16$^{+}$ spin-trap isomeric state. Isomer half-lives and the proposed experimental level scheme of $^{96}$Cd will be presented, followed by a discussion of its $pn$ interaction strength and the decay to $^{96}$Ag.
Speaker: Jason Park (University of British Columbia/TRIUMF)
The Electromagnetic Mass Analyser EMMA 15m
The Electromagnetic Mass Analyser EMMA is a recoil mass spectrometer for TRIUMF's ISAC-II facility designed to separate the recoils of nuclear reactions from the heavy ion beams that produce them and to disperse the recoils according to their mass/charge ratio. In this talk I will present an update on the construction and commissioning of the spectrometer and its components.
Speaker: Barry Davids (TRIUMF)
New decay modes of the high-spin isomer of $^{124}$Cs 15m
As part of a broader program to study the evolution of collectivity in the even-even nuclei above tin, a series of $\beta$-decay measurements of the odd-odd Cs isotopes into the even-even Xe isotopes, specifically $^{122,124,126}$Xe, have been made utilizing the 8$\pi$ spectrometer at TRIUMF-ISAC. The 8$\pi$ spectrometer consisted of 20 Compton-suppressed high-purity germanium (HPGe) detectors and the Pentagonal Array of Conversion Electron Spectrometers (PACES), an array of 5 Si(Li) conversion electron detectors. The decay of $^{124}$Cs to $^{124}$Xe is the first measurement to be fully analyzed. A very high-statistics data set was collected and the $\gamma\gamma$ coincidence data were analyzed, greatly extending the $^{124}$Xe level scheme. Several weak $\it{E2}$ transitions into excited 0$^+$ states in $^{124}$Xe were observed. The $\it{B(E2)}$ transition strengths of such low-spin transitions are very important in determining collective properties, which are currently poorly characterized in the region of the neutron-deficient xenon isotopes. A new $\beta^+$/EC-decay branch from a high-spin isomeric state of $^{124}$Cs has been observed for the first time. Decay of the isomer ($J^\pi$ = (7)$^+$, $T_{1/2}$ = 6.3(2) s) is seen to populate high-spin states in the $^{124}$Xe daughter nucleus that are otherwise inaccessible through the $\beta$-decay of the 1$^+$ $^{124}$Cs ground state. Combining $\gamma\gamma$ as well as $\gamma$-electron coincidence data, several new transitions in the isomeric decay of the (7)$^+$ state have been observed. The characterization of the new $\beta$-decay branch and the isomeric decay of the high-spin state will be presented.
Speaker: Allison Radich (University of Guelph)
T2-6 Nuclear Physics in Medicine (DNP-DMBP-DIAP) / Physique nucléaire en médecine (DPN-DPMB-DPIA) CAB 239
Accelerator-Based Medical Isotope Production at TRIUMF 30m
TRIUMF operates a suite of [H-] cyclotrons (13, 2 x 30, 42 and 500 MeV) which, in addition to supplying our basic science program, are used to produce a variety of medical isotopes. Within the next few years TRIUMF will also begin isotope production in our new Advanced Rare IsotopE Laboratory (ARIEL) – a 50 MeV, 10 mA continuous-wave electron linac. The breadth and power of our infrastructure has positioned TRIUMF to be a major producer for some medical isotopes, while enabling access to others that are less common. Since 2010, a TRIUMF-led collaboration has sought to produce Tc-99m directly on small cyclotrons via the Mo-100(p,2n) reaction. Recent successes have shown >30 Ci (1110 GBq) of Tc-99m produced in a single 6 hr irradiation on a 450 µA TR30 cyclotron (at 24 MeV) at TRIUMF. Solutions for 16 and 19 MeV cyclotrons have also been developed. Our goal is to enable all Canadian cyclotron centres to produce Tc-99m in lieu of the imminent cessation of isotope production at the Chalk River reactor. TRIUMF is also pursuing novel methods for producing radiometals that are of interest to the medical community. We have demonstrated the utility of liquid targets for producing research quantities of Zr-89, Ga-68, Y-86 and Sc-44; made by irradiating salt solutions of the appropriate starting material. To date, mCi (MBq) quantities have been isolated and purified, opening the door for the development of novel radiopharmaceuticals. Finally, a brief discussion will ensue on efforts to apply Isotope Separation On-Line (ISOL) infrastructure within the ISAC facility at TRIUMF to produce research quantities of radiotherapeutic isotopes. Progress on the isolation of alpha emitters At-211 and Ac-225 will be presented. TRIUMF seeks to enable clinical trials with these and many other potentially useful radiotherapeutic isotopes available through our existing science program. Overall, TRIUMF's Nuclear Medicine program seeks to address current and anticipated challenges in the production of important clinical isotopes. With over 1000 small (<30 MeV) cyclotrons in 70 countries, the time is ripe to establish accelerators as a viable, decentralized source of medical radionuclides.
Speaker: Paul Schaffer (TRIUMF)
Producing Medical Isotopes with Electron Linacs 30m
The Canadian Light Source (CLS) has been working on a project to develop a facility that uses a 35 MeV high power (40 kW) electron linac to produce medical isotopes. This project was funded by Natural Resources Canada's Non-reactor-based Isotope Supply Program which was initiated following the lengthy shutdowns of the NRU reactor at Chalk River that caused significant shortages of molybdenum-99/technetium-99m isotopes for the medical community. The CLS has been collaborating with the Prairie Isotope Production Enterprise (PIPE) in Winnipeg to develop an entire production cycle from molybdenum targets through to clinical approval by Health Canada of linac-derived isotopes. This talk will outline the reasons for using electron linacs for this application, as well as many of the broader challenges encountered to develop an alternate supply chain for these vital isotopes.
Speaker: Mark de Jong (Canadian Light Source Inc.)
Calculation of isotope yields for radioactive beam production 15m
Access to new and rare radioactive isotopes is key to their application in nuclear science. Radioactive ion beam (RIB) facilities around the world, such as TRIUMF (Canada's National Laboratory for Particle and Nuclear Physics, 4004 Westbrook Mall, Vancouver, BC, V6T 2A3), work to develop target materials that generate ion beams used in nuclear medicine, astrophysics and fundamental physics studies. At Simon Fraser University, we are developing a computer simulation of the RIB targets at TRIUMF to augment the existing knowledge and to support future target developments. This simulation will be used to predict the amounts of isotopes produced by the targets in use at TRIUMF to allow for better experiment preparation as well as to gauge the efficiency of using new target materials and varying driver beam intensities to generate different ranges of isotopes. The simulation, built in GEANT4 (Geant4 - A Simulation Toolkit, S. Agostinelli et al., Nuclear Instruments and Methods A 506 (2003) 250-303), a Monte Carlo nuclear transport toolkit, consists of a target of 300 uranium carbide disks, each 120 microns thick, encased in a tantalum container, which is then bombarded by a 480 MeV proton beam, as per the specifications of the TRIUMF target station. The simulation records the isotopes generated as well as their formation process (i.e. fission, fragmentation and neutron capture) and other related properties such as residual kinetic energy of the reaction products. These results are then compared to data gathered at the TRIUMF yield station (P. Kunz, C. Anreoiu, et al. Rev. Sci. Instrum. 85 (2014) 053305), a nuclear spectroscopy experiment dedicated to RIB characterization. Results from the simulation will be presented, along with benchmarking and comparison to the yield station data and other nuclear transport codes.
Speaker: Ms Fatima Garcia (Simon Fraser University and TRIUMF)
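The bookkeeping step described above (recording each generated isotope together with its formation process) can be sketched as simple tallying over simulated events. The event records below are hypothetical placeholders for whatever the Geant4 user code actually writes out; this is an illustration of the counting, not the simulation itself.

```python
from collections import Counter

# Hypothetical per-event records: (Z, A, formation process) extracted from the simulation output.
events = [
    (55, 137, "fission"),
    (37, 94, "fission"),
    (54, 122, "fragmentation"),
    (92, 239, "neutron capture"),
]

yields = Counter((z, a, process) for z, a, process in events)
for (z, a, process), count in sorted(yields.items()):
    print(f"Z={z:3d} A={a:3d} via {process:15s}: {count} nuclei per simulated batch")
```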
Coincidence Measurements using the SensL MatrixSM-9 Silicon-photomultiplier Array 15m
The silicon photomultiplier (SiPM) has emerged as a rival device to traditional photodetectors such as the photomultiplier tube (PMT). Over the past decade, SiPMs - also known as Multi-pixel photon counters (MPPCs) and Single-photon avalanche diodes (SPADs) - have found applications in fields ranging from, for example, high-energy physics and atmospheric lidar, to homeland security, biophotonics and nuclear medicine. Due to their wide-ranging applications, arrays of SiPMs are now available commercially as part of modular, turnkey readout systems. One such device - the MatrixSM-9 manufactured by SensL - has been designed specifically for use in high-resolution medical imaging systems required in, for example, state-of-the-art PET applications. We present preliminary coincidence measurements using the Matrix SM-9 system, coupled to a plastic scintillator, to image a $^{22}$Na positron source.
Speaker: Dr Jamie Sanchez-Fortun Stoker (University of Regina)
T2-7 Energy Frontier: Susy & Exotics II (PPD) / Frontière d'énergie: supersymétrie et particules exotiques II (PPD) CCIS 1-160
Convener: Prof. Dean Karlen (University of Victoria (CA))
Searches for Exotic Physics at ATLAS 30m
The most exciting discovery to come from the LHC would be that of something completely unexpected. To that end, the ATLAS experiment has been enthusiastically analyzing the 2012 LHC data recorded at a centre of mass energy of 8 TeV looking for any possible evidence of new physics. A variety of signatures has been considered, including heavy resonances, excesses above the Standard Model expectation in numerous channels, and particles that are long-lived, highly ionizing, or invisible. This talk will explore some of these searches and touch on the various interpretations, such as dark matter, extra dimensions, and other intriguing extensions to the Standard Model.
Speaker: Wendy Taylor (York University (CA))
A Search for Magnetic Monopoles and Exotic Long-lived Particles with Large Electric Charge at ATLAS 15m
A search for highly ionizing particles produced in 8 TeV proton-proton collisions at the LHC is performed with the ATLAS detector. A dedicated trigger significantly increases the sensitivity to signal candidates stopping in the electromagnetic calorimeter and allows particles with higher charges and lower energies to be probed. Production cross section limits are obtained for stable particles in the mass range $200-2500$ GeV for magnetic charges in the range of Dirac charge $0.5<|g|<2.0$ and for electric charges in the range $10<|z|<60$. Limits are presented for various pair-production scenarios, and model-independent limits are presented in fiducial regions of particle energy and pseudorapidity.
Speaker: Mr Gabriel David Palacino Caviedes (York University (CA))
The MoEDAL Experiment at the LHC - a New Light on the High Energy Frontier 30m
In 2010 the Canadian led MoEDAL experiment at the Large Hadron Collider (LHC) was unanimously approved by CERN's Research Board to start data taking in 2015. MoEDAL is a pioneering experiment designed to search for highly ionizing avatars of new physics such as magnetic monopoles or massive (pseudo-)stable charged particles. Its groundbreaking physics program defines over 30 scenarios that yield potentially revolutionary insights into such foundational questions as: are there extra dimensions or new symmetries; what is the mechanism for the generation of mass; does magnetic charge exist; what is the nature of dark matter; and, how did the big-bang develop. MoEDAL's purpose is to meet such far-reaching challenges at the frontier of the field. The innovative MoEDAL detector employs unconventional methodologies tuned to the prospect of discovery physics. The largely passive MoEDAL detector, deployed at Point 8 on the LHC ring, has a dual nature. First, it acts like a giant camera, comprised of nuclear track detectors - analyzed offline by ultra fast scanning microscopes - sensitive only to new physics. Second, it is uniquely able to trap the particle messengers of physics beyond the Standard Model for further study. MoEDAL's radiation environment is monitored by a state-of-the-art real-time TimePix pixel detector array. I shall also briefly discuss a new proposal to include a new active MoEDAL sub-detector to search for millicharged particles.
T2-8 Cosmic frontier: Dark matter II (PPD) / Frontière cosmique: matière sombre II (PPD) CCIS 1-140
Convener: Aksel Hallin (University of Alberta)
Status of the PICASSO and PICO experiments 30m
The PICO collaboration, a merger of COUPP and PICASSO experiments, searches for dark matter particles using superheated fluid detectors. These detectors can be operated within a set of conditions where they become insensitive to the typically dominant electron recoil background. Additionally, the acoustic measurement of the bubble nucleation makes possible the rejection of additional backgrounds such as alpha decays. This technique also allows for the target nuclei to be changed within the same experiment in order to confirm the properties of dark matter. This presentation reports on the PICASSO experiment that completed taking data in 2014, and the PICO-2L and PICO-60 experiments that were recently commissioned at the Snolab deep underground laboratory in Sudbury.
Speaker: Dr Guillaume Giroux (Queen's University)
DEAP-3600 trigger - the needle in the haystack 15m
DEAP-3600 is a dark matter experiment based at SNOLAB. It uses 3600 kg of liquid argon as a target, and searches for scintillation light from argon nuclei struck by weakly interacting massive particles (WIMPs). Argon-39 atoms also undergo beta decay, and the recoiling electrons likewise produce scintillation light. Beta decays are expected to occur at least $10^8$ times as frequently as WIMP interactions, and the DEAP-3600 trigger is critical in filtering out the vast majority of background events while keeping 100% of signal events. This talk will explain the very flexible trigger scheme that was developed, and will detail the commissioning and optimisation of the system.
Speaker: Ben Smith (TRIUMF)
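As an illustration of the kind of quantities such a trigger can act on, the sketch below reduces a summed, baseline-subtracted PMT waveform to a total charge and a prompt-light fraction, which in argon separates nuclear-recoil-like from beta-like events. This is a minimal sketch under assumptions (window lengths and thresholds are invented), not necessarily the DEAP-3600 trigger scheme described in the talk.

```python
import numpy as np

def trigger_summary(summed_waveform, dt_ns, prompt_window_ns=150.0):
    """Return (total_charge, prompt_fraction) for a baseline-subtracted summed waveform."""
    n_prompt = int(prompt_window_ns / dt_ns)
    total = summed_waveform.sum()
    prompt = summed_waveform[:n_prompt].sum()
    return total, (prompt / total if total > 0 else 0.0)

def accept(total_charge, prompt_fraction, min_charge=100.0, min_fprompt=0.5):
    """Hypothetical decision: keep large, prompt-dominated (nuclear-recoil-like) events."""
    return total_charge > min_charge and prompt_fraction > min_fprompt
```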
Early studies of detector optical calibrations for DEAP-3600 15m
The DEAP-3600 experiment is looking for dark matter WIMPs by detecting the scintillation light produced by a recoiling liquid argon nucleus. Using a 1 tonne fiducial volume, a WIMP-nucleon cross section sensitivity of $10^{-46}$ cm$^2$ is expected for 3 years of data taking for a 100 GeV WIMP. DEAP-3600 has been designed for a target background of 0.6 events in the WIMP region of interest in 3 years of data taking. In this talk I will present the status of the detector commissioning based on the optical calibration data collected by DEAP.
Speaker: Dr Berta Beltran (Univeristy of Alberta)
Single photon counting for the DEAP dark matter detector 15m
DEAP-3600, comprised of a 1 tonne fiducial mass of ultra-pure liquid argon, is designed to achieve world-leading sensitivity for spin-independent dark matter interactions. DEAP-3600 measures the time distribution of scintillation light from the de-excitation of argon dimers to select events. This measurement allows background events from Ar39 decays to be rejected at a high level. The performance of this analysis relies critically on DEAP's ability to identify pulses in the waveforms of the photomultiplier tubes and to accurately assess the number of photo-electrons contributing to each pulse. Photomultiplier tube effects, such as dark noise and afterpulsing, can degrade the measurement and weaken the level of background discrimination. An algorithm has been developed for finding pulses and identifying the number of photo-electrons.
Speaker: Thomas McElroy (University of Alberta)
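A minimal sketch of the two tasks described above, finding pulses and assigning a photo-electron count to each, is shown below. It uses a simple threshold-crossing search and a hypothetical single-photo-electron charge; it is an illustration under assumptions, not the DEAP algorithm, which must additionally correct for dark noise and afterpulsing.

```python
import numpy as np

def find_pulses(waveform, threshold, single_pe_charge):
    """Return (start_index, n_photoelectrons) for each above-threshold pulse.

    Assumes a baseline-subtracted waveform that starts and ends below threshold.
    """
    above = (waveform > threshold).astype(int)
    starts = np.flatnonzero(np.diff(above) == 1) + 1   # rising edges
    stops = np.flatnonzero(np.diff(above) == -1) + 1   # falling edges
    pulses = []
    for s, e in zip(starts, stops):
        charge = waveform[s:e].sum()
        pulses.append((s, int(round(charge / single_pe_charge))))  # crude PE estimate
    return pulses
```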
T2-9 Gender and Arts in Physics Teaching (CEWIP-DPE) / Genre et arts dans l'enseignement de la physique (CEFEP-DEP) CCIS L1-160
Convener: Dr Marina Milner-Bolotin (The University of British Columbia)
Model-Based Reasoning in Upper-division Lab Courses 30m
Modeling, which includes developing, testing, and refining models, is a central activity in physics. Well-known examples include everything from the Bohr model of the hydrogen atom to the Standard Model of particle physics. Modeling, while typically considered a theoretical activity, is most fully represented in the laboratory, where measurements of real phenomena intersect with theoretical models, leading to refinement of models and experimental apparatus. However, experimental physicists use models in complex ways and the process is often not made explicit in physics laboratory courses. We have developed a framework to describe the modeling process in physics laboratory activities. The framework attempts to abstract and simplify the complex modeling process undertaken by expert experimentalists. The framework can be applied to understand typical processes such as the modeling of measurement tools, modeling "black boxes," and signal processing. We demonstrate that the framework captures several important features of model-based reasoning in a way that can reveal common student difficulties in the lab and guide the development of curricula that emphasize modeling in the laboratory. We also use the framework to examine troubleshooting in the lab and guide students to effective methods and strategies.
Speaker: Heather Lewandowski (University of Colorado)
Fabulous Physicists from Around the World: The tale of ICWIP 2014 30m
A century ago it was only Marie Curie and a few other women who were part of the physics community, but in 2014, CAP and IUPAP (International Union of Pure and Applied Physics) brought over 200 women physicists from 52 countries to Waterloo for the 5th IUPAP International Conference on Women in Physics (ICWIP). ICWIP 2014 was held at Wilfrid Laurier University from August 5 to 8, 2014. This was the first time this conference was held in North America. It was a unique opportunity for Canadian scientists and researchers to share ideas and experiences with women and men from around the world. This is the story of this one-of-a-kind conference, including the scientific and gender-focused sessions and talks, the official resolutions approved by the delegates, the amazing personal stories shared, and of course, the closing night dance party.
Speaker: Shohini Ghose (Wilfrid Laurier University)
Gender gaps in a first-year physics lab 15m
It has been established that male students outperform female students on almost all commonly-used physics concept inventories. However, there is significant variation in the factors that contribute to this gender gap, as well as the direction in which they influence it. It is presently unknown if such a gender gap exists on the relatively new Concise Data Processing Assessment (CDPA). To get at estimates of the gap, we have measured performance on the CDPA at the pre-test and post-test level in the first-year physics lab at the University of British Columbia. We find a gender gap on the CDPA that persists from pre- to post-test and that is as big as, if not bigger than, similar reported gaps. That being said, we ultimately claim no evidence that female students are less capable of learning than their male peers, and we suggest caution when using gain measures alone to draw conclusions about differences in science classroom performance across gender.
Speaker: Dr James Day (University of British Columbia)
T3-1 Materials characterization: electrical, optical, thermal (DCMMP) / Caractérisation des matériaux: électrique, optique, thermique (DPMCM) NINT Taylor room
Convener: Wayne Hiebert (National Institute for Nanotechnology)
A pump-probe technique to measure the Curie temperature distribution of exchange-decoupled nanoscale ferromagnet ensembles 30m
Heat assisted magnetic recording (HAMR) has been recognized as a leading technology to increase the data storage density of hard disk drives[1]. Dispersions in the properties of the grains comprising the magnetic medium can lead to grain-to-grain Curie temperature variations, which drastically affect noise in the recorded magnetic transitions, limiting the data storage density capabilities in HAMR[2]. In spite of the need to investigate the origin of the Curie temperature distribution ($\sigma_{Tc}$) and establish means to control it, no approach to measure $\sigma_{Tc}$ has been available. We have recently presented a method to measure the switching temperature distribution of an ensemble of exchange-decoupled grains with perpendicular anisotropy subject to nanosecond heating pulses of varying intensity[3]. The rapid cooling rate ensures that the grain magnetization is not affected by thermal activation, so that the grains switch at Tc. A switching temperature distribution can then be directly interpreted as a measure of $\sigma_{Tc}$. Here we summarize the results of applying this measurement routine to a series of FePt HAMR media samples in which the degree of *L*$1_{0}$ chemical ordering and alloy composition is systematically varied. We also present modeling results based on the Landau-Lifshitz-Bloch formalism that validate the experimental approach and provide experimental bounds for its validity[4]. Measurements of $\sigma_{Tc}$ reveal a sizable dependence on these parameters, which we interpret in the context of the thermodynamic drive for the disordered-to-ordered crystalline phase transformation. Besides the ability to measure $\sigma_{Tc}$, which is of importance to engineer suitable HAMR media capable of high density magnetic recording, the presented technique can be applied to studies on the competition between Zeeman energy and thermal fluctuations that affect the switching probability upon cooling from Tc. [1] D. Weller, O. Mosendz, G.J. Parker, S. Pisana, and T.S. Santos, Phys. Status Solidi A 210, 1245 (2013). [2] H. Li and J.-G. Zhu, IEEE Tran. Magn. 49, 3568 (2013). [3] S. Pisana, S. Jain, J.W. Reiner, C.C. Poon, O. Hellwig, B.C. Stipe, Appl. Phys. Lett. 104, 162407 (2014). [4] S. Pisana, S. Jain, J.W. Reiner, O. Mosendz, G.J. Parker, M. Staffaroni, O. Hellwig, B.C. Stipe, IEEE Tran. Magn., in press.
Speaker: Prof. Simone Pisana (York University)
Optical properties and Fermiology near field-tuned quantum critical points 30m
In the so-called "heavy-fermion" metals, the hybridization of the conduction band with electrons localized in partially filled $f$ orbitals leads to the formation of heavy quasiparticles, for which the effective mass can be renormalized by a factor of 100 or more. However, the itinerant nature of these quasiparticles competes with a tendency to form more conventional, magnetically ordered states. These materials are therefore situated near a quantum critical point - a zero-temperature phase transition driven by the competition between kinetic energy and potential energy. This conflict between itinerancy and localization lies at the heart of all correlated electron materials, and makes heavy-fermion systems a model system for testing and understanding correlated quantum matter. Along with the formation of ultra-heavy quasiparticles, the scattering dynamics in heavy fermion compounds also undergo a strong renormalization. This critical slowing-down brings important electronic timescales, such as electronic scattering rates, down into the GHz range, where optical-type measurements and analyses can be carried out with microwaves. We have developed a dilution-refrigerator-based system for carrying out these measurements, and have used it to study a range of heavy fermion materials such as CeCoIn5, UBe13 and URu2Si2. Following an overview of the relevant physics, I will present a summary of our most striking results, illustrating the critical slowing down and mass enhancement that accompany a quantum phase transition.
Speaker: David Broun (Simon Fraser University)
Protein Biosensing with Fluorescent-Core Microcapillaries 15m
Whispering gallery modes (WGMs) are the electromagnetic resonances of dielectric spheres, cylinders, or rings. The WGM wavelengths can shift when the resonant field interacts with a local analyte fluid. This work demonstrates a fluorescent core microcapillary that utilizes WGMs for biosensing applications. This device consists of a glass microcapillary with a 50-μm-diameter inner channel. The channel wall is coated with a film composed of fluorescent silicon quantum dots (SiQDs). Because the SiQD film has a higher index of refraction than the glass capillary wall, it can support cylindrical WGMs. The QD fluorescence spectrum thus consists of a set of sharp peaks at the WGM resonance wavelengths. Part of the WGM field extends into the capillary channel where it samples the fluids pumped inside; thus the cavity resonance wavelengths in the QD fluorescence spectrum depend on the channel medium. The sensitivity of the WGM wavelengths varied between 3 and 24 nm per refractive index unit, depending on the SiQD film thickness. Biosensing with this device was then demonstrated using the standard biotin-avidin system. The QD film in the capillary channel was coated with alternating charged polyelectrolyte (PE) layers with exposed amines for attaching biotin. Biotin in turn has a high specific affinity for the neutravidin protein. These biotinylated PE layers were found to capture neutravidin, yielding a detection limit of 6 nM and an equilibrium association constant of 1.1 x 10$^{6}$ M$^{-1}$ for biotin-neutravidin in this sensor. Several "blank" runs indicate minimal nonspecific binding. Attractive features of this device include a high degree of physical robustness and minimal equipment requirements (e.g., a tuneable laser is not needed to scan the cavity modes). Future work will aim to increase the so-far moderate detection limit, potentially by improving the device sensitivity via finer control over the SiQD film thickness.
Speaker: Mr Stephen Lane (University of Alberta)
Ultrafast modulation of photoluminescence in semiconductors by intense terahertz pulses 15m
Terahertz (THz) pulse science is a rapidly developing field, and has been applied extensively in the characterization of ultrafast dynamics in semiconductors and nanostructures. The recent development of intense THz pulse sources in lithium niobate (LN), however, allows the dynamics of transient states to be directly manipulated by the large electric field of the THz pulse itself. We have used an ultrafast laser source to generate intense THz pulses in LN with picosecond duration and peak electric fields up to 300 kV/cm. Here we study how these intense THz pulses affect the ultrafast radiative recombination dynamics of photoexcited carriers in semiconductors and semiconductor nanostructures. In GaAs, we observe a sharp transition between THz-pulse-induced quenching and enhancement of photoluminescence (PL) with increasing photoexcited carrier densities. We present spectrally-resolved PL measurements of this transition, which reveal a competition between enhancement at shorter wavelengths versus quenching at longer wavelengths. The dynamics of this interplay between THz pulse enhanced and quenched PL are presented as a function of excitation fluence and time-delay between the excitation and THz pulses. Possible mechanisms that include THz-induced carrier heating and scattering processes are discussed. The effects of intense THz pulses on the PL dynamics in polycrystalline GaAs, and quantum well structures will also be explored. The ability to control material properties with intense THz pulses may lead to novel optoelectronic devices with the ability to modulate light emission on picosecond timescales. This work was supported by NSERC, CFI, ASRIP, AITF, iCiNano, and nanoBridge.
Speaker: Mr David Purschke (University of Alberta)
T3-10 Special session to honour Dr. Akira Hirose II (DPP) / Session spéciale en l'honneur du Dr Akira Hirose II (DPP) CCIS L2-190
From Plasma to Complex Plasma 30m
Earlier research on plasma turbulence and later developments on complex plasmas are discussed. Study of the nonlinear evolution of instabilities in a collisionless plasma, especially the ion acoustic instability and the Buneman instability, revealed the role of plasma collective modes in the heating of plasma particles. It was essential for plasma waves to grow in time, resulting in the heating of the plasma itself through effective interaction of plasma particles and plasma waves. Theoretical study revealed the time constant for the heating to occur in a plasma. When plasma instabilities are well developed and spread wide in frequency range, the plasma turbulence causes broadening of the wave-particle resonance region. Earlier plasma experiments tried to eliminate any impurities from the vacuum chamber to keep the experimental conditions as close as possible to the idealized theoretical assumptions. However, the onset conditions of plasma instabilities are found to be modified in the presence of dust particles, micron in size and negatively charged. The presence of dust particles is found to modify the effective temperature of electrons, resulting in the suppression of Landau damping. Furthermore, the dusty plasma, now known as a complex plasma because of its nature as a complex system composed of plasma particles and dust particles, is found to be rich in novel fundamental physics, including a strongly coupled state and the anomalous nature of electromagnetic propagation in the medium. Dust particles placed in a sheath interact with each other in the presence of ion flow and form a line along the flow. The paired chain was interpreted as pair formation by the exchange of phonons. The dust particles can be floated at the sheath edge, producing a one- or two-dimensional lattice structure, which provides a platform for the study of the low-dimensional behavior of Coulomb systems. Some current topics in complex plasmas are discussed.
Speaker: Prof. Osamu Ishihara (Chubu University/Yokohama National University)
Fluctuations and Transport in Hall devices with ExB drift 30m
Devices with a stationary, externally applied electric field perpendicular to a moderate-amplitude magnetic field B₀ are now a common example of magnetically controlled plasmas. High-interest applications involve Penning-type plasma sources, magnetrons for plasma processing, magnetic filters for ion separation, and electric space propulsion devices such as Hall thrusters. One common characteristic of these numerous applications is a plasma parameter regime in which electrons are magnetized, so the electron Larmor radius is much smaller than the characteristic length scale of the devices, while ions have a large Larmor radius, do not feel the magnetic field, and thus can be easily controlled by the electric field. The latter is the basis of various useful applications for ion extraction, separation and acceleration. Similar conditions also occur in some ionospheric plasmas as well as in some laboratory experiments on magnetic reconnection. This talk reviews the physics basis of such Hall plasma discharges. Application of the external electric field perpendicular to the magnetic field, as well as the gradients of plasma density, temperature and magnetic field naturally present in such discharges, results in plasma fluctuations and instabilities that make the plasma turbulent and electron transport anomalous. The specific conditions of such plasmas preclude the existence of standard drift waves; however, other modes, the so-called anti-drift modes, become possible and unstable. The open magnetic field lines (terminated by the wall) also result in new instabilities, the so-called sheath impedance modes. This talk provides a physics-based description of the various modes and instabilities pertinent to such Hall plasmas and of the resulting anomalous electron transport due to these modes.
Speaker: Dr Andrei Smolyakov (University of Saskatchewan)
Adaptive Matrix Transpose Algorithms for Distributed Multicore Processors 15m
The matrix transpose is an essential primitive of high-performance parallel computing. In plasma physics and fluid dynamics, a matrix transpose is used to localize the computation of the multidimensional Fast Fourier transform, the engine that powers the pseudospectral collocation method. An adaptive parallel matrix transpose algorithm optimized for distributed multicore architectures running in a hybrid OpenMP/MPI configuration is presented. Significant boosts in speed are observed relative to the distributed transpose used in the state-of-the-art adaptive FFTW library. In some cases, a hybrid configuration allows one to reduce communication costs by reducing the number of MPI nodes, and thereby increasing message sizes. This also allows for a more slab-like than pencil-like domain decomposition for multidimensional Fast Fourier Transforms, reducing the cost of, or even eliminating the need for, a second distributed transpose. Nonblocking all-to-all transfers enable user computation and communication to be overlapped. We apply adaptive matrix transposition algorithms on hybrid architectures to the parallelization of implicitly dealiased pseudospectral convolutions used to simulate turbulent flow. Implicit dealiasing outperforms conventional zero padding by decoupling the data and temporary work arrays. Parallelized versions of our implicit dealiasing algorithms for hybrid architectures are publicly available in the open-source library FFTW++.
Speaker: John Bowman (University of Alberta)
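The central communication step of a distributed transpose can be sketched as a single all-to-all exchange of square tiles. The sketch below uses mpi4py for readability and assumes an N x N matrix block-distributed by rows with N divisible by the number of ranks; the production code described above is C++ (FFTW++) with hybrid OpenMP/MPI, adaptive block sizes, and nonblocking transfers, none of which is reproduced here.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 8 * size                 # global matrix dimension, assumed divisible by size
rows = N // size             # each rank owns a contiguous block of rows
local = np.random.rand(rows, N)

# Slice the local row block into one (rows x rows) tile per destination rank.
send = np.ascontiguousarray(local.reshape(rows, size, rows).transpose(1, 0, 2))
recv = np.empty_like(send)
comm.Alltoall(send, recv)    # every rank exchanges one tile with every other rank

# Reassemble: the tile from rank p holds A[p-block, my-block]; transpose it locally.
transposed = recv.transpose(2, 0, 1).reshape(rows, N)  # this rank's rows of the global transpose
```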
Dense Plasma Focus for Short-Lived Isotope Activation 15m
Short-lived radioisotopes (SLRs) are used for medical applications including positron emission tomography (PET). The required activity of N-13 for PET is about 4 GBq for a myocardial blood perfusion assessment. The dense plasma focus (DPF) has been considered as a low-cost method for producing SLRs as an alternative to conventional cyclotron facilities. A low energy dense plasma focus has been built and optimized at the University of Saskatchewan to study the feasibility of SLR production, in particular of N-13, using the energetic deuteron ion beams produced in a dense plasma focus. X-ray detectors and a Faraday cup have been used to characterize the DPF properties, particularly the ion beam energy based on time-of-flight measurements. The preliminary results have shown generation of ions with energies up to 2 MeV, well exceeding the threshold energy for N-13 production (328 keV). Electrical signals have been used for circuit analyses in order to interpret the anomalous plasma resistance and plasma inductance during the pinch phase. Simulation of N-13 activation using a deuteron beam has been carried out.
Speaker: Mr R. A. Behbahani (University of Saskatchewan)
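The time-of-flight energy estimate mentioned above can be sketched non-relativistically as E = (1/2) m (L/t)^2. The drift length and flight time below are illustrative assumptions, chosen only to show that MeV-scale deuteron energies correspond to tens-of-nanosecond flight times over tens of centimetres.

```python
# Deuteron kinetic energy from time-of-flight (non-relativistic approximation).
M_D_KG = 3.3436e-27      # deuteron mass, kg
J_PER_KEV = 1.602e-16    # joules per keV

def deuteron_energy_kev(flight_path_m, time_of_flight_s):
    v = flight_path_m / time_of_flight_s
    return 0.5 * M_D_KG * v ** 2 / J_PER_KEV

# Example: a hypothetical 0.5 m drift to the Faraday cup and a 36 ns flight time (~2 MeV).
print(f"{deuteron_energy_kev(0.5, 36e-9):.0f} keV")
```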
T3-2 Quantum Computation and Communication (DTP-DCMMP-DAMOPC) / Communication et calcul quantique (DPT-DPMCM-DPAMPC) CCIS L2-200
Improving Physical Models of Qubit Decoherence and Readout 30m
Qubit coherence measurements are now sufficiently accurate that they can be used to perform 'spectroscopy' of noise due to a complex environment. Measuring not only the decay time, but also the form of decay as a function of some external parameter (e.g. temperature) can determine the nature of the dominant decoherence source. I will describe how temperature-dependent measurements of qubit decoherence time and form of decay can distinguish between a number of different possible sources of environmental charge fluctuations (including tunneling and cotunneling with a continuum band, as well as one- and two-phonon absorption processes). These results can be used to identify and suppress dominant charge-noise dephasing mechanisms in semiconductor nanostructures. I will also briefly discuss some new tricks to enhance the fidelity of generic qubit readouts by understanding the physical dynamics of these systems.
Speaker: Prof. Bill Coish (McGill University)
NanoQEY Quantum Key Distribution Satellite 15m
NanoQEY (Nano Quantum EncrYption satellite) is a demonstration satellite which will show the feasibility of implementing Quantum Key Distribution (QKD) between two ground stations on earth using a satellite trusted-node approach. One of the main objectives of NanoQEY is to eliminate the necessity for a fine pointing system, which will reduce the cost and planning time of the satellite. The system will also be simplified relative to many proposed designs, owing to the smaller space and mass allowances. Several of the QKD satellites that have been proposed are also designed for a downlink scenario, whereas NanoQEY will be implemented in an uplink scenario. Since the satellite is only used for photon collection and data processing, it is not necessary to have many of the complicated systems on board which would be required for a downlink. The main purpose of NanoQEY is to construct a payload which will be operational for a QKD demonstration and fit onto a nano-satellite in terms of mass and power budgets. However, because of the fine pointing simplification of the satellite, the ground stations will need to compensate for the lack of targeting on the satellite. These ground stations will have to have very fine pointing and tracking capabilities. We have undertaken a study to determine the feasibility of a nano-satellite project to implement QKD for world-wide QKD demonstrations, and the requirements on a ground station to achieve these goals.
Speaker: Christopher Pugh (University of Waterloo)
Towards a Quantum Non-Demolition Measurement for Photonic Qubits 15m
Many applications of quantum information processing benefit from, or even require, the possibility to detect the number of photons in a given signal pulse without destroying either the photons or the encoded quantum state. We propose and show first steps towards the implementation of such a Quantum Non-Demolition (QND) measurement for time-bin qubits. To implement this measurement, we first store a 'probe' pulse in a cryogenically cooled Tm:LiNbO3 waveguide using an Atomic Frequency Comb (AFC) quantum memory protocol [1]. We then send a 'signal' pulse comprised of two temporal modes off-resonantly with the AFC through a previously prepared transparency window. The off-resonant interaction between the propagating signal and the thulium ions, onto which the probe pulse was mapped, results in the atomic state acquiring a phase shift. This phase shift is imprinted onto the recalled probe pulse and can be determined using an interferometric measurement. The magnitude of this phase shift depends on the signal pulse's energy and its detuning with respect to the probe pulse. Hence, knowing the phase shift, we can determine the intensity or the number of photons in the signal pulse. [1] E. Saglamyurek et al, … Nature 2011
Speaker: Chetan Deshmukh (University of Calgary)
Evanescent Waveguide Microscopies for Bio-Application 30m
Two new evanescent field microscopy technologies based on glass slab waveguides with permanent coupling gratings are introduced: waveguide evanescent field fluorescence (WEFF) microscopy and waveguide evanescent field scattering (WEFS) microscopy. The technologies are briefly described and the experimental setup based on a conventional inverted microscope is introduced and compared to existing technologies like TIR and TIRF. The advantages over the existing technologies are clearly addressed. For each technology one application in cell biology is shown. With multimode WEFF microscopy, taking at least two images with two different waveguide modes, it is possible to determine the fluorescence dye location above the waveguide surface. Therefore 2D dye distance maps or 3D contour plots can be calculated for the samples. As an example, the bending of the plasma membranes of cells between focal adhesions and focal contacts to the waveguide surface are investigated. WEFS microscopy which works as a label-free microscopy is used to analyse bacterial biofilm formation: from a parent cell to micro-colonies. In addition experiments on bacterial UV sterilization and its consequences on biofilm formation are shown.
Speaker: Prof. Silvia Mittler (University of Western Ontario)
T3-3 Ground-based / in situ observations and studies of space environment III (DASP) / Observations et études de l'environnement spatial, sur terre et in situ III (DPAE) CAB 243
Convener: Prof. Richard Marchand (University of Alberta)
Anisotropic ion temperatures and ion flows adjacent to auroral precipitating electrons 30m
Large ion temperature anisotropies (temperature perpendicular to the magnetic field larger than parallel to the magnetic field) in narrow regions of enhanced ion flow have been identified by the Electric Field Instruments on board the Swarm satellites as a persistent feature of the high latitude midnight-sector auroral zone. These flow channels typically span less than 100 km latitudinally with ion flows of several kilometres per second. The largest observed temperature anisotropy ratios exceed the values predicted by currently used cross sections in theories of collisional heating in strong flows by a factor of 2. Coincident optical measurements from ground-based all-sky imagers indicate that these flow channels are immediately adjacent to regions of precipitating electrons, likely in the vicinity of the ionospheric projection of the open-closed boundary. We will be presenting ion velocity, ion temperature, and magnetic field measurements in and around these regions of enhanced ion flow from December 2013. The orbit of the Swarm satellites during this time results in measurements near the Harang discontinuity. The Electric Field Instruments on board the Swarm satellites are ideally suited for analysis of ion temperature anisotropy. The pearls-on-a-string configuration held by the Swarm satellites during these first weeks of the Swarm mission provides a unique opportunity to distinguish temporal from spatial variation in this dynamic region.
Speaker: William Archer (University of Calgary)
Generation, dynamics, and decay of a polar cap patch 15m
The polar cap ionosphere, an important part of the solar wind-magnetosphere-ionosphere system, is formed by ionization of the neutral atmosphere by solar radiation and particle precipitation under internal transportation and chemical processes. The polar ionosphere is primarily driven by magnetospheric convection and neutral circulation, and undergoes structuring over a wide range of temporal and spatial scale sizes. This structuring is due to the interplay of mechanical forces, electrodynamics, and ionization chemistry. The most prominent and frequent structure of the polar cap ionosphere is the polar patch, which is defined as a region of enhanced F layer ionization distinguishable from the background electron density. Several theories, observations, and hypotheses on the generation and dynamics of these patches are available in the literature. However, a coherent understanding of patch formation is still lacking, mainly due to the lack of high spatial and temporal resolution observations. This is also compounded by our attention to more dramatic patch events. This presentation will focus on a less-dramatic patch event using observations from the Canadian High Arctic Ionospheric Network (CHAIN), in order to provide a coherent view of formation, dynamics, and decay of polar patches.
Speaker: Dr Thayyil Jayachandran (University of New Brunswick)
Temporal and Spatial Evolution of Poynting Flux Measured with Swarm 15m
We present case studies of ionospheric Poynting flux using the instruments onboard the three ESA Swarm spacecraft. The three Swarm satellites each carry an Electric Field Instrument (EFI) that can be used to measure ion drift velocities. During the first months of the mission the satellites were in nearly circular, polar orbits at an altitude of 490 kilometers and were approximately 1000 kilometers from each other. During this time, they followed one after another in a pearls-on-a-string arrangement, separated by about one minute in time. This relatively close spatial formation allows comparisons to be made between electric field measurements on each satellite, revealing spatial and temporal structure. In this project we measure the ionospheric Poynting flux using each Swarm satellite. Cross-correlation functions are calculated between measurements on each satellite and are used to determine the temporal and spatial scales of observed features. Acknowledgements: The EFIs were developed and built by a consortium that includes the University of Calgary, the Swedish Institute for Space Physics in Uppsala, and COM DEV Canada. The Swarm EFI project is managed and funded by the European Space Agency with additional funding from the Canadian Space Agency.
Speaker: Mr Matthew Patrick (University of Calgary)
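A minimal sketch of the two quantities involved is given below: the Poynting flux computed from perturbation electric and magnetic fields, and an inter-satellite time lag taken from the peak of a cross-correlation function. Array shapes, units, and the simple unweighted correlation are assumptions for illustration, not the Swarm processing chain.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # vacuum permeability, H/m

def poynting_flux(E, dB):
    """S = E x dB / mu0 for (N, 3) arrays of perturbation fields in V/m and T; returns W/m^2."""
    return np.cross(E, dB) / MU0

def peak_lag_seconds(a, b, dt):
    """Time offset at the cross-correlation peak of two equal-length, mean-removed series
    (sign follows np.correlate's convention)."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(a, b, mode="full")
    return (corr.argmax() - (len(b) - 1)) * dt
```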
Small Scale Structuring in Electron Precipitation as seen by the ePOP Suprathermal Electron Imager 15m
Auroral arcs are known to be caused by electrons with keV energies interacting with the neutral atmosphere. However, there is much more to the aurora than auroral arcs. There is a wide range of phenomena that are grouped together as "diffuse aurora". Suprathermal electron precipitation (having energies between 1 eV and a keV) often contributes to the diffuse aurora. Much less is known about suprathermal electron precipitation than about the higher energy precipitation. The ePOP Suprathermal Electron Imager (SEI), a high-time-resolution CCD-based detector capable of imaging electron velocity distributions, is currently being used to survey this type of precipitation. We will present observations of dispersed electron bursts, where a burst of electron precipitation is dispersed over the distance from source to detector. We will also present observations of "inverse" electron dispersion, in which a low energy population of electrons increases in energy over time. This has not been reported in the literature before. We present a simple model that could explain this phenomenon, and results from a simple simulation of it.
Speaker: Taylor Cameron (University of Calgary)
Cusp Ion Upflows Observed by e-POP SEI and RISR-N: Initial Results 15m
Low-energy ion upflows associated with ion heating processes in the cusp/cleft and polar cap regions are investigated using conjunctions of the Enhanced Polar Outflow Probe (e-POP) satellite and the Resolute Bay Incoherent Scatter Radar (RISR-N) in June 2014 and February 2015. e-POP encountered the cusp/cleft ion fountain at 10-14 MLT and around 1000 km altitude during these conjunction experiments. Such intermediate-altitude observations of ion upflow have been recorded only rarely by previous satellite missions and ground-based radars. The Suprathermal Electron Imager (SEI) onboard e-POP measured two-dimensional ion distribution functions with a frame rate of 100 images per second, from which high-precision energy and angle information of entering ions can be inferred. Field-aligned ion bulk flow velocities were estimated from the angle information with a resolution of the order of 25 m/s. The second moments of the ion distribution provide us with information on ion temperature, which was found to increase sharply in the region of cusp ion upflows in most cases. Also, ion composition information is available from ePOP's ion mass spectrometer (IRM). The ion upflow velocity reaches 2.5 km/s in the first identified event on June 1st, 2014, during which the IRM indicated the dominant species as O+ (80%) and H+ (20%). We will compare the in situ measurements with RISR-N observations in order to further our understanding of the three-dimensional structure of the cusp ion fountain.
Speaker: Yangyang Shen (University of Calgary)
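For readers unfamiliar with moment analysis, the sketch below shows schematically how a bulk flow velocity (first moment) and a temperature (second moment) follow from a one-dimensional field-aligned velocity distribution. The Maxwellian test distribution and all parameter values are assumptions for illustration only, not e-POP data or analysis code.

```python
import numpy as np

m_O = 16 * 1.6605e-27            # O+ mass [kg]
kB  = 1.380649e-23               # Boltzmann constant [J/K]

# Synthetic 1D field-aligned distribution on a velocity grid [m/s]
v  = np.linspace(-20e3, 40e3, 2001)
dv = v[1] - v[0]
u_true, T_true = 2.5e3, 3000.0   # assumed 2.5 km/s upflow and 3000 K
f = np.exp(-m_O * (v - u_true) ** 2 / (2 * kB * T_true))   # unnormalized Maxwellian

n = (f * dv).sum()                                   # zeroth moment: density (arb.)
u = (v * f * dv).sum() / n                           # first moment: bulk flow
T = m_O * ((v - u) ** 2 * f * dv).sum() / (n * kB)   # second moment: temperature

print(f"bulk flow = {u / 1e3:.2f} km/s, temperature = {T:.0f} K")
```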
T3-4 Cosmic Frontier: Dark Matter III (PPD)/ Frontière cosmique: matière sombre III (PPD) CCIS 1-140
Convener: Thomas Gregoire (Carleton University)
Status of the SuperCDMS and European Cryogenic Dark Matter experiments 30m
The SuperCDMS collaboration operates cryogenic germanium detectors to search for particle dark matter (WIMPs), so far at the Soudan Underground Laboratory in Minnesota, US. The EURECA collaboration gathers EDELWEISS, a European collaboration also operating cryogenic germanium detectors at the Laboratoire Souterrain de Modane, and CRESST, which operates cryogenic scintillating detectors (CaWO4) at the Laboratori Nazionali del Gran Sasso, Italy, both with the same goal of detecting primarily low-mass WIMPs. The most recent progress of these searches will be described, together with the planned common future at SNOLAB.
Speaker: Dr Gilles Gerbier (Queens University)
New Pulse Processing Algorithm for SuperCDMS 15m
SuperCDMS searches for dark matter in the form of Weakly Interacting Massive Particles (WIMPs) with cryogenic germanium detectors. WIMPs interacting with atomic nuclei deposit energy in the form of lattice vibrations (phonons) which propagate through the cylindrical Ge single crystal (75 mm diameter, 25 mm high) until they are absorbed by the phonon sensors covering part of the flat surfaces of the crystal. A fraction of the phonons are absorbed when they first reach the surface; a large fraction, however, are reflected numerous times, leading to a homogeneous distribution in the crystal. This leads to a pulse shape with an initial sharp pulse whose amplitude depends on the distance between the interaction site and the individual sensor, followed by a slow pulse which is identical for all sensors. Traditionally, CDMS has used an optimal filter algorithm to extract energy information, but the different pulse shapes lead to a noticeable position dependence of the reconstructed energy. A modification of this algorithm de-weights the initial part, leading to a considerably improved energy resolution. A combination of both methods has been used to determine energy and position information. We developed a new algorithm which accounts for the pulse shape by fitting two pulse templates simultaneously to each pulse, one for the position-dependent sharp peak and one for the position-independent slow pulse. This algorithm has the potential to improve the energy and position resolution while reducing the overall processing time. We will present a first study of the performance of this algorithm.
Speaker: Mr Ryan Underwood (Queens University)
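A minimal sketch of the two-template idea described above, assuming each recorded pulse can be modelled as a linear combination a*fast(t) + b*slow(t) whose amplitudes are fitted simultaneously by least squares. The template shapes, time units, and noise level below are invented for illustration and are not the SuperCDMS templates or code.

```python
import numpy as np

def fit_two_templates(pulse, fast, slow):
    """Solve pulse ≈ a*fast + b*slow in the least-squares sense.
    Returns (a, b): the position-dependent and position-independent amplitudes."""
    A = np.column_stack([fast, slow])
    coeffs, *_ = np.linalg.lstsq(A, pulse, rcond=None)
    return coeffs

# Illustrative templates: a sharp initial peak and a slow, long-lived component
t = np.linspace(0, 5, 500)                        # time axis (assumed units)
fast = np.exp(-t / 0.05) - np.exp(-t / 0.01)      # sharp, position-dependent shape
slow = np.exp(-t / 1.5)  - np.exp(-t / 0.2)       # slow, position-independent shape

true_a, true_b = 0.7, 1.3
pulse = true_a * fast + true_b * slow + 0.02 * np.random.randn(t.size)
a, b = fit_two_templates(pulse, fast, slow)
print(f"fitted amplitudes: fast = {a:.2f}, slow = {b:.2f}")
```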
Alpha particle backgrounds from the neck of the DEAP-3600 dark matter detector 15m
The DEAP-3600 dark matter detector at SNOLAB will search for scattering of weakly interacting massive particles from a 3600 kg liquid argon target. The liquid argon is held in a spherical vessel made from acrylic, with the highest standards of purity for both bulk acrylic and removal of surface activities. At the top of the vessel there is a neck opening to the cooling system, and alpha particle decays in this region can potentially introduce a background to the dark matter measurement. The steps to eliminate these alpha backgrounds will be presented, including details on the detector construction, radioactivity simulations, and analysis methods for measuring alpha backgrounds.
Speaker: Dr James Bueno (University of Alberta)
Optimizing the wavelength-shifter thickness for alpha suppression in the DEAP-3600 detector 15m
The DEAP-3600 experiment is a spherical dark matter detector searching for WIMPs by detecting scintillation light in a 3600 kg mass of liquid argon. Before the ultraviolet scintillation light passes through the optically clear acrylic vessel and light guides to the surrounding photomultiplier tubes, it must pass through a wavelength-shifting layer of tetraphenyl butadiene (TPB). Trace amounts of polonium-210 will contaminate the inner surface of the acrylic vessel as well as the TPB layer, and alpha particles resulting from its decay are expected to contribute background events to the WIMP signal. This talk will present the dependence of this background alpha signal on the thickness of the TPB layer, as well as the expected background events per 3 years of data taking at the optimized TPB thickness.
Speaker: Derek Cranshaw (Queen's University)
T3-5 Study of Neutrino Oscillations (PPD-DTP-DNP) / Études des oscillations de neutrinos (PPD-DPT-DPN) CAB 235
Convener: Zoltan Gecse (University of British Columbia (CA))
Status of Long-Baseline Neutrino Experiments 30m
The current generation of long-baseline neutrino oscillation experiments employs an off-axis $\nu_\mu$ (or $\bar{\nu}_\mu$) beam produced by the decay of pions created when a proton beam strikes a target. The beam is monitored at detector facilities near the production point before travelling hundreds of kilometres to a far detector. Aiming the beam centre slightly away from the far detector provides the off-axis configuration which selects a narrow energy band beam tuned to maximize the oscillation probability. The status of these experiments will be presented. The Tokai to Kamioka (T2K) experiment consists of a $\nu_\mu$ beam produced at the Japan Proton Accelerator Research Complex (J-PARC) in Tokai on the east coast of Japan, which is monitored by a suite of detectors before travelling 295 km to the Super-Kamiokande (SK) water Cerenkov detector. T2K has been in operation since 2010 and has been continually releasing new and exciting neutrino oscillation results. The most recent precision $\nu_\mu \to \nu_e$ appearance and $\nu_\mu$ disappearance oscillation measurements as well as initial results running the experiment in the $\bar{\nu}_\mu$ beam configuration will be presented. The NO$\nu\hspace{-0.11ex}$A experiment, utilizing the NuMI beam and a near detector at Fermilab and a far detector at a distance of 810 km, began operation in 2014. The current status of NO$\nu\hspace{-0.11ex}$A will also be shown.
Speaker: Dr Nicholas Hastings (University of Regina)
Electron Neutrino Cross Section Measurements at the T2K Off-Axis Near Detector 15m
T2K is a long baseline neutrino oscillation experiment in Japan that targets the measurement of the mixing angle between the first and the third neutrino mass eigenstates ($\theta_{13}$) by looking for the appearance of electron neutrinos ($\nu_e$) in a beam of muon neutrinos ($\nu_\mu$), as well as a precision measurement of the mass difference between the second and the third neutrino mass eigenstates ($\Delta m^2_{32}$) and their mixing angle ($\theta_{23}$). T2K can also probe anti-neutrino oscillation by looking for the appearance of anti-electron neutrinos ($\overline{\nu_e}$) in a beam of anti-muon neutrinos ($\overline{\nu_{\mu}}$). The experiment uses two detectors: a near detector at 280 m from the neutrino production target (in Tokai), and the far detector at 295 km, Super-Kamiokande (SK). The ND280 is a complex detector that includes a Pi0 Detector (P0D), two Fine Grained Detectors (FGDs), three Time Projection Chambers (TPCs), a Segmented Muon Range Detector (SMRD) and Electromagnetic Calorimeters (ECALs). The electron neutrino sample at ND280 is used for cross-section measurements, the search for sterile neutrinos, and for the measurement of the $\nu_e$ component of the total neutrino flux. Obtaining a clean electron neutrino sample is complicated by the large muon neutrino background, and backgrounds due to external gamma rays. This talk will present the results of current electron neutrino cross section measurements at the T2K near detector. The status of work on anti-electron neutrino selection, and research on improving the selection of electrons, positrons, proton background, and background gamma samples using multivariate analysis techniques, will be presented.
Speaker: Fady Shaker (University of Winnipeg)
Constraining Oscillation Analysis Inputs at the T2K Near Detector 15m
The T2K long-baseline neutrino oscillation experiment is composed of a near detector at 280m and a far detector at Super-Kamiokande located 295 km from the neutrino beam in Tokai. The main oscillation analyses are performed using fits to the data collected at the far detector. These analyses depend on our ability to predict the event rates and energy spectra at the far detector, which in turn depend on cross-section and flux uncertainties. We use inputs from external data, such as MiniBooNE and MINER$\nu$A, as well as beam flux measurements to generate prior estimates of these uncertainties. T2K's near detector then provides a direct internal constraint on the convolution of the flux and cross-section, significantly reducing the uncertainties. This talk will discuss how data from the near detector on T2K is used to constrain the oscillation analysis inputs.
Speaker: Christine Nielsen (University of British Columbia)
Deep Core and PINGU - Studying Neutrinos in the Ice 15m
IceCube and its low energy extension DeepCore have been deployed at the South Pole and taking data since early 2010. Originally designed to search for high energy (on the order of PeV) events, IceCube has recently published the detection of the highest energy events ever recorded. At the same time, enhancements to the detector have been installed to focus on lower energy events. With a neutrino energy threshold of about 10 GeV, DeepCore allows IceCube to access a rich variety of physics including searching indirectly for WIMP dark matter and studying atmospheric neutrinos. A proposed new in-fill array, named PINGU, will continue to lower the threshold for neutrino detection. This will in turn provide the potential to study a great deal of new physics, including the determination of the neutrino mass ordering. This talk will discuss the PINGU detector and the new physics it makes available with a focus on the determination of the ordering.
Speaker: Ken Clark (University of Toronto)
Experimental test of the unitarity of the leptonic mixing (PMNS) matrix 15m
In the past decade, remarkable progress has been made in neutrino oscillation measurements, determining the lepton mixing (PMNS) angles, except for the CP-violating phase delta_CP. The next step is to determine this remaining phase and then to over-constrain the PMNS matrix to test its unitarity. Testing the unitarity is an effective way to search for physics beyond the standard model, as has been demonstrated in the quark sector. For example, the existence of right-handed neutrinos or sterile neutrinos would violate unitarity, as would new interactions beyond the standard model. In this talk, I will describe the potential path towards testing the unitarity of the PMNS matrix. In particular, CP violation in the baseline length of solar neutrino oscillation provides key information, to which the existing Super-Kamiokande data may already start to be sensitive. I will conclude with a prospect of the future experiments to make a stringent test of the unitarity of the PMNS matrix, showing that the accelerator and atmospheric neutrino measurements by Hyper-Kamiokande would take the central role.
Speaker: Dr Akira Konaka (TRIUMF)
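For reference, the unitarity being tested is the statement that the rows and columns of the PMNS matrix $U$ are orthonormal (standard notation, not taken from the abstract above): $$\sum_{i=1}^{3} U_{\alpha i} U^{*}_{\beta i} = \delta_{\alpha\beta}, \qquad \sum_{\alpha = e,\mu,\tau} U_{\alpha i} U^{*}_{\alpha j} = \delta_{ij}.$$ A measured deviation, for example $\sum_i |U_{e i}|^2 \neq 1$, would signal additional states such as sterile neutrinos or new interactions mixing into the three active flavours.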
T3-6 Nuclear Structure III (DNP) / Structures nucléaires III (DPN) CCIS L1-140
Convener: Jens Dilling (triumf/UBC)
Beta-decay from $^{47}$K to $^{47}$Ca with GRIFFIN 15m
Recent developments in many-body calculation methods have extended the application of *ab initio* interactions to medium-mass nuclei near closed shells. Detailed nuclear data from these isotopes are necessary to evaluate the many-body calculation methods and to test the predictive capacity of the interactions. $^{47}$Ca and $^{47}$K are each one nucleon removed from the doubly-magic nucleus $^{48}$Ca. The beta-decay from $^{47}$K to $^{47}$Ca has a reported half-life of 17.5 s and a $Q(\beta^-)$ value of 6643 keV. Transfer reactions from $^{48}$Ca have identified excited states of $^{47}$Ca throughout the available range of beta-decay $Q$-values, but the two published measurements of $^{47}$Ca populated by the beta-decay of $^{47}$K have only identified four states directly populated by beta decay. High-statistics beta-decay studies using modern high-efficiency, high-granularity detection systems can provide detailed information on level energies, beta-decay and gamma-ray branching ratios, as well as spin/parity assignments and transition mixing ratios through gamma-ray angular correlations. A recent experiment at TRIUMF-ISAC used the GRIFFIN spectrometer to investigate the levels populated by beta decay in more detail. A beam of surface-ionized $^{47}$K was provided by the TRIUMF-ISAC facility and implanted onto a mylar tape at the focus of the GRIFFIN spectrometer, where it decayed to $^{47}$Ca. The early implementation of the GRIFFIN spectrometer used in this experiment consisted of 15 close-packed HPGe clovers with 19% absolute full-photopeak efficiency for 1 MeV gamma rays. Beta detection was provided by ten of the plastic scintillators of SCEPTAR and internal electron conversion spectroscopy was possible with the five lithium-drifted silicon detectors of PACES. The intensity of the beam and the efficiency of the GRIFFIN spectrometer allow for the detection of gamma-ray transitions with small branching ratios, enabling the list of states populated by beta decay to be extended from previous publications. In addition, angular correlations between cascading gamma rays provide information about the spins and parities of states that are not currently included in the beta-decay level scheme. An overview of the experimental apparatus as well as a discussion of the results from preliminary analysis will be presented.
Speaker: Dr Jenna Smith (TRIUMF)
Investigating the Structure of $^{46}$Ca through the $\beta^{-}$ Decay of $^{46}$K Utilizing the New GRIFFIN Spectrometer 15m
Due to its very low natural abundance of 0.004%, the structure of the magic nucleus $^{46}$Ca has not been studied in great detail compared to its even-even Ca neighbors. The calcium region is currently a new frontier for modern shell model calculations based on NN and 3N forces [1,2], so detailed experimental data from these nuclei is necessary for a comprehensive understanding of the region. Excited states in $^{46}$Ca have been identified previously by various reaction mechanisms, most notably from $(p,p')$ and $(p,t)$ reactions [3]. The low-lying structure has been investigated by two previous beta-decay experiments, with large discrepancies present between the reported decay schemes [4,5]. A recent beta-decay of $^{46}$K performed at TRIUMF's ISAC yield station obtained a new $T_{1/2}$ value of 96.303(79) s and identified 33 new gamma rays attributed to $^{46}$Ca [6]. However, since the ISAC yield station is not equipped with gamma-gamma coincidence capabilities the observed gamma rays were not placed into the decay scheme of $^{46}$Ca. In this experiment, using an early-implementation of the new GRIFFIN spectrometer located at TRIUMF-ISAC, a 9$\mu$A 500 MeV proton beam was impinged onto a uranium carbide target to induce spallation and fission reactions. Radioactive species were surface ionized and a high-resolution mass separator was used to select singly-charged $A$ = 46 ions only. The beam consisting almost entirely of $^{46}$K was implanted onto a Mylar tape at the center of the GRIFFIN array. The $^{46}$K source then populated the excited states of $^{46}$Ca through $\beta^{-}$ decay. The resulting gamma-rays were detected with the new GRIFFIN spectrometer, consisting of 15 HPGe clover detectors. The array also included SCEPTAR, an array of ten plastic scintillators mounted down-stream for $\beta$ particle detection, and PACES, an array of Si(Li) detectors used for the detection of conversion-electrons. The high-statistics data set obtained from this experiment makes it possible to extend the current level scheme, including the assignment of new transitions and levels. The spin and parity of excited states in $^{46}$Ca will be determined through a gamma-ray angular correlation analysis. Preliminary results from this experiment will be discussed. [1] J.D. Holt et al., Phys. Rev. C 90, 024312 (2014). [2] A.Ekstrom et al., Phys. Rev. Lett. 110. 192502 (2013). [3] J.Blachot, Nuclear Data Sheets 111, 717 (2010). [4] B. Parsa and G. Gordon, Phys. Lett. 2, 269 (1966). [5] M.Yagi et al., Laboratory Nucl. Sci., Tohoku Univ. 1, 60 (1968). [6] P.Kunz et al., Rev. Sci. Instrum. 85(5), 053305 (2014).
Speaker: Ms Jennifer Pore (Simon Fraser University)
The first GRIFFIN Experiment: An investigation of the $s$-process yields for $^{116}$Cd 15m
In adopted models for the $s$-process, it is assumed that helium shell flashes give rise to two neutron bursts at two different thermal energies $(kT\sim10$ keV and $kT\sim25$ keV). The contribution to the isotopic abundance of $^{116}$Cd from the higher temperature neutron bursts is calculated assuming thermal equilibrium between the ground state and the long-lived isomeric state of $^{115}$Cd. However, it is unknown if the thermal equilibrium between these states is present at the low temperature of the first burst. The presence of thermal equilibrium at low temperatures would significantly decrease the calculated $s$-process yields of $^{116}$Cd. To answer this question, we are searching for gateway levels at slightly higher excitation energy than the isomer in $^{115}$Cd that could be populated from the isomeric state via $(\gamma,\gamma^\prime)$ reactions within stars. Currently, the lowest potential gateway level at an excitation energy of 394 keV has only been observed to decay directly to the isomeric state in $^{115}$Cd. Nonetheless, the observation of this state decaying to the previously known 361 keV level via a weak 33 keV transition would provide a $\gamma$-ray cascade which would bypass the isomeric state. Thus, the observation of this decay would be a direct signature for the presence of thermal equilibrium during the lower temperature neutron burst. However, the direct measurement of a 33 keV transition is difficult due to the large low-energy $\gamma$-ray backgrounds observed in $\beta$-decay experiments. We therefore require high-efficiency $\gamma$-ray detection to indirectly observe this transition via $\gamma$-$\gamma$ coincidences of $\gamma$-rays cascading through this transition. In November 2014, the high-efficiency GRIFFIN HPGe spectrometer was commissioned at TRIUMF's Isotope Separator and Accelerator (ISAC). GRIFFIN is a state-of-the-art array consisting of 16 HPGe clovers, and boasts a large $\gamma$-ray efficiency of roughly 17% at 1 MeV. GRIFFIN also hosts a large suite of auxiliary detectors such as SCEPTAR, which is an array of 20 plastic scintillators designed for $\beta$-particle detection. In this first experiment, beams of $^{115}$Ag and $^{115}$Ag$^{\text{m}}$ were delivered to the GRIFFIN spectrometer equipped with SCEPTAR in order to search for these very-low-intensity $\gamma$-$\gamma$ coincidences following the $\beta$ decay of $^{115}$Ag into $^{115}$Cd. In this talk, results from this first GRIFFIN experiment will be presented.
Speaker: Mr Ryan Dunlop (University of Guelph)
Gamma-Gamma Angular Correlation Measurements With GRIFFIN 15m
When an excited nuclear state emits successive $\gamma$-rays in a $\gamma-\gamma$ cascade, $X^{**} \rightarrow X^{*} + \gamma_{1} \rightarrow X + \gamma_{2}$, an anisotropy is found in the spatial distribution of $\gamma_{2}$ with respect to $\gamma_{1}$. By defining the direction of $\gamma_{1}$ to be the z-axis, the intermediate level, $X^{*}$, in general will have an uneven distribution of m-states. This causes an anisotropy in the angular correlation of the second $\gamma$-ray with respect to the first. The correlations depend on the sequence of spin-parity values for the nuclear states involved as well as the multipolarities and mixing ratios of the emitted $\gamma$-rays. These angular correlations are expressed by the $W(\theta)$ function: \begin{center} $W(\theta) = 1 + \sum\limits_{k = even}^{2L} a_{k}P_{k}(\cos\theta)$ \end{center} where $L$ is the lowest multipole order of the emitted $\gamma$-rays and the $a_{k}$ are coefficients for all of the $P_{k}(\cos\theta)$ Legendre polynomials. Angular correlations can be used for the assignment of spins and parities to nuclear states and thus provide a powerful means to elucidate the structure of nuclei away from stability through $\beta-\gamma-\gamma$ coincidence measurements. In order to explore the sensitivity of the new 16 clover-detector GRIFFIN $\gamma$-ray spectrometer at TRIUMF-ISAC to such $\gamma-\gamma$ angular correlations, and to optimize its performance for these measurements, we have studied a well known $4^{+}\rightarrow 2^{+}\rightarrow 0^{+}$ $\gamma-\gamma$ cascade from $^{60}$Co decay through both experimental measurements and Geant4 simulations. Results of these investigations will be presented in this talk.
Speaker: Mr Andrew MacLean (University of Guelph)
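As a concrete example of the $W(\theta)$ expression above, the sketch below evaluates the correlation for a pure $4^{+}\rightarrow 2^{+}\rightarrow 0^{+}$ cascade such as that of $^{60}$Co, using the textbook coefficients $a_{2}\approx 0.102$ and $a_{4}\approx 0.009$; these values are quoted from standard angular-correlation tables, not from the talk, and the code is illustrative only.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def W(theta, a2=0.102, a4=0.009):
    """W(theta) = 1 + a2*P2(cos theta) + a4*P4(cos theta)
    for a pure 4+ -> 2+ -> 0+ gamma-gamma cascade (e.g. 60Co)."""
    c = np.cos(theta)
    # Coefficient list is ordered by Legendre degree: P0, P1, P2, P3, P4
    return legval(c, [1.0, 0.0, a2, 0.0, a4])

for deg in (90, 135, 180):
    print(f"W({deg} deg) = {W(np.radians(deg)):.3f}")
# The ratio W(180)/W(90) gives the familiar ~1.17 anisotropy for this cascade.
```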
High-Precision Half-Life Measurements for the Superallowed $\beta^+$ emitter $^{10}$C 15m
High precision measurements of superallowed Fermi beta transitions between 0$^+$ isobaric analogue states allow for stringent tests of the electroweak interaction described by the Standard Model. In particular, these transitions provide an experimental probe of the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, the Conserved-Vector-Current (CVC) hypothesis, as well as set limits on the existence of scalar currents in the weak interaction. Half-life measurements for the lightest of the superallowed emitters are of particular interest as it is the low-$Z$ superallowed decays that are most sensitive to a possible scalar current contribution. The half-life of $^{10}$C can be measured by directly counting the $\beta$ particles or measuring the $\gamma$-ray activity following $\beta$ decay. Previous results for the $^{10}$C half-life measured via these two methods differ at the 1.5$\sigma$ level, prompting simultaneous and independent measurements of the $^{10}$C half-life using both techniques. Since $^{10}$C is the lightest nucleus for which superallowed $\beta$ decay is possible, a high precision measurement of its $ft$ value is essential for obtaining an upper limit on the presence of scalar currents in the weak interaction. Measurements of the $^{10}$C half-life via both gamma-ray photo-peak and direct beta counting were performed at TRIUMF's Isotope Separator and Accelerator (ISAC) facility using the 8$\pi$ spectrometer and a $4\pi$ gas proportional $\beta$ counter at the ISAC General Purpose Station. The 8$\pi$ $\gamma$-ray spectrometer consists of 20 High Purity Germanium (HPGe) detectors as well as the Zero Degree $\beta$ detector, a fast plastic scintillator located at the end of the beam line within the 8$\pi$. This presentation will highlight the importance of these measurements, and preliminary half-life results for $^{10}$C will be presented.
Speaker: Michelle Dunlop (University of Guelph)
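A schematic sketch of how a half-life is extracted from decay-counting data by fitting an exponential plus a constant background. The simulated rates, background level, and "true" half-life below are purely illustrative and are not the ISAC analysis code or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, A0, half_life, bkg):
    """Exponential decay curve with a constant background."""
    return A0 * np.exp(-np.log(2) * t / half_life) + bkg

# Simulated count-rate data with an assumed half-life of 19.3 s (illustrative)
rng = np.random.default_rng(0)
t = np.arange(0, 200, 1.0)                      # time bins [s]
true = decay(t, A0=5000.0, half_life=19.3, bkg=20.0)
counts = rng.poisson(true).astype(float)

popt, pcov = curve_fit(decay, t, counts, p0=[4000, 15, 10],
                       sigma=np.sqrt(np.clip(counts, 1, None)))
print(f"fitted half-life = {popt[1]:.2f} ± {np.sqrt(pcov[1, 1]):.2f} s")
```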
The First Radioactive Beam at GRIFFIN: $^{26}$Na for Decay Spectroscopy 15m | CommonCrawl |
Turán numbers for a 4-uniform hypergraph
Karen Gunderson
Fri, Nov 6, 2020
For any $r\geq 2$, an $r$-uniform hypergraph $\mathcal{H}$, and integer $n$, the \emph{Tur\'{a}n number} for $\mathcal{H}$ is the maximum number of hyperedges in any $r$-uniform hypergraph on $n$ vertices containing no copy of $\mathcal{H}$. While the Tur\'{a}n numbers of graphs are well-understood and exact Tur\'{a}n numbers are known for some classes of graphs, few exact results are known for the cases $r \geq 3$. I will present a construction, using quadratic residues, for an infinite family of hypergraphs having no copy of the $4$-uniform hypergraph on $5$ vertices with $3$ hyperedges, with the maximum number of hyperedges subject to this condition. I will also describe a connection between this construction and a `switching' operation on tournaments, with applications to finding new bounds on Tur\'{a}n numbers for other small hypergraphs.
Inversions for reduced words
Sami Assaf
Thu, Nov 8, 2018
Discrete Math Seminar
The number of inversions of a permutation is an important statistic that arises in many contexts, including as the minimum number of simple transpositions needed to express the permutation and, equivalently, as the rank function for weak Bruhat order on the symmetric group. In this talk, I'll describe an analogous statistic on the reduced expressions for a given permutation that turns the Coxeter graph for a permutation into a ranked poset with unique maximal element. This statistic simplifies greatly when shifting our paradigm from reduced expressions to balanced tableaux, and I'll use this simplification to give an elementary proof computing the diameter of the Coxeter graph for the long permutation. This talk is elementary and assumes no background other than passing familiarity with the symmetric group.
Sato-Tate groups and automorphy for nongeneric genus 2 curves
Andrew Booker
PIMS, University of Calgary
PIMS CRG in Explicit Methods for Abelian Varieties
I will describe recent joint work with Jeroen Sijsling, Drew Sutherland, John Voight and Dan Yasaki on genus 2 curves over Q. Our work has three primary goals: (1) produce an extensive table of genus 2 curves and their associated invariants; (2) explain the various Sato-Tate groups that arise in terms of functoriality; (3) prove at least one example of modularity for each nongeneric Sato-Tate group. Goal (1) was achieved in arXiv:1602.03715, with the data accessible in the LMFDB, while goals (2) and (3) are in progress.
Recent Results on Bootstrap Percolation
Béla Bollobás
Fri, Feb 15, 2013 to Sat, Feb 16, 2013
Bootstrap percolation, one of the simplest cellular automata, can be viewed as an oversimplified model of the spread of an infection on a graph. In the past three decades, much work has been done on bootstrap percolation on finite grids of a given dimension in which the initially infected set A is obtained by selecting its vertices at random, with the same probability p, independently of all other choices. The focus has been on the critical probability, the value of p at which the probability of percolation (eventual full infection) is 1/2.
The first half of my talk will be a review of some of the fundamental results concerning critical probabilities proved by Aizenman, Lebowitz, Schonman, Cerf, Cirillo, Manzo, Holroyd and others, and by Balogh, Morris, Duminil-Copin and myself. The second half will be about the very recent results I have obtained with Holmgren, Smith, Uzzell and Balister on the time a random initial set takes to percolate.
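For readers unfamiliar with the model, a minimal simulation sketch of 2-neighbour bootstrap percolation on an n x n grid follows; the grid size, infection probability, and trial count are arbitrary illustrative choices, not values from the talk.

```python
import numpy as np

def bootstrap_percolates(n=50, p=0.06, r=2, rng=None):
    """r-neighbour bootstrap percolation on an n x n grid.
    Each site starts infected independently with probability p; an uninfected
    site becomes infected once at least r of its 4 neighbours are infected.
    Returns True if the whole grid is eventually infected."""
    rng = rng or np.random.default_rng()
    infected = rng.random((n, n)) < p
    while True:
        padded = np.pad(infected, 1)                 # pad with uninfected border
        nbrs = (padded[:-2, 1:-1].astype(int) +      # neighbour counts (up, down,
                padded[2:, 1:-1] +                   #  left, right)
                padded[1:-1, :-2] +
                padded[1:-1, 2:])
        new = infected | (nbrs >= r)
        if np.array_equal(new, infected):
            return infected.all()
        infected = new

# Rough Monte Carlo estimate of the percolation probability at a fixed p
trials = 200
print(sum(bootstrap_percolates(p=0.06) for _ in range(trials)) / trials)
```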
Alan Turing and Enigma
John R. Ferris
Tue, Mar 27, 2012 to Wed, Mar 28, 2012
Alan Turing Year
Central to Alan Turing's posthumous reputation is his work with British codebreaking during the Second World War. This relationship is not well understood, largely because it stands on the intersection of two technical fields, mathematics and cryptology, the second of which also has been shrouded by secrecy. This lecture will assess this relationship from an historical cryptological perspective. It treats the mathematization and mechanization of cryptology between 1920-50 as international phenomena. It assesses Turing's role in one important phase of this process, British work at Bletchley Park in developing cryptanalytical machines for use against Enigma in 1940-41. It also focuses on his interest in and work with cryptographic machines between 1942-46, and concludes that work with them served as a seed bed for the development of his thinking about computers.
Turing 2012 - Calgary
This talk is part of a series celebrating the Alan Turing Centenary in Calgary. The following mathtube videos are part of this series
Alan Turing and the Decision Problem, Richard Zach.
Turing's Real Machine, Michael R. Williams.
Alan Turing and Enigma, John R. Ferris.
Information Theory and Cryptography
Turing's Real Machines
Michael R. Williams
While Turing is best known for his abstract concept of a "Turing Machine," he did design (but not build) several other machines - particularly ones involved with code breaking and early computers. While Turing was a fine mathematician, he could not be trusted to actually try and construct the machines he designed - he would almost always break some delicate piece of equipment if he tried to do anything practical.
The early code-breaking machines (known as "bombes" - the Polish word for bomb, because of their loud ticking noise) were not designed by Turing but he had a hand in several later machines known as "Robinsons" and eventually the Colossus machines.
After the War he worked on an electronic computer design for the National Physical Laboratory - an innovative design unlike the other computing machines being considered at the time. He left the NPL before the machine was operational but made other contributions to early computers such as those being constructed at Manchester University.
This talk will describe some of his ideas behind these machines.
This talk is part of a series celebrating The Alan Turing Centenary in Calgary. The following mathtube videos are also part of this series
Alan Turing and the Decision Problem
Richard Zach
Tue, Jan 24, 2012 to Wed, Jan 25, 2012
Many scientific questions are considered solved to the best possible degree when we have a method for computing a solution. This is especially true in mathematics and those areas of science in which phenomena can be described mathematically: one only has to think of the methods of symbolic algebra in order to solve equations, or laws of physics which allow one to calculate unknown quantities from known measurements. The crowning achievement of mathematics would thus be a systematic way to compute the solution to any mathematical problem. The hope that this was possible was perhaps first articulated by the 18th century mathematician-philosopher G. W. Leibniz. Advances in the foundations of mathematics in the early 20th century made it possible in the 1920s to first formulate the question of whether there is such a systematic way to find a solution to every mathematical problem. This became known as the decision problem, and it was considered a major open problem in the 1920s and 1930s. Alan Turing solved it in his first, groundbreaking paper "On computable numbers" (1936). In order to show that there cannot be a systematic computational procedure that solves every mathematical question, Turing had to provide a convincing analysis of what a computational procedure is. His abstract, mathematical model of computability is that of a Turing Machine. He showed that no Turing machine, and hence no computational procedure at all, could solve the Entscheidungsproblem.
Cohomology of Quasiperiodic Tilings
Franz Gaehler
Thu, Aug 1, 2002
University of Victoria, Victoria, Canada
Aperiodic Order, Dynamical Systems, Operator Algebras and Topology
• Quasiperiodic tilings
• The hull of a tiling
• Approximation the hull by CW-spaces
• Application to canonical projection tilings
• Relation to matching rules
• Towards an interpretation
On the Chromatic Number of Graphs and Set Systems
András Hajnal
Wed, Sep 1, 2004
University of Calgary, Calgary, Canada
PIMS Distinguished Chair Lectures
During this series of lectures, we are talking about infinite graphs and set systems, so this will be infinite combinatorics. This subject was initiated by Paul Erdös in the late 1940's.
I will try to show in these lectures how it becomes an important part of modern set theory, first serving as a test case for modern tools, but also influencing their developments.
In the first few of the lectures, I will pretend that I am talking about a joint work of István Juhász, Saharon Shelah and myself [23].
The actual highly technical result of this paper that appeared in the Fundamenta in 2000 will only be stated in the second or the third part of these lectures. Meanwhile I will introduce the main concepts and state, and sometimes prove, simple results about them.
Exponential Sums Over Multiplicative Groups in Fields of Prime Order and Related Combinatorial Problems
Sergei Konyagin
Thu, Apr 1, 2004
University of British Columbia, Vancouver, Canada
Let $p$ be a prime. The main subject of my talks is the estimation of exponential sums over an arbitrary subgroup $G$ of the multiplicative group ${\mathbb Z}^*_p$: $$S(a, G) = \sum_{x\in G} \exp(2\pi iax/p), a \in \mathbb Z_p.$$ These sums have numerous applications in additive problems modulo $p$, pseudo-random generators, coding theory, theory of algebraic curves and other problems.
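A small illustrative sketch of evaluating $S(a, G)$ numerically for a subgroup $G$ of ${\mathbb Z}^*_p$ follows; the prime and subgroup order are arbitrary examples, and the helper names are invented for illustration.

```python
import cmath

def subgroup(p, d):
    """Return the unique subgroup of order d of (Z/pZ)*, assuming d divides p-1.
    It consists of the ((p-1)//d)-th power residues."""
    k = (p - 1) // d
    return sorted({pow(x, k, p) for x in range(1, p)})

def S(a, G, p):
    """Exponential sum S(a, G) = sum_{x in G} exp(2*pi*i*a*x/p)."""
    return sum(cmath.exp(2j * cmath.pi * a * x / p) for x in G)

p, d = 101, 25                    # subgroup of order 25 inside Z_101^*
G = subgroup(p, d)
# |G| and the largest value of |S(a, G)| over nonzero residues a
print(len(G), max(abs(S(a, G, p)) for a in range(1, p)))
```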
Animal Suffering and the Darwinian Problem of Evil
John R. Schneider
John Schneider explores the problem that animal suffering, caused by the inherent nature of Darwinian evolution, poses to belief in theism. Examining the aesthetic aspects of this moral problem, Schneider focuses on the three prevailing approaches to it: that the Fall caused animal suffering in nature (Lapsarian Theodicy), that Darwinian evolution was the only way for God to create an acceptably good and valuable world (Only-Way Theodicy), and that evolution is the source of major, God-justifying beauty (Aesthetic Theodicy). He also uses canonical texts and doctrines from Judaism and Christianity - notably the book of Job, and the doctrines of the incarnation, atonement, and resurrection - to build on insights taken from the non-lapsarian alternative approaches. Schneider thus constructs an original, God-justifying account of God and the evolutionary suffering of animals. His book enables readers to see that the Darwinian configuration of animal suffering unveiled by scientists is not as implausible on Christian theism as commonly supposed.
Reciprocal Space Mapping of Epitaxial Materials Using Position-Sensitive X-ray Detection
S. R. Lee, B. L. Doyle, T. J. Drummond, J. W. Medernach, P. Schneider
Journal: Advances in X-ray Analysis / Volume 38 / 1994
Reciprocal space mapping can be efficiently carried out using a position-sensitive x-ray detector (PSD) coupled to a traditional double-axis diffractometer. The PSD offers parallel measurement of the total scattering angle of all diffracted x-rays during a single rocking-curve scan. As a result, a two-dimensional reciprocal space map can be made in a very short time similar to that of a one-dimensional rocking-curve scan. Fast, efficient reciprocal space mapping offers numerous routine advantages to the x-ray diffraction analyst. Some of these advantages are the explicit differentiation of lattice strain from crystal orientation effects in strain-relaxed heteroepitaxial layers; the nondestructive characterization of the size, shape and orientation of nanocrystalline domains in ordered-alloy epilayers; and the ability to measure the average size and shape of voids in porous epilayers. Here, the PSD-based diffractometer is described, and specific examples clearly illustrating the advantages of complete reciprocal space analysis are presented.
Probing the high-redshift universe with SPICA: Toward the epoch of reionisation and beyond
Exploring Astronomical Evolution with SPICA
E. Egami, S. Gallerani, R. Schneider, A. Pallottini, L. Vallini, E. Sobacchi, A. Ferrara, S. Bianchi, M. Bocchio, S. Marassi, L. Armus, L. Spinoglio, A. W. Blain, M. Bradford, D. L. Clements, H. Dannerbauer, J. A. Fernández-Ontiveros, E. González-Alfonso, M. J. Griffin, C. Gruppioni, H. Kaneda, K. Kohno, S. C. Madden, H. Matsuhara, F. Najarro, T. Nakagawa, S. Oliver, K. Omukai, T. Onaka, C. Pearson, I. Perez-Fournon, P. G. Pérez-González, D. Schaerer, D. Scott, S. Serjeant, J. D. Smith, F. F. S. van der Tak, T. Wada, H. Yajima
Published online by Cambridge University Press: 26 December 2018, e048
With the recent discovery of a dozen dusty star-forming galaxies and around 30 quasars at z > 5 that are hyper-luminous in the infrared (μL_IR > 10¹³ L⊙, where μ is a lensing magnification factor), the possibility has opened up for SPICA, the proposed ESA M5 mid-/far-infrared mission, to extend its spectroscopic studies toward the epoch of reionisation and beyond. In this paper, we examine the feasibility and scientific potential of such observations with SPICA's far-infrared spectrometer SAFARI, which will probe a spectral range (35–230 μm) that will be unexplored by ALMA and JWST. Our simulations show that SAFARI is capable of delivering good-quality spectra for hyper-luminous infrared galaxies at z = 5 − 10, allowing us to sample spectral features in the rest-frame mid-infrared and to investigate a host of key scientific issues, such as the relative importance of star formation versus AGN, the hardness of the radiation field, the level of chemical enrichment, and the properties of the molecular gas. From a broader perspective, SAFARI offers the potential to open up a new frontier in the study of the early Universe, providing access to uniquely powerful spectral features for probing first-generation objects, such as the key cooling lines of low-metallicity or metal-free forming galaxies (fine-structure and H2 lines) and emission features of solid compounds freshly synthesised by Population III supernovae. Ultimately, SAFARI's ability to explore the high-redshift Universe will be determined by the availability of sufficiently bright targets (whether intrinsically luminous or gravitationally lensed). With its launch expected around 2030, SPICA is ideally positioned to take full advantage of upcoming wide-field surveys such as LSST, SKA, Euclid, and WFIRST, which are likely to provide extraordinary targets for SAFARI.
2138 Susceptibility to social influence is associated with alcohol self-administration and subjective alcohol effects
Alyssa Schneider, Bethany Stangl, Elgin R. Yalin, Jodi M. Gilman, Vijay Ramchandani
Journal: Journal of Clinical and Translational Science / Volume 2 / Issue S1 / June 2018
Published online by Cambridge University Press: 21 November 2018, pp. 47-48
OBJECTIVES/SPECIFIC AIMS: Peer groups are one of the strongest determinants of alcohol use and misuse. Furthermore, social influence plays a significant role in alcohol use across the lifespan. One of the factors that most consistently predicts successful treatment outcomes for alcohol use disorders is one's ability to change their social network. However, the concept of social influence as defined by suggestibility or susceptibility to social influence has not yet been studied as it relates to drinking behavior and acute subjective response to alcohol. Our objective was to examine the relationship between suggestibility and alcohol consumption and responses, using an intravenous alcohol self-administration (IV-ASA) paradigm in social drinkers. METHODS/STUDY POPULATION: Healthy, social drinkers (n=20) completed a human laboratory session in which they underwent the IV-ASA paradigm. This consisted of an initial 25-minute priming phase, where participants were prompted to push a button to receive individually standardized IV alcohol infusions, followed by a 125-minute phase during which they could push the button for additional infusions. IV-ASA measures included the peak and average breath alcohol concentration (BrAC) and number of button presses. Subjective responses were assessed using the Drug Effects Questionnaire (DEQ) and Alcohol Urge Questionnaire (AUQ) collected serially during the session. Participants completed the Multidimensional Iowa Suggestibility Scale (MISS) to assess suggestibility. The Alcohol Effects Questionnaire (AEFQ) was used to assess alcohol expectancies and the Timeline Followback questionnaire measured recent drinking history. RESULTS/ANTICIPATED RESULTS: After controlling for drinking history, greater suggestibility significantly predicted greater average BrAC, greater peak BrAC, and a greater number of button presses (p=0.03, p=0.02, p=0.04, respectively) during the early open bar phase. Suggestibility significantly predicted subjective alcohol effects following the priming phase which included "Feel," "Want," "High," and "Intoxicated" and was trending for "Like" (p=0.02, p=0.03, p=0.01, p=0.03, p=0.054, respectively) as well as AUQ (p=0.03). After controlling for drinking history, suggestibility significantly predicted "Feel," "Like," "High," and "Intoxicated" peak scores during the open bar phase (p=0.03, p=0.009, p=0.03, p=0.03, respectively). There was no association between suggestibility and "Want More" alcohol. Suggestibility was positively associated with three positive expectancies (global positive; p=0.04, social expressiveness; p=0.005, relaxation; p=0.03), and one negative expectancy (cognitive and physical impairment; p=0.02). DISCUSSION/SIGNIFICANCE OF IMPACT: These results indicate that social drinkers that were more suggestible had higher alcohol consumption, greater acute subjective response to alcohol, and more positive alcohol expectancies. As such, susceptibility to social influence may be an important determinant of alcohol consumption, and may provide insight into harmful drinking behavior such as binge drinking. Future analyses should examine the impact of suggestibility on alcohol-related phenotypes across the spectrum of drinking from social to binge and heavy drinking patterns.
P.002 Exosomal miR-204-5 and miR-632 in CSF are candidate biomarkers for frontotemporal dementia: a GENFI study
R Schneider, P McKeever, T Kim, C Graff, J van Swieten, A Karydas, A Boxer, H Rosen, B Miller, R Laforce, D Galimberti, M Masellis, B Borroni, Z Zhang, L Zinman, JD Rohrer, MC Tartaglia, J Robertson
Background: To determine whether exosomal microRNAs (miRNAs) in CSF of patients with FTD can serve as diagnostic biomarkers, we assessed miRNA expression in the Genetic FTD Initiative (GENFI) cohort and in sporadic FTD. Methods: GENFI participants were either carriers of a pathogenic mutation or at risk of carrying a mutation because a first-degree relative was a symptomatic mutation carrier. Exosomes were isolated from CSF of 23 pre-symptomatic and 15 symptomatic mutation carriers, and 11 healthy non-mutation carriers. Expression of miRNAs was measured using qPCR arrays. MiRNAs differentially expressed in symptomatic compared to pre-symptomatic mutation carriers were evaluated in 17 patients with sporadic FTD, 13 patients with sporadic Alzheimer's disease (AD), and 10 healthy controls (HCs). Results: In the GENFI cohort, miR-204-5p and miR-632 were significantly decreased in symptomatic compared to pre-symptomatic mutation carriers. Decrease of miR-204-5p and miR-632 revealed receiver operating characteristics with an area of 0.89 [90% CI: 0.79-0.98] and 0.81 [90% CI: 0.68-0.93], and when combined an area of 0.93 [90% CI: 0.87-0.99]. In sporadic FTD, only miR-632 was significantly decreased compared to sporadic AD and HCs. Decrease of miR-632 revealed an area of 0.89 [90% CI: 0.80-0.98]. Conclusions: Exosomal miR-204-5p and miR-632 have potential as diagnostic biomarkers for genetic FTD and miR-632 also for sporadic FTD.
Galaxy Evolution Studies with the SPace IR Telescope for Cosmology and Astrophysics (SPICA): The Power of IR Spectroscopy
L. Spinoglio, A. Alonso-Herrero, L. Armus, M. Baes, J. Bernard-Salas, S. Bianchi, M. Bocchio, A. Bolatto, C. Bradford, J. Braine, F. J. Carrera, L. Ciesla, D. L. Clements, H. Dannerbauer, Y. Doi, A. Efstathiou, E. Egami, J. A. Fernández-Ontiveros, A. Ferrara, J. Fischer, A. Franceschini, S. Gallerani, M. Giard, E. González-Alfonso, C. Gruppioni, P. Guillard, E. Hatziminaoglou, M. Imanishi, D. Ishihara, N. Isobe, H. Kaneda, M. Kawada, K. Kohno, J. Kwon, S. Madden, M. A. Malkan, S. Marassi, H. Matsuhara, M. Matsuura, G. Miniutti, K. Nagamine, T. Nagao, F. Najarro, T. Nakagawa, T. Onaka, S. Oyabu, A. Pallottini, L. Piro, F. Pozzi, G. Rodighiero, P. Roelfsema, I. Sakon, P. Santini, D. Schaerer, R. Schneider, D. Scott, S. Serjeant, H. Shibai, J.-D. T. Smith, E. Sobacchi, E. Sturm, T. Suzuki, L. Vallini, F. van der Tak, C. Vignali, T. Yamada, T. Wada, L. Wang
IR spectroscopy in the range 12–230 μm with the SPace IR telescope for Cosmology and Astrophysics (SPICA) will reveal the physical processes governing the formation and evolution of galaxies and black holes through cosmic time, bridging the gap between the James Webb Space Telescope and the upcoming Extremely Large Telescopes at shorter wavelengths and the Atacama Large Millimeter Array at longer wavelengths. The SPICA, with its 2.5-m telescope actively cooled to below 8 K, will obtain the first spectroscopic determination, in the mid-IR rest-frame, of both the star-formation rate and black hole accretion rate histories of galaxies, reaching lookback times of 12 Gyr, for large statistically significant samples. Densities, temperatures, radiation fields, and gas-phase metallicities will be measured in dust-obscured galaxies and active galactic nuclei, sampling a large range in mass and luminosity, from faint local dwarf galaxies to luminous quasars in the distant Universe. Active galactic nuclei and starburst feedback and feeding mechanisms in distant galaxies will be uncovered through detailed measurements of molecular and atomic line profiles. The SPICA's large-area deep spectrophotometric surveys will provide mid-IR spectra and continuum fluxes for unbiased samples of tens of thousands of galaxies, out to redshifts of z ~ 6.
Behavioral considerations for effective time-varying electricity prices
IAN SCHNEIDER, CASS R. SUNSTEIN
Journal: Behavioural Public Policy / Volume 1 / Issue 2 / November 2017
Wholesale prices for electricity vary significantly due to high fluctuations and low elasticity of short-run demand. End-use customers have typically paid flat retail rates for their electricity consumption, and time-varying prices (TVPs) have been proposed to help reduce peak consumption and lower the overall cost of servicing demand. Unfortunately, the general practice is an opt-in system: a default rule in favor of TVPs would be far better. A behaviorally informed analysis also shows that when transaction costs and decision biases are taken into account, the most cost-reflective policies are not necessarily the most efficient. On reasonable assumptions, real-time prices can result in less peak conservation of manually controlled devices than time-of-use or critical-peak prices. For that reason, the trade-offs between engaging automated and manually controlled loads must be carefully considered in time-varying rate design. The rate type and accompanying program details should be designed with the behavioral biases of consumers in mind, while minimizing price distortions for automated devices.
Glacier change and climate forcing in recent decades at Gran Campo Nevado, southernmost Patagonia
M. Möller, C. Schneider, R. Kilian
Journal: Annals of Glaciology / Volume 46 / 2007
Digital terrain models of the southern Chilean ice cap Gran Campo Nevado reflecting the terrain situations of the years 1984 and 2000 were compared in order to obtain the volumetric glacier changes that had occurred during this period. The result shows a slightly negative mean glacier change of -3.80 m. The outlet glacier tongues show a massive thinning, whereas the centre of the ice cap is characterized by a moderate thickening. Thus a distinct altitudinal variability of the glacier change is noticed. Hypothetically this could be explained by the combined effects of increased precipitation and increased mean annual air temperature. Both to verify and to quantify this pattern of climatic change, the mean glacier change as well as its hypsometric variation are compared with the results of a degree-day model. The observed volumetric glacier change is traced back to possible climate forcing and can be linked to an underlying climate change that must be comparable with the effects of a precipitation offset of at least 7–8% and a temperature offset of around 0.3 K compared to the steady-state conditions in the period 1984–2000.
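For context, a degree-day model of the kind mentioned relates surface melt to the sum of positive daily air temperatures through a degree-day factor. The sketch below uses an illustrative factor and synthetic temperatures, not the parameter values or code of this study.

```python
import numpy as np

def degree_day_melt(daily_temp_c, ddf_mm_per_k_day=7.0):
    """Melt (mm w.e.) from a positive-degree-day sum: melt = DDF * sum(max(T, 0)).
    The degree-day factor of 7 mm w.e. K^-1 day^-1 is an illustrative value for ice."""
    pdd = np.clip(np.asarray(daily_temp_c), 0.0, None).sum()
    return ddf_mm_per_k_day * pdd

# Synthetic 120-day ablation-season temperature record (degrees C)
rng = np.random.default_rng(1)
temps = 4.0 + 3.0 * rng.standard_normal(120)
print(f"positive degree days: {np.clip(temps, 0, None).sum():.0f} K day, "
      f"melt ≈ {degree_day_melt(temps):.0f} mm w.e.")
```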
2276: The impact of social influence and impulsivity on IV alcohol self-administration in non-dependent drinkers
Alyssa Schneider, Bethany L. Stangl, Elgin R. Yalin, Jodi M. Gilman, Vijay Ramchandani
Journal: Journal of Clinical and Translational Science / Volume 1 / Issue S1 / September 2017
Published online by Cambridge University Press: 10 May 2018, pp. 33-34
OBJECTIVES/SPECIFIC AIMS: Impulsivity is a significant predictor of alcohol use and drinking behavior, and has been shown to be a critical trait in those with alcohol use disorder. Suggestibility, or susceptibility to social influence, has been shown to correlate with impulsivity, with highly suggestible individuals being more likely to make impulsive decisions influenced by peer groups. However, the relationship between social influence and drinking behavior is unclear. Our objective was to describe the relationship between social influence and impulsivity traits using the social delayed discounting task and potential differences in intravenous alcohol self-administration (IV-ASA) behavior. METHODS/STUDY POPULATION: Healthy, non-dependent drinkers (n=20) completed a CAIS session, which consisted of an initial 25-minute priming phase, where subjects were prompted to push a button to receive individually standardized IV alcohol infusions, followed by a 125-minute phase during which they could push the button for additional infusions. IV-ASA measures included the peak (PEAK) and average (AVG) BrAC and Number of Button Presses (NBP). Participants completed a social delayed discounting task (SDDT), where participants were presented with the choice of a small, sooner (SS) reward or a large, later (LL) reward. Before starting the task, participants chose peers who selected either the impulsive (SI) or non-impulsive choice (S). Intermittently, the peers' choice was not shown (X) or different choices (D) were selected. Participants also completed the MISS, the Barratt Impulsiveness Scale (BIS-11), UPPS-P Impulsive Behavior Scale, and the NEO personality inventory. RESULTS/ANTICIPATED RESULTS: Participants with higher suggestibility scores had greater NBP, AVG, and PEAK BrAC in the early phase of the IV-ASA session. Higher scores on the MISS were also correlated with higher impulsivity scores including the NEO Neuroticism (N-factor) measure, BIS-11, and UPPS-P. Results also showed that the MISS score was inversely correlated with the percent of impulsive choices in the SDDT, but that this was independent of peers' impulsive or nonimpulsive choices. DISCUSSION/SIGNIFICANCE OF IMPACT: These results indicate that non-dependent drinkers that were more susceptible to social influence had heavier drinking patterns, higher IV-ASA, and higher scores on impulsivity measures. In addition, individuals that were more susceptible to social influence made more impulsive choices in general, but those choices were not affected by peer decisions during the task. As such, susceptibility to social influence may be an important determinant of impulsive choices, particularly in relation to alcohol consumption.
Of Maps and Models: A New Method for Determining the Biological Significance of Sclerobiont Positions on Brachiopod Hosts
Kristina M. Barclay, Chris L. Schneider, Lindsey R. Leighton
Journal: The Paleontological Society Special Publications / Volume 13 / 2014
Published online by Cambridge University Press: 26 July 2017, p. 49
Superior petrosal sinus causing superior canal dehiscence syndrome
S M D Schneiders, J W Rainsbury, E F Hensen, R M Irving
Journal: The Journal of Laryngology & Otology / Volume 131 / Issue 7 / July 2017
To determine signs and symptoms for superior canal dehiscence syndrome caused by the superior petrosal sinus.
A review of the English-language literature on PubMed and Embase databases was conducted, in addition to a multi-centre case series report.
The most common symptoms of 17 patients with superior petrosal sinus related superior canal dehiscence syndrome were: hearing loss (53 per cent), aural fullness (47 per cent), pulsatile tinnitus (41 per cent) and pressure-induced vertigo (41 per cent). The diagnosis was made by demonstration of the characteristic bony groove of the superior petrosal sinus and the 'cookie bite' out of the superior semicircular canal on computed tomography imaging.
Pulsatile tinnitus, hearing loss, aural fullness and pressure-induced vertigo are the most common symptoms in superior petrosal sinus related superior canal dehiscence syndrome. Compared to superior canal dehiscence syndrome caused by the more common apical location of the dehiscence, pulsatile tinnitus and exercise-induced vertigo are more frequent, while sound-induced vertigo and autophony are less frequent. There is, however, considerable overlap between the two subtypes. The distinction cannot as yet be made on clinical signs and symptoms alone, and requires careful analysis of computed tomography imaging.
Network dynamics of HIV risk and prevention in a population-based cohort of young Black men who have sex with men – CORRIGENDUM
J. Schneider, B. Cornwell, A. Jonas, N. Lancki, R. Behler, B. Skaathun, L. E. Young, E. Morgan, S. Michaels, R. Duvoisin, A. S. Khanna, S. Friedman, P. Schumm, E. Laumann, for the uConnect Study Team
Journal: Network Science / Volume 5 / Issue 2 / June 2017
Published online by Cambridge University Press: 20 April 2017, p. 247
The order of the authors in the published article is incorrect. The authors should appear as follows:
J. Schneider, B. Cornwell, A. Jonas, R. Behler, N. Lancki, B. Skaathun, L. E. Young, E. Morgan, S. Michaels, R. Duvoisin, A. S. Khanna, S. Friedman, P. Schumm, E. Laumann, for the uConnect Study Team
The authors regret the error.
Ion angular distribution simulation of the Highly Efficient Multistage Plasma Thruster
Solved and Unsolved problems in Plasma Physics
J. Duras, D. Kahnfeld, G. Bandelow, S. Kemnitz, K. Lüskow, P. Matthias, N. Koch, R. Schneider
Journal: Journal of Plasma Physics / Volume 83 / Issue 1 / February 2017
Published online by Cambridge University Press: 22 February 2017, 595830107
Ion angular current and energy distributions are important parameters for ion thrusters, which are typically measured at a few tens of centimetres to a few metres distance from the thruster exit. However, fully kinetic particle-in-cell (PIC) simulations are not able to simulate such domain sizes due to high computational costs. Therefore, a parallelisation strategy of the code is presented to reduce computational time. The calculated ion beam angular distributions in the plume region are quite sensitive to boundary conditions of the potential, possible additional source contributions (e.g. from secondary electron emission at vessel walls) and charge exchange collisions. Within this work a model for secondary electrons emitted from the vessel wall is included. In order to account for limits of the model due to its limited domain size, a correction of the simulated angular ion energy distribution by the potential boundary is presented to represent the conditions at the location of the experimental measurement in $1~\text{m}$ distance. In addition, a post-processing procedure is suggested to include charge exchange collisions in the plume region not covered by the original PIC simulation domain for the simulation of ion angular distributions measured at $1~\text{m}$ distance.
Network dynamics of HIV risk and prevention in a population-based cohort of young Black men who have sex with men
Journal: Network Science / Volume 5 / Issue 3 / September 2017
Critical to the development of improved HIV elimination efforts is a greater understanding of how social networks and their dynamics are related to HIV risk and prevention. In this paper, we examine network stability of confidant and sexual networks among young black men who have sex with men (YBMSM). We use data from uConnect (2013–2016), a population-based, longitudinal cohort study. We use an innovative approach to measure both sexual and confidant network stability at three time points, and examine the relationship between each type of stability and HIV risk and prevention behaviors. This approach is consistent with a co-evolutionary perspective in which behavior is not only affected by static properties of an individual's network, but may also be associated with changes in the topology of his or her egocentric network. Our results indicate that although confidant and sexual network stability are moderately correlated, their dynamics are distinct with different predictors and differing associations with behavior. Both types of stability are associated with lower rates of risk behaviors, and both are reduced among those who have spent time in jail. Public health awareness and engagement with both types of networks may provide new opportunities for HIV prevention interventions.
Feasibility of common bibliometrics in evaluating translational science
M. Schneider, C. M. Kane, J. Rainwater, L. Guerrero, G. Tong, S. R. Desai, W. Trochim
Journal: Journal of Clinical and Translational Science / Volume 1 / Issue 1 / February 2017
A pilot study by 6 Clinical and Translational Science Awards (CTSAs) explored how bibliometrics can be used to assess research influence.
Evaluators from 6 institutions shared data on publications (4202 total) they supported, and conducted a combined analysis with state-of-the-art tools. This paper presents selected results based on the tools from 2 widely used vendors for bibliometrics: Thomson Reuters and Elsevier.
Both vendors located a high percentage of publications within their proprietary databases (>90%) and provided similar but not equivalent bibliometrics for estimating productivity (number of publications) and influence (citation rates, percentage of papers in the top 10% of citations, observed citations relative to expected citations). A recently available bibliometric from the National Institutes of Health Office of Portfolio Analysis, examined after the initial analysis, showed tremendous potential for use in the CTSA context.
Despite challenges in making cross-CTSA comparisons, bibliometrics can enhance our understanding of the value of CTSA-supported clinical and translational research.
Ice Volcanoes of the Lake Erie Shore Near Dunkirk, New York, U.S.A.
R. K. Fahnestock, D. J. Crowley, M. Wilson, H. Schneider
Journal: Journal of Glaciology / Volume 12 / Issue 64 / 1973
Conical mounds of ice have been observed to form in a few hours during violent winter storms along the edge of shore-fast ice near Dunkirk, New York. They occur in lines which parallel depth contours, and are evenly spaced in the manner of beach cusps. The height and spacing of mounds and number of rows vary from year to year depending on such factors as storm duration and intensity, and the position of the edge of the shore-fast ice at the beginning of the storm.
The evenly sloping conical mounds have central channels which increase in width lakeward. The ice between the channels forms headlands above the lake surface. Spray-formed levees develop along the headlands and slope gently away from the lake margin. Lake marginal walls of ice are usually vertical.
Spray, slush and ice blocks are ejected over the cone as each successive wave is focused by the converging channel walls. Ice blocks, interlayered with frozen slush and dirt, form bedding paralleling the sloping surface of cones, headlands and levees. These features are here termed "ice volcanoes" because their origin is in so many ways analogous to that of true volcanoes.
Conceptual design of initial opacity experiments on the national ignition facility
R. F. Heeter, J. E. Bailey, R. S. Craxton, B. G. DeVolder, E. S. Dodd, E. M. Garcia, E. J. Huffman, C. A. Iglesias, J. A. King, J. L. Kline, D. A. Liedahl, P. W. McKenty, Y. P. Opachich, G. A. Rochau, P. W. Ross, M. B. Schneider, M. E. Sherrill, B. G. Wilson, R. Zhang, T. S. Perry
Published online by Cambridge University Press: 09 January 2017, 595830103
Accurate models of X-ray absorption and re-emission in partly stripped ions are necessary to calculate the structure of stars, the performance of hohlraums for inertial confinement fusion and many other systems in high-energy-density plasma physics. Despite theoretical progress, a persistent discrepancy exists with recent experiments at the Sandia Z facility studying iron in conditions characteristic of the solar radiative–convective transition region. The increased iron opacity measured at Z could help resolve a longstanding issue with the standard solar model, but requires a radical departure in opacity theory. To replicate the Z measurements, an opacity experiment has been designed for the National Ignition Facility (NIF). The design uses established techniques scaled to NIF. A laser-heated hohlraum will produce X-ray-heated uniform iron plasmas in local thermodynamic equilibrium (LTE) at temperatures ${\geqslant}150$ eV and electron densities ${\geqslant}7\times 10^{21}~\text{cm}^{-3}$. The iron will be probed using continuum X-rays emitted in a ${\sim}200$ ps, ${\sim}200~\mu\text{m}$ diameter source from a 2 mm diameter polystyrene (CH) capsule implosion. In this design, $2/3$ of the NIF beams deliver 500 kJ to the ${\sim}6$ mm diameter hohlraum, and the remaining $1/3$ directly drive the CH capsule with 200 kJ. Calculations indicate this capsule backlighter should outshine the iron sample, delivering a point-projection transmission opacity measurement to a time-integrated X-ray spectrometer viewing down the hohlraum axis. Preliminary experiments to develop the backlighter and hohlraum are underway, informing simulated measurements to guide the final design.
Developing mental health research in sub-Saharan Africa: capacity building in the AFFIRM project
M. Schneider, K. Sorsdahl, R. Mayston, J. Ahrens, D. Chibanda, A. Fekadu, C. Hanlon, S. Holzer, S. Musisi, A. Ofori-Atta, G. Thornicroft, M. Prince, A. Alem, E. Susser, C. Lund
Journal: Global Mental Health / Volume 3 / 2016
There remains a large disparity in the quantity, quality and impact of mental health research carried out in sub-Saharan Africa, relative to both the burden and the amount of research carried out in other regions. We lack evidence on the capacity-building activities that are effective in achieving desired aims and appropriate methodologies for evaluating success.
AFFIRM was an NIMH-funded hub project including a capacity-building program with three components open to participants across six countries: (a) fellowships for an M.Phil. program; (b) funding for Ph.D. students conducting research nested within AFFIRM trials; (c) short courses in specialist research skills. We present findings on progression and outputs from the M.Phil. and Ph.D. programs, self-perceived impact of short courses, qualitative data on student experience, and reflections on experiences and lessons learnt from AFFIRM consortium members.
AFFIRM delivered funded research training opportunities to 25 mental health professionals, 90 researchers and five Ph.D. students across 6 countries over a period of 5 years. A number of challenges were identified and suggestions for improving the capacity-building activities explored.
Having protected time for research is a barrier to carrying out research activities for busy clinicians. Funders could support sustainability of capacity-building initiatives through funds for travel and study leave. Adoption of a train-the-trainers model for specialist skills training and strategies for improving the rigor of evaluation of capacity-building activities should be considered.
Emergency department visits for attempted suicide and self harm in the USA: 2006–2013
J. K. Canner, K. Giuliano, S. Selvarajah, E. R. Hammond, E. B. Schneider
Journal: Epidemiology and Psychiatric Sciences / Volume 27 / Issue 1 / February 2018
Published online by Cambridge University Press: 17 November 2016, pp. 94-102
Aims.
To characterise and identify nationwide trends in suicide-related emergency department (ED) visits in the USA from 2006 to 2013.
We used data from the Nationwide Emergency Department Sample (NEDS) from 2006 to 2013. E-codes were used to identify ED visits related to suicide attempts and self-inflicted injury. Visits were characterised by factors such as age, sex, US census region, calendar month, as well as injury severity and mechanism. Injury severity and mechanism were compared between age groups and sex by chi-square tests and Wilcoxon rank-sum tests. Population-based rates were computed using US Census data.
Between 2006 and 2013, a total of 3 567 084 suicide attempt-related ED visits were reported. The total number of visits was stable between 2006 and 2013, with a population-based rate ranging from 163.1 to 173.8 per 100 000 annually. The frequency of these visits peaks during ages 15–19 and plateaus during ages 35–45, with a mean age at presentation of 33.2 years. More visits were by females (57.4%) than by males (42.6%); however, the age patterns for males and females were similar. Visits peaked in late spring (8.9% of all visits occurred in May), with a smaller peak in the fall. The most common mechanism of injury was poisoning (66.5%), followed by cutting and piercing (22.1%). Males were 1.6 times more likely than females to use violent methods to attempt suicide (OR = 1.64; 95% CI = 1.60–1.68; p < 0.001). The vast majority of patients (82.7%) had a concurrent mental disorder. Mood disorders were the most common (42.1%), followed by substance-related disorders (12.1%), alcohol-related disorders (8.9%) and anxiety disorders (6.4%).
Conclusions.
The annual incidence of ED visits for attempted suicide and self-inflicted injury in the NEDS is comparable with figures previously reported from other national databases. We highlighted the value of the NEDS in allowing us to look in depth at age, sex, seasonal and mechanism patterns. Furthermore, using this large national database, we confirmed results from previous smaller studies, including a higher incidence of suicide attempts among women and individuals aged 15–19 years, a large seasonal peak in suicide attempts in the spring, a predominance of poisoning as the mechanism of injury for suicide attempts and a greater use of violent mechanisms in men, suggesting possible avenues for further research into strategies for prevention.
Monitoring and evaluating capacity building activities in low and middle income countries: challenges and opportunities
M. Schneider, T. van de Water, R. Araya, B. B. Bonini, D. J. Pilowsky, C. Pratt, L. Price, G. Rojas, S. Seedat, M. Sharma, E. Susser
Published online by Cambridge University Press: 21 October 2016, e29
Lower and middle income countries (LMICs) are home to >80% of the global population, but mental health researchers and LMIC investigator led publications are concentrated in 10% of LMICs. Increasing research and research outputs, such as in the form of peer reviewed publications, require increased capacity building (CB) opportunities in LMICs. The National Institute of Mental Health (NIMH) initiative, Collaborative Hubs for International Research on Mental Health reaches across five regional 'hubs' established in LMICs, to provide training and support for emerging researchers through hub-specific CB activities. This paper describes the range of CB activities, the process of monitoring, and the early outcomes of CB activities conducted by the five research hubs.
The indicators used to describe the nature, the monitoring, and the early outcomes of CB activities were developed collectively by the members of an inter-hub CB workgroup representing all five hubs. These indicators included but were not limited to courses, publications, and grants.
Results for all indicators demonstrate a wide range of feasible CB activities. The five hubs were successful in providing at least one and the majority several courses; 13 CB recipient-led articles were accepted for publication; and nine grant applications were successful.
The hubs were successful in providing CB recipients with a wide range of CB activities. The challenge remains to ensure ongoing CB of mental health researchers in LMICs, and in particular, to sustain the CB efforts of the five hubs after the termination of NIMH funding.
June 2016, 9(3): 745-757. doi: 10.3934/dcdss.2016026
Observability of $N$-dimensional integro-differential systems
Paola Loreti 1, and Daniela Sforza 2,
Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Sezione di Matematica, Sapienza Università di Roma, Via A. Scarpa 16, 00161 Roma
Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Sapienza Università di Roma, Via Antonio Scarpa 16 I-00161 Roma, Italy
Received March 2015 Revised September 2015 Published April 2016
The aim of the paper is to show a reachability result for the solution of a multidimensional coupled Petrovsky and wave system when a non local term, expressed as a convolution integral, is active. Motivations to the study are in linear acoustic theory in three dimensions. To achieve that, we prove observability estimates by means of Ingham type inequalities applied to the Fourier series expansion of the solution.
Keywords: Coupled systems, Fourier series, Ingham estimates, convolution kernels, reachability.
Mathematics Subject Classification: Primary: 93B05, 45K05; Secondary: 42A3.
Citation: Paola Loreti, Daniela Sforza. Observability of $N$-dimensional integro-differential systems. Discrete & Continuous Dynamical Systems - S, 2016, 9 (3) : 745-757. doi: 10.3934/dcdss.2016026
Characterization of anti-plasmodial, analgesic and anti-inflammatory fraction of Maytenus senegalensis (lam.) Exell leaf extract in mice
Ali A. Jigam1,
Rachael Musa1,
Abdulkadir Abdullahi1 &
Bashir Lawal ORCID: orcid.org/0000-0003-0676-58751,2
The treatment inadequacy and toxicity associated with conventional anti-malarial, anti-inflammatory and analgesic drugs have prompted the search for alternatives from medicinal plants, particularly their phytochemicals with inherent pharmacological properties. In the present study, a purified fraction of M. senegalensis leaf was evaluated for antimalarial, anti-inflammatory and analgesic properties.
The antimalarial study was conducted against Plasmodium chabaudi and Plasmodium berghei using the 4-day suppressive test, while the anti-inflammatory and analgesic studies were conducted using the egg albumin-induced paw oedema and acetic acid-induced pain models respectively. Sub-acute toxicity was assessed using serum biochemical parameters following 3 weeks of administration of the purified fraction.
The purified fraction of M. senegalensis leaf showed dose-dependent antiplasmodial activity, with percentage curative effects of 15.24 ± 0.89, 45.70 ± 3.43 and 48.50 ± 4.56 at 75, 150 and 300 mg/kg bw against Plasmodium chabaudi, and curative effects of 44.25 ± 3.21, 72.74 ± 6.54 and 76.30 ± 8.32% respectively against Plasmodium berghei. The purified fraction exhibited 53.16 ± 4.09 and 60.76 ± 7.54% anti-inflammatory effects and 43.35 ± 4.98% and 44.83 ± 3.86% analgesic effects at 75 and 150 mg/kg bw respectively. GC-MS analysis confirmed the presence of (20α)-3-hydroxy-2-oxo-24-nor-friedela-1(10),3,5,7-tetraen-carboxylic acid-(29)-methylester, 2(4H)-benzofuranone, 5,6,7,7a-tetrahydro-, 3-hydroxy-20(29)-lupen-28-ol and a terpene (phytol) as the major antimalarial compounds in the fraction. The purified fraction increased serum total protein and transaminase concentrations but had no effect on serum levels of sodium, potassium, chloride, alkaline phosphatase, triglyceride and glucose in the mice.
The purified fraction of M. senegalensis leaf exhibited promising antimalarial, analgesic and anti-inflammatory activities and could thus serve as a template for the synthesis of new drugs.
Malaria is an infectious protozoal and parasitic disease caused by five Plasmodium parasites: vivax, falciparum, malariae, knowlesi and ovale [1]. More than half of the world's population is at risk of malaria, with about 212 million new cases and 429,000 deaths annually [2]. Sub-Saharan Africa accounts for over 90% of malaria cases and deaths, predominantly in children below five years of age and pregnant women [2]. Poor rural dwellers in tropical and subtropical areas are highly vulnerable to this attack owing to the favorable climatic conditions for the reproduction and development of the vectors and parasites [3]. In addition, drug resistance is one of the major challenges facing malaria eradication programs worldwide [4].
Inflammation and pain are gaining research attention owing to the etiologic role they play in various human diseases [5]. Dexamethasone, opioids, morphine, aspirin and other drugs have been established for the management of pain and inflammation; however, these drugs have recorded limited success due to unintended effects such as the gastric lesions caused by non-steroidal anti-inflammatory drugs [6, 7]. Thus, the search for alternative drugs from natural products is recommended.
Natural products contain metabolites that have therapeutic value for use in the management of several diseases [8, 9]. The therapeutic effects of plants are, however, associated with the secondary metabolites they contain, particularly the alkaloids, terpenoids and flavonoids, which are known to play a defensive role in plants but exhibit different pharmacological effects in humans and animals [10].
Maytenus senegalensis (Lam.) Exell is an African medicinal plant commonly used traditionally for the treatment of a number of ailments, including rheumatism, snakebites, diarrhoea, eye infection and dyspepsia [11]. Previous studies have demonstrated that extracts from various parts of M. senegalensis possess in vitro anti-plasmodial, anti-leishmanial and antibacterial activities [12]. However, a literature survey revealed a dearth of scientific information on the pharmacological activities of the purified fraction. The present study therefore evaluated the antiplasmodial, analgesic and anti-inflammatory effects of purified fractions from Maytenus senegalensis (Lam.) Exell leaf extract in mice.
A total of ninety (90) adult Swiss albino mice weighing 25.34 ± 0.98 g were obtained from the National Veterinary Research Institute (NVRI), Vom, Plateau State, Nigeria. Mouse handling and experimentation were in accordance with the guidelines for laboratory animal use and care as contained in the European Convention on Animal Care Guidelines and Protocol.
Plasmodium chabaudi and the chloroquine-sensitive Plasmodium berghei NK65 strain were obtained from the National Institute of Pharmaceutical Research and Development (NIPRD), Abuja, Nigeria, and maintained in the laboratory by serial passage in mice.
The plant Maytenus senegalensis (Lam.) Exell was collected from Bida, Niger State. The plant was authenticated by a botanist from the Department of Biological Sciences, Federal University of Technology, Minna, Nigeria. The leaves were cleaned and air-dried at room temperature. The dried leaves were pulverized into a coarse powder using a mortar and pestle. The pulverized sample was stored in an air-tight container.
Extraction and purification of Maytenus senegalensis (lam.) Exell fraction
Maytenus senegalensis (Lam.) Exell leaf powder (50 g) was moistened with 200 mL of 95% ethanol, alkalinized with 200 mL of ammonia solution and macerated for 24 h, followed by extraction with ethanol. The ethanol extract was filtered, concentrated and treated with 1.0 N hydrochloric acid. The filtrate was further alkalinized with ammonia solution and the extract was obtained by fractionation in a separating funnel using chloroform [13]. The fraction was purified and subjected to thin layer chromatography to obtain the pure fraction (0.6 g) for structural elucidation [13].
Anti-Plasmodial screening of the purified fraction of Maytenus senegalensis (lam.) Exell
The 4-day suppressive test was used to evaluate the antimalarial properties of the purified fraction of Maytenus senegalensis (Lam.) Exell as described by Jigam et al. [14]. A total of 15 P. berghei-infected mice were randomly grouped into five groups (I–V) of 3 mice each. Animals in groups I–III were treated with 75, 150 and 300 mg/kg body weight of the purified fraction respectively. Groups IV and V received normal saline (2 mL/kg body weight) and chloroquine (5 mg/kg body weight) to serve as negative and positive controls respectively. The same procedure (4-day suppressive test) was repeated for Plasmodium chabaudi. All treatments were given orally for 4 consecutive days. Daily parasitaemia counts were carried out by preparing Giemsa-stained thin films and viewing them under a microscope as described by Jigam et al. [14]. The percentage inhibition of parasitaemia was calculated as:
$$ \%\mathrm{inhibition}=\frac{\mathrm{Mean}\ \mathrm{parasitemia}\ \mathrm{in}\ \mathrm{negative}\ \mathrm{control}-\mathrm{Mean}\ \mathrm{parasitemia}\ \mathrm{in}\ \mathrm{treated}}{\mathrm{Mean}\ \mathrm{parasitemia}\ \mathrm{in}\ \mathrm{negative}\ \mathrm{control}}\times 100 $$
Anti-inflammatory study
Anti-inflammatory activity of the purified fraction was tested using egg albumin-induced paw oedema in mice according to the method of Winter et al. [15]. A total of twelve (12) mice were randomly grouped into four groups (A–D) of 3 mice each and were administered a single dose of 75 or 150 mg/kg bw of the purified fraction, 150 mg/kg bw acetylsalicylic acid or 2 mL/kg bw normal saline respectively, 30 min before the injection of the albumin into the right hind limb. The percentage inhibition of oedema was calculated for each dose using the formula:
$$ \%\mathrm{inhibition}=\frac{\mathrm{Mean}\ \mathrm{in}\mathrm{crease}\ \mathrm{in}\ \mathrm{paw}\ \mathrm{in}\ \mathrm{negative}\ \mathrm{control}-\mathrm{Mean}\ \mathrm{in}\mathrm{crease}\ \mathrm{in}\ \mathrm{paw}\ \mathrm{in}\ \mathrm{treated}}{\mathrm{Mean}\ \mathrm{in}\mathrm{crease}\ \mathrm{in}\ \mathrm{paw}\ \mathrm{in}\ \mathrm{negative}\ \mathrm{control}}\times 100 $$
Analgesic study
The analgesic effect was assessed according to the method described by Nwafor et al. [16]. A total of twelve (12) mice were randomly grouped into four groups (A–D) of 3 mice each and were administered a single dose of 75 or 150 mg/kg bw of the purified fraction, 150 mg/kg bw sodium diclofenac or 2 mL/kg bw normal saline respectively, 60 min before they were challenged with 0.75% v/v acetic acid. Group D (control group) received the 2 mL/kg body weight of normal saline. The number of abdominal constrictions induced by acetic acid was counted after 5 min. Observations were made over 10 min and the mean value for each group was calculated. Percentage inhibition of abdominal constriction by the purified fraction and sodium diclofenac was determined relative to the control as follows:
$$ \%\mathrm{inhibition}=\frac{\mathrm{Mean}\ \mathrm{in}\mathrm{crease}\ \mathrm{abdominal}\ \mathrm{const}.\mathrm{in}\ \mathrm{negative}\ \mathrm{control}-\mathrm{Mean}\ \mathrm{in}\mathrm{crease}\ \mathrm{in}\ \mathrm{abdominal}\ \mathrm{const}.\mathrm{in}\ \mathrm{treated}}{\mathrm{Mean}\ \mathrm{in}\mathrm{crease}\ \mathrm{in}\ \mathrm{abdominal}\ \mathrm{const}.\mathrm{in}\ \mathrm{negative}\ \mathrm{control}}\times 100 $$
Toxicological study
Animals (5 per group) were dosed orally with 0 (control), 75 or 150 mg/kg bwt of the purified fraction of Maytenus senegalensis (Lam.) Exell for 3 weeks. The procedures described by Shittu et al. [17] were followed for blood sample collection and serum preparation for biochemical analysis. Serum biochemical parameters including alkaline phosphatase (ALP), aspartate transaminase (AST) and alanine transaminase (ALT) were determined as described previously [18]. The concentrations of serum total proteins [19], sodium, potassium, and chloride [20] were determined using standard methods.
Gas chromatography-mass spectrometry (GC-MS) analysis of bioactive compounds
The purified fraction of Maytenus senegalensis (Lam.) Exell was subjected to gas chromatography-mass spectrometry for the determination of bioactive volatile compounds as described previously [21].
Data analysis was performed using the Statistical Package for the Social Sciences (SPSS): one-way Analysis of Variance (ANOVA) followed by Duncan's Multiple Range Test (DMRT). Data are expressed as means ± SEM of triplicate determinations. Significance was considered at p < 0.05.
Antiplasmodial
The purified fraction of M. senegalensis leaf shows dose dependent antiplasmodial activity against Plasmodium chabaudi (Table 1) and Plasmodium berghei (Table 2). The purified fraction had curative effects of 15.24 ± 0.89%, 45.70 ± 3.43% and 48.50 ± 4.56% at 75, 150 and 300 mg/kg bw against Plasmodium chabaudi (Table 1) while curative effects of 44.25 ± 3.21%, 72.74 ± 6.54% and 76.30 ± 8.32% respectively against Plasmodium berghei (Table 2).
Table 1 Antiplasmodial activity of purified fraction of M senegalensis leaf against Plasmodium chabaudi Infected mice
Table 2 Antiplasmodial activity of purified fraction of M. senegalensis leaf against Plasmodium berghei Infected mice
The purified fraction of M. senegalensis leaf exhibited dose-dependent inhibition of egg albumin-induced paw oedema, with percentage inhibition of 53.16 ± 4.09 and 60.76 ± 7.54 at 75 and 150 mg/kg bw respectively, while acetylsalicylic acid (ASA) exhibited 63.29 ± 5.98% inhibition of paw oedema (Table 3).
Table 3 Effect of purified fraction of M. senegalensis leaf on oedema
Analgesic effect
The purified fraction of M. senegalensis leaf exhibited dose-dependent inhibition of abdominal constrictions, with percentage inhibition of 43.35 ± 4.98 and 44.83 ± 3.86 at 75 and 150 mg/kg bw respectively, while sodium diclofenac (SD) exhibited 74.88 ± 6.87% inhibition of abdominal constrictions (Table 4).
Table 4 Effect of purified fraction of M. senegalensis leaf on abdominal constrictions in mice
Biochemical parameters
Sub-chronic administration of the purified fraction of M. senegalensis significantly (p < 0.05) increased the concentrations of the transaminases (aspartate transaminase and alanine transaminase) and proteins when compared with the untreated control. However, sodium, potassium, chloride, alkaline phosphatase, triglyceride and glucose concentrations were not significantly (p > 0.05) altered by treatment with the purified fraction of M. senegalensis (Table 5).
Table 5 Effect of purified fraction of M. senegalensis leaf on biochemical parameters in mice
GC-MS of the purified fraction of Maytenus senegalensis (lam.) Exell
The gas chromatography-mass spectrometry (GC-MS) analysis led to the identification of 13 compounds from the gas chromatography (GC) fractionation. The chromatogram of the purified fraction of Maytenus senegalensis (Lam.) Exell is shown in Fig. 1 and the results are tabulated in Table 6. The results revealed the presence of 3-hydroxy-20(29)-lupen-28-ol (12.95%), (20α)-3-hydroxy-2-oxo-24-nor-friedela-1(10),3,5,7-tetraen-carboxylic acid-(29)-methylester (6.0%), 2(4H)-Benzofuranone, 5,6,7,7a-tetrahydro- (7.0%) and phytol (1.44%) as the major phytocompounds in the purified fraction of Maytenus senegalensis (Lam.) Exell. Other compounds identified in minute amounts include n-Hexadecanoic acid (0.207%), 9,12-Octadecadienoic acid, methyl ester (1.67%), cis-Vaccenic acid (0.4.90%) and 6-Methyl-cyclodec-5-enol (0.66%), each with different biological activities (Table 6).
Chromatogram of the purified fraction of Maytenus senegalensis (Lam.) Exell
Table 6 Phyto-Components identified in the purified fraction of Maytenus senegalensis (Lam.) Exell
The anti-plasmodial potency of some plants has been associated with the presence of secondary metabolites such as alkaloids [9]. The findings presented in Tables 1 and 2 show that the purified fraction of M. senegalensis demonstrated good antimalarial activity, in concordance with the classification of Munoz et al. [22], in which antiplasmodial agents are classified on the basis of percentage parasite inhibition as "moderate", "good" and "very good" when there is percentage inhibition above 50% at metabolite concentrations of 500, 250 and 100 mg/kg bwt respectively. However, the antiplasmodial effects of the purified fraction at 150 and 300 mg/kg bw were not significantly different (p > 0.05), which may suggest that the maximum antimalarial effect of the purified fraction was achieved at 150 mg/kg. The proposed mechanism of the antiplasmodial effect of the purified fraction could be elevation of erythrocyte oxidation and inhibition of plasmodium protein synthesis, a mechanism that has been attributed to the antimalarial activities of some phytoconstituents [23].
Evidence for the anti-inflammatory properties of flavonoids and alkaloids has been reported by several studies using different models of inflammation [24,25,26]. The significant anti-inflammatory effect demonstrated by the purified fraction of M. senegalensis leaf could be mechanistically explained by the fact that phytochemicals are known to inhibit the enzymes involved in the production of inflammatory mediators, including the cyclooxygenase and 5-lipoxygenase pathways [27].
The present study revealed that the purified fraction of M. senegalensis leaf significantly decreased acetic acid-induced pain in mice, showing that the purified fraction contains active analgesic components [28]. This finding is in concordance with a previous study on gindarudine, a morphine alkaloid from Stephania glabra, which showed a significant analgesic effect when tested by the same method [29]. The significant analgesic and anti-inflammatory effects of the purified fraction of M. senegalensis leaf extract in vivo are noteworthy: plants with these additional pharmacological properties in conjunction with antiplasmodial effects are better antimalarials than plants with the latter potential only [30, 31].
Biochemical parameters have been widely used as indicators of pathological conditions, the toxicity or safety of a test substance, treatment outcome and the general health status of animals [32,33,34,35]. Among these parameters, transaminases, alkaline phosphatases, proteins, lipid profile and electrolytes are the most widely employed in assessing liver and kidney integrity following plant extract administration to animals [33]. Alterations in the normal activities or concentrations of these parameters are conventional indicators of any of the following conditions: renal or nephrotic impairment, hepatocellular injury, cellular leakage, loss of functional integrity of the cell membrane, biliary cirrhosis or hepatitis [32]. In the present study, the concentrations of triglyceride, sodium, potassium, chloride, alkaline phosphatase and glucose were not significantly (p > 0.05) altered by treatment with 75 and 150 mg/kg bw of the M. senegalensis purified fraction. This indicates that the functional integrity of the kidney was well preserved and that the purified fraction of M. senegalensis did not induce any form of pathological condition in the kidney. The increases in the transaminases (aspartate transaminase and alanine transaminase) and protein concentrations, however, indicate that liver integrity was not as well preserved. The purified fraction might have interfered with the equilibrium of protein metabolism in favor of anabolism. Such a drastic increase in protein levels could negatively affect cellular homeostasis and consequently affect the health of the animals [36, 37].
GC-MS analysis of the purified fraction confirmed the presence of (20α)-3-hydroxy-2-oxo-24-nor-friedela-1(10),3,5,7-tetraen-carboxylic acid-(29)-methylester, 2(4H)-Benzofuranone, 5,6,7,7a-tetrahydro-, 3-hydroxy-20(29)-lupen-28-ol and a terpene (phytol) as the major constituents of the fraction. The antimalarial activities of these compounds have been previously documented; in addition, phytol has also been reported to have anti-inflammatory activity.
All data are available in the manuscript.
Odeghe OB, Uwakwe AA, Monago CC. Antiplasmodial activity of Methanolic stem bark extract of Anthocleista grandiflorain mice. Intern J Appl Sci Technol. 2012;24:18–23.
World Health Organization. This year's malaria report at a glance. World Malaria Report (19 November 2018). World Health Organization; 2018.
Greenwood BM, Fidock DA, Kyle DE, Kappe SHI, Alonso PL, Collins FH, Duffy PE. Malaria: progress perils and prospects for eradication. J Clin Invest. 2008;118:1266–76.
Lawal B, Shittu Ok, Abubakar A, Kabiru AY. Human Genetic Markers and Structural Prediction of Plasmodium falciparum Multi-Drug Resistance Gene Pfmdr1 For Ligand Binding in Pregnant Women Attending General Hospital Minna. J Enviro public health. 2018; 1–13.
Mohiuddin M, Dewan SMD, Asarwar SM. Anti-nociceptive anti-inflammatory and antipyretic activities of Ethanolic extract of Atylosia scarabaeoides L. Benth family: Fabaceae leaves in experimental animal. J Appl Life Sci Inter. 2018;174:1–12.
Jigam AA, Mahmood F, Lawal B. Protective effects of crude and alkaloidal extracts of Tamarindus indica against acute inflammation and nociception in rats. J Acute Dis. 2017;62:78–81.
Mostafa M, Appidi JR, Yakubu MT, Afolayan AJ. Anti-inflammatory antinociceptive and antipyretic properties of the aqueous extract of Clematis brachiata leaf in male rats. Pharm Biol. 2010:486–92.
Bashir L, Shittu OK, Sani S, Busari MB, Adeniyi KA. African natural products with potential Antitrypanosoma properties: a review. Inter J Bioch Res Rev. 2015;72:45–79.
Lawal B, Shittu OK, Kabiru AY, Jigam AA, Umar MB. Berinyuy EB, Alozieuwa BU. Potential antimalarials from African natural products: a review. J Intercult Ethnopharmacol 2015; 44:318–343.
Lawal B, Shittu OK, Oibiokpa FI, Berinyuy EB, Muhammed H. African natural products with potential antioxidants and hepatoprotectives properties: a review. Clin Phytosci 2017; 2, 23. https://doi.org/10.1186/s40816-016-0037-0.
Da Silva G, Serrano R, Silva O. Maytenus heterophylla and Maytenus senegalensis (lam.) Exell two traditional herbal medicines. J Nat Sc Biol Med. 2011;2:59–65.
El Tahir A, Ibrahim AM, Satti GM, Theander TG, Kharazmi A, Khalid SA. The potential antileishmanial activity of some Sudanese medicinal plants. Phytother Res. 2014;12:576–9.
Babiker F, Jamal P, Mirghani MES, Ansari AH. Characterization, purification and identification of some alkaloids in Datura stramonium. Inter Food Res J. 2017;24:540–3.
Jigam AA, Abdulrazaq UT, Egbuta MN. In-vivo antimalarial and toxicological evaluation of Chrozophoras enegalensis A. Juss euphorbiaceae extracts. J Appl Pharma Sci. 2011;110:90–4.
Winter CA, Risley EA, Nuss GV. Carrageenin induced oedema in hindpaw of rats as an assay for anti-inflammatory drugs. Proc Soc for Exp Biol Med. 1962;3:544–7.
Nwafor PA, Nwajiobi N, Uko IE, Obot JS. Analgesic and anti- inflammatory activities of an ethanol extract of Smilax krausiana leaf in mice. Afr J Biomed Res. 2010;13:141–8.
Shittu OK, Lawal B, Alozieuwa BU, Haruna GM, Abubakar AN, Berinyuy EB. Alteration in biochemical indices following chronic administration of methanolic extract of Nigeria bee propolis in Wister rats. Asian Pac J Trop Dis. 2015;5(8):654–7.
Reitman S, Frankel S. A colorimetric method for the determination of serum glutamic oxalacetic and glutamic pyruvic transaminases. Am J Clin Pathol. 1957;28:56–63.
Gornall AC, Bardawill CJ, David MM. Determination of serum protein by means of biuret reaction. J Biol Chem. 1949;177:751–66.
Tietz NW. Clinical guide to laboratory tests. 3rd ed. Philadelphia, PA: WB Saunders Company; 1995. p. 286–8.
Patil A, Jadhav V. GC-MS analysis of bioactive components from methanol leaf extract of Toddalia asiatica (L.). Inter J Pharm Sci Rev Res. 2014;29:18–20.
Muñoz V, Sauvain M, Bourdy G, Callapa J, Bergeron S, Rojas I. A search for natural bioactive compounds in Bolivia through a multidisciplinary approach, part I. evaluation of the antimalarial activity of plants used by the Chacobo Indians. J Ethnopharmacol. 2000;69:139–55.
Pérez-Amador MC, Muñoz-Ocotero V, García JM, Castañeda AR, González E. Alkaloids in Solanum torvum Sw Solanaceae. Intern Exp Bot. 2017;76:39–45.
Lamikanra AA, Theron M, Kooij TWA, Roberts DJ. Hemozoin malarial pigment directly promotes apoptosis of erythroid precursors. PLoS One. 2009;412:e8446.
Capra C. Anti-inflammatory activity of the saponins from Ruscus aculeatus. Fitoterapia. 2009;43(4):99–113.
Adesina DA, Adefolalu SF, Jigam AA, Lawal B. Antiplasmodial effect and sub-acute toxicity of alkaloid, flavonoid and phenolic extracts of Sida acuta leaf on Plasmodium berghei-infected animals. J Taibah Univ Sci. 2020;14(1):943–53.
Chandel RS, Rastogi RP. Review: Triterpenoid Saponins and Sapogenins. Phytochem. 2012;19:1889–908.
Singh GB, Singh S, Bani S, Gupta BD, Banerjee SK. Anti-inflammatory activities of Oleanolic acid in rats and mice. J Pharm Pharmacol. 2012;445:456–8.
Adebayo AH, John-Africa LB, Agbafor AG, Omotosho OE, Mosaku TO. Anti-nociceptive and anti-inflammatory activities of extract of Anchomanes difformis in rats. Pak J Pharm Sci. 2014;27(2):265–70.
Turner RA. Screening methods in Pharmacology. Vol 1. New York: Academic Press. 2014;85–106.
Semwal DK, Semwal RB, Semwal R, Jacob V, Singh G. Analgesic and antipyretic activities of gindarudine a morphine alkaloid from Stephania glabra. Curr Bio Comp. 2011;7:214–7.
Pascual ME, Slowing K, Caretero E, Mara K, Villar D. A. Lippia, traditional uses, chemistry and pharmacology. A Review J Ethnopharmacol. 2001;76:201–14.
Yusuf AA, Lawal B, Abubakar AN, Berinyuy EB, Omonije YO, Umar SI, Shebe MN, Alhaji YM. In-vitro antioxidants, antimicrobial and toxicological evaluation of Nigerian Zingiber officinale. Clin Phytosci. 2018; 4: 12. https://doi.org/10.1186/s40816-018-0070-2.
Bashir L, Shittu OK, Busari MB, Sani S, Aisha MI. Safety evaluation of Giant African land snails (Archachatina marginata) Haemolymph on hematological and biochemical parameters of albino rats. J Adv Med Pharm Sci. 2015;3(3):122–30.
Umar SI, Ndako M, Jigam AA. Adefolalu SF, Ibikunle GF, Lawal B. Anti-plasmodial, Anti-inflammatory, antinociceptive and safety profile of Maytenus senegalensis (Lam.) Exell root bark extract on hepato-renal integrity in experimental animals. Comp Clin Pathol. 2019;1–9. https://doi.org/10.1007/s00580-019-02965-4.
Yusuf AA, Lawal B, Yusuf MA, Omonije YO, Adejoke AA, Raji FH, Wenawo DL. Free radical scavenging, antimicrobial activities and effect of sub-acute exposure to Nigerian Xylopia Aethiopica seed extract on liver and kidney functional indices of albino rat. Iran J Toxicol. 2018;12(3):51–8.
Lawal B, Shittu OK, Oibiokpa IF, Mohammed H, Umar SI, Haruna GM. Antimicrobial evaluation, acute and sub-acute toxicity studies of Allium sativum. J Acute Dis. 2016;5(4):296–301.
The authors appreciate the funding support of Tertiary Educational Fund of Nigeria and Federal University of Technology Minna.
The authors declare that no conflict of interest exists.
This research was supported by a research grant to Prof. Ali Audu Jigam et al., 2019 (TETFUND/FUTMINNA/2016–2017/6TH BRP/17).
Department of Biochemistry, Federal University of Technology, Minna, Nigeria
Ali A. Jigam, Rachael Musa, Abdulkadir Abdullahi & Bashir Lawal
Program for Cancer Molecular Biology and Drug Discovery, Taipei Medical University and Academia Sinica, Taipei, 111, Taiwan
Bashir Lawal
Ali A. Jigam
Rachael Musa
Abdulkadir Abdullahi
This work is a collaboration of all the authors. All authors read and approved the final manuscript.
Correspondence to Bashir Lawal.
The principles governing the use of laboratory animals as laid out by the Federal University of Technology, Minna Committee on Ethics for Medical and Scientific Research and also existing internationally accepted principles for laboratory animal use and care as contained in the Canadian Council on Animal Care Guidelines and Protocol Review were duly observed.
Jigam, A.A., Musa, R., Abdullahi, A. et al. Characterization of anti-plasmodial, analgesic and anti-inflammatory fraction of Maytenus senegalensis (lam.) Exell leaf extract in mice. Clin Phytosci 6, 56 (2020). https://doi.org/10.1186/s40816-020-00201-z
Anti-plasmodial
Maytenus senegalensis (lam.) Exell
Find unknowns in multiplication and division problems (5,10)
Multiplication as repeated addition (10x10)
Interpreting products (10x10)
Arrays as products (10x10)
Multiplication and division using groups (10x10)
Multiplication and division using arrays (10x10)
Find unknowns in multiplication and division problems (0,1,2,4,8)
Find unknowns in multiplication and division problems (3,6,7,9)
Find unknowns in multiplication and division problems (mixed)
Complete multiplication and division facts in a table (10x10)
Multiplication and division (turn arounds and fact families) (10x10)
Find quotients (10x10)
Number sentences and word problems (10x10)
Multiplication and division by 10
Properties of multiplication (10x10)
Multiplication and division by 10 and 100
Distributive property for multiplication
Use the distributive property
Multiply a two digit number by a small single digit number using an area model
Multiply a two digit number by a small single digit number
Multiply a single digit number by a two digit number using an area model
Multiply a single digit number by a two digit number
Multiply a single digit number by a three digit number using an area model
Multiply a single digit number by a three digit number
Multiply a single digit number by a four digit number using an area model
Multiply a single digit number by a four digit number using algorithm
Multiply 2 two digit numbers using an area model
Multiply 2 two digit numbers
Multiply a two digit number by a 3 digit number
Multiply 3 numbers together
Divide a 2 digit number by a 1 digit number using area or array model
Divide a 2 digit number by a 1 digit number
Divide a 3 digit number by a 1 digit number resulting in a remainder
Multiply various single and double digit numbers
Extend multiplicative strategies to larger numbers
Divide various numbers by single digits
Solve division problems presented within contexts
Solve multiplication and division problems involving objects or words
Multiply various single, 2 and 3 digit numbers
Divide various 4 digit numbers by 2 digit numbers
Use the fact that $9\times10=90$ and $9\times7=63$ to answer the following questions.
(a) Fill in the blank to make the statement true: $9\times17=9\times\left(\square+7\right)$
(b) Fill in the blanks to make the statement true: $9\times\left(10+7\right)=9\times\square+9\times\square$
(c) Use the answers from part (a) and part (b) to find $9\times17$.
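For reference, a possible worked solution (assuming the blanks are meant to be filled with 10 and with the partial products 90 and 63):
$9\times17=9\times\left(10+7\right)=9\times10+9\times7=90+63=153$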
We want to use the distributive property to rewrite $2\times19$ as easier multiplications.
Use a range of additive and simple multiplicative strategies with whole numbers, fractions, decimals, and percentages.
Generalise the properties of addition and subtraction with whole numbers
How to explain that the sum of numerators over the sum of denominators isn't the same as the mean of ratios?
I am a teaching assistant for an intro programming course. One assignment asked for the average of a certain ratio, but most students, rather than returning $$\frac{\text{sum of all ratios}}{\text{number of ratios}},$$ gave $$ \frac{\text{sum of the numerators used to calculate those ratios}}{\text{sum of the denominators used to calculate those ratios}}. $$ That isn't the same, but I can't put into words why it isn't the same; I just know not to do it and can't explain to these students why it gives the wrong result. What's the intuitive way to explain it?
The ratios in question were mileages per gallon. The question wanted the average mpg of all trips, and asked for miles driven and gallons of fuel used separately.
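For what it's worth, the difference is easy to demonstrate in code. The following sketch uses hypothetical trip data and Python purely for illustration (the course language isn't specified here); it computes the two quantities the students and the assignment have in mind:

    # Hypothetical trips: (miles driven, gallons of fuel used)
    trips = [(300.0, 10.0), (20.0, 1.0), (5.0, 0.5)]

    # Mean of the per-trip ratios: every trip counts equally.
    mean_of_ratios = sum(m / g for m, g in trips) / len(trips)

    # Ratio of the sums: long, fuel-hungry trips dominate (gallon-weighted).
    ratio_of_sums = sum(m for m, _ in trips) / sum(g for _, g in trips)

    print(mean_of_ratios)  # 20.0   (average of the 30, 20 and 10 mpg trips)
    print(ratio_of_sums)   # ~28.26 (325 miles / 11.5 gallons)

Which of the two is "the average mpg of all trips" depends on whether each trip or each gallon should carry equal weight, which is exactly the ambiguity discussed in the answers below.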
undergraduate-education students-mistakes statistics
JohnnyApplesauce
I wonder if the error is more blindly coding while ignoring the meaning? Would the same misunderstandings arise were this a hand-calculation exercise?
Google for "Simpson's paradox" and find a lot of interesting examples.
Just a side comment: Guest's answer below shows that this is a good counter-example to the idea that one should give "real life" context to math questions. If the point was to measure the capacity to manipulate fractions, then the exercise failed by making the interpretation of the question the critical step.
– Benoît Kloeckner
@BenoîtKloeckner It isn't a counterexample! It is a beautiful example of how providing a real world context allows for open ended conversation about the meaning of the computations, and will prepare students to select meaningful computations in the future. Along the way they can get practice "drilling" both interpretations just by answering questions which naturally arise from the discussion. They might even remember something, since there was a lively debate!
@StevenGubkin: it really depends on one's goal. While working on the interpretation of a somewhat informal problem into mathematics (modelling) is a commendable goal, it is too often the case that the goal is to practice a given mathematical task, and that goal is undermined by a sloppy real-world context.
The problem is with the question, not with the students' answers. The question is ambiguous and I think the students' answer is actually much better than yours.
Suppose I drive a thousand miles at 25 mpg and you drive one mile at 35 mpg. What's the average fuel efficiency? Your answer is 30 mpg but I honestly can't think of any situation in which that is a meaningful or useful number. The students' answer is 25-and-a-bit mpg, which is a good measure of how far the car can be expected to travel on one gallon of fuel.
Analogously, if I buy a thousand apples at 50c each and you buy one apple at 40c, you're claiming that the average price per apple is 45c, and your students are claiming that it's a hair under 50c.
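(For the record, the arithmetic behind "a hair under 50c", spelled out as an illustrative check: $\frac{1000\times 50 + 1\times 40}{1001}\approx 49.99$ cents per apple, versus the unweighted mean $\frac{50+40}{2}=45$ cents.)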
David Richerby
One observation is that (sum of numerators) divided by (sum of denominators) is not well defined.
For example, let's work with the two ratios $a=\frac01$ and $b=\frac11$.
The ratio of the sum of numerators to sum of denominators is $\frac12$.
However, we can also write $a=\frac03$ and $b=\frac22$. Now the ratio is $\frac25$, which is not equal to $\frac12$!
Nice counterexample. Note also that $\displaystyle \dfrac{a}{b}\leq \dfrac{a+c}{b+d}\leq \dfrac{c}{d}$.
You're changing the weights by doing so, which shouldn't be allowed, especially if you're describing physical quantities. A 200 mile trip in 2 h shouldn't be replaced by a 100 mile trip in 1 h simply because they have the same average speed.
– Eric Duminil
@EricDuminil: if you read the OP, there is hardly any mention of weights and physical quantities. On a side note, the method of dividing the two sums (as opposed to averaging the individual ratios) is used in calculating the "earned run average" in baseball.
@JiK: I see what you mean: 30 mpg, right? In that case, it makes sense to calculate the average like the OP proposed.
@Paracosmiste You're claiming that, for arbitrary $a$, $b$, $c$, $d$, $\tfrac{a}b\le\tfrac{c}d$. That's simply not true.
It actually depends on exactly what you're asking. Or even what you SHOULD be asking.
If you want the average profitability of all the 500+ operators in the Permian, you could just average all the profit margin percentages. This is taking the ratios (profit/revenue) for each company and averaging them. It corresponds to your expected (mean) profit margin if you just picked an operator at random.
If instead, I take all the profits and divide by all the revenues, this would give me the average profitability of the INDUSTRY (operators in the Permian). Often this is actually the question you are asking, or should be asking. IOW, you want the revenue-weighted average profit margin.
The same point would apply if you were, say doing sampling of different size demographic categories and were interested in the total population polling estimate. You need to weight by size of the buckets. (Or just take the totals, which is mathematically equivalent.)
Since we don't know exactly what you asked, or how precise you were, it's hard to say if the students were wrong. Of course, they may have been. But I would just check.
Responding to your edited-in update on the question. There's actually still some ambiguity about what you are (or should be) asking. But if I had to guess, the student's way is more likely giving the desired answer. (IOW, ratio the totals, rather than average the ratios.) IOW, "average fuel efficiency" should be "gallon" weighted. That's the one impacting your pocketbook.
In addition, you may find that there's some correlation of fuel efficiency with trip length. The engine being warmed up, operates more efficiently. Also, highway speeds may be more efficient than slow speeds. Also the issue of frequent stops and starts.
So if I were selecting a car, I would want the one that has the better average fuel efficiency (total miles/total gallons). Ideally with something approximating my type of usage. But I definitely wouldn't want to skew things by saying one "10 mile, 1 gallon" trip mean the same as one "100 mile, 5 gallon" trip in terms of the importance to my pocketbook.
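To make the distinction concrete, here is a small Python sketch with made-up trip numbers (the figures are illustrative only, not taken from the answer above):

trips = [(10, 1.0), (100, 5.0)]            # (miles, gallons) per trip

# Average of the per-trip efficiencies: each trip counts equally.
per_trip_mpg = [miles / gallons for miles, gallons in trips]
mean_of_ratios = sum(per_trip_mpg) / len(per_trip_mpg)      # (10 + 20) / 2 = 15 mpg

# Ratio of the totals: the gallon-weighted average your fuel bill actually sees.
total_miles = sum(miles for miles, _ in trips)
total_gallons = sum(gallons for _, gallons in trips)
ratio_of_totals = total_miles / total_gallons               # 110 / 6 is about 18.3 mpg

print(mean_of_ratios, ratio_of_totals)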
guest
$\begingroup$ Yes. The OPs answer is good if you care about which kind of car to buy (you can compare to the average efficiency of the cars). The student's answer is good if you want to keep track of how efficiently gasoline is being used across the whole economy. $\endgroup$
I like guest's answer. To elaborate, here is a possible question to ask them.
You take two trips in your car:
Trip 1 is a 100 mile drive that takes you 2 hours.
Trip 2 is a 200 mile drive that takes you 1 hour.
(a) What is the average speed of your car?
(b) What is the average speed on an average trip?
The answer to (a) is $\frac{100 \text{ miles} + 200 \text{ miles }}{2 \text{ hours } + 1 \text{ hour }} = \frac{300 \text{ miles }}{3 \text{ hours }} = 100 \text {mph}$.
The answer to (b) is $\frac{\frac{100 \text{ miles }}{2 \text{ hours }} + \frac{200 \text{ miles }}{1 \text{ hour }}}{2} = \frac{50 \text{ mph} + 200 \text{ mph} }{2} = 125 \text {mph}$.
You need to be very clear about the question if you prefer one of these answers.
Chris Cunningham ♦
$\begingroup$ The premise of (b) being misused scares me. It would offer a coach the math showing his track team running speeds that routinely blow away a 4 minute mile. i.e. 100m sprints are far faster than actual mile runs. 9 instances of that data with one instance of the mile run makes for odd results. This is why I was hoping OP would offer the example, and not blindly use the math. Either way. $\endgroup$
– JTP - Apologise to Monica
$\begingroup$ (b) is wrong. What does "average speed of a trip" even mean? We speak of a speed of a particle. The average speed is the weighted arithmetic mean of the 2 speeds : $\displaystyle v=\dfrac{t_1v_1+t_2v_2}{t_1+t_2}=\dfrac{2\times\dfrac{100}{2}+1\times\dfrac{200}{1}}{2+1}$ which gives the same answer as (a). $\endgroup$
$\begingroup$ Does this sound better to you @EricDuminil ? "Here are 100 players and their hits and at-bats. (a) What is the overall batting average for the league? (b) What is the batting average of an average player?" $\endgroup$
– Chris Cunningham ♦
$\begingroup$ @ChrisCunningham: I have no clue whatsoever about baseball (that's a baseball example, right?), but this does sound more appropriate, yes. $\endgroup$
$\begingroup$ OK, hopefully these comments combined with my edit to include the phrase "an average trip" improve the answer a bit. The whole situation is still a bit wonky. $\endgroup$
Allow me to offer another example:
Imagine you and your best friend both want to buy a new smart phone. The phone you have chosen will cost you 300€ but your friend chooses a phone that will cost as much as 600€! Luckily, you have two vouchers that will give you a discount:
The first voucher will give you the cheaper phone for free, if you buy two phones.
The second voucher will give you 50% off your entire purchase.
Which voucher do you choose?
Ratios are a tricky thing, because the ratio of two large numbers can be the same as the ratio of two small numbers. But in most real life applications, the number of elements used to determine these ratios (i.e. averages) are just as important. Because an average over a great number of elements (e.g. people in a survey) is more meaningful than an average over a select few. Therefore, they can not always be compared one-to-one.
Think of an example with two ratios: 1/3 and 4/5.
When you add the numerators, and divide this by the sum of the denominators, you get (1 + 4)/(3 + 5) = 5/8. Now, think about what is happening with the denominators - the denominator of the first ratio should only act on the first numerator. But instead, when you add the ratios in this way, the denominator of the first ratio is acting on both numerators. Likewise for the second denominator.
However, when you average the two ratios, like this
(1/3 + 4/5) / 2 = (5/15 + 12/15) / 2 = 17/30
the denominators of the ratios work together to act on the numerators.
Division has a higher order of precedence than addition. Thus, division should be carried out first, rather than combining the ratios.
Hope this makes sense and helps you assist them.
waitaria
Do you want students to think or not? According to your logic, when I do a 300 mile trip with a car and a day later reverse it in the driveway, the average mpg value of those two days can be calculated by taking the mpg value of the first day and averaging it with the (likely much higher) mpg value of the second day that saw almost no mileage and almost no gas use.
Don't use "textbook examples" for meaningless calculations and then complain that the students were unable to make the calculations as meaningless as you wanted them done. That's teaching them neither mathematics nor its proper application.
$\begingroup$ Sometimes the correct answer to a question is to ask a different question. This is one of those times. If the 'customer' wants an average of ratios, then the only responsible thing to do is ask for clarification. $\endgroup$
$\begingroup$ -1, This is needlessly hostile to the questioner. Edit your answer from the point of view of helping an educator, or delete it. $\endgroup$
I think that the most interesting application of (sum of numerators/sum of denominators) "addition" is to continued fractions. For example, if you want to calculate the continued fraction expansion of the square root of 2, start with 1/0 and 0/1 and "add" them in the following way:
In the top row you put the results of the "adding" that are greater than the square root of 2, in the bottom row the results that are less than the square root of 2, and you always "add" the last two results from the different sides of the square root of 2. The pattern of 1 down, 2 up, 2 down, 2 up, 2 down, 2 up, … is the continued fraction expansion of the square root of 2:
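The table of intermediate fractions appears not to have survived formatting, so here is a short Python sketch (our reconstruction of the procedure described above, not the original table) that carries out the construction and reads off the continued fraction digits as run lengths of consecutive moves to the same side:

def sqrt2_continued_fraction(steps=30):
    hi = (1, 0)            # "1/0", treated as lying above the square root of 2
    lo = (0, 1)            # 0/1, below the square root of 2
    sides = []
    for _ in range(steps):
        p, q = hi[0] + lo[0], hi[1] + lo[1]     # "add" the last two fractions
        if p * p > 2 * q * q:                    # exact comparison of p/q with sqrt(2)
            hi = (p, q)                          # goes in the top row
            sides.append("up")
        else:
            lo = (p, q)                          # goes in the bottom row
            sides.append("down")
    # Collapse consecutive moves to the same side into run lengths.
    runs = []
    for side in sides:
        if runs and runs[-1][0] == side:
            runs[-1][1] += 1
        else:
            runs.append([side, 1])
    return [count for _, count in runs]

# Prints [1, 2, 2, 2, ...], the continued fraction of sqrt(2); the final entry
# may be truncated because the loop stops after a fixed number of steps.
print(sqrt2_continued_fraction())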
Zvonimir Sikic
Kalman Filter | Difference Between Minimizing the Mean Square Error (MMSE) & Maximizing Likelihood Value in Bayesian Estimation
I am going through data assimilation slides on Multi Sensor Data Fusion by Hugh Durrant Whyte and it mentions:
The Kalman Filter, and indeed any mean-squared-error estimator, computes an estimate which is the conditional mean; an average, rather than a most likely value. (Q: what is the most likely value ?)
I understand what MSE is, but what does it mean that the Kalman Filter computes a mean-squared-error estimate rather than the most likely value? Isn't the mean-square estimate the most likely value?
kalman-filters estimation bayesian-estimation
Royi
GENIVI-LEARNER
$\begingroup$ Thank you for letting me know I need to study up on this :| . This may help: cs.princeton.edu/courses/archive/fall18/cos324/files/… $\endgroup$
– TimWescott
$\begingroup$ glad the reference helped you $\endgroup$
– GENIVI-LEARNER
$\begingroup$ Does this help? Understanding the Difference Between MAP Estimation and ML Estimation $\endgroup$
$\begingroup$ I don't think it is about Maximum Likelihood vs. MAP but MAP vs. MMSE as the blog post is all about Bayesian Estimators. I derived the 3 most popular Bayesian Estimators in my answer below. Enjoy... $\endgroup$
– Royi
Actually, the first section of the notes in the link you provided is about the most likely value in the Bayesian framework.
So we have a comparison between the Minimum Mean Square Error (MMSE) Estimator and the Maximum a Posteriori (MAP) Estimator.
Both are Bayes estimators, namely they minimize a loss function over the posterior probability:
$$ \hat{\theta} = \arg \min_{a} \int \int l \left( \theta, a \right) p \left( \theta, x \right) d \theta d x $$
where $ \theta $ is the parameter to be estimated, $ \hat{\theta} $ is the Bayesian estimator, and $ l \left( \cdot, \cdot \right) $ is the loss function. The above integral is called the Risk Integral (Bayes Risk).
With the the properties of Bayes Rule it can be shown:
$$\begin{aligned} \arg \min_{a} \int \int l \left( \theta, a \right) p \left( \theta, x \right) d \theta d x & = \arg \min_{a} \int \int l \left( \theta, a \right) p \left( \theta \mid x \right) p \left( x \right) d \theta d x && \text{By Bayes rule} \\ & = \arg \min_{a} \int \left( \int l \left( \theta, a \right) p \left( \theta \mid x \right) d \theta \right) p \left( x \right) d x && \text{Integral is converging hence order can be arbitrary} \\ & = \arg \min_{a} \int l \left( \theta, a \right) p \left( \theta \mid x \right) d \theta && \text{Since $ p \left( x \right) $ is positive} \end{aligned}$$
So now, the solution depends on the definition of the loss function $ l \left( \cdot, \cdot \right) $:
For $ l \left( \theta, a \right) = {\left\| \theta - a \right\|}_{2}^{2} $ we have the MMSE estimator which is given by the conditional expectation $ E \left[ \theta \mid x \right] $. This is what Kalman Filter estimates.
For $ l \left( \theta, a \right) = {\left\| \theta - a \right\|}_{1} $ we have the Median of the posterior as $ \arg \min_{a} \int \left| \theta - a \right| p \left( \theta \mid x \right) d \theta \Rightarrow \int_{- \infty}^{\hat{\theta}} p \left( \theta \mid x \right) d \theta = \int_{\hat{\theta}}^{\infty} p \left( \theta \mid x \right) d \theta $.
For $ l \left( \theta, a \right) = \begin{cases} 0 & \text{ if } \left| x \right| \leq \delta \\ 1 & \text{ if } \left| x \right| > \delta \end{cases} $ (Hit or Miss Loss) we need to maximize $ \int_{\hat{\theta} - \delta}^{\hat{\theta} + \delta} p\left( \theta \mid x \right) d \theta $ which is maximized by the Mode of the posterior - $ \hat{\theta} = \arg \max_{\theta} p \left( \theta \mid x \right) $ which is known as the MAP Estimator.
As you can see above, different estimators are derived from different loss functions.
In the case the posterior is Gaussian, the Mode, Median and Mean collide (there are other distributions which have this property as well). So in the classic model of the Kalman Filter (where the posterior is also Gaussian) the Kalman Filter is actually the MMSE, the Median and the MAP Estimator all in one.
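As a quick numerical illustration (our addition, not part of the original answer), one can discretize a posterior on a grid and compute all three estimators: for a Gaussian posterior they coincide, for a skewed posterior they do not.

import numpy as np

theta = np.linspace(-5.0, 10.0, 200001)
d_theta = theta[1] - theta[0]

def bayes_estimators(unnormalized_posterior):
    p = unnormalized_posterior / (unnormalized_posterior.sum() * d_theta)
    mean = (theta * p).sum() * d_theta                     # MMSE (squared loss)
    cdf = np.cumsum(p) * d_theta
    median = theta[np.searchsorted(cdf, 0.5)]              # absolute loss
    mode = theta[np.argmax(p)]                             # MAP (hit-or-miss loss)
    return mean, median, mode

gaussian = np.exp(-0.5 * (theta - 2.0) ** 2)               # posterior centered at 2
skewed = np.exp(-theta) * (theta > 0)                      # exponential posterior

print(bayes_estimators(gaussian))   # all three are approximately 2.0
print(bayes_estimators(skewed))     # mean ~ 1.0, median ~ 0.69, mode ~ 0.0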
Derivation with More Details
To show full derivation we will assume $ \theta \in \mathbb{R} $ just for simplicity.
The $ {L}_{2} $ Loss
We're after $ \hat{\theta} = \arg \min_{a} \int {\left( a - \theta \right)}^{2} p \left( \theta \mid x \right) d \theta $. Since it is smooth with respect to $ \hat{\theta} $ we can find where the derivative vanishes:
$$\begin{aligned} \frac{d}{d \hat{\theta}} \int {\left( \hat{\theta} - \theta \right)}^{2} p \left( \theta \mid x \right) d \theta & = 0 \\ & = \int \frac{d}{d \hat{\theta}} {\left( \hat{\theta} - \theta \right)}^{2} p \left( \theta \mid x \right) d \theta && \text{Converging integral} \\ & = \int 2 \left( \hat{\theta} - \theta \right) p \left( \theta \mid x \right) d \theta \\ & \Leftrightarrow \hat{\theta} \int p \left( \theta \mid x \right) d \theta \\ & = \int \theta p \left( \theta \mid x \right) d \theta \\ & \Leftrightarrow \hat{\theta} = \int \theta p \left( \theta \mid x \right) d \theta && \text{As $ \int p \left( \theta \mid x \right) d \theta = 1 $} \\ & = E \left[ \theta \mid x \right] \end{aligned}$$
Which is the conditional expectation as required.
RoyiRoyi
$\begingroup$ Good comprehensive answer, however I fail to see how the MMSE with $ l\left( \theta, a \right) = {\left\| \theta - a \right\|}_{2}^{2}$ is given by the conditional expectation, because expectation is just taking the average given x. But the square term you are defining is the root mean square loss ${\left\| \theta - a \right\|}_{2}^{2}$ $\endgroup$
$\begingroup$ Also if posterior is Gaussian then does it mean the Mean, Mode and Median are all same? I really thought that the values at the tail of the gaussian curve are the Modes as they have low probability but the x's are large compared to the mean. $\endgroup$
$\begingroup$ @GENIVI-LEARNER, I added the derivation for the $ {L}_{2} $ case. Yes, as I wrote above for Gaussian PDF the Mean equals the Median which equals the Mode. Actually for all symmetric distributions the Mean equals to the Median. If you add the property of Uni Modality with the Peak at the symmetric point you get Mean will equal the Median which will equal the Mode. You should read at Wikipedia - Mode. $\endgroup$
$\begingroup$ @GENIVI-LEARNER, Keep being generous and I will always be happy to try to assist you. If you have more questions, feel free. $\endgroup$
BioMedical Engineering OnLine
Gated recurrent unit-based heart sound analysis for heart failure screening
Shan Gao1,
Yineng Zheng2 &
Xingming Guo ORCID: orcid.org/0000-0003-3872-08661
BioMedical Engineering OnLine volume 19, Article number: 3 (2020)
Heart failure (HF) is a type of cardiovascular disease caused by abnormal cardiac structure and function. Early screening of HF has important implications for timely treatment. Heart sound (HS) conveys relevant information related to HF; this study is therefore based on the analysis of HS signals. The objective is to develop an efficient tool to automatically identify normal subjects, subjects with HF with preserved ejection fraction and subjects with HF with reduced ejection fraction.
We propose a novel HF screening framework based on a gated recurrent unit (GRU) model in this study. The logistic regression-based hidden semi-Markov model was adopted to segment HS frames. Normalized frames were taken as the input of the proposed model, which can automatically learn the deep features and complete the HF screening without de-noising or hand-crafted feature extraction.
To evaluate the performance of the proposed model, three methods are used for comparison. The results show that the GRU model gives a satisfactory performance with an average accuracy of 98.82%, which is better than the other comparison models.
The proposed GRU model can learn features from HS directly, which means it can be independent of expert knowledge. In addition, the good performance demonstrates the effectiveness of HS analysis for early HF screening.
Heart failure (HF) has attracted widespread attention due to its high morbidity and mortality, especially with the aging of the population. The risk indicators of HF are numerous and complicated. Besides the well-known factors, like obesity, smoking and alcohol abuse, some cardiovascular conditions such as hypertension, earlier heart attack and myocardial infarction have also been verified as precursors of developing HF in clinical practice [1, 2]. Therefore, keeping a healthy lifestyle and paying attention to the early screening of HF play an important role in preventive and timely treatment.
HF can be divided into two categories: HF with reduced ejection fraction (HFrEF) and HF with preserved ejection fraction (HFpEF), and the following conditions are often used to diagnose HFrEF and HFpEF in clinical practice [3]: (1) typical symptoms and/or signs of HF; (2) the left ventricular ejection fraction; (3) the levels of natriuretic peptides; (4) relevant structural heart disease or diastolic dysfunction. However, these common approaches have their own limitations. For instance, the symptoms or signs may be non-specific in the early stages of HF [3], and invasive measurements [4, 5] are not suitable for widespread use. The insufficiency of the existing methods prompted us to explore new measures for HF screening.
Nowadays, non-invasive methods are widely explored for the detection of cardiovascular diseases. For instance, Gao et al. [6, 7] utilized elasticity-based and nonlinear state-space approaches to track the motion of the carotid artery wall, which can be used in the status evaluation of atherosclerotic disease. Many studies used electrocardiograph signals for cardiac arrhythmia detection [8, 9]; however, cardiac contractility, whose variation is an important sign of HF [10], may not be reflected by the electrocardiograph. Heart sound (HS), a non-stationary physiological signal produced by the beating of the heart muscle [11], can directly reflect the mechanical dysfunction of myocardial activity. In addition, HS analysis is another non-invasive method. Zheng et al. [12] built a HS-based computer-assisted model for distinguishing HF patients from normal subjects by analyzing the cardiac reserve.
In traditional HS analysis, feature extraction and/or selection is a crucial step, and various features have been used in the HS field, such as the wavelet transform [13], the wavelet packet transform [14], energy entropy [15] and Mel-frequency cepstral coefficients [16]. These features can intuitively reflect the physical meaning of HS in different states. However, three main limitations also exist: (1) feature extraction and/or selection depends largely on professional knowledge in the fields of medicine and signal processing; (2) extraction of hand-crafted features may miss valuable deep features which contain the latent information of HS; (3) some hand-crafted features are ineffective when the sample quality varies greatly [17]. Deep learning methods, as a new field in machine learning, can learn features automatically from the inputs without hand-crafted feature extraction and have become popular in the biomedical field. A convolutional neural network-based transfer learning approach was proposed by Zhang et al. [18] for automatic colorectal cancer diagnosis. Gao et al. [19] proposed a novel deep neural network to learn implicit strain reconstruction from 2D radio-frequency images and assess the condition of disease. However, these models have limited ability to mine features from time-series signals. The improved recurrent neural networks (RNN), including long short-term memory (LSTM) and gated recurrent unit (GRU), can keep the temporal relations of input sequences; therefore, they have been successfully used in sequential data prediction and classification. Yu et al. [20] adopted an LSTM with attention mechanisms to predict in-hospital patient mortality. Vetek et al. [21] applied an LSTM to classify temporal sleep stages using several physiological signals. Similar studies based on EEG were reported by Michielli [22]. Xu et al. [23] reported an LSTM-based architecture for motion-feature extraction from region-of-interest sequences. Although RNN-based networks have been extensively used and have gained resounding success in biomedical sequence processing, they have rarely been applied to HS classification.
To address the above issues, we proposed a novel GRU-based method for HF screening using HS. The contributions of this paper lie in: (1) to our best knowledge, this is the first study to distinguish the normal, HFpEF and HFrEF subjects using HS; (2) without heavy reliance on expert knowledge and any hand-crafted features, the proposed method screens HF utilizing HS signals; (3) the performances show that our method is substantially better than two other deep learning models and one traditional features extraction method. The main framework of this paper is depicted in Fig. 1.
The illustration of the workflow of this paper. The GRU is the proposed model while others are the methods compared
The algorithms of signal preprocessing (resampling, segmentation and normalization), hand-crafted feature extraction and classification with the support vector machine (SVM) were all implemented in Matlab (version R2016b). The deep learning models in this work were implemented using Python (version 3.5.4) with the TensorFlow library (version 1.12.0). The networks were trained on a computer with a 3.7-GHz Intel Core i7-8700K CPU, a GTX 2080Ti GPU with 11 GB of video memory and 64 GB of RAM.
Model setting experiments
The basic settings of the GRU model are determined as follows: Adam is selected as the optimizer and the learning rate is set to 0.001. Softmax cross entropy with logits v2 is chosen as the main loss function. Besides, an L2 norm term is added to the loss function to prevent model overfitting [24]. The weight \(\lambda\) of the L2 term for weight decay is determined by careful experiments and finally set to 0.0001 according to Fig. 2. All the models in this paper are trained with a batch size of 64 for 50 epochs in total.
The test accuracy influenced by the weight \(\lambda\) of L2 loss. When \(\lambda\) is set as 0.0001, the GRU and LSTM both reach the highest accuracy
Considering the experimental results about the number of layers and hidden units/layer, the structures of GRU are finally determined. The number of layers varies in {1,2,3}, and the number of units for per layer ranges in {8,16,32,64,128}. As the experimental results show in Fig. 3a, the overall effect of two layers is better than one layer. When the number of units exceeds 64, the performance of three layers is even worse than that of two layers. Considering the complexity of model and the recognition accuracy comprehensively, the GRU structure finally is chosen as two layers with 64 hidden units/layer. Figure 4 shows the final architecture of the GRU network. Moreover, the structure of LSTM is defined the same with that of GRU. Figure 3b exemplifies the relevant experimental results of LSTM.
The accuracy comparison between the number of layers and the number of hidden units/layer: a GRU; b LSTM
The proposed GRU framework for HF screening. The input of the model is the frame of normalized HS with the length of 960 sampling points. The architecture has two GRU layers with 64 units/layer and a fully connected layer of 3 units (the number of HS categories). The LSTM has the similar framework, but the GRU units are changed to LSTM units
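A minimal Keras sketch of this architecture is given below. It is our reconstruction from the description in the text, assuming a TensorFlow 2.x-style Keras API; the authors used TensorFlow 1.12, and the exact input shaping, regularizer placement and variable names here are assumptions rather than the released code.

import tensorflow as tf

FRAME_LEN = 960                                   # 1.6 s frames at 600 Hz
l2 = tf.keras.regularizers.l2(1e-4)               # weight-decay coefficient lambda

model = tf.keras.Sequential([
    tf.keras.Input(shape=(FRAME_LEN, 1)),         # one amplitude sample per time step
    tf.keras.layers.GRU(64, return_sequences=True, kernel_regularizer=l2),
    tf.keras.layers.GRU(64, kernel_regularizer=l2),
    tf.keras.layers.Dense(3, activation="softmax", kernel_regularizer=l2),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",    # expects one-hot labels
              metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=64, epochs=50, validation_split=0.2)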
Screening performance
To evaluate the robustness and to ensure the repeatability of proposed models, the tenfold cross-validation was used in this work. For each fold, 90% of the HS frames are used for training and the remaining 10% is used to test the performance of our models. To monitor and tune the parameters of training process, 20% frames of the training set are sampled to be used as validation set.
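For concreteness, the evaluation protocol can be sketched as follows. The scikit-learn names and the use of stratified folds are our assumptions; the paper only states the 90/10 fold split and the 20% validation subset.

import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def run_tenfold_cv(frames, labels, train_and_score):
    # train_and_score is a placeholder callback: fit on (x_tr, y_tr),
    # tune on (x_val, y_val), and return the accuracy on (x_te, y_te).
    accuracies = []
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(frames, labels):
        x_tr, x_val, y_tr, y_val = train_test_split(
            frames[train_idx], labels[train_idx], test_size=0.2,
            stratify=labels[train_idx], random_state=0)
        accuracies.append(train_and_score(x_tr, y_tr, x_val, y_val,
                                          frames[test_idx], labels[test_idx]))
    return np.mean(accuracies), np.std(accuracies)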
The performance of tenfold cross-validation for all methods is summarized in Table 1. It can be seen that GRU achieves the best average accuracy of 98.82%, which is 2.53%, 4.17% and 11.2% higher than LSTM, the fully convolutional network (FCN) and SVM, respectively. SVM is the lowest-performing model compared with the three deep learning models. In addition, the performance of the GRU is more stable, as its accuracy deviation is the smallest of the four models, which is depicted in the box-plot in Fig. 5.
Table 1 The tenfold cross-validation results of different models and their average accuracy
The accuracies of different models with box-plot. The mean value ± standard deviation for these models are:\({\text{Acc}}_{\text{GRU}} = 98.82\% \pm 0.46\%\), \({\text{Acc}}_{\text{SVM}} = 87.62\% \pm 1.77\%\), \({\text{Acc}}_{\text{FCN}} = 94.65\% \pm 3.07\%\), \({\text{Acc}}_{\text{LSTM}} = 96.29\% \pm 1.02\%\). Deep features based on GRU model show the highest accuracy on average
Table 2 shows the confusion matrix of GRU with all tenfold testing data. The values of precision in three categories are in the range of 98.7–98.93%, and the values of recall are in the range of 98.31–99.46%. It shows that the proposed GRU model can recognize three classes of HS precisely, in which the accuracy of normal class is recognized best. Figure 6 shows an intuitive normalized confusion matrix.
Table 2 A confusion matrix of HF for GRU across all tenfold testing data
Final normalized confusion matrix of GRU model with all tenfold testing data. The columns of the confusion matrix represent the predicted classes and the rows represent the true classes
The impact of the length of frames on classification results
In this paper, the HS signals were segmented into fixed-length (1.6 s) frames, and the frame length might affect the classification stage. To evaluate the possible effect of frame length on final performance, experiments with a fixed length of 0.8 s (approximately one cycle) frames were explored. The corresponding tenfold cross-validation results using the proposed GRU model are listed in Table 3. The results show that the dataset with 1.6 s frames obtains an average accuracy about 2% higher than with 0.8 s frames. The difference may be caused by the loss of interval features in one-cycle frames, which contribute a lot to the classification stage.
Table 3 Tenfold cross-validation results of GRU model with two types of frame length
The comparison of the methods used in this study
In this paper, four models were used to compare the performance for HF screening. GRU and LSTM models are modified kinds of RNN architectures. Generally, RNN models can achieve better results than the other models used in this study. This is because RNN models can keep the temporal relations of the input time series while the others cannot [24]. The results of tenfold cross-validation show that the GRU model achieves higher performance than the LSTM model in every attempt at HF screening. Moreover, our comparative experiments have shown that deep learning models outperform the SVM in HF screening. As a representative of traditional knowledge-driven methods, the unsatisfactory results of the SVM may be related to the selection of features. Additionally, taking HS signals directly as the input, deep learning models can realize automatic classification without any hand-crafted feature extraction or selection; therefore, our model with fine-tuned parameters can also be applied to other signal processing areas. In sum, the deep learning models achieve higher precision and better performance than the traditional SVM, especially the proposed GRU model.
The comparison of the relevant studies
Over the years, many studies on screening of HFrEF and HFpEF have been conducted. However, most of the studies were based upon biochemical indicators, phenotype and statistical analysis of medical records information. For instance, Savarese et al. [25] used N-terminal pro-B-type natriuretic peptide to distinguish different HF category. These biochemical indicators are useful to diagnose HF and predict prognosis in HF, but they play a very limited role in the early screening of HF. In addition, such invasive diagnostic methods are not suitable for pervasive application. Xanthopoulos et al. [26] proposed a method to classify the HFpEF based on the phenotype of hypertension, which requires researchers to have a wealth of medical knowledge.
HS signals are closely related to cardiovascular diseases and have been widely studied, although the objectives of these studies differed: for example, the identification and classification of HS components [27, 28], the classification of normal and other abnormal HS [29,30,31], and differentiating between physiological and pathological murmurs [32, 33]. However, previously published papers about the classification of HFrEF, HFpEF and normal subjects were few and incomplete. Liu et al. [34] explored the difference between HFpEF and normal subjects, but omitted HFrEF. Zheng et al. [35] reported a HF identification method using HS; however, HFrEF and HFpEF were not explored separately. It can be seen that HF screening covering normal, HFpEF and HFrEF subjects has not been studied sufficiently. Hence, this study could be an efficient complement for HF screening.
The limitations and future work of this study
This study has three limitations. Firstly, owing to the lack of HS databases for HFrEF and HFpEF, experimental tests of the generalization ability of our method on other public databases could not be made. Secondly, an experimental method was used for setting the hyper-parameters of the GRU and LSTM in this study. This approach needs many experimental runs to approximate the optimal values. In future work, other parameter-tuning methods like grid search may be used in our model to improve the efficiency. In addition, the normal HS may be quite different from that of HF patients; in order to better verify the performance of the proposed method, abnormal HS with normal systolic and diastolic function could be considered as the control group in the future.
Early screening of HF can provide a timely guide for treatment. In this paper, GRU-based HS analysis method was proposed to screen HF automatically. Taking HS signals as input, the method eliminates the dependence on hand-crafted feature extraction. To verify the screening accuracy, LSTM, FCN and SVM models were carried out as the comparative experiments. The results show that the performance of GRU model is competitive with the methods compared, especially the traditional method of SVM, and it is promising as an effective method for the non-invasive HF screening. In future, the applicability of the method mentioned in this paper will be validated in other cardiovascular diseases, like cardiac murmurs, valvular disease.
Experimental data description
The HS data used in this paper contain three categories—HFrEF, HFpEF and normal. The HS signals of HF patients were acquired from University-Town Hospital of Chongqing Medical University using the HS acquisition system (Patent No.: CN2013093000306700) with the sampling frequency at 11,025 Hz. HF samples were collected from 42 HFrEF and 66 HFpEF patients, respectively. Moreover, all the patients of HFrEF and HFpEF were diagnosed and confirmed by the cardiologists. All patients signed informed consent forms before participating this study, and this study has been ratified by Ethical Commission Chongqing University. The normal HS was obtained from the PhysioNet/Computing in Cardiology Challenge 2016. It contains nine databases from different research groups, and all recordings in the dataset were resampled to 2000 Hz. The dataset includes 2435 normal HS recordings collected from 1297 healthy subjects. Details of the dataset can be referenced in [36, 37]. In this paper, 1286 recordings were randomly selected as the normal group.
Signal preprocessing
HS preprocessing is an essential part to achieve a good identification performance. In this study, the preprocessing includes three steps introduced as follows.
In general, HS mainly comprises two components: the first HS (S1) and the second HS (S2). S1 is a transient low-frequency acoustic signal, mainly between 10 and 200 Hz, produced by the vibrations of the heart chambers, heart valves and blood during systole. S2 is produced at the end of systole, following the closure of the aortic and pulmonary semilunar valves [27, 38]. S2 has a higher pitch than S1, with its frequency range between 20 and 250 Hz [39]. Since the original sampling frequency may cause high computational cost, all recordings are down-sampled to 600 Hz in accordance with the Nyquist sampling theorem.
S1 marking and segmentation
In order to standardize the input length for the model, one strategy was used in this paper to obtain HS frames. Two main steps are involved in this process: marking the S1 onset and segmenting the HS with a fixed frame length.
Marking S1 onset
Positioning the boundaries of HS components is the critical operation of segmentation. A cardiac period contains four states, namely S1, systole, S2 and diastole. Since S1 is the start of a cardiac cycle, the S1 onset is considered as the boundary of frames.
In this paper, the logistic regression-based hidden semi-Markov model (LR-HSMM) is selected to localize the onset of S1. The LR-HSMM, developed by Springer et al. [40] and verified by Liu et al. [36], is usually treated as the state-of-the-art method for HS segmentation or marking the onset of cycles, and has great robustness in processing noisy recordings. To preserve more details of the HS, the signal de-noising step was skipped in this study. Thanks to the advantages of the LR-HSMM, the onset of S1 can be located accurately, as shown with the dotted lines in Fig. 7.
Segmentation HS with fixed frame length
The mechanical activity of the heart is captured in one cardiac period [41]. Moreover, the interval features may vary between cycles. In view of these two factors, period-synchronous segmentation with a fixed frame length was applied in this study. The duration of a cardiac cycle is about 0.6–0.8 s, thus the frame length is fixed at 1.6 s, which includes approximately two cardiac cycles. As depicted in Fig. 7a, we segmented the frames with an interval of one cardiac cycle. Whenever the frame length exceeds two periods, overlap is inherent, which is exemplified in Fig. 7b. A total of 23,120 HS frames were segmented, of which 7670, 7710 and 7740 frames correspond to HFrEF, HFpEF and normal, respectively.
Automatic S1 onset marking using LR-HSMM and period synchronous segmentation into 1.6 s frames. The dotted lines are the S1 onset and the red lines are the end boundaries of frames: a is without overlap; b is with overlap
Normalization is necessary to eliminate the difference of HS amplitude caused by the differences of acquisition locations and individual variation of subjects [15, 16]. All frames used in this paper were normalized by the following formula:
$$X = \frac{x - x_{\min}}{x_{\max} - x_{\min}}.$$
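Putting the framing and normalization steps together, a simple NumPy sketch could look like the following. Function and variable names are ours, and the S1 onsets are assumed to come from the LR-HSMM step described above; this is an illustration of the described procedure, not the authors' Matlab code.

import numpy as np

FS = 600                            # sampling rate after down-sampling, Hz
FRAME_LEN = int(1.6 * FS)           # 960 samples, roughly two cardiac cycles

def extract_normalized_frames(hs_signal, s1_onsets):
    frames = []
    for onset in s1_onsets:                              # one frame per detected S1 onset
        if onset + FRAME_LEN <= len(hs_signal):
            frame = hs_signal[onset:onset + FRAME_LEN].astype(float)
            # Min-max normalization, Eq. (1)
            frame = (frame - frame.min()) / (frame.max() - frame.min())
            frames.append(frame)
    return np.array(frames)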
RNN-based structures
RNN models, including LSTM and GRU, were used in this work to learn deep features from HS. In this part, some detailed information about the RNN, LSTM and GRU are described as follows.
Generally, neural networks assume that inputs and outputs are independent from each other, while in reality many dependencies exist between outputs and previous inputs. Different from other deep learning models, an RNN is a network with memory capabilities that can be used to process time-sequence data. The hidden state \(h^{(t)}\) depends on both the previous hidden output \(h^{(t - 1)}\) and the current input \(x^{(t)}\). It can be expressed as:
$$h^{(t)} = f(Ux^{(t)} + Wh^{(t - 1)} + b),$$
where \(U\), \(W\) and \(b\) represent the input weight, hidden unit weight and bias, respectively. RNN networks can theoretically mine information from arbitrarily long sequences, but they are limited to just a few steps in practice. For engineering applications, LSTM and GRU, the improved RNN networks, are widely used.
As an advanced version of the general RNN, LSTM was first proposed by Hochreiter and Schmidhuber [42] and improved by Graves [43]. It addresses the problem of exploding or vanishing gradients caused by recursion under long-term temporal correlation conditions.
The architecture of LSTM contains a cluster of cyclically connected memory cells, and each LSTM unit is equipped with an input gate, a forget gate and an output gate. These gates control the manner in which internal states are retained or discarded. The structure of the LSTM unit is shown in Fig. 8a. The equations of the LSTM cell from inputs to outputs are specified as follows:
$$g^{(t)} = \sigma (b_{g} + U_{g} x^{(t)} + W_{g} h^{(t - 1)} ),$$
$$f^{(t)} = \sigma (b_{f} + U_{f} x^{(t)} + W_{f} h^{(t - 1)} ),$$
$$o^{(t)} = \sigma (b_{o} + U_{o} x^{(t)} + W_{o} h^{(t - 1)} ),$$
$$s^{(t)} = f^{(t)} s^{(t - 1)} + g^{(t)} \sigma (b + Ux^{(t)} + Wh^{(t - 1)} ),$$
$$h^{(t)} = \tanh (s^{(t)} )o^{(t)} ,$$
Structures of LSTM unit and GRU unit: a is the structure of LSTM unit, including three gates: input gate, forget gate and output gate; b is the structure of GRU unit, which is equipped with the reset gate and update gate
where \(\sigma\) represents the sigmoid function, which keeps the gate values between 0 and 1, and \(g^{(t)}\), \(f^{(t)}\), \(o^{(t)}\), \(s^{(t)}\) indicate the external input gate, forget gate, output gate and cell state unit, respectively. \(b\), \(U\) and \(W\) denote the biases, input weights and recurrent weights, respectively.
Behind the LSTM layers, a fully connected layer with a softmax function is applied for classification. The softmax function is as follows:
$${\text{softmax}}(x_{i} ) = \frac{{{ \exp }(x_{i} )}}{{\sum\nolimits_{i} {{ \exp }(x_{i} )} }},$$
where \(x_{i}\) is the output of former layer.
GRU, a special variant of the LSTM network, was proposed by Cho et al. [44] in 2014. The structure of the GRU is simplified from the LSTM, with two gates but no separate memory cell. A single update gate \(z^{(t)}\), which replaces the input gate and the forget gate of the LSTM, is used to estimate the current output state. Furthermore, the reset gate \(r^{(t)}\) is introduced to directly control the influence of the previous hidden state on the candidate state. The update gate and reset gate are described as below:
$$z^{(t)} = \sigma (b_{z} + U_{z} x^{(t)} + W_{z} h^{(t - 1)} ),$$
$$r^{(t)} = \sigma (b_{r} + U_{r} x^{(t)} + W_{r} h^{(t - 1)} ),$$
and the state of the hidden layer \(h^{(t)}\) is computed as below:
$$h^{(t)} = z^{(t)} h^{(t - 1)} + (1 - z^{(t)} )\tilde{h}^{(t)} ,$$
where \(\tilde{h}^{(t)} = \tanh (b_{h} + U_{h} x^{(t)} + W_{h} r^{(t)} h^{(t - 1)} )\), \(U\), \(W\) are the weight matrices of different gate referring to the subscripts, and \(b\) represents the bias. Figure 8b gives the structure of GRU unit.
Output states of GRU are calculated using a softmax function (Eq. (8)), which is the same with LSTM.
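To make Eqs. (9)–(11) concrete, a single GRU time step can be written out directly in NumPy. This is a didactic sketch of the equations above, not the TensorFlow kernel used in the experiments, and the parameter names are ours.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, U_z, W_z, b_z, U_r, W_r, b_r, U_h, W_h, b_h):
    z_t = sigmoid(b_z + U_z @ x_t + W_z @ h_prev)         # update gate, Eq. (9)
    r_t = sigmoid(b_r + U_r @ x_t + W_r @ h_prev)         # reset gate, Eq. (10)
    h_tilde = np.tanh(b_h + U_h @ x_t + W_h @ (r_t * h_prev))
    h_t = z_t * h_prev + (1.0 - z_t) * h_tilde            # new hidden state, Eq. (11)
    return h_t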
Methods compared
FCN: An FCN with a softmax output layer has been used for time series classification [45]. The model comprises three convolutional blocks with filter sizes of 128, 256 and 128 and kernel sizes of 8, 5 and 3, respectively. Every block is followed by a batch normalization layer and a ReLU layer. Then a global average pooling layer is added before the softmax layer to reduce the number of weights. The model is trained for 50 epochs with a batch size of 64 and a learning rate of 0.001.
SVM: A one-versus-one SVM classifier with radial basis function kernel is adopted. Grid search method is used for parameters tuning. Following Ref. [46], we extracted multiple-type features from HS of HFrEF, HFpEF and normal. Three features with P-value less than 0.001 in Tamhane's T2 one-way ANOVA are chosen as the feature vector for SVM. To ensure the compactness of this paper, the hand-crafted feature selection and analysis are presented in the "Appendix" at the end of the paper.
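A scikit-learn equivalent of this comparison classifier might look as follows; the search grid and the library choice are our assumptions, since the paper implemented the SVM in Matlab.

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
svm = GridSearchCV(
    SVC(kernel="rbf", decision_function_shape="ovo"),   # one-versus-one RBF SVM
    param_grid, cv=10)
# svm.fit(features, labels)   # features: (n_frames, 3) matrix of WPEE, WPSE, SPSE1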
LSTM: A structure with two layers and 64 hidden units/layer is adopted. The details are explained in the results.
GRU: Proposed method.
The normal HS database is available on PhysioNet. (https://www.physionet.org/physiobank/database/challenge/2016/). The HFrEF and HFpEF databases are not publicly available due to the interest of National Natural Science Foundation of China.
HF:
heart failure
HS:
heart sound
GRU:
gated recurrent unit
LVEF:
left ventricular ejection fraction
HFrEF:
heart failure with reduced ejection fraction
HFpEF:
heart failure with preserved ejection fraction
SVM:
support vector machine
RNN:
recurrent neural network
LSTM:
long short-term memory
FCN:
fully convolutional network
LR-HSMM:
logistic regression-based hidden semi-Markov model
Xu L, Huang X, Ma J, Huang J, Fan Y, Li H, et al. Value of three-dimensional strain parameters for predicting left ventricular remodeling after ST-elevation myocardial infarction. Int J Cardiovasc Imaging. 2017;33:663–73.
Ford I, Robertson M, Komajda M, Böhm M, Borer JS, Tavazzi L, et al. Top ten risk factors for morbidity and mortality in patients with chronic systolic heart failure and elevated heart rate: the SHIFT Risk Model. Int J Cardiol. 2015;184:163–9.
McMurray JJV, Adamopoulos S, Anker SD, Auricchio A, Böhm M, Dickstein K, et al. ESC Guidelines for the diagnosis and treatment of acute and chronic heart failure 2012. Eur Heart J. 2012;33:1787–847.
Nair N, Gupta S, Collier IX, Gongora E, Vijayaraghavan K. Can microRNAs emerge as biomarkers in distinguishing HFpEF versus HFrEF ? Int J Cardiol. 2014;175:395–9.
Faxén UL, Hage C, Benson L, Zabarovskaja S, Andreasson A, Donal E. HFpEF and HFrEF display different phenotypes as assessed by IGF-1 and IGFBP-1. J Card Fail. 2017;23:293–303.
Gao Z, Li Y, Sun Y, Yang J, Xiong H, Zhang H, et al. Motion tracking of the carotid artery wall from ultrasound image sequences: a nonlinear state-space approach. IEEE Trans Med Imaging. 2018;37:273–83.
Gao Z, Xiong H, Liu X, Zhang H, Ghista D, Wu W, et al. Robust estimation of carotid artery wall motion using the elasticity-based state-space approach. Med Image Anal. 2017;37:1–21.
Yıldırım Ö, Pławiak P, Tan RS, Acharya UR. Arrhythmia detection using deep convolutional neural network with long duration ECG signals. Comput Biol Med. 2018;102:411–20.
Acharya UR, Fujita H, Lih OS, Hagiwara Y, Tan JH, Adam M. Automated detection of arrhythmias using different intervals of tachycardia ECG segments with convolutional neural network. Inf Sci. 2017;405:81–90.
Mabote T, Wong K, Cleland JG. The utility of novel non-invasive technologies for remote hemodynamic monitoring in chronic heart failure. Expert Rev Cardiovasc Ther. 2014;12:923–8.
Hofmann S, Groß V, Dominik A. Recognition of abnormalities in phonocardiograms for computer-assisted diagnosis of heart failures. In: 2016 computing in cardiology conference (CinC), Vancouver, BC, Canada, 11–14 September 2016, vol. 43, p. 561–4. https://doi.org/10.22489/CinC.2016.161-187.
Zheng Y, Guo X, Qin J, Xiao S. Computer-assisted diagnosis for chronic heart failure by the analysis of their cardiac reserve and heart sound characteristics. Comput Methods Programs Biomed. 2015;122:372–83.
Eslamizadeh G, Barati R. Heart murmur detection based on wavelet transformation and a synergy between artificial neural network and modified neighbor annealing methods. Artif Intell Med. 2017;78:23–40.
Safara F, Doraisamy S, Azman A, Jantan A, Ramaiah ARA. Multi-level basis selection of wavelet packet decomposition tree for heart sound classification. Comput Biol Med. 2013;43:1407–14.
Zheng Y, Guo X, Ding X. A novel hybrid energy fraction and entropy-based approach for systolic heart murmurs identification. Expert Syst Appl. 2015;42:2710–21.
Chauhan S, Wang P, Lim CS, Anantharaman V. A computer-aided MFCC-based HMM system for automatic auscultation. Comput Biol Med. 2008;38:221–33.
Gao Z, Chung J, Abdelrazek M, Leung S, Hau WK. Privileged Modality Distillation for Vessel Border Detection in Intracoronary Imaging. IEEE Trans Med Imaging. 2019. https://doi.org/10.1109/TMI.2019.2952939.
Zhang R, Zheng Y, Mak TWC, Yu R, Wong SH, Lau JYW, et al. Automatic detection and classification of colorectal polyps by transferring low-level CNN features from nonmedical domain. IEEE J Biomed Health Inform. 2017;21:41–7.
Gao Z, Wu S, Liu Z, Luo J, Zhang H, Gong M, et al. Learning the implicit strain reconstruction in ultrasound elastography using privileged information. Med Image Anal. 2019;58:101534.
Yu R, Zheng Y, Zhang R, Jiang Y, Poon CCY. Using a multi-task recurrent neural network with attention mechanisms to predict hospital mortality of patients. IEEE J Biomed Health Inform. 2019. https://doi.org/10.1109/JBHI.2019.2916667.
Vetek A, Muller K, Lindholm H. A compact deep learning network for temporal sleep stage classification. In: 2018 IEEE life sciences conference (LSC). 2018. p. 114–7. https://doi.org/10.1109/lsc.2018.8572286.
Michielli N, Acharya UR, Molinari F. Cascaded LSTM recurrent neural network for automated sleep stage classification using single-channel EEG signals. Comput Biol Med. 2019;106:71–81.
Xu C, Xu L, Gao Z, Zhao S, Zhang H, Zhang Y, et al. Direct delineation of myocardial infarction without contrast agents using a joint motion feature learning architecture. Med Image Anal. 2018;50:82–94.
Zhao Y, Yang R, Chevalier G, Xu X, Zhang Z. Deep residual Bidir-LSTM for human activity recognition using wearable sensors. Math Probl Eng. 2018;2018:7316954.
Savarese G, Orsini N, Hage C, Vedin O, Cosentino F, Rosano GMC, et al. Utilizing NT-proBNP for eligibility and enrichment in trials in HFpEF, HFmrEF, and HFrEF. JACC Heart Fail. 2018;6:246–56.
Xanthopoulos A, Triposkiadis F, Starling RC. Heart failure with preserved ejection fraction: classification based upon phenotype is essential for diagnosis and treatment. Trends Cardiovasc Med. 2018;28:392–400.
Amit G, Gavriely N, Intrator N. Cluster analysis and classification of heart sounds. Biomed Signal Process Control. 2009;4:26–36.
Giordano N, Knaflitz M. A novel method for measuring the timing of heart sound components through digital phonocardiography. Sensors. 2019;19:1868.
Ren Z, Cummins N, Pandit V, Han J, Qian K, Schuller B. Learning image-based representations for heart sound classification. In: The 2018 international conference on digital health. 2018. p. 143–7. https://doi.org/10.1145/3194658.3194671.
Boutana D, Djeddi M, Benidir M. Identification of aortic stenosis and mitral regurgitation by heart sound segmentation on time-frequency domain. In: 5th international symposium on image and signal processing and analysis. 2007. p. 1–6. https://doi.org/10.1109/ispa.2007.4383654.
Beritelli F, Capizzi G, Lo Sciuto G, Napoli C, Scaglione F. Automatic heart activity diagnosis based on Gram polynomials and probabilistic neural networks. Biomed Eng Lett. 2018;8:77–85.
Jiang Z, Choi S, Wang H. A new approach on heart murmurs classification with SVM technique. In: 2007 international symposium on information technology convergence. 2007. p. 240–4. https://doi.org/10.1109/isitc.2007.40.
Sanei S, Ghodsi M, Hassani H. An adaptive singular spectrum analysis approach to murmur detection from heart sounds. Med Eng Phys. 2011;33:362–7.
Liu Y, Guo X, Zheng Y. An automatic approach using ELM classifier for HFpEF identification based on heart sound characteristics. J Med Syst. 2019;43:285.
Zheng Y, Guo X. Identification of chronic heart failure using linear and nonlinear analysis of heart sound. In: 2017 39th annual international conference of the IEEE engineering in medicine and biology society (EMBC). 2017. p. 4586–9. https://doi.org/10.1109/embc.2017.8037877.
Liu C, Springer D, Li Q, Moody B, Juan RA, Chorro FJ, et al. An open access database for the evaluation of heart sound algorithms. Physiol Meas. 2016;37:2181–213.
Clifford GD, Liu C, Moody B, Springer D, Silva I, Li Q, et al. Classification of normal/abnormal heart sound recordings: the PhysioNet/computing in cardiology challenge 2016. In: 2016 computing in cardiology conference (CinC), Vancouver, BC, Canada, 11-14 September 2016, vol. 43, p. 609–12. https://doi.org/10.22489/CinC.2016.179-154.
Tang H, Chen H, Li T. Discrimination of aortic and pulmonary components from the second heart sound using respiratory modulation and measurement of respiratory split. Appl Sci. 2017;7:690.
Dwivedi AK, Imtiaz SA, Rodriguez-Villegas E. Algorithms for automatic analysis and classification of heart sounds—a systematic review. IEEE Access. 2019;7:8316–45.
Springer DB, Tarassenko L, Clifford GD. Support vector machine hidden semi-Markov model-based heart sound segmentation. In: 2014 computing in cardiology conference, Cambridge, MA, USA, 7-10 September 2014, vol. 41, p. 625–8. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043120&isnumber=7042955.
Deng SW, Han JQ. Towards heart sound classification without segmentation via autocorrelation feature and diffusion maps. Future Gener Comput Syst. 2016;60:13–21.
Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9:1735–80.
Graves A. Generating sequences with recurrent neural networks. Comput Sci. 2013. http://arxiv.org/abs/1308.0850.
Chung J, Gulcehre C, Cho K, Bengio Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. Eprint Arxiv. 2014. http://arxiv.org/abs/1412.3555v1.
Wang Z, Yan W, Oates T. Time series classification from scratch with deep neural networks: a strong baseline. In: Proc Int Jt Conf Neural Networks. 2017. p. 1578–85.
Li H, Guo X, Zheng Y. An automatic approach of heart failure staging based on heart sound wavelet packet entropy. J Mech Med Biol (accepted).
The authors would like to thank National Natural Science Foundation of China for financial support, and the physicians of University-Town Hospital of Chongqing Medical University for professional instructions.
This research was funded by National Natural Science Foundation of China, Grant numbers 31570003, 31870980 and 31800823.
Key Laboratory of Biorheology Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, 400044, China
Shan Gao
& Xingming Guo
Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
Yineng Zheng
SG, YZ and XG collected the experimental data, reviewed literatures and discussed the method for this study. SG performed the experiments and drafted the manuscript. YZ and XG reviewed and edited the writing. All authors SG, YZ and XG finalized the manuscript for submission. All authors read and approved the final manuscript.
Correspondence to Xingming Guo.
All patients signed informed consent forms before participating in this study, and this study has been ratified by Ethical Commission Chongqing University.
The hand-crafted features we extracted include wavelet packet energy entropy (WPEE), wavelet packet singular entropy (WPSE), sample entropy (SE) and eight components of sub-band power spectral entropy (SPSE), respectively. For the detailed description of these features, refer to [46]. Tamhane's T2 one-way ANOVA is adopted for multiple comparisons, which is a reliable pairwise comparison based on independent sample T-test. The P values of extracted features are presented in Table 4, and the P-values of WPEE, WPSE and SPSE1 are less than 0.001, indicating that these three features are significantly different among three categories. The SE has the difference between normal and HF groups, but no difference in HF groups. The rest of the features almost have no differences. Therefore, WPEE, WPSE and SPSE1 are finally chosen as the feature vector for SVM.
Table 4 The P-values of Tamhane's T2 one-way ANOVA
Figure 9 shows the qualitative results of WPEE, WPSE, SPSE1 and SE using box-plots. The values of WPEE, WPSE, SPSE1 keep the same trends among the three groups, i.e., the normal group is the lowest, while HFrEF group is the highest. These trends indicate the myocardial contractility changes in cardiac energy and information complexity during the development of HF.
The statistical results for three categories with box-plots. The red dots represent the means, and the midlines in the boxes represents the medians: a WPEE, b WPSE, c SE and d SPSE1
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Gao, S., Zheng, Y. & Guo, X. Gated recurrent unit-based heart sound analysis for heart failure screening. BioMed Eng OnLine 19, 3 (2020). https://doi.org/10.1186/s12938-020-0747-x
Accepted: 06 January 2020
DOI: https://doi.org/10.1186/s12938-020-0747-x
Heart failure screening
Analytic function
context $\mathcal O\subset \mathbb C$
definiendum $f\in \mathrm{it}$
inclusion $f:\mathcal O\to\mathbb C$
for all $c$ … series in $\mathbb C$
todo, roughly
$\exists c.\ \forall z.\ f(z)=\sum_{n=-\infty}^\infty c_n\,z^n$
Picture a continuous function $f:\mathbb R^2\to\mathbb R$ as a surface given by $f(x,y)$ and imagine drawing a circle of radius $1$ around the origin; over it we erect the surface parametrized by $\langle \cos\theta,\sin\theta,h\rangle$ where $\theta\in[0,2\pi)$ and $h\in[0,f(\cos\theta,\sin\theta))$. See the picture below. It looks like a cylinder cut off at height $f(\cos\theta,\sin\theta)$. Let's call it a "fence". What is the surface area of that fence? Clearly, it's given by the integral $\int_0^{2\pi}\mathrm{d}\theta$ of $f(\cos\theta,\sin\theta)$. And so the average height (if we count negative height as negative contributions) of the fence is
$\frac{1}{2\pi}\int_0^{2\pi}f(\cos\theta,\sin\theta)\,\mathrm{d}\theta$
For example, a parabola $x^2+y^2$ has average height of $1$ and a tilted plane like $7x+3y$ always has average height of $0$.
As a remark, we can trivially extend the definition to compute the fence height of a fence with radius $R$ at a point $p=\langle p_x,p_y\rangle$ by shifting and scaling the circle:
$\frac{1}{2\pi}\int_0^{2\pi}f(R\cos\theta+p_x,R\sin\theta+p_y)\,\mathrm{d}\theta$
A complex function $f(z)$ consists of two real functions, so its fence height is just given by the sum of the fence heights of $\mathrm{Re}\,f(z)$ and $i\,\mathrm{Im}\,f(z)$. Let's consider powers of $z=r\,\cos\theta+i\,r\,\sin\theta$. We have
$z^n=r^n\,\mathrm{e}^{i\,n\,\theta}=r^n\,\cos(n\,\theta)+i\,r^n\,\sin(n\,\theta)$
Both real and imaginary part oscillate along $\theta$. So from the plots below alone it is obvious that the average fence height for $n\neq 0$ must be zero
$n\neq 0\implies\frac{1}{2\pi}\int_0^{2\pi}z^n\mathrm{d}\theta=0$
and if $n=0$, then it's clearly $1$.
Note that we can compute the real and the imaginary fence height at once. The circle is parametrized by $z:=\mathrm{e}^{i\,\theta}$ and so we can express the infinitesimal length in $\mathbb R^2$ as $\mathrm{d}\theta=\frac{1}{i}\frac{\mathrm{d}z}{z}$. The factor $\frac{1}{z}$ corrects the orientation of $\mathrm{d}z$, it cancels out the complex mixing of components introduced by walking along the complex plane. The fence height of $z\cdot f(z)$ is called the residue and equals $\frac{1}{2\pi\,i}\oint_\gamma f(z)\ \mathrm d z$. In this language, the picture saying that only the fence height of the constant function $z^0$ isn't zero is the message that $\frac{1}{z}$ is the special function with non-vanishing line integral.
Now consider an analytic function, i.e. a function which can be written by a countable series with coefficients $c_n\equiv a_n+i\,b_n$
$f(z)=\sum_{n=-\infty}^\infty c_nz^n=\sum_{n=-\infty}^\infty \left(a_n+i\,b_n\right)r^n\,\mathrm{e}^{i\,n\,\theta}$
Explicitly separating real and imaginary parts, this reads
$\mathrm{Re}\,f(z)=\sum_{n=-\infty}^\infty \left(a_n\cos(n\,\theta)-b_n\sin(n\,\theta)\right)r^n$
$\mathrm{Im}\,f(z)=\sum_{n=-\infty}^\infty \left(b_n\cos(n\,\theta)+a_n\sin(n\,\theta)\right)r^n$
Now it's clear that analyticity is a strong restriction: the real and imaginary parts are just two real functions built from terms that, for every $n\neq 0$, oscillate along $\theta$ and hence all have fence height zero. This implies that $f$'s coefficient $c_0$ is already the fence height of all of $f$. In fact, it implies that $f(z)=\sum_{n=-\infty}^\infty c_nz^n$ itself is determined by the fence heights of the functions $z^k\,f(z)$! So for analytic functions we have $c_n = \frac{1}{2\pi\, i} \oint_\gamma \frac{f(z)}{z^n}\, \frac{\mathrm dz}{z}$, or by shifting the fences to $p$ we get
Cauchy's integral formula
$\frac{1}{n!}f^{(n)}(p) = \frac{1}{2\pi\, i} \oint_\gamma \frac{f(z)}{(z-p)^{n+1}}\, \mathrm dz$
Roughly, the Laplace transform uses this for a re-encoding of a function $f:\mathbb R^+\to\mathbb R$ with Taylor expansion $f(t)=\sum_{n=0}^\infty a_n t^n$, namely by mapping $t^n$ to $n!\,s^{-n}\cdot \frac{1}{s}$.
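The coefficient formula above is just as easy to test. The following sketch (again assuming NumPy; the test function $f(z)=\mathrm{e}^z$ with $c_n=1/n!$ is only an illustrative choice) recovers the Taylor coefficients from the contour integral over the unit circle, where $\mathrm{d}z=i\,z\,\mathrm{d}\theta$:

```python
import math
import numpy as np

# Recover the Taylor coefficients of f(z) = exp(z) from the contour integral
# c_n = (1/2πi) ∮ f(z) / z^(n+1) dz over the unit circle.
theta = np.linspace(0.0, 2.0 * np.pi, 20_000, endpoint=False)
z = np.exp(1j * theta)

def contour_coefficient(f, n):
    integrand = f(z) / z ** (n + 1) * (1j * z)   # f(z)/z^(n+1) · dz/dθ
    integral = np.mean(integrand) * 2.0 * np.pi  # ∫_0^{2π} … dθ
    return integral / (2.0j * np.pi)

for n in range(6):
    approx = contour_coefficient(np.exp, n)
    exact = 1.0 / math.factorial(n)
    print(f"c_{n}: contour ≈ {approx.real:+.6f}, exact 1/{n}! = {exact:+.6f}")
```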
Wikipedia: Analyticity of holomorphic functions
Holomorphic function
Infinite sum of complex numbers
Improvements of the human condition
Graphxioms
analytic_function.txt · Last modified: 2018/03/10 19:52 by nikolaj | CommonCrawl |
Astronomy Stack Exchange is a question and answer site for astronomers and astrophysicists. It only takes a minute to sign up.
Amount of energy of the Big Bang
What is the currently accepted estimated range of the amount of energy of the Big Bang event?
In joules at some estimated size, so a temperature may be calculated.
For context, I wonder if the temperature of the Big Bang would make its heat radiation wavelength shorter than the Planck length.
big-bang-theory high-energy-astrophysics
Bohemian
Let's start by making some points clear:
1. We don't know what the Big Bang was.
Rather, we know that the Universe is expanding. If you extrapolate backwards, you'd expect the Universe to be denser and denser. More specifically, we talk about this as a change in the scale factor $a$, and this gets smaller and smaller as we look further back in time. According to general relativity (our modern theory of gravity), 13.8 billion years ago, $a$ should have been $0$; however, you can't have a metric with $a = 0$.
Thus, we know that general relativity is necessarily incomplete. It breaks down at the conditions of the early universe, so we currently have no physical model to explain that time. Rather, we know that the early universe expanded, and the Big Bang is the time that perplexes cosmologists. Some theories, like quantum gravity, have emerged in an effort to explain the Big Bang; however, we currently have little understanding of what it actually was.
So no, we can't tell you what the energy output of the event was, since we don't know what actually happened.
2. The temperature of the early Universe was high
Our theories break down at the Planck epoch of the Universe. The Planck epoch was the earliest epoch of the Universe and lasted until $10^{-42}$ seconds after the Big Bang, roughly 20 Planck times (the Planck time being the shortest meaningful measurement of time).
During this epoch, the entire Universe was at $1.417×10^{32} \; \mathrm{K}$, which is the Planck temperature. This is the hottest possible temperature; an object at this temperature will emit photons with wavelengths of a Planck length (you can read more about this in my answer here). The point is that there is no meaningful distance smaller than a Planck length, so the Universe couldn't be hotter than the Planck temperature.
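As a rough back-of-the-envelope sketch (not from the answer itself; it applies Wien's displacement law naively, far outside the regime where it can actually be trusted), the thermal peak wavelength at the Planck temperature comes out at roughly the Planck length, not below it:

```python
# Naive Wien's-law estimate: peak black-body wavelength at the Planck temperature,
# compared with the Planck length. Constants are rounded standard values.
WIEN_B = 2.898e-3              # Wien displacement constant, m·K
PLANCK_TEMPERATURE = 1.417e32  # K
PLANCK_LENGTH = 1.616e-35      # m

peak_wavelength = WIEN_B / PLANCK_TEMPERATURE  # λ_max = b / T
print(f"Wien peak wavelength: {peak_wavelength:.3e} m")
print(f"Planck length:        {PLANCK_LENGTH:.3e} m")
print(f"ratio λ_max / l_P ≈ {peak_wavelength / PLANCK_LENGTH:.2f}")  # ≈ 1.3
```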
Sir Cumference
$\begingroup$ +1 for answering the question, but don't forget that theories break down, and they are only theories. We don't actually know that the temperature of the very early universe was high. $\endgroup$
– John Duffield
$\begingroup$ @JohnDuffield The WMAP observations provide strong evidence that the temperature of the early universe was high, ruling out a cold Big Bang. See Komatsu et al. (2010). $\endgroup$
– Sir Cumference
$\begingroup$ @JohnDuffield Well, we don't really know what the Big Bang was, so I'm a bit confused as to how you derived that. $\endgroup$
$\begingroup$ @SirCumference total size of the universe would be something like the mass of the matter plus the energy, though I am sure there are better ways to say that. I did not suggest you would equate as you've stated, but I think it's reasonable to expect a larger universe to have a larger energy big bang, no? I still think you need a 3rd uncertainty item for the size of the universe, of which we have no upper limit. $\endgroup$
– uhoh
$\begingroup$ @uhoh If we're talking about total energy of the universe, then yes. But the energy density of our universe at the Big Bang rises to infinity for any size of the universe. $\endgroup$
Thanks for contributing an answer to Astronomy Stack Exchange!
Not the answer you're looking for? Browse other questions tagged big-bang-theory high-energy-astrophysics or ask your own question.
Is there a limit to how hot an object an get?
The reason behind Big Bang
Big Bang / Big Crunch cycle?
The big bang and our expanding universe
BIg Bang Happened everywhere
What powered the Big Bang? | CommonCrawl |
Conference/Symposium | March 1 – 3, 2019 every day | Jodo Shinshu Center
Location: 2140 Durant Avenue, Berkeley, CA 94704
Sponsors: Center for Japanese Studies (CJS), Center for Buddhist Studies, Otani University, Ryukoku University, BCA Center for Buddhist Education, Institute of Buddhist Studies, Shinshu Center of America
The Centers for Japanese Studies and Buddhist Studies at the University of California, Berkeley, together with Ōtani University and Ryūkoku University in Kyoto announce a workshop under the supervision of Mark Blum that will focus on critically examining premodern and modern hermeneutics of the Tannishō, a core text of the Shin sect of Buddhism, and arguably the most well-read... More >
Conference/Symposium | March 1 | 9 a.m.-6:30 p.m. | Stephens Hall, Geballe Room, Townsend Center
Sponsor: Arts Research Center
Geballe Room, Townsend Center for the Humanities
Featuring Victor Albarracin, Neda Atanasoski, Natalia Brizuela, Tarek Elhaik, Adriana Johnson, Koyo Kouoh, Anneka Lenssen, Leigh Raiford, Kriss Ravetto, Poulomi Saha, and Kalindi Vora.
Conference/Symposium | March 1 – 2, 2019 every day | 10 a.m.-4:30 p.m. | 3335 Dwinelle Hall
Sponsors: Berkeley Center for the Study of Religion, Department of German
This is a multi-day, interdisciplinary workshop. Presentations on Friday, March 1st will run from 10:00am-4:30pm, and from 10:00am-2:00pm on Saturday, March 2nd.
A genealogy of the historical forms of imagination or of attentiveness in literature and the other arts traces these forms back to epistemological realms that predate aesthetic experience: to the medieval formation of the soul, to... More >
Dealing with Infinity: Art and the transformations of the symbolic order
Workshop | March 1 – 2, 2019 every day | 10 a.m.-4:30 p.m. | 3335 Dwinelle Hall
Panelist/Discussants: Niklaus Largier, Professor of German and Comparative Literature, Berkeley Center for the Study of Religion; David Marno, Associate Professor of English, Berkeley Center for the Study of Religion
Sponsor: Berkeley Center for the Study of Religion
This is a multi-day, interdisciplinary workshop. Presentations on Friday, March 1st will run from 10:00am-4:30pm, and from 10:00am-2:00pm on Saturday, March 2nd.
Seminar | March 1 | 12-1:30 p.m. | Sutardja Dai Hall, Banatao Auditorium
Featured Speaker: Anne Case, Princeton
Sponsor: Haas Institute for a Fair and Inclusive Society and the Berkeley Opportunity lab
Seminar | March 1 | 12-1:30 p.m. | 311 Wellman Hall
Featured Speaker: Aviv Nevo, Northwestern University
Sponsor: Department of Economics
joint with ARE Friday Seminar Series
Seminar | March 1 | 12:10-1:30 p.m. | 311 Wellman Hall
Speaker/Performer: Aviv Nevo, University of Pennsylvania-Wharton
Sponsor: Agricultural & Resource Economics
The growth of the Internet has constrained broadband networks, forcing service providers to search for solutions. We develop a dynamic model of daily usage during peak and non-peak periods, and estimate consumers' price and congestion sensitivity using high frequency usage data. Using the model estimates, we calculate usage changes associated with different economic and technological solutions... More >
Sovereign Bodies: Fighting Gender-Based and Sexual Violence Against Indigenous People
Panel Discussion | March 1 | 12:50-2 p.m. | Simon Hall, Goldberg Room
Speakers/Performers: Annita Lucchesi, Sovereign Bodies Institute; Valentin Sierra, Sovereign Bodies Institute; Cheyenne Tex, Sovereign Bodies Institute
Sponsor: Human Rights Center
Sovereign Bodies Institute (SBI), founded in 2019, builds on indigenous traditions using research and data sharing to fight gender and sexual violence against indigenous people. Their projects include the Missing and Murdered Indigenous Women (MMIW) database, Uniting Against Femicide and Supporting Indigenous Survivors of Campus Sexual Violence (conducted in part at UC Berkeley).
RSVP info: RSVP online
Solid State Technology and Devices Seminar: Engineering of LiNbO3 films for next generation acoustic and energy harvesting applications
Seminar | March 1 | 1-2 p.m. | 521 Cory Hall
Speaker/Performer: Ausrine Bartasyte, FEMTO-ST Institute, University of Franche-Comté, France
Sponsor: Electrical Engineering and Computer Sciences (EECS)
The next generation of high –frequency wide-band RF filters or frequency-agile filters are urgently needed for the development of 5G infrastructures/networks/communications. Today, LiNbO3 and LiTaO3 single crystals are key materials in electro-optics and RF acoustic filters. This motivates further development of acoustic wave devices based on highly electromechanically coupled LiNbO3 thin films,... More >
Panel Discussion | March 1 | 1:30-3:30 p.m. | 101 2251 College (Archaeological Research Facility)
Sponsor: Archaeological Research Facility
Please join us to learn about opportunities for archaeologists in cultural resources management. This event will feature brief presentations, a discussion on the state of consulting, and a chance to speak with representatives from six local CRM firms.
New Directions in Himalayan Studies: A Joint UC Berkeley-CNRS Workshop
Conference/Symposium | March 1 | 1:30-6:45 p.m. | 370 Dwinelle Hall | Note change in time
Workshop Co-convenor: Alexander von Rospatt, Professor, Buddhist and South Asian Studies; Acting Chair, South and Southeast Asian Studies; and Director, Himalayan Studies Initiative
Workshop Co-convenor: Stéphane Gros, ISAS Visiting Scholar, 2017; Researcher, Centre d'Études Himalayennes, CNRS - Villejuif
Sponsors: Institute for South Asia Studies, The Berkeley Himalayan Studies Program, France Berkeley Fund, Centre d'Etudes Himalayennes (CEH) of the National Center for Scientific Research (CNRS), France
A three-day workshop at UC Berkeley that will bring together experts working on the Himalayan region in the Humanities and Social Sciences.
Mechano- and Visco-NPS: An Electronic Method to Measure the Mechanical Properties of Cells: Nano Seminar Series
Seminar | March 1 | 2-3 p.m. | 4 LeConte Hall
Speaker/Performer: Prof. Lydia Sohn, UC Berkeley, Mechanical Engineering
Sponsor: Berkeley Nanosciences and Nanoengineering Institute
We have developed an efficient, label-free method of screening cells for their phenotypic profile, which we call Node-Pore Sensing (NPS). NPS involves measuring the modulated current pulse caused by a cell transiting a microfluidic channel that has been segmented by a series of inserted nodes.
Previously, we showed that when segments between the nodes are functionalized with different... More >
Civil and Environmental Engineering Department Seminar: Liquefaction of gravelly soils and the impact on critical infrastructure
Seminar | March 1 | 2-3 p.m. | 542 Davis Hall
Speaker: Adda Athanasopoulos-Zekkos
Sponsor: Civil and Environmental Engineering (CEE)
Our natural and built environment continues to be threatened by grand challenges such as urbanization, climate change, as well as natural and man-made hazards. At the same time, infrastructure performance requirements are increasing and engineering methods of the past are no longer adequate. As Civil and Environmental Engineers, we are called to enhance infrastructure resiliency.
Student Probability/PDE Seminar: A Fractional Kinetic Process Describing the Intermediate Time Behaviour of Cellular Flows
Seminar | March 1 | 2:10-3:30 p.m. | 891 Evans Hall
Speaker: Alexei Novikov, Penn State University
Sponsor: Department of Mathematics
This is joint work with Martin Hairer, Gautam Iyer, Leonid Koralov, and Zsolt Pajor-Gyulai. This work studies the intermediate time behaviour of a small random perturbation of a periodic cellular flow. Our main result shows that on time scales shorter than the diffusive time scale, the limiting behaviour of trajectories that start close enough to cell boundaries is a fractional kinetic process: A... More >
Workshop | March 1 | 3-4 p.m. | 340 Stephens Hall
Sponsor: Center for Middle Eastern Studies
In February, Ilhan Omar, the first of two American Muslim women elected to the US House of Representatives, came under fire from Democrats and Republicans. Omar tweeted "It's all about the Benjamins baby" in response to the move of Republican House minority leader Kevin McCarthy to seek formal sanctions against Omar and fellow congresswoman Rashida Tlaib for their criticism of Israel's occupation... More >
Colloquium | March 1 | 3 p.m. | 250 Morrison Hall
"One of the most interesting composers anywhere today" (Chicago Sun-Times), with a distinct voice that is "adventurous and winning" (Denver Post) López has created works performed by such renowned ensembles as the Chicago Symphony, Philadelphia Orchestra, Boston Symphony, Sydney Symphony, Helsinki Philharmonic, Radio France Philharmonic, Baltimore Symphony, St. Paul Chamber Orchestra, Atlanta... More >
Seminar | March 1 | 3:10-5 p.m. | 107 South Hall
Speaker/Performer: Wayne de Fremery
Sponsor: Information, School of
Wayne de Fremery's current book project, Computational Bibliography and the Sociology of Data, reinvigorates analytical bibliography by expanding the scope of what bibliography describes and by diversifying the forms used in bibliographic description. As etymologies of the word bibliography suggest, bibliographers have used bibliographic forms (books) to document books. Analytical... More >
Student 3-Manifold Seminar: JSJ Decompositions
Seminar | March 1 | 4-5:30 p.m. | 939 Evans Hall
Speaker: Kyle Miller, UC Berkeley
The irreducible 3-manifolds that come from a prime decomposition can be further decomposed along embedded tori. Jaco, Shalen, and Johannson proved there is a minimal collection of such tori, unique up to isotopy, that splits an irreducible compact orientable manifold into pieces that are either Seifert-fibered or atoroidal. We will discuss examples, incompressible surfaces, and Seifert-fibered... More >
Colloquium | March 1 | 4-6 p.m. | 180 Doe Library
Speaker: Levi S. Gibbs, Assistant Professor of Asian and Middle Eastern Languages and Literatures, Dartmouth College
Panelist/Discussant: Andrew Jones, Professor and Louis B. Agassiz Chair in Chinese, UC Berkeley
Sponsor: Center for Chinese Studies (CCS)
In China and around the world, performances of songs can create virtual meeting grounds where different voices and perspectives engage with one another. In his new book about the rise of "Folksong King of Western China" Wang Xiangrong, Levi S. Gibbs explores parallels between the song culture of Wang's childhood mountain village and his contemporary national and international performances where... More >
Student Arithmetic Geometry Seminar: Uniqueness Properties for Spherical Varieties
Seminar | March 1 | 4:10-5 p.m. | 891 Evans Hall
Speaker: Alexander Sherman, UCB
Toric varieties are varieties with an action of a torus having an open orbit. Spherical varieties are natural generalizations, having an action of reductive group with an open Borel orbit. Like with toric varieties, there are natural combinatorial invariants that one can define from a spherical variety, such as the irreducible summands which appear in the ring of regular functions. Losev proved... More >
Music Studies Colloquium: Neil Verma (Northwestern University): Screamlines: Anatomy and Geology of Radio
Colloquium | March 1 | 4:30 p.m. | 128 Morrison Hall
Neil Verma
Neil Verma is assistant professor in Radio/Television/Film. He teaches in the Screen Cultures PhD program and the MA program in Sound Arts and Industries, where he is also associate director. He is author of Theater of the Mind: Imagination, Aesthetics, and American Radio Drama (Chicago, 2012), winner of the Best First Book Award from the Society for Cinema and Media Studies. He is... More >
Conference/Symposium | March 2 | 9 a.m.-6 p.m. | Hearst Gymnasium
Speakers/Performers: Shabba Doo, The Original Lockers; Ejoe Wilson, Elite Force Crew; Traci Bartlow, Starchild Entertainment; Darrin Hodges, Gentlemen of Production
Sponsor: Department of Theater, Dance, and Performance Studies
As a symposium and workshop offering, Dancing Cyphers: Hip Hop's Embodied Expression will bring together dance communities broadly interested in Hip Hop. More specifically, the event will delve into the history of African American street dance, culture, and the scholarship around its global impact and ancestral connections to specific African dance traditions. Panel discussions and... More >
Tickets required: $15 UC Berkeley Student/Faculty, $20 Student, $25 General Public
Ticket info: Buy tickets online
Conference/Symposium | March 2 | 9 a.m.-6 p.m. | 370 Dwinelle Hall
Colloquium | March 2 | 5-6:30 p.m. | Jodo Shinshu Center
Speaker: Michihiro Ama, University of Montana
Moderator: Mark Blum, UC Berkeley
In this lecture, Natsume Sōseki's The Miner and "A Rainy Day" in To the Spring Equinox and Beyond are treated as works of path literature. During the Buddhist funerals, periods of transition in the lives of the literary characters and new sensations regarding life and death are identified through the connection of the term "path" as a synonym for passage. The funerals lead the fictional... More >
Workshop | March 3 | 2 p.m. | Berkeley Art Museum and Pacific Film Archive
Join independent writer, curator, and public scholar Christian L. Frock and friends for a reading and informal dialogue about how we can foster greater inclusion in public life—as a means of resistance in a political atmosphere focused on exclusion, and as a matter of building community and personal integrity. Free limited-edition Risograph posters featuring Frock's essay "Prompts for Inclusion:... More >
Civil and Environmental Engineering Department Seminar: Data-assisted high-fidelity modeling for systems design and monitoring
Seminar | March 4 | 10-11 a.m. | 542 Davis Hall
Speaker: Audrey Olivier
Increased availability of measured data has recently generated tremendous interest in the development of methods to learn from data. In parallel, engineers have a long history of building high-fidelity physics- based models that allow us to model the behavior of highly complex systems. This talk aims at presenting some of the exciting research opportunities that arise.
Seminar | March 4 | 11:10 a.m.-12:30 p.m. | 489 Minor Hall
Speakers/Performers: Katharina Foote, Roorda Lab; Liz Lawler, Silver Lab
Sponsor: Neuroscience Institute, Helen Wills
Katharina Foote's Abstract
Structure and function in retinitis pigmentosa patients with mutations in RHO vs. RPGR
Retinitis pigmentosa (RP) causes slow, progressive, relentless death of photoreceptors. In order to gain insight on how cone survival differs between different mutations affecting rods vs. affecting rods and cones, we measured cone structure and function in patients with mutations... More >
Instabilities and Phase Transitions in Multiphase Flow Through Porous Media: Fluids Seminar
Seminar | March 4 | 12-1 p.m. | 3110 Etcheverry Hall
Speaker/Performer: Xiaojing (Ruby) Fu, Miller Fellow, Department of Earth and Planetary Sciences, University of California, Berkeley
Sponsor: Department of Mechanical Engineering (ME)
Flow and transport through porous media is ubiquitous in nature. They are key processes behind subsurface resources such as oil and gas, geothermal energy, and groundwater. They also mediate corrosion and ageing of porous engineering materials as well as geohazards such as landslides, volcanic eruptions and earthquakes. Central to many of these processes is the strong coupling between porous... More >
Colloquium | March 4 | 12:10-1:30 p.m. | 1102 Berkeley Way West
Speaker/Performer: Kalina Michalska, University of California, Riverside
Sponsor: Department of Psychology
A fundamental question in developmental affective science is how children come to understand the emotions of others when deciding how to behave towards them. One consequential domain of such an ability is responding to others' distress with empathy and kindness. In this talk, I will explore the neurobiological and social factors that lead some children to respond maladaptively to the distress of... More >
Combinatorics Seminar: On statistic of irreducible components
Seminar | March 4 | 12:10-1 p.m. | 939 Evans Hall
Speaker: Nicolai Reshetikhin, UC Berkeley
For finite dimensional representations $V_1, \dots , V_m$ of a simple finite dimensional Lie algebra $\mathfrak g$ consider the tensor product $W=\otimes_{i=1}^m V_i^{\otimes N_i}$. The first result, which will be presented in the talk, is the asymptotic of the multiplicity of an irreducible representation $V_\lambda$ with the highest weight $\lambda$ in this tensor product when $N_i=\tau _i/\epsilon... More >
Seminar | March 4 | 12:30-2 p.m. | 223 Moses Hall
Speaker/Performer: Santiago Oliveros, University of Essex
The Political Economy Seminar focuses on formal and quantitative work in the political economy field, including formal political theory.
Seminar | March 4 | 1:30-2:30 p.m. | 775B Tan Hall
Featured Speaker: Prof. Daniel Werz, Technical University Braunschweig
Sponsor: College of Chemistry
A characteristic feature of carbopalladation reactions is the syn-attack of the organopalladium species LnX[Pd]-R on the reacting π-system. Such a step results in compounds bearing Pd and R on the same side of the originating alkene moiety. Embedded into longer domino sequences, complex structures are efficiently obtained by a repetition of this syn-carbopalladation step. In this way, linear... More >
Speaker: Chad Jones, Stanford Business School
Sponsor: Robert D. Burch Center for Tax Policy and Public Finance
Reproducing AlphaZero: what we learn: BLISS Seminar
Speaker/Performer: Yuandong Tian, Facebook AI Research
We reproduce and open source AlphaGoZero/AlphaZero framework using 2000 GPUs and 9 days, achieving super-human performance of Go AI that beats 4 top-30 professional players with 20-0, provide extensive ablation studies and perform basic analysis.
Arithmetic Geometry and Number Theory RTG Seminar: Arithmetic Siegel-Weil formula for orthogonal Shimura varieties
Seminar | March 4 | 3-5 p.m. | 748 Evans Hall
Speaker: Tonghai Yang, University of Wisconsin
After reviewing Siegel-Weil formula and progress on arithmetic Siegel-Weil formula, I will talk about my new work with Jan Bruinier on this subject. Let $L$ be an integral lattice of signature $(n, 2)$ over $\mathbb Q$, and let $T$ be a non-singular symmetric integral matrix. Associated to it are two objects. One is the $T$-th Fourier coefficient $a(T)$ of the derivative of some `incoherent'... More >
Colloquium | March 4 | 4 p.m. | 180 Doe Library
Speaker: George C.S. Lin, Chair Professor of Geography, Department of Geography, The University of Hong Kong
Panelist/Discussant: You-tien Hsing, Professor of Geography, UC Berkeley
Sponsors: Center for Chinese Studies (CCS), Center of Global Metropolitan Studies
Phenomenal transformation of the landscape in Chinese cities has been conventionally understood as the spatial outcome of the reformation of state-market relations. The current urban landscape observable today is described as a juxtaposition of two elements, namely the legacy of the socialist city and the newly emerged space of marketization. This research identifies a new wave of urbanization in... More >
Featured Speaker: Eric Siggia, The Rockefeller University
Sponsors: College of Chemistry, Department of Physics
Embryology at the beginning of the 21st century finds itself in a situation similar to neurobiology; the behavior of the component pieces is understood in some detail, but how they self-assemble to become life is still very hazy. There are 100's of molecules that enable cell communication and genetics defines their function by classifying aberrant embryos at a suitable intermediate stage of... More >
Science in the Schoolyards of Detroit, Cairo, and Philadelphia: What are the seven Ss of success?
Colloquium | March 4 | 4-5:30 p.m. | Berkeley Way West, Room 1215, 2121 Berkeley Way, Berkeley, CA 94720
Speaker/Performer: Nancy Butler Songer, Drexel University, School of Education
Sponsor: Graduate School of Education
This talk will present three stories and empirical research results associated with middle and high school-based systemic reform with investigation and design projects as the focus of the reform. Where was systemic change realized, and where did it falter? Drawing from these research-based stories, what are the seven Ss of secondary science success?
Speaker: Leo Bursztyn, University of Chicago
Joint with the Psychology and Economics seminar
Seminar | March 4 | 4-5:30 p.m. | 648 Evans Hall | Note change in date and time
Featured Speaker: Leonardo Bursztyn, University of Chicago
*Joint with Development and Planning Seminar. Please note change from regularly scheduled Psychology and Economics time.
Link to NBER Working Paper
ABSTRACT: Through the custom of guardianship, husbands typically have the final word on their wives' labor supply decisions in Saudi Arabia, a country with very low female labor force participation... More >
Seminar | March 4 | 4-5 p.m. | 310 Sutardja Dai Hall
Speaker: Angjoo Kanazawa, Postdoctoral Scholar, UC Berkeley
In this talk, I will discuss my work in reconstructing 3D non-rigid, deformable objects such as humans and animals from everyday photographs and video, and show how such systems can be used to train a simulated character to learn to act by watching YouTube videos.
Seminar | March 4 | 4:10-5:30 p.m. | 639 Evans Field
Speaker/Performer: Annie Liang, University of Pennsylvania
We develop a model of social learning from complementary information: Short-lived agents sequentially choose from a large set of (flexibly correlated) information sources for prediction of an unknown state, and information is passed down across periods. Will the community collectively acquire the best kinds of information? Long-run outcomes fall into one of two cases: (1) efficient information... More >
Analysis and PDE Seminar: Dispersive decay of small data solutions for the KdV equation
Speaker: Mihaela Ifrim, UW Madison
We consider the Korteweg-de Vries (KdV) equation, and prove that small localized data yields solutions which have dispersive decay on a quartic time-scale. This result is optimal, in view of the emergence of solitons at quartic time, as predicted by inverse scattering theory. Joint work with Herbert Koch and Daniel Tataru.
Presentation | March 4 | 6:30 p.m. | Berkeley Art Museum and Pacific Film Archive
"Nature is the greatest artist and scientist," writes Nnedi Okorafor, an award-winning author of African-based science fiction, fantasy, and magical realism for both children and adults. "If we human beings, with our rather brilliant, often flawed, sometimes evil creativity, joined forces with our creator (nature), as opposed to trying to control it and treat it like our slave, imagine the... More >
Workshop | March 5 | 10 a.m.-12 p.m. | International House, Sproul Rooms
Sponsor: Berkeley International Office (BIO)
J-1 and J-2 visitors subject to this requirement must return to their country of legal permanent residence for two years or obtain a waiver before being eligible for certain employment visas such as H (temporary employment), L (intra-company transfer), or Permanent Resident status ("green card"). Not all J visitors are subject as it depends on specific factors.
At this workshop, you will... More >
Seminar | March 5 | 11 a.m.-12:30 p.m. | 648 Evans Hall
Featured Speaker: Ola Mahmoud, University of Zurich
Sponsor: Consortium for Data Analytics in Risk
Diversification is a fundamental concept in financial economics, risk management, and decision theory. From a broad perspective, it conveys the idea of introducing variety to a set of objects. Today, there is general consensus that some form of diversification is beneficial in asset allocation, however its definition is context-dependent and there is no consensus on a widely accepted,... More >
Seminar | March 5 | 11 a.m.-12 p.m. | 120 Latimer Hall | Canceled
Featured Speaker: David Sarlah, Department of Chemistry, University of Illinois at Urbana-Champaign
Presentation | March 5 | 12-1 p.m. | 639 Evans Hall
Speaker: Mathieu Pedemonte, Postdoctoral Associate, UC Berkeley
Sponsor: Clausen Center
This workshop consists of one-hour informal presentations on topics related to macroeconomics and international finance, broadly defined. The presenters are UC Berkeley PhD students, faculty, and visitors.
** MUST RSVP**
RSVP info: RSVP by emailing [email protected] by March 1.
Workshop | March 5 | 12-5 p.m. | 405 Moffitt Undergraduate Library
Wikimedia's race and gender trouble is well-documented. While the reasons for the gap are up for debate, the practical effect of this disparity is not: content is skewed by the lack of participation by women and underrepresented groups. This adds up to an alarming absence in an important repository of shared knowledge.
Let's change that. Join us in 405 Moffitt Library on Tuesday, March 5... More >
Attendance restrictions: A Cal ID card is required to enter Moffitt. The Library attempts to offer programs in accessible, barrier-free settings. If you think you may require disability-related accommodations, please contact the event sponsor -- ideally at least two weeks prior to the event.
Panel Discussion | March 5 | 12-1:30 p.m. | Martin Luther King Jr. Student Union, BNorth Conference Room
Sponsors: Student Environmental Resource Center, Career Center, Association of Environmental Professionals- Berkeley Student Chapter
Come and learn more about employers and organizations who hire environmentally minded students. This panel will be focusing on environmental planning careers. Employer: TBA
UCOP Virtual Career Series: Unique ways to use your degree in the Humanities
Workshop | March 5 | 12-1 p.m. | Virtual
Sponsor: University of California Office of the President
Learn how to market and position your degree by gaining insights and advice from UC alumni who've found career success as a result of their Humanities education.
Workshop | March 5 | 12-1:30 p.m. | 303 Doe Library
Speaker/Performer: Jesse Loesberg, Web Designer, Library Communications Office
Sponsor: Director of Staff Learning and Development
Panel Discussion | March 5 | 12:45-2 p.m. | 240 Boalt Hall, School of Law
Panelist/Discussant: Danny Murillo, Solitary Survivor
Speaker/Performer: Terry A. Kupers, M.D., M.S.P., Institue Professor Emeritus, The Wright Institute
Sponsors: Human Rights Law Student Association, National Lawyers Guild - Berkeley Law Chapter
Solitary confinement is routinely used to further confine and punish those in prison, despite that the U.N. has found extended periods of solitary to constitute torture. A panel of survivors and experts will explore the legal implications and human cost of this practice. Lunch will be served.
Speaker: Stephen Yeaple, Professor of Economics, Penn State University
Ex-post firm heterogeneity can result from different strategies to overcome labor market imperfections by ex-ante identical firms—with far-reaching consequences for the welfare effects of trade. With asymmetric information about workers' abilities and costly screening, in equilibrium some firms screen and pay wages based on the true productivity of their workers, and some firms do not screen and... More >
RSVP info: RSVP by emailing Joseph G. Mendoza at [email protected]
Workshop | March 5 | 3-5 p.m. | Hearst Museum of Anthropology
Speaker/Performer: Dr. Andrew Hamilton
Inca art featured a corpus of motifs called tocapus that are highly contested in scholarship. Were they a long-lost form of Inca writing? Were they part of an Inca calendar? Current readings of tocapus suggest that they were badges of the Inca state, worn to define identities within the empire and even the sprawling landscape of the empire as a whole. This workshop will examine a number of... More >
3-Manifold Seminar: Special cube complexes and quasiconvexity
Speaker: Ian Agol, UC BERKELEY
We'll discuss quasiconvex subgroups of fundamental groups of special cube complexes. These give rise to isometrically immersed complexes with separable fundamental group, proving that quasiconvex subgroups are separable.
Commutative Algebra and Algebraic Geometry: The Fellowship of the Ring: The nef cone of a Coxeter complex: Φ-submodular functions and deformations of Φ-permutahedra
Speaker: Federico Ardila, San Francisco State University
We describe the nef cone of the toric variety corresponding to a Coxeter complex. Equivalently, this is the cone of deformations of a Coxeter permutahedron. This family contains polyhedral models for the Coxeter-theoretic analogs of compositions, graphs, matroids, posets, and associahedra. Our description extends the known correspondence between generalized permutahedra and submodular functions... More >
Seminar | March 5 | 4-5 p.m. | 120 Latimer Hall
Featured Speaker: Dimitrios Stamou, Center for Geometrically Engineered Cellular Systems, University of Copenhagen
Membranes serve multiple crucial roles in cell biology: they act as hosts to membrane proteins, as templates for the nucleation of signalling domains, and as boundaries that define cells and their organelles. We are broadly interested at elucidating molecular mechanisms that regulate the structure, function and organization of membranes and membrane proteins. In this talk I will discuss the role... More >
Workshop | March 5 | 4-5 p.m. | 9 Durant Hall
Speaker/Performer: Leah Carroll, UC Berkeley Office of Undergraduate Research and Scholarships
Sponsor: UC Berkeley Office of Undergraduate Research and Scholarships
If you missed the workshop given by the staff of the Office for the Protection of Human Subjects, or even if you were there, you may want to attend one of these workshops given by me -- Leah Carroll, Haas Scholars Program Manager and Advisor. Note that they are timed to be very shortly after SURF and Haas Scholars human subjects selection, respectively.
I will go through, step by step, the... More >
Seminar | March 5 | 4-5 p.m. | Soda Hall, Wozniak Lounge (430)
Speaker: Somayeh Sojoudi, Assistant Professor in Residence, University of California, Berkeley
Computation plays a crucial role in the design, analysis and operation of intelligent societal systems appearing in smart cities, such as modernized power grids. We motivate the talk by discussing how advances in computation can revolutionize energy systems and then study two problems.
Featured Speaker: Fernando Luco, Texas A&M University
Come work on your human subjects protocol in a space where others are doing the same, and one representative of the Haas Scholars or SURF program will be present to answer questions and guide you.
Conference/Symposium | March 6 – 7, 2019 every day | Hyatt Regency
Location: Hyatt Regency, San Francisco, CA
Sponsor: Fung Institute for Engineering Leadership
Whether you're a startup seeking capital and exposure, or an investor seeking new deals, Venture Summit West presented by youngStartup Ventures - is the event of the year you won't want to miss.
A highly productive venture conference, Venture Summit | West is dedicated to showcasing VCs, Corporate VCs and angel investors committed to funding venture backed, emerging and early stage... More >
Registration info: Register online
The Lost Generation? Scarring After the Great Recession: A Brown Bag Talk
Colloquium | March 6 | 12-1 p.m. | 2232 Piedmont, Seminar Room
Speaker: Jesse Rothstein, Professor, Public Policy & Economics, UC Berkeley
Sponsors: Population Science, Department of Demography
A lunch time talk and discussion session, featuring visiting and local scholars presenting their research on a wide range of topics of interest to demography.
Seminar | March 6 | 12-1 p.m. | 106 Stanley Hall
Speaker/Performer: Sanjeevi Sivasankar, Univerisity of California, Davis
Sponsor: Bioengineering (BioE)
Cells in tissues exert forces as they squeeze, stretch, flex and pull on each other. These forces are incredibly small, on the scale of piconewtons, but they are essential in mediating cell survival, proliferation, and differentiation. Key among the proteins responsible for sensing mechanical forces is the classical cadherin family of cell-cell adhesion proteins. Cadherins are essential for... More >
MVZ LUNCH SEMINAR - Joana Meier: Hybridization fuels cichlid fish adaptive radiations
Seminar | March 6 | 12-1 p.m. | Valley Life Sciences Building, 3101 VLSB, Grinnell-Miller Library
Speaker: Joana Meier
Sponsor: Museum of Vertebrate Zoology
MVZ Lunch is a graduate level seminar series (IB264) based on current and recent vertebrate research. Professors, graduate students, staff, and visiting researchers present on current and past research projects. The seminar meets every Wednesday from 12- 1pm in the Grinnell-Miller Library. Enter through the MVZ's Main Office, 3101 Valley Life Sciences Building, and please let the receptionist... More >
Speaker/Performer: Muhammad Mustafa Hussain, Ph.D., Visiting Professor, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley
CMOS technology and electronics are rigid and bulky. Their applications are focused on computation-communication-infotainment. Scaling down their dimensions has been enabling their triumph. However, what about larger area applications? How about a singular gadget whose size can be reconfigured without any compromise in their functionality? How about spherical solar cell or imaging system? Is it... More >
Seminar | March 6 | 12-1 p.m. | 101 Barker Hall
Speaker: Francis-André Wollman, Institut de Biologie Physico-Chimique
Sponsor: Department of Plant and Microbial Biology
Dr. Wollman is the Director of the Institut de Biologie Physico-Chimique in Paris, France. His work is dedicated to the study of the biogenesis and the function of the photosynthetic apparatus, which is present in the network of internal membranes of the chloroplast, the thylacoids.
Conference/Symposium | March 6 | 12-1 p.m. | 310 Sutardja Dai Hall
Sponsor: CITRIS and the Banatao Institute
Katherine Yelick is a Professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley and the Associate Laboratory Director for Computing Sciences at Lawrence Berkeley National Laboratory. Her research is in programming languages, compilers, parallel algorithms, and automatic performance tuning. She is well known for her work in... More >
The Development of Reasoning about Religious Norms: Insights from Hindu and Muslim children in India
Speaker: Mahesh Srinivasan, Assistant Professor, UC Berkeley Psychology
Sponsor: Institute of Personality and Social Research
Children who live in pluralistic societies often encounter members of other religious and secular groups who hold radically different beliefs and norms. Under these circumstances, developing religious tolerance––respecting that each group has its own beliefs and norms––is both challenging and crucial. When individuals in pluralistic societies fail to develop religious tolerance, the consequences... More >
Workshop | March 6 | 12:10-1:30 p.m. | Tang Center, University Health Services, Section Club
Speaker: Julie Johnson, The Parent Child Connection
Sponsor: Be Well at Work - Work/Life
Rarely do we feel like playing when our children have been whiny, uncooperative, or are headed for a meltdown. But play is often what children need to get their behavior back on track. In this workshop you'll learn how play can be used to release tension, work through difficult behaviors, and bring you closer to your child. You'll also learn what you can do on the days that you don't have the... More >
Enrollment info: Enroll online
Colloquium | March 6 | 12:30-2 p.m. | 223 Moses Hall
Speaker/Performer: Kweku Opoku-Agyemang, www.kwekuopokuagyemang.com
Sponsor: Center for African Studies
Randomized controlled trials in African development - inspired by the scientific method - have remade what economic development means not only for the fields of comparative politics and development economics but also in the lived experiences of many Africans, in ways that academic scholarship and even policy making could not have possibly anticipated. In my talk, I analyze these evaluations... More >
Kweku Opoku-Agyemang, CEGA Research Fellow
Workshop | March 6 | 12:30-2 p.m. | 9 Durant Hall
Harmonic Analysis Seminar: On the Fourier restriction inequality in $\mathbb R^3$
Speaker: Kevin O'Neill, UC Berkeley
This seminar is an ongoing discussion of Guth's Fourier restriction inequality based on the method of polynomial partitioning. This week's talk continues discussion of the core part of the proof. The structure of the induction — on the radius and on the $L^2$ norm of $f$, applied to the cellular term — will be presented. Insofar as time allows, the transverse and tangential terms will be... More >
Topology Seminar (Introductory Talk): Topological and Geometric Complexity for Hyperbolic 3-Manifolds
Speaker: Diane Hoffoss, University of San Diego
We will introduce Scharlemann-Thompson handle decompositions of a 3-manifold, and a generalization of this which we call a graph decomposition. Using these, we define topological measures of complexity for the manifold. In the case where the manifold has additional metric structure, we use Morse and Morse-like functions to give geometric definitions of complexity as well. We then show that some... More >
Deformation Theory Seminar: Curved deformations and categories of singularities
Speaker: Constantin Teleman, UC Berkeley
We will construct deformations of categories for Hochschild Maurer-Cartan cochains with non-trivial curving components. These will be related to fixed point categories for Lie algebra actions and, in the special case of matrix factorizations, to the category of singularities
Seminar | March 6 | 3-4 p.m. | 1011 Evans Hall
Speaker/Performer: Piyush Srivastava, Tata Institute of Fundamental Research
Sponsor: Department of Statistics
A special case of the classical Dvoretzky theorem states that the space of n-dimensional real vectors equipped with the l1 norm admits a large "Euclidean section", i.e. a subspace of dimension Θ(n) on which a scaled l1 norm essentially agrees with the Euclidean norm. In particular, such a subspace can be realized as the column space of a "tall" n × (n/k) random matrix A with identically... More >
Featured Speaker: David Card, University of California, Berkeley
Co-authored with Alessandra Fenizia and David Silver
Topology Seminar (Main Talk): Topological and Geometric Complexity for Hyperbolic 3-Manifolds
Seminar | March 6 | 4-5 p.m. | 3 Evans Hall
Seminar | March 6 | 4-5 p.m. | 114 Morgan Hall
Speaker/Performer: Raul Mostoslavsky, Harvard Medical School
Sponsor: Nutritional Sciences and Toxicology
Colloquium | March 6 | 4-5:30 p.m. | 126 Barrows Hall
Speaker/Performer: David Anthoff, Energy and Resources Group
Sponsor: Energy and Resources Group
THE ENERGY AND RESOURCES GROUP SPRING 2019 COLLOQUIUM SERIES PRESENTS:
David Anthoff
Energy and Resources Group
DATE: Wednesday, March 6, 2019
PLACE: 126 Barrows
TITLE: Inequality and the Social Cost of Carbon
We present a novel way to disentangle inequality aversion over time from... More >
Speaker/Performer: David Madigan, Columbia University
In practice, our learning healthcare system relies primarily on observational studies generating one effect estimate at a time using customized study designs with unknown operating characteristics and publishing – or not – one estimate at a time. When we investigate the distribution of estimates that this process has produced, we see clear evidence of its shortcomings, including an apparent... More >
Speaker: Amy Zhang, Graduate Student, MIT
My research in human-computer interaction reimagines outdated designs and builds novel online discussion systems that fix what's broken about online discussion.
Panel Discussion | March 6 | 4-6 p.m. | 1102 Berkeley Way West
Featured Speaker: Dr. Ibram X. Kendi, Professor of History and International Relations; Founding Director of the Antiracist Research and Policy Center, American University
Panelist/Discussants: john a. powell, Director, Haas Institute for a Fair and Inclusive Society; Lisa García Bedolla, Professor, Graduate School of Education; and Director, Institute of Governmental Studies, UC Berkeley; Dan Perlstein, Professor, Graduate School of Education, UC Berkeley
Moderator: Prudence L. Carter, Dean, Graduate School of Education, UC Berkeley
Sponsors: Graduate School of Education, Department of African American Studies, Haas Institute for a Fair and Inclusive Society
Come join in the discussion with Dr. Ibram X. Kendi, a professor of History and International Relations at American University, who speaks with great expertise and compassion about the findings of his book and how they can fit into the national conversation surrounding movements such as #BlackLivesMatter and social justice.
RSVP recommended
Colloquium | March 6 | 4-6 p.m. | 180 Tan Hall
Speaker: Anne Andrews, Professor, UC Los Angeles
Sponsor: Department of Chemical Engineering
Measurements of neurotransmitters in the extracellular space are limited by combinations of poor chemical, spatial, and temporal resolution. Brain chemistries, therefore, cannot be investigated dynamically, particularly at the level of neural circuits and across numerous signaling molecules.1 To understand neurochemical signaling at scales pertinent to encoded information, micro- to... More >
Center for Computational Biology Seminar: Dr. Shamil Sunyaev, Department of Biomedical Informatics, Harvard Medical School
Seminar | March 6 | 4:30-5:30 p.m. | 125 Li Ka Shing Center
Sponsor: Center for Computational Biology
Large-scale genomic data reveal mechanisms of mutagenesis and help predict complex phenotypes
Statistical analysis of large genomic datasets has recently emerged as a discovery tool in many areas of genetics. Two examples include studies of mutagenesis and of the relationship between genotype and phenotype. We developed a statistical model of regional variation of human mutation... More >
Women in Intellectual Life: The "Erotics" of Intellectual Life
Colloquium | March 6 | 5-6:30 p.m. | 330 Wheeler Hall
Sponsor: Department of English
The (open-ended and thus intentionally ill-defined here) theme we hope to explore is that of the "erotics" of intellectual life. Why and how do we love intellectual work? How and where does it get charged with eros, welcome or, alas, unwelcome--and why does it get so charged? What kinds of intellectual work do we love, and what kinds are unloveable, and what kinds are done without anyone loving... More >
Workshop | March 6 | 6-7 p.m. | Eshleman Hall, 5th floor
Sponsors: Berkeley International Office (BIO), ASUC (Associated Students of the University of California)
If you are graduating soon and have questions about applying for F-1 employment eligibility after you graduate, then join BIO and the ASUC on March 6th at 6 PM for this in-person OPT workshop at the ASUC Senate Chambers. We'll do a brief overview of the OPT application process and timelines, followed by a Question and Answer session to clarify any questions you might have! Prior to attending this... More >
Refugee Crises - Past and Present: A book reading and discussion with author-activists Lauren Markham and Thi Bui
Reading - Nonfiction | March 6 | 6-8 p.m. | Boalt Hall, School of Law, Goldberg Room
Speakers/Performers: Lauren Markham; Thi Bui
A book reading and discussion with author-activists Lauren Markham (The Faraway Brothers) and Thi Bui (The Best We Could Do), moderated by Kim Thuy Seelinger.
Panel Discussion | March 6 | 6:30-8 p.m. | 120 Kroeber Hall
Speakers/Performers: Purin Phanichphant, Artist and Lecturer, Jacobs Institute for Design Innovation; Lydia Majure, Science policy advocate, Gallant Lab for Cognitive, Computational & Systems Neuroscience; Albert Lai, Data Scientist
Sponsor: Science@Cal
What can neuroscience of human perception can learn from the design of artificial intelligence, and vice versa? Join a panel of scientists and artists for a discussion of how our brains work, how we design computer networks to think, and how we explore and illuminate the intangible concept of thought.
Scanning an Artificial Brain - installation by Purin Phanichphant
Conference/Symposium | March 7 | 8:30 a.m.-5 p.m. | Haas School of Business, Chou Hall, Spieker Forum (6th Floor)
Sponsors: Center for Responsible Business, Human Rights Center
When talking about artificial intelligence, or AI, positive social impact is often not the first thing that comes to mind. Some think about AI as an amorphous, hard-to-understand, futuristic technology that will bring about more harm than good. These fears may stem from the complex and opaque nature of AI—and key actors across society must come together to discuss, debate, and solve the... More >
Workshop | March 7 | 9 a.m.-1 p.m. | Sutardja Dai Hall
The Financial Fair for Personal Finance is an opportunity for UC Berkeley faculty, staff, and retirees to attend workshops, learn about campus resources, and visit with campus financial vendors.
Sponsored by Work/Life Program (UHS), Human Resources, The Retirement Center, and CITRIS.
Workshop | March 7 | 9-9:45 a.m. | Sutardja Dai Hall
Determine how much savings you will need to retire the way you want, understand how much you can save through the UC Retirement Savings Program, discover additional ways to save, and learn strategies to help you protect and grow your savings.
MOOCs and Film Studies: Teaching Hong Kong Cinema Online: Faculty and Graduate Student Seminar/Workshop
Seminar | March 7 | 10 a.m.-12 p.m. | 9 Durant Hall
Speaker/Performer: Gina Marchetti, University of Hong Kong
Sponsors: Department of Gender and Women's Studies, Media Studies, Center for Chinese Studies (CCS), Center for Race and Gender, Film & Media Studies, Center for New Media
Gina Marchetti will be leading a workshop/seminar for faculty and graduate students on teaching Hong Kong cinema in Massive Open Online Courses (MOOCs). Seating is limited. To register, go to https://docs.google.com/forms/d/1kzwUKL4L0TJk0jL44OXF4gYkuQgppCLzSI3urPJkUyE/edit?ts=5c48aec9
Gina Marchetti teaches courses in film, gender and sexuality, critical theory and cultural studies and... More >
Attendance restrictions: Registration Required. To register go to https://www.edx.org/course/hong-kong-cinema-through-global-lens-hkux-hku06-1x
Registration: $0
Registration info: Registration opens February 1. Register online by March 6.
Workshop | March 7 | 10 a.m.-12 p.m. | Sutardja Dai Hall
Speaker/Performer: Donald Goldberg, UC Retirement Administration Center
The session addresses the many areas one needs to consider for a successful and satisfying retirement and the benefits available through the UC Retirement Plan (UCRP). Content includes information about monthly retirement income and how it is calculated, cost of living adjustments, and lump sum cash out. The Retirement Savings Program and retiree health and welfare benefits will be discussed.... More >
Course | March 7 | 10:30-11:30 a.m. | 331 University Hall | Note change in date
Speaker/Performer: Jason Smith, UC Berkeley Office of Environment, Health, & Safety
Sponsor: Office of Environment, Health & Safety
This session briefly covers the UC Berkeley specific radiation safety information you will need to start work. In addition, a dosimeter will be issued, if required.
Applied Math Seminar: Intrinsic complexity and its scaling law: from approximation of random vectors and rand fields to high frequency waves
Seminar | March 7 | 11 a.m.-12 p.m. | 891 Evans Hall
Speaker: Hongkai Zhao, UC Irvine
We characterize the intrinsic complexity of a set in a metric space by the least dimension of a linear space that can approximate the set to a given tolerance. This is dual to the characterization using Kolmogorov n-width, the distance from the set to the best n-dimensional linear space. We study the approximation of random vectors (via principal component analysis a.k.a. singular value... More >
Seminar | March 7 | 11:10 a.m.-12:30 p.m. | C330 Haas School of Business
Speaker/Performer: Nikolai Roussanov, Wharton
Joint with Haas Finance Seminar
Seminar | March 7 | 12-1:30 p.m. | C325 Haas School of Business
Speaker: John Horton, NYU Stern
The Oliver E. Williamson Seminar on Institutional Analysis, named after our esteemed colleague who founded the seminar, features current research by faculty, from UCB and elsewhere, and by advanced doctoral students. The research investigates governance and its links with economic and political forces. Markets, hierarchies, hybrids, and the supporting institutions of law and politics all come... More >
Workshop | March 7 | 12-1:30 p.m. | Dwinelle Hall, Academic Innovation Studio, 117 Dwinelle Hall
Sponsor: Data Sciences
A brief review of the elements of teaching using Jupyter notebooks and deployment of classes via Jupyterhub.
Workshop | March 7 | 12:15-1 p.m. | Sutardja Dai Hall
What you need to know about budgeting, debt and making room for saving.
Panel Discussion | March 7 | 12:15-1:30 p.m. | 250 Goldman School of Public Policy
Sponsors: Berkeley Institute for the Future of Young Americans, Terner Center for Housing Innovation
Join us for a discussion about why it costs so much to build in the Bay Area. We'll have Andrew Cussen from RAD Urban along with Elizabeth Kuwada from Eden Housing discuss this topic. Elizabeth Kneebone from the Terner Center for Housing Innovation will moderate.
Seminar | March 7 | 12:30-1:30 p.m. | 2040 Valley Life Sciences Building
Featured Speaker: Mary Stoddard, Princeton University
Sponsor: Department of Integrative Biology
Workshop | March 7 | 1-2:30 p.m. | 356 Barrows Hall
Speaker/Performer: Rachael Samberg, Library
This training will help you navigate the copyright, fair use, and usage rights of including third-party content in your digital project. Whether you seek to embed video from other sources for analysis, post material you scanned from a visit to the archives, add images, upload documents, or more, understanding the basics of copyright and discovering a workflow for answering copyright-related... More >
Speaker/Performer: Benjamin Knox
Speaker: Sara Heller, University of Michigan
Sponsor: Center for Labor Economics
Speaker: Peter Aronow, Yale
Speaker: Gal Mishne, Gibbs Assistant Professor, Yale University
In this talk, I present new unsupervised geometric approaches for extracting structure from large-scale high-dimensional data.
Speaker/Performer: Professor John Marshall, Division of Epidemiology and Biostatistics, UC Berkeley
Malaria, dengue, Zika and other mosquito-borne diseases continue to pose a major global health burden through much of the world, despite the widespread distribution of insecticide-based tools and antimalarial drugs. Consequently, there is interest in novel strategies to control these diseases, including the release of mosquitoes transinfected with Wolbachia and engineered with CRISPR-based gene... More >
Colloquium | March 7 | 4-5:30 p.m. | 2538 Channing (Inst. for the Study of Societal Issues), Wildavsky Conference Room
Featured Speaker: Carlos G. Vélez-Ibáñez, ASU Regents' Professor; Presidential Motorola Professor of Neighborhood Revitalization; Founding Director Emeritus, School of Transborder Studies; Professor, School of Human Evolution and Social Change; Emeritus Professor of Anthropology of the University, University of Arizona
Sponsor: Center for Native American Issues Research on
Spanish and English have fought a centuries-long battle for dominance in the Southwest North American Region, commonly known as the U.S.-Mexico transborder region. Covering the time period of 1540 to the present, the book provides a deep and broad understanding of the contradictory methods of establishing language supremacy and details the linguistic and cultural processes used by penetrating...
Workshop | March 7 | 4-5 p.m. | 14 Durant Hall
Seminar | March 7 | 4-5 p.m. | 125 Li Ka Shing Center
Featured Speaker: Onja Razafindratsima, College of Charleston
Mathematics Department Colloquium: Iwahori Kazhdan-Lusztig equivalence and other animals
Colloquium | March 7 | 4:10-5 p.m. | 60 Evans Hall
Speaker: Dennis Gaitsgory, Harvard
In their influential series of papers in the 90's, Kazhdan and Lusztig established an equivalence between the category of G(O)-integrable representations of the Kac-Moody Lie algebra and the category of modules over the "big" (i.e., Lusztig's) quantum group. In this talk we will explain what happens if we try to describe in terms of the quantum group the full affine category O. We will also...
Presentation | March 7 | 5-6 p.m. | 540AB Cory Hall
Speakers: Gireeja Ranade, Assistant Teaching Professor, UC Berkeley EECS; Ruzena Bajcsy, Professor, UC Berkeley EECS
Come listen to Professor Ruzena Bajcsy and Professor Gireeja Ranade talk about the interesting research that they do. Join HKN in celebrating the impactful research contributions of female faculty in the Berkeley EECS Department.
Panel Discussion | March 7 | 6-7:30 p.m. | David Brower Center
Location: 2150 Allston Way, Berkeley, CA 94704
Panelist/Discussants: Celeste Kidd, UC Berkeley; Bruno Olshausen, UC Berkeley; Christos Papadimitriou, Columbia University; Michael Pollan, UC Berkeley
Moderator: Anil Ananthaswamy, Fall 2018 Simons Institute Journalist in Residence
Sponsor: Simons Institute for the Theory of Computing
How does the brain perceive? Does it use the information coming in through the various senses, such as our eyes and ears, and build up a perception of the world outside from the bottom up? Or is it doing something quite different? New thinking in neuroscience suggests that the brain builds models of what's out there and uses these models to interpret the incoming sensory data—an idea that goes...
Emergence of zero-field non-synthetic single and interchained antiferromagnetic skyrmions in thin films
Robust Formation of Ultrasmall Room-Temperature Néel Skyrmions in Amorphous Ferrimagnets from Atomistic Simulations
Chung Ting Ma, Yunkun Xie, … S. Joseph Poon
Current-driven dynamics and inhibition of the skyrmion Hall effect of ferrimagnetic skyrmions in GdFeCo films
Seonghoon Woo, Kyung Mee Song, … Joonyeon Chang
Isolated zero field sub-10 nm skyrmions in ultrathin Co films
Sebastian Meyer, Marco Perini, … Stefan Heinze
An achiral ferromagnetic/chiral antiferromagnetic bilayer system leading to controllable size and density of skyrmions
F. J. Morvan, H. B. Luo, … J. P. Liu
Antiskyrmions and their electrical footprint in crystalline mesoscale structures of Mn1.4PtSn
Moritz Winter, Francisco J. T. Goncalves, … Toni Helm
Spin photogalvanic effect in two-dimensional collinear antiferromagnets
Rui-Chun Xiao, Ding-Fu Shao, … Hua Jiang
Tuning the density of zero-field skyrmions and imaging the spin configuration in a two-dimensional Fe3GeTe2 magnet
Bei Ding, Xue Li, … Wenhong Wang
The microscopic origin of DMI in magnetic bilayers and prediction of giant DMI in new bilayers
Priyamvada Jadaun, Leonard F. Register & Sanjay K. Banerjee
Observation of Skyrmions at Room Temperature in Co2FeAl Heusler Alloy Ultrathin Film Heterostructures
Sajid Husain, Naveen Sisodia, … Sujeet Chaudhary
Amal Aldarawsheh ORCID: orcid.org/0000-0003-4163-76681,2,
Imara Lima Fernandes ORCID: orcid.org/0000-0002-5078-78041,
Sascha Brinker ORCID: orcid.org/0000-0002-7077-12441,
Moritz Sallermann1,3,4,
Muayad Abusaa5,
Stefan Blügel ORCID: orcid.org/0000-0001-9987-47331 &
Samir Lounis ORCID: orcid.org/0000-0003-2573-28411,2
Nature Communications volume 13, Article number: 7369 (2022)
Magnetic properties and materials
Antiferromagnetic (AFM) skyrmions are envisioned as ideal localized topological magnetic bits in future information technologies. In contrast to ferromagnetic (FM) skyrmions, they are immune to the skyrmion Hall effect, might offer potential terahertz dynamics while being insensitive to external magnetic fields and dipolar interactions. Although observed in synthetic AFM structures and as complex meronic textures in intrinsic AFM bulk materials, their realization in non-synthetic AFM films, of crucial importance in racetrack concepts, has been elusive. Here, we unveil their presence in a row-wise AFM Cr film deposited on PdFe bilayer grown on fcc Ir(111) surface. Using first principles, we demonstrate the emergence of single and strikingly interpenetrating chains of AFM skyrmions, which can co-exist with the rich inhomogeneous exchange field, including that of FM skyrmions, hosted by PdFe. Besides the identification of an ideal platform of materials for intrinsic AFM skyrmions, we anticipate the uncovered knotted solitons to be promising building blocks in AFM spintronics.
Magnetic skyrmions are particle-like topologically protected twisted magnetic textures1,2,3 with exquisite and exciting properties4,5. They often result from the competition between the Heisenberg exchange and the relativistic Dzyaloshinskii-Moriya interaction (DMI)6,7, which is present in materials that lack inversion symmetry and have a finite spin-orbit coupling. Since their discovery in multiple systems, ranging from bulk and thin films to surfaces and multilayers8,9,10,11,12,13,14,15,16, skyrmions are envisioned as promising candidates for bits, potentially usable in the transmission and storage of information in the next generation of spintronic devices17,18,19,20,21,22. However, requirements for future (nano-)technologies are not only limited to the generation of information bits but are also highly stringent from the point of view of simultaneous efficiency in reading, control, and power consumption21,23. Miniaturization of ferromagnetic (FM) skyrmions suffers from the presence of dipolar interactions24, while their stabilization generally requires an external magnetic field. Another drawback is the skyrmion Hall effect4, caused by the Magnus force that deflects FM skyrmions when driven with a current, which hinders the control of their motion. Additionally, FM skyrmions exhibit a rather complex dynamical behavior as a function of applied currents25,26,27,28,29,30,31 in the presence of defects.
Antiferromagnetic (AFM) skyrmions are expected to resolve several of the previous issues and offer various advantages. Indeed, AFM materials being at the heart of the rapidly evolving field of AFM spintronics32,33,34 are much more ubiquitous than ferromagnets. Their compensated spin structure inherently forbids dipolar interactions, which should allow the stabilization of rather small skyrmions while enhancing their robustness against magnetic perturbations. AFM skyrmions were predicted early on using continuum models35, followed with multiple phenomenology-based studies on a plethora of properties and applications, see e.g., Refs. 36,37,38,39,40,41,42,43,44,45,46,47,48. The predicted disappearance of the Magnus force, which triggers the skyrmion Hall effect, would then enable a better control of the skyrmion's motion38,49, which has been partially illustrated experimentally in a ferrimagnet50,51.
Intrinsic AFM meronic spin-textures (complexes made of half-skyrmions) were recently detected in bulk phases52,53,54 while synthetic AFM skyrmions were found within multilayers55. However, the observation of intrinsic AFM skyrmions has so far been elusive, in particular at surfaces and interfaces, where they are highly desirable for racetrack concepts. A synthetic AFM skyrmion consists of two FM skyrmions realized in two different magnetic layers, which are antiferromagnetically coupled through a non-magnetic spacer layer. In contrast to that an intrinsic AFM skyrmion is a unique magnetic entity since it is entirely located in a single layer. Here we predict from first-principles (see Method section) intrinsic AFM skyrmions in a monolayer of Cr deposited on a surface known to host ferromagnetic skyrmions: A PdFe bilayer grown on Ir(111) fcc surface as illustrated in Fig. 1a. The AFM nature of Cr coupled antiferromagnetically to PdFe remarkably offers the right conditions for the emergence of a rich set of complex AFM textures. The ground state is collinear row-wise AFM (RW-AFM) within the Cr layer (see inset of Fig. 1c), a configuration hosted by a triangular lattice so far observed experimentally only in Mn/Re(0001)56,57. The difference to the latter, however, is that although being collinear, the Cr layer interfaces with a magnetic surface, the highly non-collinear PdFe bilayer.
Fig. 1: Interchained AFM skyrmions in CrPdFe trilayer on Ir(111).
a Schematic representation of the investigated trilayer deposited on Ir(111) following fcc stacking. b The interchaining of skyrmions is reminiscent of interpenetrating rings, which realize topologically protected phases. c The ground state being RW-AFM (see inset) can host AFM skyrmions that can be isolated or interlinked to form multimers of skyrmions, here we show examples ranging from dimers to pentamers. The AFM skyrmions can be decomposed into FM skyrmions living in sublattices illustrated in d. In case of the single AFM skyrmion, two of the sublattices, L1 and L2, are occupied by the FM skyrmions shown in e. L3 and L4 host quasi-collinear AFM spins in contrast to the FM skyrmions emerging in the case of the double AFM skyrmion presented in f. Note that the separation of sublattices L1, L2, L3, and L4 shown in e and f is only done for illustration.
A plethora of localized chiral AFM-skyrmionic spin textures (Fig. 1c) and metastable AFM domain walls (see Supplementary Fig. 1) emerge in the Cr overlayer. Besides isolated topological AFM solitons, we identify strikingly unusual interpenetrating AFM skyrmions, which are reminiscent of crossing rings (see the schematic in Fig. 1b), the building blocks of knot theory, where topological concepts such as Brunnian links play a major role58. The latter has far-reaching consequences in various fields of research, not only in mathematics and physics but also in chemistry and biology. For instance, the exciting and intriguing interchain process, also known as catenation, is paramount in carbon-, molecular-, protein- or DNA-based assemblies59,60,61. We discuss the mechanisms enforcing the stability of the unveiled interchained topological objects, their response to magnetic fields and the subtle dependence on the underlying magnetic textures hosted in the PdFe bilayer. Our findings are of prime importance in promoting AFM localized entities as information carriers in future AFM spintronic devices.
AFM skyrmions in CrPdFe/Ir(111) surface
PdFe deposited on Ir(111) surface hosts a homo-chiral spin spiral as a ground state14,62 emerging from the interplay of the Heisenberg exchange interactions and DMI. The latter is induced by the heavy Ir substrate, which has a strong spin-orbit coupling. Upon application of a magnetic field, sub 10-nm FM skyrmions are formed14,18,62,63,64,65,66. After deposition of the Cr overlayer, the magnetic interactions characterizing Fe are strongly modified (see comparison plotted in Supplementary Fig. 2) due to changes induced in the electronic structure (Supplementary Fig. 3). The Heisenberg exchange interaction among Fe nearest neighbors (n.n.) reduces by 5.5 meV (a decrease of 33%). This enhances the non-collinear magnetic behavior of Fe, which leads to FM skyrmions even without the application of a magnetic field (see Supplementary Fig. 4). The n.n. Cr atoms couple strongly antiferromagnetically (−51.93 meV), which along with the antiferromagnetic coupling of the second n.n. (−6.69 meV) favors the Néel state. The subtle competition with the ferromagnetic exchange interactions of the third n.n. (5.32 meV) stabilizes the RW-AFM state independently from the AFM interaction with the Fe substrate (the detailed magnetic interactions are shown in Supplementary Fig. 2). As illustrated in Fig. 1c, the RW-AFM configuration is characterized by parallel magnetic moments along a close-packed atomic row, with antiparallel alignment between adjacent rows. Due to the hexagonal symmetry of the atomic lattice, the AFM rows can be rotated in three symmetrically equivalent directions. We note that the moments point out-of-plane due to the magnetic anisotropy energy (0.5 meV per magnetic atom).
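As a rough cross-check of this competition (an editorial illustration, not part of the original analysis), one can compare the single-q spiral energy E(q) = -Σ_j J_j cos(q·R_j) at the K point (the 120° Néel spiral) and the M point (the RW-AFM state) of the triangular lattice, using only the three quoted Cr-Cr shells and neglecting DMI, anisotropy, longer-ranged shells, and the coupling to Fe. A minimal Python sketch under these assumptions:

import numpy as np

# Single-q spiral energy per spin (up to a constant factor) for the first three
# neighbour shells of the triangular Cr lattice; the J values in meV are taken
# from the text, everything else (DMI, anisotropy, further shells) is ignored.
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])
shells = [
    (-51.93, [a1, a2, a2 - a1]),                    # 1st n.n. (AFM)
    (-6.69,  [a1 + a2, 2 * a2 - a1, 2 * a1 - a2]),  # 2nd n.n. (AFM)
    (5.32,   [2 * a1, 2 * a2, 2 * (a2 - a1)]),      # 3rd n.n. (FM)
]

def E_spiral(q):
    e = 0.0
    for J, vectors in shells:
        for R in vectors:
            e -= 2.0 * J * np.cos(np.dot(q, R))     # factor 2: each bond and its inverse
    return e

K_point = np.array([4.0 * np.pi / 3.0, 0.0])           # 120-degree Neel spiral
M_point = np.array([0.0, 2.0 * np.pi / np.sqrt(3.0)])  # row-wise AFM state
print(f"E(K) = {E_spiral(K_point):.1f} meV, E(M) = {E_spiral(M_point):.1f} meV")

With these three shells alone the M point lies well below the K point, consistent with the RW-AFM ground state quoted above; the full calculation of course includes longer-ranged couplings, the DMI and the anisotropy.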
The DM interactions among Cr atoms arise from the broken inversion symmetry and are mainly induced by the underlying Pd atoms hosting a large spin-orbit coupling. The n.n. Cr DMI (1.13 meV) is of the same chiral nature and order of magnitude as that of the Fe atoms (1.56 meV), which gives rise to the chiral non-collinear behavior illustrated in Fig. 1c. We note that the solitons are only observed if Cr magnetic interactions beyond the n.n. are incorporated, which signals the significance of the long-range coupling in stabilizing the observed textures. Since the Heisenberg exchange interaction among the Cr atoms is much larger than that of Fe, the AFM solitons are larger, by about a factor of three, than the FM skyrmions found in Fe.
While the RW-AFM state is defined by two sublattices, the different AFM skyrmions, isolated or overlapped, can be decomposed into interpenetrating FM skyrmions living in four sublattices illustrated in Fig. 1d and denoted as L1, L2, L3, and L4. In the RW-AFM phase, L1 and L4 are equivalent and likewise for L2 and L3. It is evident that the moments in L1 and L4 are antiparallel to the ones in L2 and L3. Taking a closer look at the isolated AFM magnetic texture, one can dismantle it into two FM skyrmions with opposite topological charges anchored in the distinct antiparallel FM sublattices L1 and L2, while L3 and L4 carry rather collinear magnetization (Fig. 1e). In the case of the overlapped AFM skyrmions, however, no sublattice remains in the collinear state. As an example, the dimer consists of two couples of antiferromagnetically aligned skyrmions, each being embedded in one of the four sublattices (Fig. 1f).
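The sublattice decomposition can also be generated programmatically. The short Python sketch below (an illustration with an arbitrary but consistent labeling of L1-L4, not the authors' code) assigns each site of a triangular lattice to one of four sublattices forming a 2 × 2 superlattice and verifies that nearest neighbours within a sublattice are third nearest neighbours of the full lattice, a property used in the following section.

import numpy as np

a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])

N = 8  # small N x N patch of the triangular lattice (lattice constant a = 1)
sites, labels = [], []
for i in range(N):
    for j in range(N):
        sites.append(i * a1 + j * a2)
        labels.append((i % 2) + 2 * (j % 2))   # 2 x 2 superlattice -> L1..L4
sites, labels = np.array(sites), np.array(labels)

# Nearest neighbours *within* a sublattice sit at distance 2a, i.e. they are
# third nearest neighbours of the full lattice (shell distances a, sqrt(3) a, 2a).
for L in range(4):
    pts = sites[labels == L]
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    print(f"L{L + 1}: shortest in-sublattice distance = {d[d > 1e-9].min():.3f} a")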
Our study reveals that in contrast to interlinked magnetic textures, single AFM skyrmions are significantly sensitive to the magnetic environment hosted by the underlying PdFe bilayer. For brevity, we focus in the next sections on the single and two-overlapping AFM skyrmions and address the mechanisms dictating their stability.
Stabilization mechanism of the overlapping AFM skyrmions
The formation of overlapped solitons is an unusual phenomenon since FM skyrmions repel each other. It results from competing interactions among the skyrmions living in the different sublattices, which finds origin in the natural AFM coupling between the n.n. magnetic moments. Depending on the hosting sublattice (L1 to L4), the four skyrmions shown in Fig. 1f experience attraction or repulsion. The sublattices are chosen such that nearest neighbors within a sublattice are third nearest neighbors in the overall system. This choice leads to the exchange coupling preferring the parallel alignment of spins within a given sublattice. When looking at any sublattice in isolation, this effective ferromagnetic-like exchange interaction enables the existence of skyrmions in a collinear background. In the overall system, however, pairs of sublattices interact via the first and second nearest-neighbor exchange interactions, which prefers anti-parallel spin alignments. Therefore, the exchange interaction between skyrmions formed at sublattices with a parallel background, such as (L1, L4) and (L2, L3), and denoted in the following as skyrmion-skyrmion homo-interactions, are repulsive as usually experienced by FM skyrmions. In contrast, and for the same reasons, interaction between skyrmions in sublattices with oppositely oriented background spins, denoted as hetero-interactions, are attractive as it is for (L1, L2), (L2, L4), (L3, L4), and (L1, L3). Clearly, the set of possible hetero-interactions, enforced by the attractive nature induced by the DMI, outnumbers the homo ones. The interchained AFM skyrmion is simply the superposition of the sublattice skyrmions at the equilibrium distance, here 2.58 nm between the two AFM skyrmions, where both interactions (attraction and repulsion) are equal.
To substantiate the proposed mechanism, we quantify the skyrmion-skyrmion interaction. We simplify the analysis by neglecting the Cr-Fe magnetic interactions, which puts aside the impact of the rich non-collinear magnetic behavior hosted by the PdFe bilayer. In this case, single AFM skyrmions disappear and only the overlapping ones are observed. We take the skyrmion dimer illustrated in Fig. 2a and proceed to a rigid shift of the lower AFM skyrmion while pinning the upper one at the equilibrium position. We extract the skyrmion-skyrmion interaction map as a function of distance, as shown in Fig. 2b, which clearly demonstrates that as soon as the AFM skyrmions are pulled away from each other, the energy of the system increases. Note that within this procedure, the sublattice interactions (L1, L2) and (L3, L4) do not contribute to the plots since they are assigned to each of the AFM skyrmions moved apart from each other. Two minima are identified along a single direction as favored by the symmetry reduction due to the AFM arrangement of the magnetic moments in which the skyrmions are created. Indeed, one notices in Fig. 1d that due to the sublattice decomposition symmetry operations are reduced to C2, i.e., rotation by 180∘, while mirror symmetries, for example, originally present in the fcc(111) lattice are broken. Figure 2c, d depict the skyrmion-skyrmion interaction, which hosts either one or two minima, as a function of distance along two directions indicated by the dashed lines, blue and black, in Fig. 2a. The two minima found along the blue line should be degenerate and correspond to the swapping of the two AFM skyrmions. The breaking of degeneracy is an artifact of the rigid shift assumed in the simulations, which can be corrected by allowing the moments to relax (see red circle in Fig. 2c). The maximum of repulsion is realized when the two AFM skyrmions perfectly overlap (see inset).
Fig. 2: Energetics of two interchained AFM skyrmions.
a Two overlapping AFM skyrmions decoupled from the PdFe bilayer with black and blue lines representing two examples of paths along which the lower skyrmion is rigid-shifted with respect to the upper one, which is pinned. b Two-dimensional map of the total energy difference with respect to the magnetic state shown in a as a function of the distance between the skyrmion centers. c Energy profile along the blue line shown in a. A double minimum is found once the skyrmions swap their positions and become truly degenerate once the rigidity of the spin state is removed (see the red circle). d The Heisenberg exchange is the most prominent contribution to the skyrmion stabilization, as shown along the path hosting a single minimum. The total skyrmion-skyrmion repulsive homo-interaction is dominated by the attractive hetero-interaction, red curves in e and f, respectively. The DMI contribution, shown in insets, is smaller and sublattice independent. It favors the overlap of AFM skyrmions.
The interaction profile shown in Fig. 2d is decomposed into two contributions: the skyrmion-skyrmion homo- and hetero-interactions, which we plot in Fig. 2e, f, respectively. The data clearly reveals the strong repulsive nature of the homo-interaction mediated by the Heisenberg exchange, which competes with the attractive hetero-interaction driven by both the Heisenberg exchange coupling and DMI. The latter skyrmion-skyrmion interaction is strong enough to impose the unusual compromise of having strongly overlapping solitons.
Impact of magnetic field
Prior to discussing stability aspects pertaining to the single AFM skyrmion in detail, we apply a magnetic field perpendicular to the substrate and disclose pivotal ingredients for the formation of the isolated solitons. In general, the reaction of FM and AFM skyrmions to an external magnetic field is expected to be deeply different. When applied along the direction of the background magnetization, FM skyrmions reduce in size while recent predictions expect a size expansion of AFM skyrmions36,41,42, thereby enhancing their stability.
To inspect the response of AFM skyrmions to a magnetic field perpendicular to the substrate, we first remove, as done in the previous section, the Cr-Fe interaction since it gives rise to a non-homogeneous and strong effective exchange field. In this particular case, we can only explore the case of interchained AFM skyrmions. As illustrated in Fig. 3a, the size of each of the sublattice skyrmions, which together form the AFM skyrmion dimer, increases with an increasing magnetic field. The type of the hosting sublattice, with the magnetization being parallel or antiparallel to the applied field, seems important in shaping the skyrmions dimension. Strikingly, and in strong contrast to what is known for FM skyrmions, the AFM skyrmions, single and multimers, were found to be stable up to extremely large magnetic fields. Although the assumed fields are unrealistic in the lab, they can be emulated by the exchange field induced by the underlying magnetic substrate. Indeed, the magnetic interaction between Cr and its nearest neighboring Fe atoms, carrying each a spin moment of 2.51 μB, reaches −3.05 meV, which translates to an effective field of about 21 T. At this value, the average skyrmion radius is about 1.6 nm, which is 30% smaller than the one found once the Cr-Fe magnetic coupling is enabled (see Fig. 3b).
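For orientation, the quoted effective field is consistent with simply dividing the nearest-neighbour Cr-Fe coupling by the Fe spin moment (a back-of-the-envelope estimate, using $\mu_B \approx 0.0579$ meV/T):
$$B_{\rm eff} \approx \frac{|J_{\rm n.n.}^{\rm Cr-Fe}|}{\mu_{\rm Fe}} = \frac{3.05\ {\rm meV}}{2.51\,\mu_B \times 0.0579\ {\rm meV/T}} \approx 21\ {\rm T}.$$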
Fig. 3: Impact of magnetic field on AFM skyrmion radius.
Radius of the sublattice FM skyrmions for two interchained AFM skyrmions a decoupled from and b coupled to the Fe magnetization. c A single case is shown for the isolated AFM skyrmion since it disappears without the inhomogeneous magnetic field emerging from the substrate. Examples of snapshots of the AFM skyrmions are illustrated as insets of the different figures. In Fe, the number of FM skyrmions and antiskyrmions increases once a magnetic field is applied, which erases the ground-state spin spiral. The coupling to the Fe magnetization dramatically affects the evolution of the AFM skyrmions as a function of the magnetic field. d depicts the dependence of the AFM skyrmion, under a magnetic field of 70 T, on the surrounding magnetic environment, obtained by sequentially deleting one FM skyrmion or antiskyrmion in the Fe layer and relaxing the spin structure. At some point, removing any of the single FM skyrmions in the lower left of d annihilates the AFM skyrmion.
We note that since the skyrmions are not circular in shape, their radius is defined as the average distance between the skyrmion's center and the positions where the spin moments lie in-plane. The significant size difference is induced by the strong inhomogeneous exchange field emanating from the Fe sub-layer, which can host spirals, skyrmions and antiskyrmions.
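This radius definition is straightforward to evaluate numerically. The sketch below (illustrative only; the tolerance on |S_z| and the synthetic test profile are ad hoc choices, not taken from this work) estimates the radius of a sublattice skyrmion from its site positions and unit spin vectors, and checks itself on a synthetic axially symmetric profile.

import numpy as np

def skyrmion_radius(positions, spins, center, tol=0.1):
    """Average distance from the skyrmion centre to the sites where the spins
    lie (approximately) in-plane, i.e. |S_z| < tol.
    positions: (N, 2) site coordinates of one sublattice; spins: (N, 3) unit
    vectors; center: (2,) skyrmion centre."""
    rim = np.abs(spins[:, 2]) < tol
    if not rim.any():
        return np.nan
    return np.linalg.norm(positions[rim] - center, axis=1).mean()

# Self-test on a synthetic Neel-type profile, theta(r) = pi * exp(-r / R);
# for this profile S_z = 0 at r = R * ln(2) ~ 1.11 nm.
np.random.seed(0)
R = 1.6  # nm
xy = np.random.uniform(-8.0, 8.0, size=(4000, 2))
r = np.maximum(np.linalg.norm(xy, axis=1), 1e-9)
theta = np.pi * np.exp(-r / R)
spins = np.column_stack([np.sin(theta) * xy[:, 0] / r,
                         np.sin(theta) * xy[:, 1] / r,
                         np.cos(theta)])
print(f"radius of the synthetic profile: {skyrmion_radius(xy, spins, np.zeros(2)):.2f} nm")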
If the Cr-Fe interaction is included, the size dependence changes completely. Instead of the rather monotonic increase with the field, the size of the skyrmion is barely affected until reaching about 50 T, which is accompanied by substantial miniaturization of the AFM skyrmions. Here, a phase transition occurs in Fe, which initially hosts spin spirals that turn into FM skyrmions (see Supplementary Fig. 5). After being squeezed down to an average radius of 1.48 nm at 140 T, the size expansion observed without the Cr-Fe interaction is recovered because the substrate magnetization is fully homogeneous and parallel to the Zeeman field. Likewise, single AFM skyrmions, found only once the coupling to the substrate is enabled, react in a similar fashion to the field as depicted in Fig. 3c. The substantial difference, however, is that fields larger than 80 T destroy the AFM skyrmions due to the annihilation of the Fe FM skyrmions. This highlights an enhanced sensitivity to the underlying magnetic environment and clearly demonstrates the robustness enabled by skyrmion interchaining.
Stabilization mechanism for single AFM skyrmions
We learned that single AFM skyrmions can be deleted after application of an external magnetic field or by switching off the exchange coupling to the magnetic substrate. Both effects find their origin in the magnetization behavior of the PdFe bilayer. To explore the underlying correlation, we consider as an example the magnetic configuration obtained with a field of 70 T and delete one after the other the skyrmions and antiskyrmions found in Fe, then check whether the AFM skyrmion in Cr survives (see example in Fig. 3d). We notice that the AFM skyrmion disappears by deleting the FM solitons located directly underneath or even a bit away. Supplementary Fig. 6 shows that when shifted across the lattice, the AFM skyrmion disappears if fixed above a magnetically collinear Fe area.
We proceed in Fig. 4 to an analysis of the Fe-Cr interaction pertaining to the lower-right snapshot presented in Fig. 3d by separating the Heisenberg exchange contribution from that of DMI and plotting the corresponding site-dependent heat maps of these two contributions for each sublattice. Here, we consider as reference energy that of the RW-AFM collinear state surrounding the non-collinear states in Fig. 3d. The building-blocks of the AFM skyrmion are shown in Fig. 4a, d, where one can recognize the underlying Fe FM skyrmions in the background. The latter are more distinguishable in the sublattices free from the AFM skyrmion as illustrated in Fig. 4g, j. The order of magnitude of the interactions clearly indicates that the DMI plays a minor role and that one can basically neglect the interactions arising in the skyrmion-free sublattices, namely L3 and L4. It is the Heisenberg exchange interaction emerging in the sublattices L1 and L2 that dictates the overall stability of the AFM skyrmion.
Fig. 4: Interaction map of the single AFM skyrmion with the magnetic substrate.
In the first row of figures, sublattice decomposition of Cr skyrmion including the underlying Fe skyrmions shown in four columns a, d, g, and j, corresponding respectively to L1, L2, L3, and L4. The AFM skyrmion is made of two FM skyrmions hosted by sublattices L1 and L2. In Fe, FM skyrmions and antiskyrmions can be found in all four lattices. The second row (b, e, h, and k) illustrates the sublattice dependent two dimensional Heisenberg exchange energy map corresponding to the areas plotted in the first row, followed by the third row (c, f, i, and l) corresponding to DMI. Note that the energy difference ΔE is defined with respect to the RW-AFM background.
In L2, the core of the magnetization of the Cr FM skyrmion points along the same direction as that of the underlying Fe atoms, which obviously is disfavored by the AFM coupling between Cr and Fe (−3.05 meV for nearest neighbors). This induces the red exchange area surrounding the core of the AFM skyrmion (black circle in Fig. 4e), which is nevertheless sputtered with blue spots induced by the magnetization of the core of the Fe FM skyrmions pointing in the direction opposite to that of the Cr moments in L2. The latter is a mechanism reducing the instability of the Cr skyrmion. Overall, the total energy cost in having the Cr skyrmion in L2 reaches +693.7 meV and is compensated by the exchange energy of −712.4 meV generated by the one living in sublattice L1. Here, the scenario is completely reversed since the core of the Cr skyrmion has its magnetization pointing in the opposite direction than that of the neighboring Fe atoms and therefore the large negative blue area with the surrounding area being sputtered by the Fe skyrmions, similar to the observation made in L2 (see Fig. 4d). Overall, the Cr AFM skyrmion arranges its building blocks such that the energy is lowered by the skyrmion anchored in sublattice L1. Here, the details of the non-collinear magnetic textures hosted by Fe play a primary role in offering the right balance to enable stabilization. This explains the sensitivity of the single AFM skyrmion to the number and location of the underlying FM Fe skyrmions. Removing non-collinearity in Fe makes both building blocks of the AFM skyrmion equivalent without any gain in energy from the Cr-Fe interaction, which facilitates the annihilation of the Cr skyrmion.
It is enlightening to explore the phase diagrams of the AFM skyrmions as a function of the underlying magnetic interactions. The latter are multiplied by a factor renormalizing the initial parameters. In Fig. 5 we illustrate the impact of the magnitude of the DMI vector (D), the Heisenberg exchange J and the anisotropy K on the formation of various phases, including the one hosting double overlapped AFM skyrmions. For simplicity, we consider the case where the interaction between Cr and the underlying Fe layer is switched off. A color code is added to follow the changes induced in the distance between the AFM skyrmions. From this study, we learn that in contrast to the DMI, which tends to increase the size of the structures, J and K tend to miniaturize the skyrmions, ultimately favoring their annihilation. The phase hosting AFM skyrmions is sandwiched between the RW-AFM state and a phase hosting stripe domains. It is convenient to analyse the unveiled overall behavior in terms of the impact of the DMI. The latter protects the AFM skyrmion structure from shrinking, similarly to FM skyrmions67,68. So for small values of the DMI compared to J in Fig. 5a, or compared to K in Fig. 5b, the AFM skyrmions shrink and disappear. In contrast, large values of the DMI increase the size of the skyrmions until reaching a regime where stripe domains are formed. Within the phase hosting AFM skyrmions, increasing J or K results in smaller skyrmions.
Fig. 5: Phase diagrams of the free double interchained AFM skyrmions.
a Phase diagram obtained by fixing the magnetic anisotropy energy K while changing the set of DMI and Heisenberg exchange interaction J, or b by fixing J while modifying K and DMI. The color gradient pertaining to the skyrmion phase indicates the distance between two AFM skyrmions. c Illustration of the states shown in the phase diagrams. Note that an in-plane Néel state is predicted for large DMI and small J.
Thermal stability with Geodesic nudged elastic band (GNEB) method
So far we have demonstrated that the interlinked AFM skyrmion multimers can indeed exist as local minima of the energy expression given by the Heisenberg Hamiltonian (Eq. (1)). Another important question, however, is the stability of these structures against thermal excitations. Answering this question requires knowledge about how deep or shallow these energy minima are, which can be quantified as the minimal energy barrier that the system has to overcome in order to escape a minimum, keeping in mind that the Néel temperature of the RW-AFM ground state is ≈ 310 K as obtained from our Monte Carlo simulations69,70,71. To investigate this issue, we systematically carried out a series of geodesic nudged elastic band (GNEB) simulations71,72,73 for AFM multimers containing initially ten interchained skyrmions, and then calculated the energy barrier needed to annihilate one AFM skyrmion at a time, as depicted in Fig. 6a, which shows the successive magnetic states between which the energy barrier has been calculated. Note that deleting one of the AFM skyrmions forming the dimer leads to the RW-AFM state. The energy barrier is given by the energy difference between the local minimum of the nth state (hosting n interchained AFM skyrmions) and the relevant saddle point located on the minimum energy path connecting the initial state with the (n−1)th state. The energy barrier increases from about 8 meV (≈90 K) for the double interchained AFM skyrmions to 13 meV (≈150 K) for three interchained ones, reaching a saturation value of ≈18.5 meV (≈214 K) for chains containing more than five AFM skyrmions, see Fig. 6b. Hence, increasing the number of interchained skyrmions enhances their stability, which is further amplified when enabling the interaction with the PdFe substrate. Instead of the 8 meV pertaining to the free skyrmion dimer, the barrier reaches 45.7 meV (≈530 K) owing to the interaction with the underlying substrate, while the single AFM skyrmion experiences a barrier of 10 meV (≈113 K). Thus, the exchange field emanating from the PdFe substrate promotes the use of interchained AFM skyrmions in room-temperature applications. By analysing how the different interactions contribute to the barrier, we identified the DMI as a key parameter for the thermal stability of the interchained AFM skyrmions. For example, in the case of free double interchained AFM skyrmions, the Heisenberg exchange contribution is −87 meV and the magnetic anisotropy contribution is −150 meV, while the DMI provides a barrier of 245 meV. Interestingly and as expected, it is the magnetic exchange interaction between Cr and Fe that is mainly responsible for the thermal stability of the single AFM skyrmion.
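The conversion from a GNEB barrier to the temperature scale quoted in parentheses is simply ΔE / k_B. As a minimal illustration (with placeholder image energies chosen only to echo the ~8 meV barrier of the free dimer, not data from this work), the barrier is read off as the saddle-point energy minus the initial minimum along the converged path:

import numpy as np

# Energies (meV) of the images along a converged GNEB path, initial minimum first.
path_energy = np.array([0.0, 2.1, 5.4, 7.9, 8.0, 6.2, 3.0, -4.5])

barrier = path_energy.max() - path_energy[0]   # saddle point minus initial state
k_B = 0.08617  # meV per kelvin
print(f"barrier = {barrier:.1f} meV  (~{barrier / k_B:.0f} K)")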
Fig. 6: Energy barriers for chains of free interchained AFM skyrmions.
a Snapshots of the explored skyrmion chains. b The energy barrier obtained with GNEB simulations for deleting a single AFM skyrmion from the lower edge of the free (not interacting with PdFe) chains.
Following a two-pronged approach based on first-principles simulations combined with atomistic spin dynamics, we identify a thin film that can host intrinsic, i.e., non-synthetic, AFM skyrmions at zero magnetic field. A Cr monolayer deposited on a substrate known to host FM skyrmions, PdFe/Ir(111), offers the right AFM interface combination enabling the emergence of a rich set of AFM topological solitons. Owing to the AFM nature of Cr, its ground state is RW-AFM, as induced by magnetic interactions beyond nearest neighbors. We strikingly discovered interchained AFM skyrmions, which can be cut and decoupled into isolated solitons via the inhomogeneous exchange field emanating from the PdFe bilayer. Interestingly, interchaining enhances their stability, which is largely amplified by the exchange field emanating from the substrate. The intra-overlayer skyrmion-skyrmion interaction favors the strong overlap of the AFM skyrmions and makes them robust against the rich magnetic nature of the PdFe substrate. In contrast, the single AFM skyrmion annihilates if positioned on a homogeneously magnetized substrate, and it is only through the presence of various spin textures such as spin spirals and multiple skyrmions or antiskyrmions that one of the building blocks of the AFM skyrmion can lower the energy enough to enable stability.
Since the experimental observation of intrinsic AFM skyrmions has so far been elusive at interfaces, our predictions open the door for their realization in well-defined materials and offer the opportunity to explore them in thin-film geometries. The robustness of the interchained skyrmions qualifies them as ideal particles for room-temperature racetrack memory devices to be driven with currents while avoiding the skyrmion Hall effect. Preliminary work indicates that AFM skyrmions move faster than their FM partners when reacting to an applied current, with an intriguing behavior induced by the non-collinear exchange field emanating from the substrate. If the latter is collinear or non-magnetic, the AFM textures are predicted to be quasi-free from the skyrmion Hall effect. The ability to control the substrate magnetization makes the system we studied a rich playground to design and tune the highly sensitive single AFM skyrmion living in the overlayer. We envision patterning the ferromagnetic surface with regions hosting different magnetic textures, either trivial, such as ferromagnetic regions, or topological, to define areas where the AFM skyrmion can be confined or driven along specific paths. We envisage a rich and complex response to in-plane currents, which could move the underlying FM solitons along specific directions subject to the Magnus effect. The latter would affect the response of the overlying isolated AFM skyrmions in a non-trivial fashion.
We anticipate that the proposed material and the FM-substrate on which AFM films can be deposited offer an ideal platform to explore the physics of AFM skyrmions. Noticing already that different overlapped AFM skyrmions can co-exist establishes a novel set of multi-soliton objects worth exploring in future studies. Besides their fundamental importance enhanced by the potential parallel with topological concepts known in knot theory, such unusual AFM localized entities might become exciting and useful constituents of future nanotechnology devices resting on non-collinear spin-textures and the emerging field of antiferromagnetic spintronics.
First-principles calculations
The relaxation parameters were obtained using the Quantum ESPRESSO computational package74. The projector augmented-wave pseudopotentials from the PS Library75 and a 28 × 28 × 1 k-point grid were used for the calculations. The Cr, Pd, Fe, and Ir interface layers were fcc-stacked along the [111] direction and relaxed by 4%, 5.8%, 8.1%, and −1% with respect to the ideal Ir interlayer distance, respectively. Positive (negative) numbers refer to atomic relaxations towards (away from) the Ir surface.
The electronic structure and magnetic properties were simulated using the all-electron full-potential scalar-relativistic Korringa-Kohn-Rostoker (KKR) Green function method76,77 in the local spin density approximation. The slab contains 30 layers (3 vacuum + 1 Cr + 1 Pd + 1 Fe + 20 Ir + 4 vacuum). The momentum expansion of the Green function was truncated at $\ell_{\max} = 3$. The self-consistent calculations were performed with a k-mesh of 30 × 30 points, and the energy contour contained 23 complex energy points in the upper complex plane with 9 Matsubara poles. The Heisenberg exchange interactions and DM vectors were extracted using the infinitesimal rotation method78,79 with a k-mesh of 200 × 200 points.
Hamiltonian Model and atomistic spin dynamics
In our study, we consider a two-dimensional Heisenberg model on a triangular lattice, equipped with Heisenberg exchange coupling, DMI, magnetic anisotropy energy, and a Zeeman term. All parameters were obtained from ab initio calculations. The energy functional reads as follows:
$$H = H_{\mathrm{Exc}} + H_{\mathrm{DMI}} + H_{\mathrm{Ani}} + H_{\mathrm{Zeem}},$$
$$H_{\mathrm{Exc}} = -\sum_{\langle i,j \rangle} J_{ij}^{\mathrm{Cr-Cr}}\, \mathbf{S}_i \cdot \mathbf{S}_j - \sum_{\langle i,j \rangle} J_{ij}^{\mathrm{Fe-Cr}}\, \mathbf{S}_i \cdot \mathbf{S}_j - \sum_{\langle i,j \rangle} J_{ij}^{\mathrm{Fe-Fe}}\, \mathbf{S}_i \cdot \mathbf{S}_j,$$
$$H_{\mathrm{DMI}} = \sum_{\langle i,j \rangle} \mathbf{D}_{ij}^{\mathrm{Cr-Cr}} \cdot [\mathbf{S}_i \times \mathbf{S}_j] + \sum_{\langle i,j \rangle} \mathbf{D}_{ij}^{\mathrm{Fe-Cr}} \cdot [\mathbf{S}_i \times \mathbf{S}_j] + \sum_{\langle i,j \rangle} \mathbf{D}_{ij}^{\mathrm{Fe-Fe}} \cdot [\mathbf{S}_i \times \mathbf{S}_j],$$
$$H_{\mathrm{Ani}} = -K^{\mathrm{Cr}} \sum_i \left(S_i^z\right)^2 - K^{\mathrm{Fe}} \sum_i \left(S_i^z\right)^2,$$
$$H_{\mathrm{Zeem}} = -\sum_i h_i S_i^z,$$
where i and j are site indices, each carrying a magnetic moment. S is a unit vector along the direction of the magnetic moment. $J_{ij}^{\mathrm{X-Y}}$ is the Heisenberg exchange coupling strength, being < 0 for AFM interactions, between an X atom on site i and a Y atom on site j. A similar notation is adopted for the DMI vector D and the magnetic anisotropy energy K (0.5 meV per magnetic atom). The latter favors the out-of-plane orientation of the magnetization, and $h_i = \mu_i B$ describes the Zeeman coupling to the atomic spin moment μ at site i, assuming an out-of-plane field.
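A compact numerical transcription of Eq. (1) is given below for a single magnetic layer and a single neighbour shell (an illustrative simplification: the actual model includes several shells, the Cr-Fe coupling, and both species' parameters); the sign conventions follow the Hamiltonian above.

import numpy as np

def heisenberg_energy(spins, bonds, J, D, K, h):
    """Energy of Eq. (1) for one layer and one neighbour shell.
    spins: (N, 3) unit vectors S_i; bonds: (M, 2) index pairs (i, j), each bond
    listed once; J: (M,) exchange constants in meV (J < 0 is AFM); D: (M, 3) DM
    vectors in meV; K: anisotropy in meV; h: Zeeman energy mu_i * B in meV."""
    Si, Sj = spins[bonds[:, 0]], spins[bonds[:, 1]]
    e_exc = -np.sum(J * np.einsum("ij,ij->i", Si, Sj))
    e_dmi = np.sum(np.einsum("ij,ij->i", D, np.cross(Si, Sj)))
    e_ani = -K * np.sum(spins[:, 2] ** 2)
    e_zee = -h * np.sum(spins[:, 2])
    return e_exc + e_dmi + e_ani + e_zee

# Minimal check: two antiparallel spins coupled by the n.n. Cr-Cr exchange
spins = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
bonds = np.array([[0, 1]])
print(heisenberg_energy(spins, bonds, J=np.array([-51.93]),
                        D=np.zeros((1, 3)), K=0.5, h=0.0))   # negative: AFM alignment favoured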
To explore the magnetic properties and emerging complex states we utilize the Landau-Lifshitz-Gilbert (LLG) equation as implemented in the Spirit code71. We assumed periodic boundary conditions to model the extended two-dimensional system, with cells containing 100², 200², 300², and 400² sites.
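For completeness, a toy damped-LLG integrator is sketched below in reduced units (a Heun step with renormalization of the spin length); it is only meant to convey the structure of the dynamics and is not the solver used in Spirit.

import numpy as np

def llg_step(spins, field_fn, dt=0.05, alpha=0.3):
    """One Heun step of dS/dt = -S x B_eff - alpha * S x (S x B_eff) in reduced
    units; field_fn(spins) must return the effective field, shape (N, 3)."""
    def rhs(S):
        B = field_fn(S)
        return -np.cross(S, B) - alpha * np.cross(S, np.cross(S, B))
    k1 = rhs(spins)
    predictor = spins + dt * k1
    predictor /= np.linalg.norm(predictor, axis=1, keepdims=True)
    new = spins + 0.5 * dt * (k1 + rhs(predictor))
    return new / np.linalg.norm(new, axis=1, keepdims=True)

# A single spin along x relaxes towards an out-of-plane field
spins = np.array([[1.0, 0.0, 0.0]])
field = lambda S: np.tile([0.0, 0.0, 1.0], (len(S), 1))
for _ in range(2000):
    spins = llg_step(spins, field)
print(spins)   # close to (0, 0, 1)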
Data availability
The data needed to evaluate the conclusions in the paper are present in the paper and the Supplementary Information.
Code availability
We used the following codes: Quantum ESPRESSO; SPIRIT, which can be found at https://github.com/spirit-code/spirit; and the KKR code, a rather complex ab initio DFT-based code that is in general impossible to use without proper training on the theory behind it and on the practical utilization of the code. We are happy to provide the latter code upon request.
Bogdanov, A. N. & Yablonskii, D. Thermodynamically stable "vortices" in magnetically ordered crystals. The mixed state of magnets. J. Exp. Theor. Phys. 95, 178 (1989).
Bogdanov, A. & Hubert, A. Thermodynamically stable magnetic vortex states in magnetic crystals. J. Magn. Magn. Mater. 138, 255–269 (1994).
Rössler, U. K., Bogdanov, A. N. & Pfleiderer, C. Spontaneous skyrmion ground states in magnetic metals. Nature 442, 797–801 (2006).
Nagaosa, N. & Tokura, Y. Topological properties and dynamics of magnetic skyrmions. Nat. Nanotechnol. 8, 899–911 (2013).
Fert, A., Cros, V. & Sampaio, J. Skyrmions on the track. Nat. Nanotechnol. 8, 152–156 (2013).
Dzyaloshinsky, I. A thermodynamic theory of "weak" ferromagnetism of antiferromagnetics. J. Phys. Chem. Solids 4, 241–255 (1958).
Moriya, T. Anisotropic superexchange interaction and weak ferromagnetism. Phys. Rev. 120, 91–98 (1960).
Mühlbauer, S. et al. Skyrmion lattice in a chiral magnet. Science 323, 915–919 (2009).
Pappas, C. et al. Chiral paramagnetic skyrmion-like phase in MnSi. Phys. Rev. Lett. 102, 197202 (2009).
Yu, X. et al. Real-space observation of a two-dimensional skyrmion crystal. Nature 465, 901–904 (2010).
Yu, X. et al. Skyrmion flow near room temperature in an ultralow current density. Nat. Commun. 3, 1–6 (2012).
Yu, X. et al. Near room-temperature formation of a skyrmion crystal in thin-films of the helimagnet FeGe. Nat. Mater. 10, 106–109 (2011).
Heinze, S. et al. Spontaneous atomic-scale magnetic skyrmion lattice in two dimensions. Nat. Phys. 7, 713–718 (2011).
Romming, N. et al. Writing and deleting single magnetic skyrmions. Science 341, 636–639 (2013).
Chen, G., Mascaraque, A., N'Diaye, A. T. & Schmid, A. K. Room temperature skyrmion ground state stabilized through interlayer exchange coupling. Appl. Phys. Lett. 106, 242404 (2015).
Soumyanarayanan, A. et al. Tunable room-temperature magnetic skyrmions in Ir/Fe/Co/Pt multilayers. Nat. Mater. 16, 898–904 (2017).
Kiselev, N., Bogdanov, A., Schäfer, R. & Rößler, U. Chiral skyrmions in thin magnetic films: new objects for magnetic storage technologies? J. Phys. D-Appl. Phys. 44, 392001 (2011).
Crum, D. M. et al. Perpendicular reading of single confined magnetic skyrmions. Nat. Commun. 6, 1–8 (2015).
Wiesendanger, R. Nanoscale magnetic skyrmions in metallic films and multilayers: a new twist for spintronics. Nat. Rev. Mater. 1, 16044 (2016).
Garcia-Sanchez, F., Sampaio, J., Reyren, N., Cros, V. & Kim, J. A skyrmion-based spin-torque nano-oscillator. New J. Phys. 18, 075011 (2016).
Fert, A., Reyren, N. & Cros, V. Magnetic skyrmions: advances in physics and potential applications. Nat. Rev. Mater. 2, 17031 (2017).
Fernandes, I. L., Bouhassoune, M. & Lounis, S. Defect-implantation for the all-electrical detection of non-collinear spin-textures. Nat. Commun. 11, 1–9 (2020).
Zhang, X. et al. Skyrmion-electronics: writing, deleting, reading and processing magnetic skyrmions toward spintronic applications. J. Phys.-Condes. Matter 32, 143001 (2020).
Büttner, F., Lemesh, I. & Beach, G. S. D. Theory of isolated magnetic skyrmions: From fundamentals to room temperature applications. Sci. Rep. 8, 4464 (2018).
Lin, S.-Z., Reichhardt, C., Batista, C. D. & Saxena, A. Particle model for skyrmions in metallic chiral magnets: Dynamics, pinning, and creep. Phys. Rev. B 87, 214419 (2013).
Woo, S. et al. Observation of room-temperature magnetic skyrmions and their current-driven dynamics in ultrathin metallic ferromagnets. Nat. Mater. 15, 501–506 (2016).
Jiang, W. et al. Direct observation of the skyrmion Hall effect. Nat. Phys. 13, 162–169 (2017).
Litzius, K. et al. Skyrmion Hall effect revealed by direct time-resolved X-ray microscopy. Nat. Phys. 13, 170–175 (2017).
Fernandes, I. L., Bouaziz, J., Blügel, S. & Lounis, S. Universality of defect-skyrmion interaction profiles. Nat. Commun. 9, 1–7 (2018).
Fernandes, I. L., Chico, J. & Lounis, S. Impurity-dependent gyrotropic motion, deflection and pinning of current-driven ultrasmall skyrmions in PdFe/Ir(111) surface. J. Phys.-Condes. Matter 32, 425802 (2020).
Arjana, I. G., Lima Fernandes, I., Chico, J. & Lounis, S. Sub-nanoscale atom-by-atom crafting of skyrmion-defect interaction profiles. Sci. Rep. 10, 14655 (2020).
Jungwirth, T., Marti, X., Wadley, P. & Wunderlich, J. Antiferromagnetic spintronics. Nat. Nanotechnol 11, 231–241 (2016).
Olejník, K. et al. Terahertz electrical writing speed in an antiferromagnetic memory. Sci. Adv. 4, eaar3566 (2018).
Gomonay, O., Baltz, V., Brataas, A. & Tserkovnyak, Y. Antiferromagnetic spin textures and dynamics. Nat. Phys. 14, 213–216 (2018).
Bogdanov, A., Roessler, U. K., Wolf, M. & Müller, K.-H. Magnetic structures and reorientation transitions in noncentrosymmetric uniaxial antiferromagnets. Phys. Rev. B 66, 214410 (2002).
Rosales, H. D., Cabra, D. C. & Pujol, P. Three-sublattice skyrmion crystal in the antiferromagnetic triangular lattice. Phys. Rev. B 92, 214439 (2015).
Keesman, R., Raaijmakers, M., Baerends, A., Barkema, G. & Duine, R. Skyrmions in square-lattice antiferromagnets. Phys. Rev. B 94, 054402 (2016).
Zhang, X., Zhou, Y. & Ezawa, M. Antiferromagnetic skyrmion: stability, creation and manipulation. Sci. Rep. 6, 1–8 (2016).
Zhang, X., Ezawa, M. & Zhou, Y. Thermally stable magnetic skyrmions in multilayer synthetic antiferromagnetic racetracks. Phys. Rev. B 94, 064406 (2016).
Göbel, B., Mook, A., Henk, J. & Mertig, I. Antiferromagnetic skyrmion crystals: Generation, topological Hall, and topological spin Hall effect. Phys. Rev. B 96, 060406 (2017).
Bessarab, P. et al. Stability and lifetime of antiferromagnetic skyrmions. Phys. Rev. B 99, 140411 (2019).
Potkina, M. N., Lobanov, I. S., Jónsson, H. & Uzdin, V. M. Skyrmions in antiferromagnets: Thermal stability and the effect of external field and impurities. J. Appl. Phys. 127, 213906 (2020).
Liu, Z., dos Santos Dias, M. & Lounis, S. Theoretical investigation of antiferromagnetic skyrmions in a triangular monolayer. J. Phys.-Condes. Matter 32, 425801 (2020).
Shen, L. et al. Dynamics of the antiferromagnetic skyrmion induced by a magnetic anisotropy gradient. Phys. Rev. B 98, 134448 (2018).
Silva, R., Silva, R., Pereira, A. & Moura-Melo, W. Antiferromagnetic skyrmions overcoming obstacles in a racetrack. J. Phys.-Condes. Matter 31, 225802 (2019).
Khoshlahni, R., Qaiumzadeh, A., Bergman, A. & Brataas, A. Ultrafast generation and dynamics of isolated skyrmions in antiferromagnetic insulators. Phys. Rev. B 99, 054423 (2019).
Díaz, S. A., Klinovaja, J. & Loss, D. Topological magnons and edge states in antiferromagnetic skyrmion crystals. Phys. Rev. Lett. 122, 187203 (2019).
Zarzuela, R., Kim, S. K. & Tserkovnyak, Y. Stabilization of the skyrmion crystal phase and transport in thin-film antiferromagnets. Phys. Rev. B 100, 100408 (2019).
Barker, J. & Tretiakov, O. A. Static and dynamical properties of antiferromagnetic skyrmions in the presence of applied current and temperature. Phys. Rev. Lett. 116, 147203 (2016).
Woo, S. et al. Current-driven dynamics and inhibition of the skyrmion Hall effect of ferrimagnetic skyrmions in GdFeCo films. Nat. Commun. 9, 1–8 (2018).
Hirata, Y. et al. Vanishing skyrmion Hall effect at the angular momentum compensation temperature of a ferrimagnet. Nat. Nanotechnol. 14, 232–236 (2019).
Gao, S. et al. Fractional antiferromagnetic skyrmion lattice induced by anisotropic couplings. Nature 586, 37–41 (2020).
Ross, A. et al. Structural sensitivity of the spin Hall magnetoresistance in antiferromagnetic thin films. Phys. Rev. B 102, 094415 (2020).
Jani, H. et al. Antiferromagnetic half-skyrmions and bimerons at room temperature. Nature 590, 74–79 (2021).
Legrand, W. et al. Room-temperature stabilization of antiferromagnetic skyrmions in synthetic antiferromagnets. Nat. Mater. 19, 34–42 (2020).
Spethmann, J. et al. Discovery of magnetic single-and triple-Q states in Mn/Re (0001). Phys. Rev. Lett. 124, 227203 (2020).
Spethmann, J., Grünebohm, M., Wiesendanger, R., von Bergmann, K. & Kubetzka, A. Discovery and characterization of a new type of domain wall in a row-wise antiferromagnet. Nat. Commun. 12, 1–8 (2021).
Wu, F. Y. Knot theory and statistical mechanics. Rev. Mod. Phys. 64, 1099–1131 (1992).
Amabilino, D. B. & Stoddart, J. F. Interlocked and intertwined structures and superstructures. Chem. Rev. 95, 2725–2828 (1995).
Dabrowski-Tumanski, P. & Sulkowska, J. I. Topological knots and links in proteins. Proc. Natl. Acad. Sci. USA. 114, 3415–3420 (2017).
Bates, A. D. et al. DNA topology (Oxford University Press, USA, 2005).
Dupé, B., Hoffmann, M., Paillard, C. & Heinze, S. Tailoring magnetic skyrmions in ultra-thin transition metal films. Nat. Commun. 5, 1–6 (2014).
Simon, E., Palotás, K., Rózsa, L., Udvardi, L. & Szunyogh, L. Formation of magnetic skyrmions with tunable properties in PdFe bilayer deposited on Ir (111). Phys. Rev. B 90, 094410 (2014).
Romming, N., Kubetzka, A., Hanneken, C., von Bergmann, K. & Wiesendanger, R. Field-dependent size and shape of single magnetic skyrmions. Phys. Rev. Lett. 114, 177203 (2015).
Bouhassoune, M. & Lounis, S. Friedel oscillations induced by magnetic skyrmions: From scattering properties to all-electrical detection. Nanomaterials 11, 194 (2021).
Fernandes, I. L., Blügel, S. & Lounis, S. Spin-orbit enabled all-electrical readout of chiral spin-textures. ArXiv:2202.11637 (2022).
Sampaio, J., Cros, V., Rohart, S., Thiaville, A. & Fert, A. Nucleation, stability and current-induced motion of isolated magnetic skyrmions in nanostructures. Nat. Nanotechnol. 8, 839–844 (2013).
Rohart, S. & Thiaville, A. Skyrmion confinement in ultrathin film nanostructures in the presence of Dzyaloshinskii-Moriya interaction. Phys. Rev. B 88, 184422 (2013).
Hinzke, D. & Nowak, U. Monte Carlo simulation of magnetization switching in a Heisenberg model for small ferromagnetic particles. Comput. Phys. Commun. 121, 334–337 (1999).
Nowak, U. Thermally activated reversal in magnetic nanostructures. Annu. Rev. Comput. Phys. 9, 105–151 (2001).
Müller, G. P. et al. Spirit: Multifunctional framework for atomistic spin simulations. Phys. Rev. B 99, 224414 (2019).
Bessarab, P. F., Uzdin, V. M. & Jónsson, H. Method for finding mechanism and activation energy of magnetic transitions, applied to skyrmion and antivortex annihilation. Comput. Phys. Commun. 196, 335–347 (2015).
Müller, G. P. et al. Duplication, collapse, and escape of magnetic skyrmions revealed using a systematic saddle point search method. Phys. Rev. Lett. 121, 197202 (2018).
Giannozzi, P. et al. Quantum ESPRESSO: a modular and open-source software project for quantum simulations of materials. J. Phys.-Condes. Matter 21, 395502 (2009).
Dal Corso, A. Pseudopotentials periodic table: From H to Pu. Comput. Mater. Sci. 95, 337–350 (2014).
Papanikolaou, N., Zeller, R. & Dederichs, P. H. Conceptual improvements of the KKR method. J. Phys. Condens. Matter 14, 2799 (2002).
Bauer, D. S. G. Development of a relativistic full-potential first-principles multiple scattering Green function method applied to complex magnetic textures of nano structures at surfaces (Forschungszentrum Jülich Jülich, 2014).
Liechtenstein, A., Katsnelson, M., Antropov, V. & Gubanov, V. Local spin density functional approach to the theory of exchange interactions in ferromagnetic metals and alloys. JMMM 67, 65–74 (1987).
Ebert, H. & Mankovsky, S. Anisotropic exchange coupling in diluted magnetic semiconductors: Ab initio spin-density functional theory. Phys. Rev. B 79, 045209 (2009).
We thank Markus Hoffmann for fruitful discussions. This work was supported by the Federal Ministry of Education and Research of Germany in the framework of the Palestinian-German Science Bridge (BMBF grant number 01DH16027) and the Deutsche Forschungsgemeinschaft (DFG) through SPP 2137 "Skyrmionics" (Projects LO 1659/8-1, BL 444/16-2). The authors gratefully acknowledge the computing time granted through JARA on the supercomputer JURECA at Forschungszentrum Jülich.
Open Access funding enabled and organized by Projekt DEAL.
Peter Grünberg Institute and Institute for Advanced Simulation, Forschungszentrum Jülich and JARA, D-52425, Jülich, Germany
Amal Aldarawsheh, Imara Lima Fernandes, Sascha Brinker, Moritz Sallermann, Stefan Blügel & Samir Lounis
Faculty of Physics, University of Duisburg-Essen and CENIDE, 47053, Duisburg, Germany
Amal Aldarawsheh & Samir Lounis
RWTH Aachen University, 52056, Aachen, Germany
Moritz Sallermann
Science Institute and Faculty of Physical Sciences, University of Iceland, VR-III, 107, Reykjavík, Iceland
Department of Physics, Arab American University, Jenin, Palestine
Muayad Abusaa
Amal Aldarawsheh
Imara Lima Fernandes
Sascha Brinker
Stefan Blügel
Samir Lounis
S.L. initiated, designed and supervised the project. A.A. performed the simulations with support and supervision from I.L.F., S.Br. and M.S. A.A., I.L.F., S.Br., M.S., M.A., S.Bl. and S.L. discussed the results. A.A. and S.L. wrote the manuscript to which all co-authors contributed.
Correspondence to Amal Aldarawsheh or Samir Lounis.
Aldarawsheh, A., Fernandes, I.L., Brinker, S. et al. Emergence of zero-field non-synthetic single and interchained antiferromagnetic skyrmions in thin films. Nat Commun 13, 7369 (2022). https://doi.org/10.1038/s41467-022-35102-x
Weekly Papers on Quantum Foundations (37)
Published by editor on September 16, 2017
Time in the theory of relativity: on natural clocks, proper time, the clock hypothesis, and all that
Philsci-Archive
on 2017-9-15 9:50pm GMT
Bacelar Valente, Mario (2013) Time in the theory of relativity: on natural clocks, proper time, the clock hypothesis, and all that. [Preprint]
The relativity of simultaneity and presentism
Bacelar Valente, Mario (2012) The relativity of simultaneity and presentism. [Preprint]
Conceptual problems in quantum electrodynamics: a contemporary historical-philosophical approach
Bacelar Valente, Mario (2011) Conceptual problems in quantum electrodynamics: a contemporary historical-philosophical approach. UNSPECIFIED.
Dark Energy and Dark Matter in Emergent Gravity. (arXiv:1709.04914v1 [hep-th])
gr-qc updates on arXiv.org
on 2017-9-15 12:48am GMT
Authors: Jungjai Lee, Hyun Seok Yang
We suggest that dark energy and dark matter may be a cosmic ouroboros of quantum gravity due to the coherent vacuum structure of spacetime. We apply the emergent gravity to a large $N$ matrix model by considering the vacuum in the noncommutative (NC) Coulomb branch satisfying the Heisenberg algebra. We observe that UV fluctuations in the NC Coulomb branch are always paired with IR fluctuations and these UV/IR fluctuations can be extended to macroscopic scales. We show that space-like fluctuations give rise to the repulsive gravitational force while time-like fluctuations generate the attractive gravitational force. When considering the fact that the fluctuations are random in nature and we are living in the (3+1)-dimensional spacetime, the ratio of the repulsive and attractive components will end in $\frac{3}{4}: \frac{1}{4}=75:25$ and this ratio curiously coincides with the dark composition of our current Universe. If one includes ordinary matters which act as the attractive force, the emergent gravity may explain the dark sector of our Universe more precisely.
Contact Geometry and Quantum Mechanics. (arXiv:1709.04557v1 [hep-th])
Authors: Gabriel Herczeg, Andrew Waldron
We present a generally covariant approach to quantum mechanics. Generalized positions, momenta and time variables are treated as coordinates on a fundamental "phase-spacetime" manifold. Dynamics are encoded by giving phase-spacetime a contact structure. BRST quantization then yields a physical Hilbert space whose elements satisfy a parallel transport equation on a certain vector bundle over phase-spacetime. The inner product of solutions both reproduces and generalizes the Wigner functions of standard quantum mechanics.
Taking Heisenberg's Potentia Seriously. (arXiv:1709.03595v2 [quant-ph] UPDATED)
physics.hist-ph updates on arXiv.org
Authors: R. E. Kastner, Stuart Kauffman, Michael Epperson
It is argued that quantum theory is best understood as requiring an ontological dualism of res extensa and res potentia, where the latter is understood per Heisenberg's original proposal, and the former is roughly equivalent to Descartes' 'extended substance.' However, this is not a dualism of mutually exclusive substances in the classical Cartesian sense, and therefore does not inherit the infamous 'mind-body' problem. Rather, res potentia and res extensa are defined as mutually implicative ontological extants that serve to explain the key conceptual challenges of quantum theory; in particular, nonlocality, entanglement, null measurements, and wave function collapse. It is shown that a natural account of these quantum perplexities emerges, along with a need to reassess our usual ontological commitments involving the nature of space and time.
Investigating the Effects of the Interaction Intensity in a Weak Measurement. (arXiv:1709.04869v1 [quant-ph])
quant-ph updates on arXiv.org
Authors: Fabrizio Piacentini, Alessio Avella, Marco Gramegna, Rudi Lussana, Federica Villa, Alberto Tosi, Giorgio Brida, Ivo Pietro Degiovanni, Marco Genovese
Measurements are crucial in quantum mechanics, in fundamental research as well as in applicative fields like quantum metrology, quantum-enhanced measurements and other quantum technologies. In the recent years, weak-interaction-based protocols like Weak Measurements and Protective Measurements have been experimentally realized, showing peculiar features leading to surprising advantages in several different applications. In this work we analyze the validity range for such measurement protocols, that is, how the interaction strength affects the weak value extraction, by measuring different polarization weak values measured on heralded single photons. We show that, even in the weak interaction regime, the coupling intensity limits the range of weak values achievable, putting a threshold on the signal amplification effect exploited in many weak measurement based experiments.
Nonanomalous realism-based measure of nonlocality. (arXiv:1709.04783v1 [quant-ph])
Authors: V. S. Gomes, R. M. Angelo
Based on a recently proposed model of physical reality and an underlying criterion of nonlocality for contexts [A. L. O. Bilobran and R. M. Angelo, Europhys. Lett. 112, 40005 (2015)] we introduce a realism-based quantifier of nonlocality for bipartite quantum states. We prove that this measure reduces to entanglement for pure states, thus being free of anomalies in arbitrary dimensions, and identify the class of states with null realism-based nonlocality. Then, we show that such a notion of nonlocality can be positioned at the lowest level in the hierarchy of quantumness quantifiers, meaning that it can occur even for Bell-local states. These results open a new perspective for nonlocality studies.
History of Science as a Facilitator for the Study of Physics: A Repertoire from the History of Contemporary Physics
Angeloni, Roberto (2017) History of Science as a Facilitator for the Study of Physics: A Repertoire from the History of Contemporary Physics. [Preprint]
Credence and Chance in Quantum Theory
Earman, John (2017) Credence and Chance in Quantum Theory. [Preprint]
Does Physics Provide Us With Knowledge About the Things in Themselves ?
Webermann, Michael (2017) Does Physics Provide Us With Knowledge About the Things in Themselves ? [Preprint]
Ontic structural realism and quantum field theory: Are there intrinsic properties at the most fundamental level of reality?
Studies in History and Philosophy of Modern Physics
Publication date: Available online 13 September 2017
Source:Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics
Author(s): Philipp Berghofer
Ontic structural realism refers to the novel, exciting, and widely discussed basic idea that the structure of physical reality is genuinely relational. In its radical form, the doctrine claims that there are, in fact, no objects but only structure, i.e., relations. More moderate approaches state that objects have only relational but no intrinsic properties. In its most moderate and most tenable form, ontic structural realism assumes that at the most fundamental level of physical reality there are only relational properties. This means that the most fundamental objects only possess relational but no non-reducible intrinsic properties. The present paper will argue that our currently best physics refutes even this most moderate form of ontic structural realism. More precisely, I will claim that 1) according to quantum field theory, the most fundamental objects of matter are quantum fields and not particles, and show that 2) according to the Standard Model, quantum fields have intrinsic non-relational properties.
Comment on "Peres experiment using photons: No test for hypercomplex (quaternionic) quantum theories"
PRA: Fundamental concepts
Author(s): Lorenzo M. Procopio, Lee A. Rozema, Borivoje Dakić, and Philip Walther
In his recent article [Phys. Rev. A 95, 060101(R) (2017)], Adler questions the usefulness of the bound found in our experimental search for genuine effects of hypercomplex quantum mechanics [Nat. Commun. 8, 15044 (2017)]. Our experiment was performed using a black-box (instrumentalist) approach to g…
[Phys. Rev. A 96, 036101] Published Thu Sep 14, 2017
Can the Two-Time Interpretation of Quantum Mechanics Solve the Measurement Problem?
Robertson, Kate (2017) Can the Two-Time Interpretation of Quantum Mechanics Solve the Measurement Problem? [Preprint]
Information Causality, the Tsirelson Bound, and the 'Being-Thus' of Things
Cuffaro, Michael E. (2017) Information Causality, the Tsirelson Bound, and the 'Being-Thus' of Things. [Preprint]
The Principal Principle does not imply the Principle of Indifference
Pettigrew, Richard (2017) The Principal Principle does not imply the Principle of Indifference. [Preprint]
What if the diminutive electron isn't as small as it gets?
New Scientist – Home
We thought electrons and their two mysterious siblings were fundamental particles. Now there are hints that we need to go smaller still to understand matter
What quantum measurements measure
Author(s): Robert B. Griffiths
A solution to the second measurement problem, determining what prior microscopic properties can be inferred from measurement outcomes ("pointer positions"), is worked out for projective and generalized (POVM) measurements, using consistent histories. The result supports the idea that equipment prope…
[Phys. Rev. A 96, 032110] Published Wed Sep 13, 2017
First quantum computers need smart software
Nature – Issue – nature.com science feeds
on 2017-9-13 5:00am GMT
Nature 549, 7671 (2017). doi:10.1038/549149a
Authors: Will Zeng, Blake Johnson, Robert Smith, Nick Rubin, Matt Reagor, Colm Ryan & Chad Rigetti
Early devices must solve real-world problems, urge Will Zeng and colleagues.
Roads towards fault-tolerant universal quantum computation
Nature 549, 7671 (2017). doi:10.1038/nature23460
Authors: Earl T. Campbell, Barbara M. Terhal & Christophe Vuillot
A practical quantum computer must not merely store information, but also process it. To prevent errors introduced by noise from multiplying and spreading, a fault-tolerant computational architecture is required. Current experiments are taking the first steps toward noise-resilient logical qubits. But to convert these quantum
On the correct interpretation of p values and the importance of random variables
Rochefort-Maranda, Guillaume (2016) On the correct interpretation of p values and the importance of random variables. Synthese, 193. ISSN 1573-0964
Emergence of cosmological Friedmann equations from quantum entanglement. (arXiv:1709.03290v1 [gr-qc])
on 2017-9-12 12:39pm GMT
Authors: Xian-Hui Ge, Can-Can Wang
We study the deep connections between the concepts of quantum information theory and cosmology. Employing Fermi normal coordinates and conformal Fermi coordinates, we construct a relation between Friedmann equations of Friedmann-Lemaitre-Robertson-Walker universe and entanglement. Friedmann equations are derived with the first law of entanglement under the assumption that entanglement entropy in a geodesic balls is maximized at fixed volume.
Free Will in the Theory of Everything. (arXiv:1709.02874v1 [quant-ph])
Authors: Gerard 't Hooft
From what is known today about the elementary particles of matter, and the forces that control their behavior, it may be observed that still a host of obstacles must be overcome that are standing in the way of further progress of our understanding. Most researchers conclude that drastically new concepts must be investigated, new starting points are needed, older structures and theories, in spite of their successes, will have to be overthrown, and new, superintelligent questions will have to be asked and investigated. In short, they say that we shall need new physics. Here, we argue in a different manner. Today, no prototype, or toy model, of any so-called Theory of Everything exists, because the demands required of such a theory appear to be conflicting. The demands that we propose include locality, special and general relativity, together with a fundamental finiteness not only of the forces and amplitudes, but also of the set of Nature's dynamical variables. We claim that the two remaining ingredients that we have today, Quantum Field Theory and General Relativity, indeed are coming a long way towards satisfying such elementary requirements. Putting everything together in a Grand Synthesis is like solving a gigantic puzzle. We argue that we need the correct analytical tools to solve this puzzle. Finally, it seems to be obvious that this solution will give room neither for "Divine Intervention", nor for "Free Will", an observation that, all by itself, can be used as a clue. We claim that this reflects on our understanding of the deeper logic underlying quantum mechanics.
The Algebra of the Pseudo-Observables II: The Measurement Problem. (arXiv:1708.01170v2 [quant-ph] UPDATED)
Authors: Edoardo Piparo
In this second paper, we develop the full mathematical structure of the algebra of the pseudo-observables, in order to solve the quantum measurement problem. Quantum state vectors are recovered but as auxiliary pseudo-observables storing the information acquired in a set of observations. The whole process of measurement is deeply reanalyzed in the conclusive section, evidencing original aspects. The relation of the theory with some popular interpretations of Quantum Mechanics is also discussed, showing that both Relational Quantum Mechanics and Quantum Bayesianism may be regarded as compatible interpretations of the theory. A final discussion on reality, tries to bring a new insight on it.
Relativity, Anomalies and Objectivity Loophole in Recent Tests of Local Realism. (arXiv:1709.03348v1 [quant-ph])
Authors: Adam Bednorz
Local realism is in conflict with special quantum Bell-type models. Recently, several experiments have demonstrated violation of local realism if we trust their setup, assuming special relativity is valid. In this paper we question the assumption of relativity, point out anomalies that have not been commented on, and show that the experiments have not closed the objectivity loophole because clonability of the result has not been demonstrated. We propose several improvements in further experimental tests of local realism to make the violation more convincing.
Relativistic Dynamical Collapse Theories Must Employ Nonstandard Degrees of Freedom. (arXiv:1709.03219v1 [quant-ph])
Authors: Wayne C. Myrvold
The impossibility of an indeterministic evolution for standard relativistic quantum field theories, that is, theories in which all fields satisfy the condition that the generators of spacetime translation have spectrum in the forward light-cone, is demonstrated. The demonstration proceeds by arguing that a relativistically invariant theory must have a stable vacuum, and then showing that stability of the vacuum, together with the requirements imposed by relativistic causality, entails deterministic evolution, if all degrees of freedom are standard degrees of freedom.
Perturbation Theory for Weak Measurements in Quantum Mechanics, I — Systems with Finite-Dimensional State Space. (arXiv:1709.03149v1 [math-ph])
Authors: M. Ballesteros, N. Crawford, M. Fraas, J. Fröhlich, B. Schubnel
The quantum theory of indirect measurements in physical systems is studied. The example of an indirect measurement of an observable represented by a self-adjoint operator $\mathcal{N}$ with finite spectrum is analysed in detail. The Hamiltonian generating the time evolution of the system in the absence of direct measurements is assumed to be given by the sum of a term commuting with $\mathcal{N}$ and a small perturbation not commuting with $\mathcal{N}$. The system is subject to repeated direct (projective) measurements using a single instrument whose action on the state of the system commutes with $\mathcal{N}$. If the Hamiltonian commutes with the observable $\mathcal{N}$ (i.e., if the perturbation vanishes) the state of the system approaches an eigenstate of $\mathcal{N}$, as the number of direct measurements tends to $\infty$. If the perturbation term in the Hamiltonian does \textit{not} commute with $\mathcal{N}$ the system exhibits "jumps" between different eigenstates of $\mathcal{N}$. We determine the rate of these jumps to leading order in the strength of the perturbation and show that if time is re-scaled appropriately a maximum likelihood estimate of $\mathcal{N}$ approaches a Markovian jump process on the spectrum of $\mathcal{N}$, as the strength of the perturbation tends to $0$.
All Entangled States can Demonstrate Nonclassical Teleportation
PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc.
Author(s): Daniel Cavalcanti, Paul Skrzypczyk, and Ivan Šupić
A new benchmark for quantum teleportation shows that more entangled states are viable for this than previously thought.
[Phys. Rev. Lett. 119, 110501] Published Tue Sep 12, 2017
Improved Noninterferometric Test of Collapse Models Using Ultracold Cantilevers
Author(s): A. Vinante, R. Mezzena, P. Falferi, M. Carlesso, and A. Bassi
Oscillations of a microcantilever at milliKelvin temperature have been accurately measured, leading to more stringent limits on the collapse parameters of wave functions and revealing a noise source of unknown origin.
The Madelung Picture as a Foundation of Geometric Quantum Theory
Latest Results for Foundations of Physics
Despite its age, quantum theory still suffers from serious conceptual difficulties. To create clarity, mathematical physicists have been attempting to formulate quantum theory geometrically and to find a rigorous method of quantization, but this has not resolved the problem. In this article we argue that a quantum theory recursing to quantization algorithms is necessarily incomplete. To provide an alternative approach, we show that the Schrödinger equation is a consequence of three partial differential equations governing the time evolution of a given probability density. These equations, discovered by Madelung, naturally ground the Schrödinger theory in Newtonian mechanics and Kolmogorovian probability theory. A variety of far-reaching consequences for the projection postulate, the correspondence principle, the measurement problem, the uncertainty principle, and the modeling of particle creation and annihilation are immediate. We also give a speculative interpretation of the equations following Bohm, Vigier and Tsekov, by claiming that quantum mechanical behavior is possibly caused by gravitational background noise.
If NYC subways obeyed quantum maths trains wouldn't be delayed
New York's notoriously unreliable subway system isn't all bad. Some lines follow statistical patterns seen in quantum systems, and run better for it
Quantum Image Processing and Its Application to Edge Detection: Theory and Experiment
Recent Articles in Phys. Rev. X
Author(s): Xi-Wei Yao, Hengyan Wang, Zeyang Liao, Ming-Cheng Chen, Jian Pan, Jun Li, Kechao Zhang, Xingcheng Lin, Zhehui Wang, Zhihuang Luo, Wenqiang Zheng, Jianzhong Li, Meisheng Zhao, Xinhua Peng, and Dieter Suter
Analysis of the large amounts of image data requires increasingly expensive and time-consuming computational resources. Quantum computing may offer a shortcut. A new edge-detection algorithm based on a specific quantum image representation shows exponentially faster performance compared to classical methods.
[Phys. Rev. X 7, 031041] Published Mon Sep 11, 2017
Applied Water Science
December 2017, Volume 7, Issue 8, pp 4793–4800
Isotherm investigation for the sorption of fluoride onto Bio-F: comparison of linear and non-linear regression method
Manish Yadav
Short Research Communication
First Online: 01 August 2017
A comparison of the linear and non-linear regression method in selecting the optimum isotherm among three most commonly used adsorption isotherms (Langmuir, Freundlich, and Redlich–Peterson) was made to the experimental data of fluoride (F) sorption onto Bio-F at a solution temperature of 30 ± 1 °C. The coefficient of correlation (\(r^{2}\)) was used to select the best theoretical isotherm among the investigated ones. A total of four Langmuir linear equations were discussed and out of which linear form of most popular Langmuir-1 and Langmuir-2 showed the higher coefficient of determination (0.976 and 0.989) as compared to other Langmuir linear equations. Freundlich and Redlich–Peterson isotherms showed a better fit to the experimental data in linear least-square method, while in non-linear method Redlich–Peterson isotherm equations showed the best fit to the tested data set. The present study showed that the non-linear method could be a better way to obtain the isotherm parameters and represent the most suitable isotherm. Redlich–Peterson isotherm was found to be the best representative (\(r^{2}\) = 0.999) for this sorption system. It is also observed that the values of \(\beta\) are not close to unity, which means the isotherms are approaching the Freundlich but not the Langmuir isotherm.
Keywords: Sorption · Bio-F · Fluoride · Equilibrium isotherm · Linear regression · Non-linear regression
Abbreviations
Bio-F: Bio-filter
\(r^{2}\): Coefficient of correlation
mg: Milligramme
g/L: Gramme per litre
mg/L: Milligramme per litre
°C: Degree centigrade
HDPE: High-density polyethylene
\(K_{\text{L}}\): Adsorption equilibrium constant
\(Q_{\text{m}}\): Maximum adsorption capacity
\(q_{\text{e}}\): Adsorbate adsorbed onto adsorbent
\(C_{\text{e}}\): Equilibrium liquid-phase concentration
\(K_{\text{F}}\): Freundlich constant
1/n: Heterogeneity factor
\(K_{\text{R}}\), \(a_{\text{R}}\): Redlich–Peterson constants
\(q_{\text{m}}\): Equilibrium capacity
Among the various methods used for defluoridation of drinking water, the adsorption process has been widely used because of its simplicity, affordability, easy operation, and satisfactory results (Liu et al. 2010; Deng et al. 2011; Bhatnagar et al. 2011; Xiang et al. 2014). Adsorption process includes the selective transfer of solute components onto the surface or the bulk of solid adsorbent materials in the aqueous phase (Kumar and Sivanesan 2007). The effectiveness of an adsorbent is estimated on the basis of its uptake capacity, adsorption rate, mechanical strength, possibility of regeneration, and reuse options (Tang and Zhang 2016). Among these, adsorbent capacity is the most important parameter which plays a vital role in overall process of adsorption (Oh and Park 2002; Gong et al. 2005). The uptake capacity and adsorption performance are usually determined on the basis of equilibrium experiments and sorption isotherms describing the interaction of pollutant with the adsorbent material (Brdar et al. 2012). The equilibrium studies are also very important in optimizing the design parameters for any adsorption system which provide sufficient information about physicochemical data to evaluate the adsorption process as a unit operation (Leyva-Ramos et al. 2010). The distribution of a solute between solid adsorbent and liquid phase is also a measure of the position of equilibrium. Therefore, equilibrium data should be accurately fit into different isotherm models to find a suitable one that can be used to design the process (Khaled et al. 2009).
Among the various tested isotherms for the defluoridation of drinking water, Langmuir, Freundlich, and Redlich–Peterson isotherms are frequently used over a wide concentration range of solute and sorbent to describe the adsorption equilibrium for water and wastewater treatment applications (Ho et al. 2005; Ho 2006a, b; Kumar and Sivanesan 2007).
The conventional approach for parameter evaluation of the non-linear forms of the aforementioned isotherms involves linearization of the expressions through transformation, followed by linear regression. The main disadvantage of the linear regression technique, which limits its use, is that only two variables of an empirical equation can be estimated, whereas non-linear optimization provides a more complex, yet mathematically rigorous, method to determine the isotherm parameter values (Pal et al. 2013).
Linear regression analysis has frequently been employed in assessing the quality of fits and the adsorption performance for fluoride removal from aqueous solutions (Fan et al. 2003; Onyango et al. 2004; Kamble et al. 2009; Foo and Hameed 2010). The fitting validity of different models was tested in linearized forms using the coefficient of determination. Several other error measures have recently been reported for predicting the optimum isotherm, including the correlation coefficient, the sum of errors squared, a hybrid error function, Spearman's correlation coefficient, the standard deviation of relative errors, the coefficient of non-determination, Marquardt's percent standard deviation, the average relative error, and the sum of absolute errors (Kumar et al. 2008; Foo and Hameed 2010). Currently, the non-linear regression method is regarded as the best way to select the optimum isotherm, but very limited published literature is available for this type of adsorption system, i.e., the Bio-F–F system. Non-linear regression involves minimizing the error distribution between the experimental equilibrium data and the predicted isotherm (Krishni et al. 2014).
With these considerations in mind, the linear least-squares method and the non-linear regression method are compared in the present study using the experimental adsorption data of F onto Bio-F. The three widely used isotherms (Langmuir, Freundlich, and Redlich–Peterson) were investigated to discuss this issue. A trial-and-error optimization method was used for the non-linear regression using the solver add-in function of Microsoft Excel (Kumar and Sivanesan 2007; Krishni et al. 2014). In order to solve the non-linear equations of the applied isotherms, the Excel add-in software xlstat was used in this study (Shahmohammadi-Kalalagh and Babazadeh 2014; Kausar et al. 2014). This study was done in continuation of our previous work (Yadav et al. 2014, 2015) to examine the most suitable isotherm for the Bio-F–F system using linear and non-linear equations.
Experimental programmes
All chemicals used throughout this study were of analytical grade and purchased mainly from Merck India Limited. The stock solution of fluoride was prepared by dissolving the appropriate amount (221 mg) of anhydrous NaF in 1 L of double-distilled water to a concentration of 100 mg/L. The test working solutions of F were prepared by successive dilution with double-distilled water. All the batch adsorption studies were undertaken using Bio-F adsorbent, manufactured by HES Water Engineers (I) Pvt. Ltd. (a joint venture company of water engineers, Australia). More details about this adsorbent are given in our previous studies (Yadav et al. 2014, 2015).
Batch equilibrium adsorption experiments were conducted to investigate the adsorption behaviour of Bio-F at a constant dose of 10 g/L and varying concentrations of fluoride. All the adsorption experiments were carried out at room temperature of 30 ± 1 °C. To study the various process parameters, a series of conical flasks having test solution and adsorbent, was then shaken at a constant speed of 90 rpm in an orbital shaker with thermostatic control (Remi, India). At the end of the required contact time (when equilibrium was achieved), flasks were removed from the shaker and allowed to stand for 5 min for the adsorbent to settle down. After the fluoride adsorption equilibrium studies, the treated and untreated samples were filtered through Whatman filter paper No. 42 and stored in HDPE bottles for the further analysis of the residual F using an ion meter (Thermo Scientific Orion 5-Star ion meter).
Isotherm models
Equilibrium isotherm equations are used in this study to describe the experimental sorption data of present adsorption system. The equation parameters with the underlying thermodynamic assumptions of these isotherm models often provide an insight into the sorption mechanisms, surface properties, as well as the degree of affinity of the sorbents (Ho 2006a, b). The three most common isotherms for describing solid–liquid sorption systems are Langmuir, Freundlich, and Redlich–Peterson isotherms. Langmuir adsorption isotherm, which was originally developed to describe gas–solid phase adsorption onto activated carbon, has been traditionally used to investigate the performance and potential of different bio-sorbents (Foo and Hameed 2010). This is also valid for adsorption of solutes from aqueous solutions, as monolayer adsorption on specific homogenous sites (a finite number of identical sites) within the adsorbent surface. Therefore, the Langmuir isotherm model estimates the maximum adsorption capacity achieved from complete monolayer coverage on the adsorbent surface (Nur et al. 2014). The Freundlich isotherm is an empirical model, which is the earliest known relationship describing the adsorption process. It is applicable to the gas–solid phase non-ideal and multilayer adsorption on heterogeneous surfaces with interaction between adsorbed molecules. It also suggests that sorption energy exponentially decreases upon the completion of adsorption process. Therefore, Freundlich isotherm can be applied to describe the heterogeneous adsorption systems (Ghorai and Pant 2005). The Redlich–Peterson isotherm, which contains three parameters, \(K_{\text{R}}\),\(a_{\text{R}}\) and \(\beta\), also includes the features of Langmuir and Freundlich isotherm (Brdar et al. 2012; Khaled et al. 2009). This model has a linear dependence on concentration in the numerator and an exponential function in the denominator to describe adsorption equilibrium over a wide concentration range of adsorbate, thus can be applied for both homogeneous or heterogeneous systems due to its versatility (Foo and Hameed 2010). The non-linear and linear forms of equation of Langmuir, Freundlich, and Redlich–Peterson isotherms are given in Table 1.
Table 1 Selected adsorption isotherms and their linear forms with corresponding plots

Langmuir (non-linear form): \(q_{\text{e}} = \frac{Q_{\text{m}} K_{\text{L}} C_{\text{e}}}{1 + K_{\text{L}} C_{\text{e}}}\)
Langmuir-1 (linear form): \(\frac{C_{\text{e}}}{q_{\text{e}}} = \frac{1}{q_{\text{m}}}C_{\text{e}} + \frac{1}{K_{\text{L}} q_{\text{m}}}\); plot: \(\frac{C_{\text{e}}}{q_{\text{e}}}\) vs \(C_{\text{e}}\)
Langmuir-2 (linear form): \(\frac{1}{q_{\text{e}}} = \left(\frac{1}{K_{\text{L}} q_{\text{m}}}\right)\frac{1}{C_{\text{e}}} + \frac{1}{q_{\text{m}}}\); plot: \(\frac{1}{q_{\text{e}}}\) vs \(\frac{1}{C_{\text{e}}}\)
Langmuir-3 (linear form): \(q_{\text{e}} = q_{\text{m}} - \left(\frac{1}{K_{\text{L}}}\right)\frac{q_{\text{e}}}{C_{\text{e}}}\); plot: \(q_{\text{e}}\) vs \(\frac{q_{\text{e}}}{C_{\text{e}}}\)
Langmuir-4 (linear form): \(\frac{q_{\text{e}}}{C_{\text{e}}} = K_{\text{L}} q_{\text{m}} - K_{\text{L}} q_{\text{e}}\); plot: \(\frac{q_{\text{e}}}{C_{\text{e}}}\) vs \(q_{\text{e}}\)
Freundlich (non-linear form): \(q_{\text{e}} = K_{\text{F}} C_{\text{e}}^{1/n}\)
Freundlich (linear form): \(\log q_{\text{e}} = \log K_{\text{F}} + \frac{1}{n}\log C_{\text{e}}\); plot: \(\log q_{\text{e}}\) vs \(\log C_{\text{e}}\)
Redlich–Peterson (non-linear form): \(q_{\text{e}} = \frac{K_{\text{R}} C_{\text{e}}}{1 + a_{\text{R}} C_{\text{e}}^{\beta}}\)
Redlich–Peterson (linear form): \(\ln\left(K_{\text{R}} \frac{C_{\text{e}}}{q_{\text{e}}} - 1\right) = \beta \ln\left(C_{\text{e}}\right) + \ln\left(a_{\text{R}}\right)\); plot: \(\ln\left(K_{\text{R}} \frac{C_{\text{e}}}{q_{\text{e}}} - 1\right)\) vs \(\ln\left(C_{\text{e}}\right)\)

\(K_{\text{L}}\) is the adsorption equilibrium constant (L/mg), \(K_{\text{F}}\) and 1/n are empirical constants of the Freundlich isotherm, and \(K_{\text{R}}\), \(a_{\text{R}}\) and \(\beta\) (0 < \(\beta\) < 1) are the three constants of the Redlich–Peterson isotherm.
A trial-and-error procedure was applied to determine the parameters of the investigated isotherms. In this study, the coefficient of determination, \(r^{2}\), was used for the experimental data to determine the best-fit isotherm model out of the three used isotherms (Freundlich, Langmuir and Redlich–Peterson). The value of \(r^{2}\) for non-linear regression was evaluated using the following formula (Ho 2006a, b):
$$r^{2} = \frac{\sum \left( q_{\text{m}} - \overline{q_{\text{e}}} \right)^{2}}{\sum \left( q_{\text{m}} - \overline{q_{\text{e}}} \right)^{2} + \sum \left( q_{\text{m}} - q_{\text{e}} \right)^{2}},$$
where \(q_{\text{m}}\) is the sorption equilibrium capacity obtained by calculating from the isotherm model, \(q_{\text{e}}\) is experimental capacity obtained from experiment, and \(\overline{{q_{\text{e}} }}\) is the average of \(q_{\text{e}}\) values.
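As an illustration of this non-linear fitting procedure, the short Python sketch below fits the three isotherms by non-linear least squares and evaluates \(r^{2}\) as defined above. It is only a sketch: the \(C_{\text{e}}\)–\(q_{\text{e}}\) values are illustrative placeholders, not the measured data of this study, and scipy is used here in place of the xlstat add-in employed in the actual analysis.

```python
# Sketch only: non-linear least-squares fit of the Langmuir, Freundlich and
# Redlich-Peterson isotherms, with r^2 computed as in the formula above.
# The (Ce, qe) values are illustrative placeholders, not the measured data.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([0.5, 1.2, 2.5, 4.0, 6.5, 9.0])        # equilibrium concentration (mg/L), placeholder
qe = np.array([0.45, 0.78, 1.10, 1.32, 1.55, 1.68])  # equilibrium uptake (mg/g), placeholder

def langmuir(Ce, qm, KL):
    return qm * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

def redlich_peterson(Ce, KR, aR, beta):
    return KR * Ce / (1.0 + aR * Ce ** beta)

def r_squared(q_model, q_exp):
    # coefficient of determination for the non-linear fits
    qbar = q_exp.mean()
    ss_model = np.sum((q_model - qbar) ** 2)
    ss_res = np.sum((q_model - q_exp) ** 2)
    return ss_model / (ss_model + ss_res)

models = [("Langmuir", langmuir, (2.0, 1.0)),
          ("Freundlich", freundlich, (1.0, 2.0)),
          ("Redlich-Peterson", redlich_peterson, (2.0, 1.0, 0.9))]
for name, func, p0 in models:
    popt, _ = curve_fit(func, Ce, qe, p0=p0, maxfev=10000)
    print(name, popt, r_squared(func(Ce, *popt), qe))
```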
Linear regression method
Linear regression is the most commonly used method to determine the best-fit isotherm and the method of least squares has been used for finding parameters of the isotherm models (Krishni et al. 2014; Kausar et al. 2014). It was observed that Langmuir isotherm could be linearized into at least four different forms as in Table 1, and simple linear regression will result in different parameter estimates. The most popular linear forms used are Langmuir-1 and Langmuir-2. The best-fit is obtained using Langmuir-2 because of the minimized deviations from the fitted equation resulting in the best error distribution (Ho 2006a, b). Figure 1 shows the plots of four linear Langmuir equations and the experimental data for the sorption of F onto Bio-F, and it was found that Langmuir-2 isotherm provides a better fit to the experimental equilibrium data.
Fig. 1 Different linear forms of Langmuir isotherm obtained using the linear method for the sorption of fluoride onto Bio-F
The applicability of the Freundlich isotherm model was also analysed, using the same set of experimental data, by plotting \(\log q_{\text{e}} \,{\text{vs}}\,\log C_{\text{e}}\). Considering the linear forms of Langmuir isotherm for the comparison between the different isotherms, Freundlich isotherm was found to be more suitable as compared to all linear forms of the Langmuir isotherm because of the higher value of coefficient of determination, \(r^{2}\). The Freundlich isotherm constants, \(K_{\text{F}}\), 1/n, and coefficients of determination are calculated and shown in Table 2.
Table 2 Isotherm parameters obtained using the linear and non-linear methods: \(Q_{\text{m}}\), \(K_{\text{L}}\) and \(r^{2}\) for the Langmuir isotherm; \(K_{\text{F}}\), \(n\) and \(r^{2}\) for the Freundlich isotherm; and \(K_{\text{R}}\), \(a_{\text{R}}\), \(\beta\) and \(r^{2}\) for the Redlich–Peterson isotherm, reported for each of the four linear Langmuir forms (Linear 1–4), the linear Freundlich and Redlich–Peterson forms, and the non-linear fits.
Figure 2 shows the plots of linear Freundlich equations with experimental data for sorption of F onto Bio-F. The Redlich–Peterson isotherm constants, \(K_{\text{R}}\),\(a_{\text{R}}\) and \(\beta\), as well as the \(r^{2}\) for present adsorption system were obtained using the linear form of isotherm and presented in Table 2.
Fig. 2 Freundlich isotherm obtained using the linear method for the sorption of fluoride onto Bio-F
In all cases the Redlich–Peterson isotherm exhibited the same highest coefficient of determination as the Freundlich isotherm, providing a considerably better fit than the Langmuir isotherm and a fit similar to the Freundlich isotherm. It can also be observed that the values of β are not close to unity, which means the isotherms are approaching the Freundlich and not the Langmuir. Figure 3 shows the plot of the Redlich–Peterson isotherm equations with the equilibrium experimental data.
Fig. 3 Redlich–Peterson isotherm obtained using the linear method for the sorption of fluoride onto Bio-F
Non-linear regression method
The non-linear regression method is a trial-and-error procedure and is the best way of selecting the optimum isotherm. To perform it, the xlstat software is used, an add-in for the Microsoft Excel spreadsheet that is more efficient than the solver add-in. The abilities of the three used isotherms, i.e., the Freundlich, Langmuir, and Redlich–Peterson isotherms, were examined to model the equilibrium sorption data of fluoride onto Bio-F. The results obtained from the four linear Langmuir equations were observed to be quite similar to each other. When using the non-linear method, there were no problems with the non-linear Langmuir isotherm equations, as they share the same error structure. The Langmuir constants obtained from the non-linear and linear methods differed even when compared with the results of the Langmuir-1 isotherm, which had the highest coefficient of determination of any Langmuir isotherm (Table 2). It seems that the isotherm obtained from Langmuir-1 provided the best fit to the experimental data as compared to the other Langmuir linear equations because it had the highest coefficient of determination. The values of the Langmuir constants, i.e., \(K_{\text{L}}\) and \(q_{\text{m}}\), were found to be close to those obtained by using the non-linear method. Figure 4 shows that the Redlich–Peterson isotherm, with almost similar coefficients of determination, seems to be the best-fit model for the experimental data. It has been reported that it is inappropriate to use the coefficient of determination of a linear regression analysis to compare the best-fitting models of different isotherms (Ho 2006a, b; Maliyekkal et al. 2006); indeed, the linear regressions produced a wide spread of different outcomes. Consequently, the Redlich–Peterson isotherm was found to be the best-fit model for the present sorption system. Unlike in the linear analysis, choosing a different isotherm would significantly affect the \(r^{2}\) value and the determination of the other parameters, whereas the use of the non-linear method avoids such errors.
Fig. 4 Isotherms obtained using the non-linear method for the fluoride adsorption onto Bio-F: a Langmuir, b Freundlich, c Redlich–Peterson
Comparative account of linear and non-linear regression method
It is important to mention here that predicting the optimum isotherm by using only the linear method is not appropriate (Yadav et al. 2014), as different forms of one single Langmuir equation may be applicable for a particular adsorption system. Consequently, the results produced by these four equations may differ significantly, as shown in Table 2. The probable reason behind the unlike outcomes of different linearized forms of one equation is the variation in the derived error functions; the error distribution may vary depending on the way of linearization. The same has been evidenced for the equilibrium sorption data of the present adsorption system. Another possible reason behind the variable results may be the different axial settings, which alter the result of the linear regression and influence the determination process. Thus, it can be concluded that it is more appropriate to use the non-linear method to estimate the parameters of an isotherm or a rate equation (Kumar and Sivanesan 2007). The non-linear method also has the advantage that the error distribution is not altered, as it is in the linear technique, because all the equilibrium parameters are fixed on the same axes.
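The dependence of the fitted parameters on the chosen linearization can be reproduced with a minimal sketch: each of the four linear Langmuir forms of Table 1 is fitted by ordinary least squares and generally returns a different (\(q_{\text{m}}\), \(K_{\text{L}}\)) pair from the same data. The data below are placeholders, not the experimental values of this study.

```python
# Sketch only: the four linearized Langmuir forms of Table 1 fitted by
# ordinary least squares generally return different (qm, KL) estimates
# because each transformation redistributes the fitting error.
import numpy as np

Ce = np.array([0.5, 1.2, 2.5, 4.0, 6.5, 9.0])        # placeholder data
qe = np.array([0.45, 0.78, 1.10, 1.32, 1.55, 1.68])

# Langmuir-1: Ce/qe = (1/qm) Ce + 1/(KL qm)
s, i = np.polyfit(Ce, Ce / qe, 1)
print("Langmuir-1: qm, KL =", 1.0 / s, s / i)

# Langmuir-2: 1/qe = (1/(KL qm)) (1/Ce) + 1/qm
s, i = np.polyfit(1.0 / Ce, 1.0 / qe, 1)
print("Langmuir-2: qm, KL =", 1.0 / i, i / s)

# Langmuir-3: qe = qm - (1/KL) (qe/Ce)
s, i = np.polyfit(qe / Ce, qe, 1)
print("Langmuir-3: qm, KL =", i, -1.0 / s)

# Langmuir-4: qe/Ce = KL qm - KL qe
s, i = np.polyfit(qe, qe / Ce, 1)
print("Langmuir-4: qm, KL =", i / (-s), -s)
```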
The following conclusions can be drawn from this study:
The equilibrium sorption data of F onto Bio-F sorbent is explained using the linear and non-linear forms of Langmuir, Freundlich and Redlich–Peterson isotherms.
The comparison of linear and non-linear regression method shows that non-linear regression is more reliable as compared to linear regression method for the prediction of best-fit isotherm as well as parameter determination for the adsorption of F onto Bio-F.
The values of \(r^{2}\) of Langmuir-1 and Langmuir-2 presented in this study are close to those of the non-linear form of Langmuir isotherm, while Langmuir-3 and Langmuir-4 showed almost similar \(r^{2}\) values. The values of \(r^{2}\) of linear forms of Freundlich isotherm were observed to be different in comparison to the value of non-linear form of the Redlich–Peterson isotherm.
Redlich–Peterson isotherm was found to be the best-fit among the investigated isotherms suggesting that the isotherms are approaching the Freundlich, but not the Langmuir isotherm on basis of \(\beta\) value which is not close to unity.
The authors are thankful to Dr. A.B. Gupta for his suggestions and support in carrying out experimental work.
Bhatnagar A, Kumar E, Sillanpää M (2011) Fluoride removal from water by adsorption—a review. Chem Eng J 171(3):811–840CrossRefGoogle Scholar
Brdar M, Šćiban M, Takači A, Došenović T (2012) Comparison of two and three parameters adsorption isotherm for Cr(VI) onto Kraft lignin. Chem Eng J 183(February):108–111CrossRefGoogle Scholar
Deng S, Liu H, Zhou W, Huang J, Yu G (2011) Mn-Ce oxide as a high-capacity adsorbent for fluoride removal from water. J Hazard Mater 186(2–3):1360–1366CrossRefGoogle Scholar
Fan X, Parker DJ, Smith MD (2003) Adsorption kinetics of fluoride on low cost materials. Water Res 37(20):4929–4937CrossRefGoogle Scholar
Foo KY, Hameed BH (2010) Insights into the modeling of adsorption isotherm systems. Chem Eng J 156(1):2–10CrossRefGoogle Scholar
Ghorai S, Pant KK (2005) Equilibrium, kinetics and breakthrough studies for adsorption of fluoride on activated alumina. Sep Purif Technol 42:265–271. doi: 10.1016/j.seppur.2004.09.001 CrossRefGoogle Scholar
Gong R, Sun Y, Chen J, Liu H, Yang C (2005) Effect of chemical modification on dye adsorption capacity of peanut hull. Dyes Pigment 67:175–181. doi: 10.1016/j.dyepig.2004.12.003 CrossRefGoogle Scholar
Ho YS (2006a) Isotherms for the sorption of lead onto peat: comparison of linear and non-linear methods. Pol J Environ Stud 15(1):81–86Google Scholar
Ho YS (2006b) Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods. Water Res 40(1):119–125CrossRefGoogle Scholar
Ho YS, Chiu WT, Wang CC (2005) Regression analysis for the sorption isotherms of basic dyes on sugarcane dust. Biores Technol 96(11):1285–1291CrossRefGoogle Scholar
Kamble SP, Dixit P, Rayalu SS, Labhsetwar NK (2009) Defluoridation of drinking water using chemically modified bentonite clay. Desalination 249(2):687–693CrossRefGoogle Scholar
Kausar A, Bhatti HN, Sarfraz RA, Shahid M (2014) Prediction of optimum equilibrium and kinetic models for U(VI) sorption onto rice husk: comparison of linear and nonlinear regression methods. Desalination Water Treat 52(7–9):1495–1503CrossRefGoogle Scholar
Khaled A, El Nemr A, El-Sikaily A, Abdelwahab O (2009) Treatment of artificial textile dye effluent containing direct yellow 12 by orange peel carbon. Desalination 238(1–3):210–232CrossRefGoogle Scholar
Krishni RR, Foo KY, Hameed BH (2014) Adsorption of methylene blue onto papaya leaves: comparison of linear and nonlinear isotherm analysis. Desalination and Water Treat 52(34–36):6712–6719CrossRefGoogle Scholar
Kumar KV, Sivanesan S (2007) Sorption isotherm for safranin onto rice husk: comparison of linear and non-linear methods. Dyes Pigment 72(1):130–133CrossRefGoogle Scholar
Kumar KV, Porkodi K, Rocha F (2008) Comparison of various error functions in predicting the optimum isotherm by linear and non-linear regression analysis for the sorption of basic red 9 by activated carbon. J Hazard Mater 150(1):158–165CrossRefGoogle Scholar
Leyva-Ramos R, Rivera-Utrilla J, Medellin-Castillo NA, Sanchez-Polo M (2010) Kinetic modeling of fluoride adsorption from aqueous solution onto bone char. Chem Eng J 158:458–467. doi: 10.1016/j.cej.2010.01.019 CrossRefGoogle Scholar
Liu Q, Guo H, Shan Y (2010) Adsorption of fluoride on synthetic siderite from aqueous solution. J Fluorine Chem 131(5):635–641CrossRefGoogle Scholar
Maliyekkal SM, Sharma AK, Philip L (2006) Manganese-oxide-coated alumina: a promising sorbent for defluoridation of water. Water Res 40(19):3497–3506CrossRefGoogle Scholar
Nur T, Loganathan P, Nguyen TC, Vigneswaran S, Singh G, Kandasamy J (2014) Batch and column adsorption and desorption of fluoride using hydrous ferric oxide: solution chemistry and modeling. Chem Eng J 247:93–102. doi: 10.1016/j.cej.2014.03.009 CrossRefGoogle Scholar
Oh GH, Park CR (2002) Preparation and characteristics of rice-straw-based porous carbons with high adsorption capacity. Fuel 81:327–336. doi: 10.1016/S0016-2361(01)00171-5 CrossRefGoogle Scholar
Onyango MS, Kojima Y, Aoyi O, Bernardo EC, Matsuda H (2004) Adsorption equilibrium modeling and solution chemistry dependence of fluoride removal from water by trivalent-cation-exchanged zeolite F-9. J Colloid Interface Sci 279(2):341–350CrossRefGoogle Scholar
Pal S, Mukherjee S, Ghosh S (2013) Nonlinear kinetic analysis of phenol adsorption onto peat soil. Environ Earth Sci 71:1593–1603. doi: 10.1007/s12665-013-2564-z CrossRefGoogle Scholar
Shahmohammadi-Kalalagh SH, Babazadeh H (2014) Isotherms for the sorption of zinc and copper onto kaolinite: comparison of various error functions. Int J Environ Sci Technol 11(1):111–118CrossRefGoogle Scholar
Tang D, Zhang G (2016) Efficient removal of fluoride by hierarchical Ce–Fe bimetal oxides adsorbent: thermodynamics, kinetics and mechanism. Chem Eng J 283:721–729. doi: 10.1016/j.cej.2015.08.019 CrossRefGoogle Scholar
Xiang W, Zhang G, Zhang Y, Tang D, Wang J (2014) Synthesis and characterization of cotton-like Ca–Al–La composite as an adsorbent for fluoride removal. Chem Eng J 250:423–430. doi: 10.1016/j.cej.2014.03.118 CrossRefGoogle Scholar
Yadav M, Singh NK, Brighu U, Mathur S (2014) Adsorption of F on Bio-Filter sorbent: kinetics, equilibrium, and thermodynamic study. Desalin Water Treat 56(2):463–474CrossRefGoogle Scholar
Yadav M, Tripathi P, Choudhary A, Brighu U, Mathur S (2015) Adsorption of fluoride from aqueous solution by Bio-F sorbent: a fixed-bed column study. Desalin Water Treat 57(14):6624–6631CrossRefGoogle Scholar
© The Author(s) 2017
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Department of Civil Engineering, Malviya National Institute of Technology, Jaipur, India
2. Department of Civil Engineering, Indian Institute of Technology, Roorkee, India
Yadav, M. & Singh, N.K. Appl Water Sci (2017) 7: 4793. https://doi.org/10.1007/s13201-017-0602-9
Received 02 August 2016
First Online 01 August 2017
npj quantum information
Unfolding quantum computer readout noise
Benjamin Nachman1,
Miroslav Urbanek2,
Wibe A. de Jong2 &
Christian W. Bauer1
npj Quantum Information volume 6, Article number: 84 (2020)
Quantum information
In the current era of noisy intermediate-scale quantum computers, noisy qubits can result in biased results for early quantum algorithm applications. This is a significant challenge for interpreting results from quantum computer simulations for quantum chemistry, nuclear physics, high energy physics (HEP), and other emerging scientific applications. An important class of qubit errors are readout errors. The most basic method to correct readout errors is matrix inversion, using a response matrix built from simple operations to probe the rate of transitions from known initial quantum states to readout outcomes. One challenge with inverting matrices with large off-diagonal components is that the results are sensitive to statistical fluctuations. This challenge is familiar to HEP, where prior-independent regularized matrix inversion techniques ("unfolding") have been developed for years to correct for acceptance and detector effects, when performing differential cross section measurements. We study one such method, known as iterative Bayesian unfolding, as a potential tool for correcting readout errors from universal gate-based quantum computers. This method is shown to avoid pathologies from commonly used matrix inversion and least squares methods.
While quantum algorithms are promising techniques for a variety of scientific and industrial applications, current challenges limit their immediate applicability. One significant limitation is the rate of errors and decoherence in noisy intermediate-scale quantum (NISQ) computers1. Mitigating errors is hard in general because quantum bits ("qubits") cannot be copied2,3,4. An important family of errors are readout errors. They typically arise from two sources: (1) measurement times are significant in comparison to decoherence times, and thus a qubit in the \(\left|1\right\rangle\) state can decay to the \(\left|0\right\rangle\) state during a measurement, and (2) probability distributions of measured physical quantities that correspond to the \(\left|0\right\rangle\) and \(\left|1\right\rangle\) states have overlapping support, and there is a small probability of measuring the opposite value. The goal of this paper is to investigate methods for correcting these readout errors. This is complementary to efforts for gate error corrections. One strategy for mitigating such errors is to build in error correcting components into quantum circuits. Quantum error correction5,6,7,8,9 is a significant challenge because qubits cannot be cloned2,3,4. This generates a significant overhead in the additional number of qubits and gates required to detect or correct errors. Partial error detection/correction has been demonstrated for simple quantum circuits10,11,12,13,14,15,16,17,18,19, but complete error correction is infeasible for current qubit counts and moderately deep circuits. As a result, many studies with NISQ devices use the alternative zero noise extrapolation strategy, whereby circuit noise is systematically increased and then extrapolated to zero20,21,22,23,24,25. Ultimately, both gate and readout errors must be corrected for a complete measurement and a combination of strategies may be required.
Correcting measured histograms for the effects of a detector has a rich history in image processing, astronomy, high energy physics (HEP), and beyond. In the latter, the histograms represent binned differential cross sections and the correction is called unfolding. Many unfolding algorithms have been proposed and are currently in use by experimental high energy physicists (see, e.g., refs. 26,27,28 for reviews). One of the goals of this paper is to introduce these methods and connect them with current algorithms used by the quantum information science (QIS) community.
Quantum readout error correction can be represented as histogram unfolding, where each bin corresponds to one of the possible \({2}^{{n}_{\text{qubit}}}\) configurations, where \(n_{\text{qubit}}\) is the number of qubits (Fig. 1). Correcting readout noise is a classical problem (though there has been a proposal to do it with quantum annealing29), but relies on calibrations or simulations from quantum hardware. Even though discussions of readout errors appear in many papers (see, e.g., refs. 24,30,31,32), we are not aware of any dedicated study comparing unfolding methods for QIS applications. Furthermore, current QIS methods have pathologies that can be avoided with techniques from HEP. In particular, the most popular quantum simulators pyQuil33 (by Rigetti), Cirq34,35 (by Google), and XACC36,37,38 implement a version of matrix inversion, and the other popular simulator Qiskit by IBM39,40 uses a least squares method (see also refs. 41,42) that is the same as the matrix inversion solution when the latter is nonnegative. Challenges with methods based on matrix inversion will be discussed in more detail below.
Fig. 1: A schematic diagram illustrating the connection between binned differential cross section measurements in high energy physics (left) and interpreting the output of repeated measurements from quantum computers (right).
Given this connection, techniques used to mitigate detector effects for measurements in high energy physics can be studied for readout error corrections in quantum computing.
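As a minimal sketch of the matrix approach discussed above, the following Python code builds a response matrix from calibration counts (one calibration circuit per computational basis state, probing the transition rates from known initial states to readout outcomes) and applies the plain matrix-inversion correction. The calibration and measured counts are illustrative placeholders, not hardware data.

```python
# Sketch only: readout-error correction by matrix inversion for a two-qubit
# example.  R[i, j] = probability of measuring state i given true state j,
# estimated from calibration circuits; all counts are illustrative placeholders.
import numpy as np

# calibration_counts[j, i] = times state i was read out when basis state j was prepared
calibration_counts = np.array([
    [970,  15,  12,   3],   # prepared |00>
    [ 60, 925,   2,  13],   # prepared |01>
    [ 55,   3, 930,  12],   # prepared |10>
    [  5,  65,  60, 870],   # prepared |11>
], dtype=float)

R = (calibration_counts / calibration_counts.sum(axis=1, keepdims=True)).T

m = np.array([480.0, 260.0, 230.0, 30.0])  # measured counts, placeholder
t_matrix = np.linalg.solve(R, m)           # matrix-inversion estimate of the true counts
print(t_matrix)                            # entries can be negative or oscillatory
```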
Unfolded quantum readout
For the results presented here, we simulate a quantum computer using qiskit-terra 0.9.0, qiskit-aer 0.2.3, and qiskit-ignis 0.1.139. Note that multi-qubit readout errors are not supported in all versions of qiskit-aer and qiskit-ignis. This work uses a custom measure function to implement the response matrices. We choose a Gaussian distribution as the true distribution, as this is ubiquitous in quantum mechanics as the ground state of the harmonic oscillator. An alternative distribution that is mostly zero with a small number of spikes is presented in Supplementary Fig. 5. This system has been recently studied in the context of quantum field theory as a benchmark 0 + 1-dimensional noninteracting scalar field theory43,44,45,46,47,48,49,50. In practice, all of the qubits of a system would be entangled to achieve the harmonic oscillator wave function. However, this is unnecessary for studying readout errors, which act at the ensemble level. The Gaussian is mapped to qubits using the following map:
$$t(b)\propto \exp \left[-\frac{1}{2\sigma }{\left(b-{2}^{{n}_{\text{qubit}}-1}\right)}^{2}\right],$$
where b is the binary representation of a computational basis state, i.e., \(\left|00000\right\rangle\, \mapsto \,0\) and \(\left|00011\right\rangle\, \mapsto \,3\). For the results shown below, σ = 3.5.
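A minimal numpy sketch of this mapping, for nqubit = 5 and σ = 3.5, is given below; the normalization to a unit-sum probability vector is an assumption added for convenience.

```python
# Sketch only: discretized Gaussian "truth" distribution over the 2**n_qubit
# computational basis states, following the mapping in the text (sigma = 3.5).
import numpy as np

n_qubit = 5
sigma = 3.5
b = np.arange(2 ** n_qubit)                        # |00000> -> 0, |00011> -> 3, ...
t = np.exp(-0.5 / sigma * (b - 2 ** (n_qubit - 1)) ** 2)
t /= t.sum()                                       # normalize to a unit-sum probability vector
```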
As a first test, a pathological response matrix (see the "Methods" section) is used to illustrate a failure mode of the matrix inversion and ignis approaches. Figure 2 compares \({\hat{t}}_{\text{matrix}}\), \({\hat{t}}_{\text{ignis}}\), and \({\hat{t}}_{\text{IBU}}\) for a four-qubit Gaussian distribution. As the matrix inversion result is already nonnegative, the \({\hat{t}}_{\text{ignis}}\) result is nearly identical to \({\hat{t}}_{\text{matrix}}\). Both of these approaches show large oscillations. In contrast, the iterative Bayesian unfolding (IBU) method with ten iterations is nearly the same as the truth distribution. This result is largely insensitive to the choice of iteration number, though it approaches the matrix inversion result for more than about a thousand iterations. This is because the IBU method converges to a maximum likelihood estimate51, which in this case aligns with the matrix inversion result. If the matrix inversion had negative entries, then it would differ from the asymptotic limit of the IBU method, which is always nonnegative.
Fig. 2: The measurement of a Gaussian distribution (ground state of harmonic oscillator) using the pathological response matrix.
One million total shots are used both to sample from m and to construct R. The IBU method uses ten iterations.
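The IBU estimate used above can be implemented in a few lines; the sketch below assumes the standard iterative Bayesian (D'Agostini-style) update, in which the current estimate is refolded through R and corrected by the ratio of measured to expected counts, starting from a uniform prior.

```python
# Sketch only: iterative Bayesian unfolding (IBU) with a uniform prior.
# R[i, j] = probability of measuring state i given true state j; m = measured counts.
import numpy as np

def ibu(R, m, iterations=10, prior=None):
    n_true = R.shape[1]
    t = np.ones(n_true) / n_true if prior is None else np.asarray(prior, dtype=float)
    t = t * m.sum()                        # express the prior in counts
    for _ in range(iterations):
        folded = R @ t                     # expected measured counts for the current estimate
        t = t * (R.T @ (m / folded))       # Bayes update, element-wise
    return t
```

In the example above, ibu(R, m, iterations=10) would play the role of \({\hat{t}}_{\text{IBU}}\), and letting the iteration count grow large reproduces the approach toward the matrix inversion result described in the text.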
While Fig. 2 is illustrative for exposing a potential failure mode of matrix inversion and the ignis method, it is also useful to consider a realistic response matrix. Figure 3 uses the same Gaussian example as above, but for an IBM Q Johannesburg response matrix (see the "Methods" section). Even though the migrations are large, the pattern is such that all three methods qualitatively reproduce the truth distribution. The dip in the middle of the distribution is due to the mapping between the Gaussian and qubit states: all of the states to the left of the center have a zero as the last qubit, while the ones on the right have a one as the last qubit. The asymmetry of 0 → 1 and 1 → 0 induces the asymmetry in the measured spectrum. For example, it is much more likely to migrate from any state to \(\left|00000\right\rangle\) than to \(\left|11111\right\rangle\).
Fig. 3: The measurement of a Gaussian distribution using the response matrix from the IBM Q Johannesburg machine.
One million total shots are used to construct R and \(10^{4}\) are used for t and m. The IBU method uses 100 iterations. The significant deviations on the far left and right of the distributions are due in part to large statistical fluctuations, where the counts are low. The uncertainty band in the ratio is the statistical uncertainty on t.
The three unfolding approaches are quantitatively compared in Fig. 4. The same setup as Fig. 2 is used, repeated over 1000 pseudo-experiments. For each of the 1000 pseudo-experiments, a discretized Gaussian state is prepared (same as Fig. 2) with no gate errors and it is measured \(10^{4}\) times. The bin counts without readout errors are the "true" values. These values include the irreducible Poisson noise that the unfolding methods are not expected to correct. The measured distribution with readout errors is unfolded using the three methods, and the resulting counts are the "predicted" values. Averaging over many pseudo-experiments, the spread in the predictions is a couple of percent smaller for \({\hat{t}}_{\text{IBU}}\) compared to \({\hat{t}}_{\text{ignis}}\), and both of these are ~10% more precise than \({\hat{t}}_{\text{matrix}}\). The slight bias of \({\hat{t}}_{\text{ignis}}\) and \({\hat{t}}_{\text{IBU}}\) to the right results from the fact that they are nonnegative. Similarly, the sharp peak at zero results from \({\hat{t}}_{\text{ignis}}\) and \({\hat{t}}_{\text{IBU}}\) values that are nearly zero when \(t_{i} \sim 0\). In contrast, zero is not special for matrix inversion so there is no particular feature in the center of its distribution.
Fig. 4: The distribution of the difference between true and predicted counts from a Gaussian distribution using the response matrix from the IBM Q Johannesburg machine.
The simulation is repeated 1000 times. For each of the 1000 pseudo-experiments, a discretized Gaussian state is prepared with no gate errors and it is measured \(10^{4}\) times. The bin counts without readout errors are the "true" values. The measured distribution with readout errors is unfolded and the resulting counts are the "predicted" values. Each of the \(2^{5}\) states over all 1000 pseudo-experiments contributes one entry to the above histogram. The standard deviations of the distributions are given in the legend. The IBU method uses 100 iterations.
Regularization and uncertainties
One feature of any regularization method is the choice of regularization parameters. For the IBU method, these parameters are the prior and number of iterations. Figure 5 shows the average bias from statistical and systematic sources in the measurement from the Gaussian example shown in Fig. 2, as a function of the number of iterations. With a growing number of iterations, \({\hat{t}}_{\text{IBU}}\) approaches the oscillatory \({\hat{t}}_{\text{ignis}}\). The optimal number of iterations from the point of view of the bias is three. However, the number of iterations cannot be chosen based on the actual bias because the true answer is not known a priori. In HEP, the number of iterations is often chosen before unblinding the data by minimizing the total expected uncertainty. In general, there are three sources of uncertainty: statistical uncertainty on m, statistical and systematic uncertainties on R, and non-closure uncertainties from the unfolding method. Formulae for the statistical uncertainty on m are presented in ref. 52, and this uncertainty can also be estimated by bootstrapping53 the measured counts. Similarly, the statistical uncertainty on R can be estimated by bootstrapping and then repeating the unfolding for each calibration pseudo-dataset. The sources of statistical uncertainty are shown as dot-dashed and dashed lines in Fig. 6. Adding more iterations enhances statistical fluctuations and so these sources of uncertainty increase monotonically with the number of iterations.
Fig. 5: The difference between \({\hat{t}}_{\text{IBU}}\) and t as a function of the number of iterations for the example presented in Fig. 2.
By definition, the ignis method does not depend on the number of iterations.
Fig. 6: Sources of uncertainty for \({\hat{t}}_{\text{IBU}}\) as a function of the number of iterations for the example presented in Fig. 2.
Each uncertainty is averaged over all states. The total uncertainty is the sum in quadrature of all the individual sources of uncertainty, except gate noise (which is not used in the measurement simulation, but would be present in practice).
The systematic uncertainty on R and the method non-closure uncertainty are not unique and require careful consideration. In HEP applications, R is usually determined from simulation, so the systematic uncertainties are simulation variations that try to capture potential sources of mis-modeling. These simulation variations are often estimated from auxiliary measurements with data. There are additional uncertainties related to the modeling of background processes, as well as related to events that fall into or out of the measurement volume. These are not relevant for QIS. In the QIS context, R is determined directly from the data, so the only uncertainty is on the impurity of the calibration circuits. In particular, the calibration circuits are constructed from a series of single-qubit X gates. Due to gate imperfections and thermal noise, there is a chance that the application of an X gate will have a different effect on the state than intended. In principle, one can try to correct for such potential sources of bias by an extrapolation method. In such methods, the noise is increased in a controlled fashion and then extrapolated to zero noise20,21,22,23,24. This method may have a residual bias and the uncertainty on the method would then become the systematic uncertainty on R. A likely conservative alternative to this approach is to modify R by adding in gate noise, and taking the difference between the nominal result and one with additional gate noise, simulated using the thermal_relaxation_error functionality of qiskit. This is the choice made in Fig. 6, where the gate noise is shown as a dot-dashed line. In this particular example, the systematic uncertainty on R increases monotonically with the number of iterations, just like the sources of statistical uncertainty.
The non-closure uncertainty is used to estimate the potential bias from the unfolding method. One possibility is to compare multiple unfolding methods and take the spread in predictions as an uncertainty. Another method advocated in ref. 54 and widely used in HEP is to perform a data-driven reweighting. The idea is to reweight the prior \({t}^{0}\) so that when folded with R, the induced \({m}^{0}\) is close to the measurement m. Then, this reweighted \({m}^{0}\) is unfolded with the nominal response matrix and compared with the reweighted \({t}^{0}\). The difference between these two is an estimate of the non-closure uncertainty. The reweighting function is not unique, but should be chosen so that the reweighted \({t}^{0}\) is a reasonable prior for the data. For Fig. 6, the reweighting is performed using the nominal unfolded result itself. In practice, this can be performed in a way that is blinded from the actual values of \({\hat{t}}_{\text{IBU}}\) so that the experimenter is not biased when choosing the number of iterations.
Altogether, the sources of uncertainty presented in Fig. 6 show that the optimal choice for the number of iterations is 2. In fact, the difference in the uncertainty between two and three iterations is <1% and so consistent with the results from Fig. 5. Similar plots for the measurement in Fig. 3 can be found in Supplementary Figs. 3 and 4.
This work has introduced a suite of readout error correction algorithms developed in HEP for binned differential cross section measurements. These unfolding techniques are well-suited for quantum computer readout errors, which are naturally binned and without acceptance effects (counts are not lost or gained during readout). In particular, the iterative Bayesian method has been described in detail, and shown to be robust to a failure mode of the matrix inversion and ignis techniques. When readout errors are sufficiently small, all the methods perform well, with a preference for the ignis and Bayesian methods that produce nonnegative results. The ignis method is a special case of the TUnfold algorithm, where the latter uses the covariance matrix to improve precision and incorporates regularization to be robust to the failure modes of matrix inversion. It may be desirable to augment the ignis method with these features or provide the iterative method as an alternative approach. In either case, Fig. 3 showed that even with a realistic response matrix, readout error corrections can be significant and must be accounted for in any measurement on near-term hardware.
An important challenge facing any readout error correction method is the exponential resources required to construct the full R matrix (see the "Methods" section). While R must be constructed only once per hardware setup and operating condition, it could become prohibitive when the number of qubits is large. On hardware with few connections between qubits, per-qubit transition probabilities may be sufficient for accurate results. When that is not the case, one may be able to achieve the desired precision with polynomially many measurements. These ideas are left to future studies.
Another challenge is optimizing the unfolding regularization. The numerical examples presented in the "Results" section considered various sources of uncertainty, and studied how they depend on the number of iterations in the IBU method. The full measurement is performed for all \({2}^{{n}_{\text{qubit}}}\) states and the studies in the "Results" section collapsed the uncertainty across all bins into a single number by averaging across bins. This way of choosing a regularization parameter is common in HEP, but is not a unique approach. Ultimately, a single number is required for optimization, but it may be that other metrics are more important for specific applications, such as the uncertainty for a particular expectation value, or the maximum or most probable uncertainty across states. Such requirements are application specific, but should be carefully considered prior to the measurement.
With active research and development across a variety of application domains, there are many promising applications of quantum algorithms in both science and industry on NISQ hardware. Readout errors are an important source of noise that can be corrected to improve measurement fidelity. HEP experimentalists have been studying readout error correction techniques for many years under the term unfolding. These tools are now available to the QIS community and will render the correction procedure more robust to resolution effects, in order to enable near-term breakthroughs.
The unfolding challenge
Let t be a vector that represents the true bin counts before the distortions from detector effects (HEP) or readout noise (QIS). The corresponding measured bin counts are denoted by m. These vectors are related by a response matrix R as m = Rt, where \({R}_{ij}=\Pr (m=i| t=j)\). In HEP, the matrix R is usually estimated from detailed detector simulations, while in QIS, R is constructed from measurements of computational basis states. The response matrix construction is discussed in more detail below.
The most naive unfolding procedure would be to simply invert the matrix R: \({\hat{t}}_{\text{matrix}}={R}^{-1}m\). However, simple matrix inversion has many known issues. Two main problems are that \({\hat{t}}_{\text{matrix}}\) can have unphysical entries, and that statistical uncertainties in R can be amplified and can result in oscillatory behavior. For example, consider the case
$$R=\left(\begin{array}{ll}1-\epsilon &\epsilon \\ \epsilon &1-\epsilon \end{array}\right),$$
where 0 < ϵ < 1/2. Then, \(\,\text{Var}\,({\hat{t}}_{\text{matrix}})\propto 1/\det (R)=1/(1-2\epsilon)\to \infty\) as ϵ → 1/2. As a generalization of this example to more bins (from ref. 55), consider a response matrix with a symmetric probability of migrating one bin up or down,
$$R=\left(\begin{array}{llll}1-\epsilon &\epsilon &0&\cdots \\ \epsilon &1-2\epsilon &\epsilon &\cdots \\ 0&\epsilon &1-2\epsilon &\cdots \\ \vdots &\vdots &\vdots &\ddots \end{array}\right).$$
Unfolding with the above response matrix and ϵ = 25% is presented in Fig. 7. The true bin counts have the Gaussian distribution with a mean of zero and a standard deviation of 3. The values are discretized with 21 uniform unit width bins spanning the interval [−10, 10]. The leftmost bin corresponds to the first index in m and t. The indices are monotonically incremented with increasing bin number. The first and last bins include underflow and overflow values, respectively. Due to the symmetry of the migrations, the true and measured distributions are nearly the same. For this reason, the measured spectrum happens to align with the true distribution, and the optimal unfolding procedure should be the identity. The significant off-diagonal components result in an oscillatory behavior and the statistical uncertainties are also amplified by the limited size of the simulation dataset used to derive the response matrix.
Fig. 7: A comparison of unfolding techniques for a Gaussian example and the R matrix from Eq. (3).
The symbols t and m denote the true and measured probability mass functions, respectively. Simple matrix inversion is represented by \({\hat{t}}_{\text{matrix}}\). The ignis and IBU methods are represented by \({\hat{t}}_{\text{ignis}}\) and \({\hat{t}}_{\text{IBU}}\), respectively. For this example, ∣∣m∣∣1 = \({10}^{4}\) and R is assumed to be known exactly. The IBU method uses ten iterations and a uniform prior (other iteration choices are studied in Supplementary Fig. 1). The simulation used in this plot is based on standard Python functions and does not use a quantum computer simulator (see instead Fig. 2).
Even though it is simple, matrix inversion is an important benchmark for comparing with the methods described below because it is widely used in quantum computing, as introduced at the end of the "Introduction" section. Many of these implementations are even simpler than full matrix inversion by assuming that errors are uncorrelated across qubits. In some cases, additional mitigation strategies are used to make the matrix inversion corrections smaller. This is demonstrated in ref. 35 (see also ref. 56), where a symmetrization step is first applied with X gates prior to matrix inversion. A minimal modification to matrix inversion is to ensure that the counts are nonnegative. One such approach is to use a least squares method40,41,42 that will be described in more detail in the next section.
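To make the failure mode concrete, the following short Python sketch (an illustration of the setup above, not code from this work) builds the tridiagonal response matrix of Eq. (3) with ϵ = 0.25, folds a discretized Gaussian spectrum, and unfolds it by simple matrix inversion; the result can oscillate and take negative values, exactly the pathologies discussed here.

import numpy as np

# Illustrative sketch: response matrix of Eq. (3) with epsilon = 0.25,
# followed by naive unfolding via matrix inversion, t_matrix = R^{-1} m.
n_bins, eps = 21, 0.25
R = np.zeros((n_bins, n_bins))
for j in range(n_bins):                  # column j: true bin j migrates up or down by one
    R[j, j] = 1 - 2 * eps
    if j > 0:
        R[j - 1, j] = eps
    if j < n_bins - 1:
        R[j + 1, j] = eps
R[0, 0] = R[-1, -1] = 1 - eps            # edge bins can only migrate inward

rng = np.random.default_rng(0)
edges = np.linspace(-10, 10, n_bins + 1)
t_true, _ = np.histogram(rng.normal(0, 3, 10_000), bins=edges)

m = rng.poisson(R @ t_true)              # measured counts with readout migrations
t_matrix = np.linalg.solve(R, m)         # naive unfolding; can oscillate and go negative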
Unfolding methods
The fact that matrix inversion can result in unphysical outcomes (\({\hat{t}}_{i}\,<\,0\) or \({\hat{t}}_{i}\,>\,| | t| {| }_{1}\), where ∣∣x∣∣1 = ∑i∣xi∣ is the L1 norm) is often unacceptable. One solution is to find a vector that is as close to \(\hat{t}\) as possible, but with physical entries. This is the method implemented in qiskit-ignis39,40, a widely used quantum computation software package. The ignis method solves the optimization problem
$${\hat t}_{\rm{ignis}}={\mathop {{\rm{arg}} \, \min} \limits_{t^{\prime}{:}\Vert t^{\prime} \Vert_{1}=\Vert m \Vert_{1},\, t^{\prime} > 0}}\Vert m-Rt^{\prime}\Vert^{2}.$$
The same method was also recently studied by other authors, see, e.g., refs. 41,42. Note that the norm invariance is also satisfied for simple matrix inversion: \(| | {\hat{t}}_{\text{matrix}}| {| }_{1}=| | m| {| }_{1}\) by construction because ∣∣Rx∣∣1 = ∣∣x∣∣1 for all x, so in particular for \(y={R}^{-1}m\), ∣∣Ry∣∣1 = ∣∣y∣∣1 implies that ∣∣m∣∣1 = \(| | {R}^{-1}m| {| }_{1}\). This means that \({\hat{t}}_{\text{ignis}}={\hat{t}}_{\text{matrix}}\) whenever the latter is nonnegative, and so \({\hat{t}}_{\text{ignis}}\) inherits some of the pathologies of \({\hat{t}}_{\text{matrix}}\).
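A minimal sketch of the constrained fit in Eq. (4) is given below; it only illustrates the structure of the optimization (nonnegativity and norm conservation) and is not the qiskit-ignis implementation, which should be consulted directly. The helper name is an assumption for illustration.

import numpy as np
from scipy.optimize import minimize

# Hedged sketch of Eq. (4): minimize ||m - R t'||^2 subject to t' >= 0
# and ||t'||_1 = ||m||_1.
def ignis_like_unfold(R, m):
    norm = float(np.sum(m))
    objective = lambda t: float(np.sum((m - R @ t) ** 2))
    constraint = {"type": "eq", "fun": lambda t: np.sum(t) - norm}
    bounds = [(0.0, None)] * len(m)
    x0 = np.full(len(m), norm / len(m))   # start from a uniform guess
    result = minimize(objective, x0, method="SLSQP",
                      bounds=bounds, constraints=[constraint])
    return result.x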
Three commonly used unfolding methods in HEP are IBU52 (also known as Richardson–Lucy deconvolution57,58), singular value decomposition (SVD) unfolding59, and TUnfold60. There are other less widely used methods, such as fully Bayesian unfolding61 and others62,63,64,65,66. Even though IBU calls for the repeated application of Bayes' theorem, this is not a Bayesian method as there is no prior/posterior over possible distributions, only a prior for initialization. TUnfold is similar to Eq. (4), but imposes further regularization requirements in order to avoid pathologies from matrix inversion. The SVD approach applies some regularization directly on R before applying matrix inversion. The focus of this paper will be on the widely used IBU method, which avoids fitting and matrix inversion altogether with an iterative approach. Reference 52 also suggested that one could combine the iterative method with smoothing from a fit, in order to suppress amplified statistical fluctuations.
Given a prior truth spectrum \({t}_{i}^{0}=\Pr (\,\text{truth is}\,i\text{}\,)\), the IBU technique proceeds according to the equation
$$\begin{array}{ll}{t}_{i}^{n+1}&=\mathop{\sum }\limits_{j}\Pr (\,{\text{truth}}\, {\text{is}}\,{i}| {\text{measure}}\,j)\times {m}_{j}\\ &=\mathop{\sum }\limits_{j}\frac{{R}_{ji}{t}_{i}^{n}}{{\sum }_{k}{R}_{jk}{t}_{k}^{n}}\times {m}_{j},\end{array}$$
where n is the iteration number and one iterates a total of N times. The advantage of Eq. (5) over simple matrix inversion is that the result is a probability (nonnegative and unit measure) when m ≥ 0. In HEP applications, m can have negative entries resulting from background subtraction. In this case, the unfolded result can also have negative entries. The parameters \({t}_{i}^{0}\) and N must be specified ahead of time. A common choice for \({t}_{i}^{0}\) is the uniform distribution. The number of iterations needed to converge depends on the desired precision, how close \({t}_{i}^{0}\) is to the final distribution, and the importance of off-diagonal components in R. In practice, it may be desirable to choose a relatively small N prior to convergence to regularize the result. Typically, \(\lesssim {\mathcal{O}}(10)\) iterations are needed.
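Equation (5) translates directly into a few lines of code. The sketch below is a minimal illustration (assuming a square response matrix with R[j, i] = Pr(measure j ∣ truth is i)), not the implementation used for the figures in this paper.

import numpy as np

# Minimal sketch of iterative Bayesian unfolding, Eq. (5).
# m: measured counts; t0: prior (uniform if None); N: number of iterations.
def ibu(R, m, N=10, t0=None):
    t = np.full(len(m), np.sum(m) / len(m)) if t0 is None else np.asarray(t0, float)
    for _ in range(N):
        folded = R @ t                                            # sum_k R_jk t_k^n for each j
        posterior = R * t[np.newaxis, :] / folded[:, np.newaxis]  # Pr(truth i | measure j)
        t = posterior.T @ m                                       # t_i^{n+1} = sum_j Pr(i|j) m_j
    return t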
In addition to the \({\hat{t}}_{\text{matrix}}\) and \({\hat{t}}_{\text{ignis}}\) approaches discussed in the previous section, Fig. 7 shows the IBU result with N = 10. Unlike the \({\hat{t}}_{\text{matrix}}\) and \({\hat{t}}_{\text{ignis}}\) results, the \({\hat{t}}_{\text{IBU}}\) does not suffer from rapid oscillations and like \({\hat{t}}_{\text{ignis}}\), is nonnegative. Analogous results for quantum computer simulations are presented in the "Results" section.
Constructing the response matrix
In practice, the R matrix is not known exactly, and must be measured for each quantum computer and set of operating conditions. One way to measure R is to construct a set of \({2}^{{n}_{\text{qubit}}}\) calibration circuits as shown in Fig. 8. Simple X gates are used to prepare all of the possible qubit configurations and then they are immediately measured.
Fig. 8: The set of \(2^{n_{\rm{qubit}}}\) calibration circuits.
The calibration circuits are simple combinations of initialization followed by measurement with all possible combinations of single X gates in between.
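As an illustration, a set of calibration circuits of this form can be generated with a few lines of qiskit; this is a hedged sketch (the helper name and the bit-ordering convention are assumptions, not code from this work).

from itertools import product
from qiskit import QuantumCircuit

# Hypothetical helper: one calibration circuit per computational basis state,
# with X gates preparing the target bitstring followed immediately by measurement.
def calibration_circuits(n_qubits):
    circuits = []
    for bits in product("01", repeat=n_qubits):
        label = "".join(bits)
        qc = QuantumCircuit(n_qubits, n_qubits, name="cal_" + label)
        for qubit, bit in enumerate(reversed(label)):    # qubit 0 taken as least significant
            if bit == "1":
                qc.x(qubit)
        qc.measure(range(n_qubits), range(n_qubits))
        circuits.append(qc)
    return circuits                                      # 2**n_qubits circuits in total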
Figure 9 shows the R matrix from five qubits of the IBM Q Johannesburg machine. As expected, there is a significant diagonal component that corresponds to cases, when the measured and true states are the same. However, there are significant off-diagonal components, which are larger toward the right when more configurations start in the one state. The diagonal stripes with transition probabilities of ~5–7% are the result of the same qubit flipping from 0 ↔ 1. This matrix is hardware dependent and its elements can change over time due to calibration drift. Machines with higher connectivity have been observed to have more readout noise.
Fig. 9: An example R matrix from five qubits of the IBM Q Johannesburg machine using 8192 shots for each of the \({2}^{5}\) possible states.
Note that this matrix depends on the calibration quality, so while it is representative, it is not precisely the version valid for all measurements made on this hardware.
Figure 10 explores the universality of qubit migrations across R. If every qubit were identical, and there were no effects from the orientation and connectivity of the computer, then one may expect that R can actually be described by just two numbers p0→1 and p1→0, the probability for a zero to be measured as a one and vice versa. These two numbers are extracted from R by performing the fit in Eq. (6):
$${\mathop {\min }\limits_{\mathop {{p_{0 \to 1}}}\limits_{{p_{1 \to 0}}}}}{\sum\limits_{ij}}{{{\left| {{R_{ij}} - p_{0 \to 1}^\alpha {{(1 - {p_{0 \to 1}})}^{{\alpha ^{\prime}}}}p_{1 \to 0}^\beta {{(1 -{p_{1 \to 0}})}^{{\beta ^{\prime}}}}} \right|}^2},}$$
where α is the number of qubits corresponding to the response matrix entry Rij that flipped from 0 to 1, \(\alpha ^{\prime}\) is the number that remained as a 0 and \(\beta ,\beta ^{\prime}\) are the corresponding entries for a one state; \(\alpha +\alpha ^{\prime} +\beta +\beta ^{\prime} ={n}_{\text{qubit}}\). For example, if the Rij entry corresponds to the state \(\left|01101\right\rangle\) migrating to \(\left|01010\right\rangle\), then α = 1, \(\alpha ^{\prime} =1\), β = 2, and \(\beta ^{\prime} =1\).
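The bookkeeping behind Eq. (6) can be made explicit with a small illustrative helper that counts the four categories of qubits for a given pair of true and measured bitstrings; the sketch below reproduces the example just given.

# Count qubits that flipped 0->1 (alpha), stayed 0 (alpha'), flipped 1->0 (beta),
# and stayed 1 (beta') between the true and measured bitstrings.
def flip_counts(true_bits, measured_bits):
    pairs = list(zip(true_bits, measured_bits))
    alpha = sum(t == "0" and m == "1" for t, m in pairs)
    alpha_prime = sum(t == "0" and m == "0" for t, m in pairs)
    beta = sum(t == "1" and m == "0" for t, m in pairs)
    beta_prime = sum(t == "1" and m == "1" for t, m in pairs)
    return alpha, alpha_prime, beta, beta_prime

print(flip_counts("01101", "01010"))  # (1, 1, 2, 1), matching the example above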
Fig. 10: Test of universality in the response matrix R.
Horizontal dotted lines show the result of a global fit to p0→1 and p1→0, assuming an equal rate of readout errors for each qubit. Big filled markers represent a fit with independent values for each qubit. Small semitransparent open markers show the \({2}^{{n}_{\text{qubit}}-1}\) transition probabilities for each qubit, when the other \({n}_{\text{qubit}}-1\) qubits are unchanged between truth and measured states.
A global fit to these parameters for the Johannesburg machine results in p0→1 ≈ 3.2% and p1→0 ≈ 7.5%. In reality, the qubits are not identical and so one may expect that p0→1 and p1→0 depend on the qubit. The results of a fit to \({n}_{\text{qubit}}\) values for p0→1 and p1→0 (Eq. (6), but fitting ten parameters instead of two) are shown as filled triangles in Fig. 10. While these values cluster around the universal values (dotted lines), the spread is a relative 50% for p0→1 and 60% for p1→0. Furthermore, the transition probabilities can depend on the values of neighboring qubits. The open markers in Fig. 10 show the transition probabilities for each qubit with the other \({n}_{\text{qubit}}-1\) qubits held in a fixed state. In other words, the \({2}^{{n}_{\text{qubit}}-1}\) open triangles for each qubit show
$$\Pr (|{q}_{0},\ldots ,{q}_{i},\ldots ,{q}_{{n}_{\text{qubit}}}\rangle \to |{q}_{0},\ldots ,q^{\prime} ,\ldots ,{q}_{{n}_{\text{qubit}}}\rangle)\ ,$$
where \(({q}_{i},q^{\prime})\in \{(0,1),(1,0)\}\) with the other qubits \({q}_{j}\) held in a fixed configuration. The spread in these values is smaller than the variation in the solid markers, which indicates that per-qubit readout errors are likely sufficient to capture most of the salient features of the response matrix. This spread has a contribution from connection effects, but also from Poisson fluctuations as each measurement is statistically independent. However, this is hardware dependent and higher connectivity computers may depend more on the state of neighboring qubits.
Constructing the entire response matrix requires exponential resources in the number of qubits. While the measurement of R only needs to be performed once per quantum computer per operational condition, this can be untenable when \({n}_{\text{qubit}}\gg 1\). The tests above indicate that a significant fraction of the \({2}^{{n}_{\text{qubit}}}\) calibration circuits may be required for a precise measurement of R. Sub-exponential approaches may be possible and will be studied in future work.
The data shown in this paper are available upon request from the corresponding author.
The code for the work presented here can be found at https://github.com/bnachman/QISUnfolding.
Preskill, J. Quantum computing in the NISQ era and beyond. Quantum 2, 79 (2018).
Park, J. L. The concept of transition in quantum mechanics. Found. Phys. 1, 23–33 (1970).
Wootters, W. K. & Zurek, W. H. A single quantum cannot be cloned. Nature 299, 802–803 (1982).
Dieks, D. Communication by EPR devices. Phys. Lett. A 92, 271–272 (1982).
Gottesman, D. An introduction to quantum error correction and fault-tolerant quantum computation. Preprint at https://arxiv.org/abs/0904.2557 (2009).
Devitt, S. J., Munro, W. J. & Nemoto, K. Quantum error correction for beginners. Rep. Prog. Phys. 76, 076001 (2013).
Terhal, B. M. Quantum error correction for quantum memories. Rev. Mod. Phys. 87, 307–346 (2015).
Lidar, D. A. & Brun, T. A. Quantum Error Correction (Cambridge University Press, 2013).
Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information: 10th Anniversary Edition, 10th edn (Cambridge University Press, New York, NY, 2011).
Urbanek, M., Nachman, B. & de Jong, W. A. Quantum error detection improves accuracy of chemical calculations on a quantum computer. Phys. Rev. A 102, 022427 (2020).
Wootton, J. R. & Loss, D. Repetition code of 15 qubits. Phys. Rev. A 97, 052313 (2018).
Barends, R. et al. Superconducting quantum circuits at the surface code threshold for fault tolerance. Nature 508, 500–503 (2014).
Kelly, J. et al. State preservation by repetitive error detection in a superconducting quantum circuit. Nature 519, 66–69 (2015).
Linke, N. M. et al. Fault-tolerant quantum error detection. Sci. Adv. 3, e1701074 (2017).
Takita, M., Cross, A. W., Córcoles, A. D., Chow, J. M. & Gambetta, J. M. Experimental demonstration of fault-tolerant state preparation with superconducting qubits. Phys. Rev. Lett. 119, 180501 (2017).
Roffe, J., Headley, D., Chancellor, N., Horsman, D. & Kendon, V. Protecting quantum memories using coherent parity check codes. Quantum Sci. Technol. 3, 035010 (2018).
Vuillot, C. Is error detection helpful on IBM 5Q chips? Quantum Inf. Comput. 18, 0949–0964 (2018).
Willsch, D., Willsch, M., Jin, F., De Raedt, H. & Michielsen, K. Testing quantum fault tolerance on small systems. Phys. Rev. A 98, 052348 (2018).
Harper, R. & Flammia, S. T. Fault-tolerant logical gates in the IBM Quantum Experience. Phys. Rev. Lett. 122, 080504 (2019).
Kandala, A. et al. Error mitigation extends the computational reach of a noisy quantum processor. Nature 567, 491–495 (2019).
Li, Y. & Benjamin, S. C. Efficient variational quantum simulator incorporating active error minimization. Phys. Rev. X 7, 021050 (2017).
Temme, K., Bravyi, S. & Gambetta, J. M. Error mitigation for short-depth quantum circuits. Phys. Rev. Lett. 119, 180509 (2017).
Endo, S., Benjamin, S. C. & Li, Y. Practical quantum error mitigation for near-future applications. Phys. Rev. X 8, 031027 (2018).
Dumitrescu, E. F. et al. Cloud quantum computing of an atomic nucleus. Phys. Rev. Lett. 120, 210501 (2018).
He, A., Jong, W. A. d., Nachman, B. & Bauer, C. Resource efficient zero noise extrapolation with identity insertions. Phys. Rev. A 102, 012426 (2020).
Cowan, G. A survey of unfolding methods for particle physics. Conf. Proc. C0203181, 248 (2002).
Blobel, V. Unfolding methods in particle physics. In PHYSTAT 2011 Proceedings, 240–251 (CERN, Geneva, 2011).
Blobel, V. In Data Analysis in High Energy Physics, Ch. 6, 187–225 (Wiley, Hoboken, 2013).
Cormier, K., Di Sipio, R. & Wittek, P. Unfolding as quantum annealing. JHEP 11, 128 (2019).
Kandala, A. et al. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature 549, 242–246 (2017).
Klco, N. & Savage, M. J. Minimally-entangled state preparation of localized wavefunctions on quantum computers. Phys. Rev. A 102, 012612 (2020).
Yeter-Aydeniz, K. et al. Scalar quantum field theories as a benchmark for near-term quantum computers. Phys. Rev. A 99, 032306 (2019).
Rigetti Forest Software Development Kit. Source Code for pyquil.noise. http://docs.rigetti.com/en/stable/noise.html (2020).
The Cirq Contributors. Cirq, a Python Framework for Creating, Editing, and Invoking Noisy Intermediate Scale Quantum (NISQ) circuits. https://github.com/quantumlib/Cirq (2020).
Arute, F. et al. Quantum approximate optimization of non-planar graph problems on a planar superconducting processor. Preprint at https://arxiv.org/abs/2004.04197 (2020).
McCaskey, A. J., Lyakh, D. I., Dumitrescu, E. F., Powers, S. S. & Humble, T. S. Xacc: A system-level software infrastructure for heterogeneous quantum-classical computing. Preprint at https://arxiv.org/abs/1911.02452 (2019).
The XACC Contributors. XACC Documentation. https://xacc.readthedocs.io/en/latest/ (2020).
McCaskey, A. J. et al. Quantum chemistry as a benchmark for near-term quantum computers. npj Quantum Inf. 5, 99 (2019).
IBM Research. Qiskit. https://qiskit.org (2019).
IBM Research. Qiskit Ignis. https://qiskit.org/ignis (2019).
Chen, Y., Farahzad, M., Yoo, S. & Wei, T.-C. Detector tomography on IBM 5-qubit quantum computers and mitigation of imperfect measurement. Phys. Rev. A 100, 052315 (2019).
Maciejewski, F. B., Zimborás, Z. & Oszmaniec, M. Mitigation of readout noise in near-term quantum devices by classical post-processing based on detector tomography. Quantum 4, 257 (2020).
Jordan, S. P., Krovi, H., Lee, K. S. M. & Preskill, J. BQP-completeness of scattering in scalar quantum field theory. Quantum 2, 44 (2018).
Jordan, S. P., Lee, K. S. M. & Preskill, J. Quantum computation of scattering in scalar quantum field theories. Quant. Inf. Comput. 14, 1014–1080 (2014).
Jordan, S. P., Lee, K. S. M. & Preskill, J. Quantum algorithms for quantum field theories. Science 336, 1130–1133 (2012).
Jordan, S. P., Lee, K. S. M. & Preskill, J. Quantum algorithms for fermionic quantum field theories. Preprint at https://arxiv.org/abs/1404.7115 (2014).
Somma, R. D. Quantum simulations of one dimensional quantum systems. Quantum Info. Comput. 16, 1125–1168 (2016).
Macridin, A., Spentzouris, P., Amundson, J. & Harnik, R. Electron-phonon systems on a universal quantum computer. Phys. Rev. Lett. 121, 110504 (2018).
Macridin, A., Spentzouris, P., Amundson, J. & Harnik, R. Digital quantum computation of fermion-boson interacting systems. Phys. Rev. A 98, 042312 (2018).
Klco, N. & Savage, M. J. Digitization of scalar fields for quantum computing. Phys. Rev. A 99, 052335 (2019).
Shepp, L. A. & Vardi, Y. Maximum likelihood reconstruction for emission tomography. IEEE Trans. Med. Imaging 1, 113–122 (1982).
D'Agostini, G. A Multidimensional unfolding method based on Bayes' theorem. Nucl. Instrum. Methods Phys. Res. A 362, 487–498 (1995).
Efron, B. Bootstrap methods: another look at the jackknife. Ann. Statist. 7, 1–26 (1979).
Malaescu, B. An iterative, dynamically stabilized method of data unfolding. Preprint at https://arxiv.org/abs/0907.3791 (2009).
Blobel, V. Unfolding methods in high-energy physics experiments. In Proceedings, CERN School of Computing DESY-84-118 88–127 (CERN School of Computing, Aigua Blava, 1984).
Tannu, S. S. & Qureshi, M. K. Mitigating measurement errors in quantum computers by exploiting state-dependent bias. In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, MICRO '52, 279-290 (Association for Computing Machinery, New York, NY, 2019).
Lucy, L. B. An iterative technique for the rectification of observed distributions. Astron. J. 79, 745 (1974).
Richardson, W. H. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62, 55–59 (1972).
Hocker, A. & Kartvelishvili, V. SVD approach to data unfolding. Nucl. Instrum. Methods Phys. Res. A 372, 469–481 (1996).
Schmitt, S. TUnfold: an algorithm for correcting migration effects in high energy physics. J. Instrum. 7, T10003 (2012).
Choudalakis, G. Fully Bayesian unfolding. Preprint at https://arxiv.org/abs/1201.4612 (2012).
Gagunashvili, N. D. Machine learning approach to inverse problem and unfolding procedure. Preprint at https://arxiv.org/abs/1004.2006 (2010).
Glazov, A. Machine learning as an instrument for data unfolding. Preprint at https://arxiv.org/abs/1712.01814 (2017).
Datta, K., Kar, D. & Roy, D. Unfolding with generative adversarial networks. Preprint at https://arxiv.org/abs/1806.00433 (2018).
Zech, G. & Aslan, B. Binning-free unfolding based on Monte Carlo Migration. In PHYSTAT 2003 Proceedings, Vol. C030908, TUGT001 (SLAC, Stanford, CA, 2003).
Lindemann, L. & Zech, G. Unfolding by weighting Monte Carlo events. Nucl. Instrum. Methods Phys. Res. A 354, 516–521 (1995).
This work is supported by the U.S. Department of Energy, Office of Science under contract DE-AC02-05CH11231. In particular, support comes from Quantum Information Science Enabled Discovery (QuantISED) for High Energy Physics (KA2401032) and the Office of Advanced Scientific Computing Research (ASCR) through the Quantum Algorithms Team program. We acknowledge access to quantum chips and simulators through the IBM Quantum Experience and Q Hub Network through resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. B.N. would also like to thank Michael Geller for spotting a typo, Jesse Thaler for stimulating discussions about unfolding, and the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611.
Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
Benjamin Nachman & Christian W. Bauer
Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
Miroslav Urbanek & Wibe A. de Jong
Benjamin Nachman
Miroslav Urbanek
Wibe A. de Jong
Christian W. Bauer
B.N. conceived the project idea, wrote the code, performed the numerical analysis, and wrote the manuscript. W.A.d.J. organized the measurements on IBM Q. All authors discussed the results and revised the manuscript.
Correspondence to Benjamin Nachman.
Nachman, B., Urbanek, M., de Jong, W.A. et al. Unfolding quantum computer readout noise. npj Quantum Inf 6, 84 (2020). https://doi.org/10.1038/s41534-020-00309-7
For what real part of $s$ as a function of $q$ is the Euler-Maclaurin formula a valid analytic continuation of the Riemann zeta function?
The familiar formula for the Riemann zeta function:
$$\zeta(s)=\lim\limits_{k \rightarrow \infty} \left( \sum\limits_{n=1}^{n=k} \frac{1}{n^s}\right) \mbox{ is true for } \Re(s)>1$$
adding one more term of the Euler-Maclaurin formula we get:
$$\zeta(s)=\lim\limits_{k \rightarrow \infty} \left( \sum\limits_{n=1}^{n=k} \frac{1}{n^s}+ \frac{1}{k^{s - 1} \cdot (s - 1)}\right) \mbox{ which appears to be true for }\Re(s)>0$$
adding yet one more term of the Euler-Maclaurin formula we get:
$$\zeta(s)=\lim\limits_{k \rightarrow \infty} \left( \sum\limits_{n=1}^{n=k} \frac{1}{n^s}+ \frac{1}{k^{s - 1} \cdot (s - 1)} -\frac{k^{-s}}{2} \right) \mbox{ which appears to be true for } \Re(s)>-1$$
From that on in general, it appears that:
$$\zeta(s)=\lim\limits_{k \rightarrow \infty} \left(\sum\limits_{n=1}^{n=k} \frac{1}{n^s}+\frac{k^{1-s}}{s-1}-\frac{k^{-s}}{2}+\sum\limits_{r=1}^{q-1} \frac{B_{2 r} k^{-2 r-s+1} \left(\prod _{i=0}^{2 r-2} (i+s)\right)}{(2 r)!}\right)$$
is true whenever: $\Re(s)>-(2q-1)$ where $q=1,2,3,4,5,...$
Is this last generalization a simple fact of the Euler Maclaurin formula for the analytic continuation of the Riemann zeta function?
Clear[n, k, s];
Limit[Sum[1/n^s, {n, 1, k}], k -> Infinity, Assumptions -> Re[s] > 1]
Limit[Sum[1/n^s, {n, 1, k}] + 1/k^(s - 1)/(s - 1), k -> Infinity,
Assumptions -> Re[s] > 0]
Limit[Sum[1/n^s, {n, 1, k}] + 1/k^(s - 1)/(s - 1) - (k^(-s))/2,
k -> Infinity, Assumptions -> Re[s] > -1]
q = 9;
Limit[Sum[1/n^s, {n, 1, k}] + k^(1 - s)/(s - 1) - (k^(-s))/2 +
Sum[BernoulliB[2*r]/((2*r)!)*Product[s + i, {i, 0, 2*r - 2}]*
k^(-s - 2*r + 1), {r, 1, q - 1}], k -> Infinity,
Assumptions -> Re[s] > -(2*q - 1)]
convergence riemann-zeta analytic-continuation
Mats Granvik
$\begingroup$ This answer math.stackexchange.com/a/47183/8530 appears to answer it in the positive, at least in a similar manner. $\endgroup$ – Mats Granvik Mar 23 at 20:59
$\begingroup$ If this is "a simple fact", why nobody tell that? $\endgroup$ – Aleksey Druggist Mar 25 at 9:51
$\begingroup$ Perhaps this is relevant to the subject: "The Prime Number Theorem" by J.G.O. Jameson, Cambridge University Press, 2003, p.112 $\endgroup$ – Aleksey Druggist Mar 27 at 8:12
$\begingroup$ @AlekseyDruggist Semiclassical in the chat room sent me the following link: math.ucdavis.edu/~tracy/courses/math205A/… where it says at the end: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .." which answers my question when you index the Bernoulli numbers differently. $\endgroup$ – Mats Granvik Mar 27 at 15:18
Module 5: Sequences, Probability, and Counting Theory
Arithmetic Sequences
Find the common difference for an arithmetic sequence.
Write terms of an arithmetic sequence.
Use a recursive formula for an arithmetic sequence.
Use an explicit formula for an arithmetic sequence.
Companies often make large purchases, such as computers and vehicles, for business use. The book-value of these supplies decreases each year for tax purposes. This decrease in value is called depreciation. One method of calculating depreciation is straight-line depreciation, in which the value of the asset decreases by the same amount each year.
As an example, consider a woman who starts a small contracting business. She purchases a new truck for $25,000. After five years, she estimates that she will be able to sell the truck for $8,000. The loss in value of the truck will therefore be $17,000, which is $3,400 per year for five years. The truck will be worth $21,600 after the first year; $18,200 after two years; $14,800 after three years; $11,400 after four years; and $8,000 at the end of five years. In this section, we will consider specific kinds of sequences that will allow us to calculate depreciation, such as the truck's value.
Finding Common Differences
The values of the truck in the example are said to form an arithmetic sequence because they change by a constant amount each year. Each term increases or decreases by the same constant value called the common difference of the sequence. For this sequence, the common difference is –3,400.
The sequence below is another example of an arithmetic sequence. In this case, the constant difference is 3. You can choose any term of the sequence, and add 3 to find the subsequent term.
A General Note: Arithmetic Sequence
An arithmetic sequence is a sequence that has the property that the difference between any two consecutive terms is a constant. This constant is called the common difference. If [latex]{a}_{1}[/latex] is the first term of an arithmetic sequence and [latex]d[/latex] is the common difference, the sequence will be:
[latex]\left\{{a}_{n}\right\}=\left\{{a}_{1},{a}_{1}+d,{a}_{1}+2d,{a}_{1}+3d,…\right\}[/latex]
Example 1: Finding Common Differences
Is each sequence arithmetic? If so, find the common difference.
[latex]\left\{1,2,4,8,16,…\right\}[/latex]
[latex]\left\{-3,1,5,9,13,…\right\}[/latex]
Subtract each term from the subsequent term to determine whether a common difference exists.
The sequence is not arithmetic because there is no common difference.
The sequence is arithmetic because there is a common difference. The common difference is 4.
The graph of each of these sequences is shown in Figure 1. We can see from the graphs that, although both sequences show growth, [latex]a[/latex] is not linear whereas [latex]b[/latex] is linear. Arithmetic sequences have a constant rate of change so their graphs will always be points on a line.
If we are told that a sequence is arithmetic, do we have to subtract every term from the following term to find the common difference?
No. If we know that the sequence is arithmetic, we can choose any one term in the sequence, and subtract it from the subsequent term to find the common difference.
Is the given sequence arithmetic? If so, find the common difference.
[latex]\left\{18,\text{ }16,\text{ }14,\text{ }12,\text{ }10,\dots \right\}[/latex]
[latex]\left\{1,\text{ }3,\text{ }6,\text{ }10,\text{ }15,\dots \right\}[/latex]
Writing Terms of Arithmetic Sequences
Now that we can recognize an arithmetic sequence, we will find the terms if we are given the first term and the common difference. The terms can be found by beginning with the first term and adding the common difference repeatedly. In addition, any term can also be found by plugging in the values of [latex]n[/latex] and [latex]d[/latex] into formula below.
[latex]{a}_{n}={a}_{1}+\left(n - 1\right)d[/latex]
How To: Given the first term and the common difference of an arithmetic sequence, find the first several terms.
Add the common difference to the first term to find the second term.
Add the common difference to the second term to find the third term.
Continue until all of the desired terms are identified.
Write the terms separated by commas within brackets.
Example 2: Writing Terms of Arithmetic Sequences
Write the first five terms of the arithmetic sequence with [latex]{a}_{1}=17[/latex] and [latex]d=-3[/latex] .
Adding [latex]-3[/latex] is the same as subtracting 3. Beginning with the first term, subtract 3 from each term to find the next term.
The first five terms are [latex]\left\{17,14,11,8,5\right\}[/latex]
As expected, the graph of the sequence consists of points on a line as shown in Figure 2.
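For readers who like to check results numerically, the short Python sketch below (an optional aside, not part of the original lesson) generates the terms directly from the first term and the common difference.

# Generate the first n terms of an arithmetic sequence from a1 and d.
def arithmetic_terms(a1, d, n):
    return [a1 + d * (k - 1) for k in range(1, n + 1)]

print(arithmetic_terms(17, -3, 5))  # [17, 14, 11, 8, 5]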
List the first five terms of the arithmetic sequence with [latex]{a}_{1}=1[/latex] and [latex]d=5[/latex] .
How To: Given any the first term and any other term in an arithmetic sequence, find a given term.
Substitute the values given for [latex]{a}_{1},{a}_{n},n[/latex] into the formula [latex]{a}_{n}={a}_{1}+\left(n - 1\right)d[/latex] to solve for [latex]d[/latex].
Find a given term by substituting the appropriate values for [latex]{a}_{1},n[/latex], and [latex]d[/latex] into the formula [latex]{a}_{n}={a}_{1}+\left(n - 1\right)d[/latex].
Given [latex]{a}_{1}=8[/latex] and [latex]{a}_{4}=14[/latex] , find [latex]{a}_{5}[/latex] .
The sequence can be written in terms of the initial term 8 and the common difference [latex]d[/latex] .
[latex]\left\{8,8+d,8+2d,8+3d\right\}[/latex]
We know the fourth term equals 14; we know the fourth term has the form [latex]{a}_{1}+3d=8+3d[/latex] .
We can find the common difference [latex]d[/latex] .
[latex]\begin{array}{ll}{a}_{n}={a}_{1}+\left(n - 1\right)d\hfill & \hfill \\ {a}_{4}={a}_{1}+3d\hfill & \hfill \\ {a}_{4}=8+3d\hfill & \text{Write the fourth term of the sequence in terms of } {a}_{1} \text{ and } d.\hfill \\ 14=8+3d\hfill & \text{Substitute } 14 \text{ for } {a}_{4}.\hfill \\ d=2\hfill & \text{Solve for the common difference}.\hfill \end{array}[/latex]
Find the fifth term by adding the common difference to the fourth term.
[latex]{a}_{5}={a}_{4}+2=16[/latex]
Notice that the common difference is added to the first term once to find the second term, twice to find the third term, three times to find the fourth term, and so on. The tenth term could be found by adding the common difference to the first term nine times or by using the equation [latex]{a}_{n}={a}_{1}+\left(n - 1\right)d[/latex].
Using Formulas for Arithmetic Sequences
Some arithmetic sequences are defined in terms of the previous term using a recursive formula. The formula provides an algebraic rule for determining the terms of the sequence. A recursive formula allows us to find any term of an arithmetic sequence using a function of the preceding term. Each term is the sum of the previous term and the common difference. For example, if the common difference is 5, then each term is the previous term plus 5. As with any recursive formula, the first term must be given.
[latex]\begin{array}{lllll}{a}_{n}={a}_{n - 1}+d\hfill & \hfill & \hfill & \hfill & n\ge 2\hfill \end{array}[/latex]
A General Note: Recursive Formula for an Arithmetic Sequence
The recursive formula for an arithmetic sequence with common difference [latex]d[/latex] is:

[latex]{a}_{n}={a}_{n - 1}+d,n\ge 2[/latex]
How To: Given an arithmetic sequence, write its recursive formula.
Subtract any term from the subsequent term to find the common difference.
State the initial term and substitute the common difference into the recursive formula for arithmetic sequences.
Example 4: Writing a Recursive Formula for an Arithmetic Sequence
Write a recursive formula for the arithmetic sequence.
[latex]\left\{-18\text{, }-7\text{, }4\text{, }15\text{, }26\text{, \ldots }\right\}[/latex]
The first term is given as [latex]-18[/latex] . The common difference can be found by subtracting the first term from the second term.
[latex]d=-7-\left(-18\right)=11[/latex]
Substitute the initial term and the common difference into the recursive formula for arithmetic sequences.
[latex]\begin{array}{l}{a}_{1}=-18\hfill \\ {a}_{n}={a}_{n - 1}+11,\text{ for }n\ge 2\hfill \end{array}[/latex]
We see that the common difference is the slope of the line formed when we graph the terms of the sequence, as shown in Figure 3. The growth pattern of the sequence shows the constant difference of 11 units.
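As an optional numerical check (not part of the original lesson), the recursive formula can be applied step by step in Python:

# Apply the recursive formula a1 = -18, a_n = a_(n-1) + 11.
terms = [-18]
for _ in range(4):
    terms.append(terms[-1] + 11)
print(terms)  # [-18, -7, 4, 15, 26]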
How To: Do we have to subtract the first term from the second term to find the common difference?
No. We can subtract any term in the sequence from the subsequent term. It is, however, most common to subtract the first term from the second term because it is often the easiest method of finding the common difference.
[latex]\left\{25\text{, } 37\text{, } 49\text{, } 61\text{, } \text{\ldots }\right\}[/latex]
Using Explicit Formulas for Arithmetic Sequences
We can think of an arithmetic sequence as a function on the domain of the natural numbers; it is a linear function because it has a constant rate of change. The common difference is the constant rate of change, or the slope of the function. We can construct the linear function if we know the slope and the vertical intercept.
[latex]{a}_{n}={a}_{1}+d\left(n - 1\right)[/latex]
To find the y-intercept of the function, we can subtract the common difference from the first term of the sequence. Consider the following sequence.

[latex]\left\{200,150,100,50,\dots \right\}[/latex]
The common difference is [latex]-50[/latex] , so the sequence represents a linear function with a slope of [latex]-50[/latex] . To find the [latex]y[/latex] -intercept, we subtract [latex]-50[/latex] from [latex]200:200-\left(-50\right)=200+50=250[/latex] . You can also find the [latex]y[/latex] -intercept by graphing the function and determining where a line that connects the points would intersect the vertical axis. The graph is shown in Figure 4.
Recall the slope-intercept form of a line is [latex]y=mx+b[/latex]. When dealing with sequences, we use [latex]{a}_{n}[/latex] in place of [latex]y[/latex] and [latex]n[/latex] in place of [latex]x[/latex]. If we know the slope and vertical intercept of the function, we can substitute them for [latex]m[/latex] and [latex]b[/latex] in the slope-intercept form of a line. Substituting [latex]-50[/latex] for the slope and [latex]250[/latex] for the vertical intercept, we get the following equation:
[latex]{a}_{n}=-50n+250[/latex]
We do not need to find the vertical intercept to write an explicit formula for an arithmetic sequence. Another explicit formula for this sequence is [latex]{a}_{n}=200 - 50\left(n - 1\right)[/latex] , which simplifies to [latex]{a}_{n}=-50n+250[/latex].
A General Note: Explicit Formula for an Arithmetic Sequence
An explicit formula for the [latex]n\text{th}[/latex] term of an arithmetic sequence is given by

[latex]{a}_{n}={a}_{1}+d\left(n - 1\right)[/latex]
How To: Given the first several terms for an arithmetic sequence, write an explicit formula.
Find the common difference, [latex]{a}_{2}-{a}_{1}[/latex].
Substitute the common difference and the first term into [latex]{a}_{n}={a}_{1}+d\left(n - 1\right)[/latex].
Example 5: Writing the nth Term Explicit Formula for an Arithmetic Sequence
Write an explicit formula for the arithmetic sequence.
[latex]\left\{2\text{, }12\text{, }22\text{, }32\text{, }42\text{, \ldots }\right\}[/latex]
The common difference can be found by subtracting the first term from the second term.
[latex]\begin{array}{ll}d\hfill & ={a}_{2}-{a}_{1}\hfill \\ \hfill & =12 - 2\hfill \\ \hfill & =10\hfill \end{array}[/latex]
The common difference is 10. Substitute the common difference and the first term of the sequence into the formula and simplify.
[latex]\begin{array}{l}{a}_{n}=2+10\left(n - 1\right)\hfill \\ {a}_{n}=10n - 8\hfill \end{array}[/latex]
The graph of this sequence, represented in Figure 5, shows a slope of 10 and a vertical intercept of [latex]-8[/latex] .
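As an optional check (not part of the original lesson), the explicit formula can be evaluated directly:

# Evaluate the explicit formula a_n = 10n - 8 for n = 1 through 5.
print([10 * n - 8 for n in range(1, 6)])  # [2, 12, 22, 32, 42]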
Write an explicit formula for the following arithmetic sequence.
[latex]\left\{50,47,44,41,\dots \right\}[/latex]
Finding the Number of Terms in a Finite Arithmetic Sequence
Explicit formulas can be used to determine the number of terms in a finite arithmetic sequence. We need to find the common difference, and then determine how many times the common difference must be added to the first term to obtain the final term of the sequence.
How To: Given the first three terms and the last term of a finite arithmetic sequence, find the total number of terms.
Find the common difference [latex]d[/latex].
Substitute the last term for [latex]{a}_{n}[/latex] and solve for [latex]n[/latex].
Example 6: Finding the Number of Terms in a Finite Arithmetic Sequence
Find the number of terms in the finite arithmetic sequence.
[latex]\left\{8\text{, }1\text{, }-6\text{, }…\text{, }-41\right\}[/latex]
[latex]1 - 8=-7[/latex]
The common difference is [latex]-7[/latex] . Substitute the common difference and the initial term of the sequence into the [latex]n\text{th}[/latex] term formula and simplify.
[latex]\begin{array}{l}{a}_{n}={a}_{1}+d\left(n - 1\right)\hfill \\ {a}_{n}=8+-7\left(n - 1\right)\hfill \\ {a}_{n}=15 - 7n\hfill \end{array}[/latex]
Substitute [latex]-41[/latex] for [latex]{a}_{n}[/latex] and solve for [latex]n[/latex]
[latex]\begin{array}{l}-41=15 - 7n\hfill \\ 8=n\hfill \end{array}[/latex]
There are eight terms in the sequence.
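If you prefer a computational check (an optional aside), solving the explicit formula for n confirms the count:

# Solve a_n = 15 - 7n = -41 for n to count the terms.
a1, d, last_term = 8, -7, -41
n = (last_term - a1) / d + 1
print(n)  # 8.0, so there are eight terms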
[latex]\left\{6\text{, }11\text{, }16\text{, }…\text{, }56\right\}[/latex]
Solving Application Problems with Arithmetic Sequences
In many application problems, it often makes sense to use an initial term of [latex]{a}_{0}[/latex] instead of [latex]{a}_{1}[/latex]. In these problems, we alter the explicit formula slightly to account for the difference in initial terms. We use the following formula:
[latex]{a}_{n}={a}_{0}+dn[/latex]
Example 7: Solving Application Problems with Arithmetic Sequences
A five-year old child receives an allowance of $1 each week. His parents promise him an annual increase of $2 per week.
Write a formula for the child's weekly allowance in a given year.
What will the child's allowance be when he is 16 years old?
The situation can be modeled by an arithmetic sequence with an initial term of 1 and a common difference of 2. Let [latex]A[/latex] be the amount of the allowance and [latex]n[/latex] be the number of years after age 5. Using the altered explicit formula for an arithmetic sequence we get:
[latex]{A}_{n}=1+2n[/latex]
We can find the number of years since age 5 by subtracting.
[latex]16 - 5=11[/latex]
We are looking for the child's allowance after 11 years. Substitute 11 into the formula to find the child's allowance at age 16.
[latex]{A}_{11}=1+2\left(11\right)=23[/latex]
The child's allowance at age 16 will be $23 per week.
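An optional numerical check of the allowance model (not part of the original lesson):

# Allowance model A_n = 1 + 2n, where n is the number of years after age 5.
age = 16
n = age - 5
print(1 + 2 * n)  # 23, so the allowance is $23 per week at age 16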
A woman decides to go for a 10-minute run every day this week and plans to increase the time of her daily run by 4 minutes each week. Write a formula for the time of her run after n weeks. How long will her daily run be 8 weeks from today?
recursive formula for nth term of an arithmetic sequence [latex]{a}_{n}={a}_{n - 1}+d\phantom{\rule{1}{0ex}}n\ge 2[/latex]
explicit formula for nth term of an arithmetic sequence [latex]\begin{array}{l}{a}_{n}={a}_{1}+d\left(n - 1\right)\end{array}[/latex]
An arithmetic sequence is a sequence where the difference between any two consecutive terms is a constant.
The constant between two consecutive terms is called the common difference.
The common difference is the number added to any one term of an arithmetic sequence that generates the subsequent term.
The terms of an arithmetic sequence can be found by beginning with the initial term and adding the common difference repeatedly.
A recursive formula for an arithmetic sequence with common difference [latex]d[/latex] is given by [latex]{a}_{n}={a}_{n - 1}+d,n\ge 2[/latex].
As with any recursive formula, the initial term of the sequence must be given.
An explicit formula for an arithmetic sequence with common difference [latex]d[/latex] is given by [latex]{a}_{n}={a}_{1}+d\left(n - 1\right)[/latex].
An explicit formula can be used to find the number of terms in a sequence.
In application problems, we sometimes alter the explicit formula slightly to [latex]{a}_{n}={a}_{0}+dn[/latex].
arithmetic sequence
a sequence in which the difference between any two consecutive terms is a constant
common difference
the difference between any two consecutive terms in an arithmetic sequence
Section Exercises
1. What is an arithmetic sequence?
2. How is the common difference of an arithmetic sequence found?
3. How do we determine whether a sequence is arithmetic?
4. What are the main differences between using a recursive formula and using an explicit formula to describe an arithmetic sequence?
5. Describe how linear functions and arithmetic sequences are similar. How are they different?
For the following exercises, find the common difference for the arithmetic sequence provided.
6. [latex]\left\{5,11,17,23,29,…\right\}[/latex]
7. [latex]\left\{0,\frac{1}{2},1,\frac{3}{2},2,…\right\}[/latex]
For the following exercises, determine whether the sequence is arithmetic. If so find the common difference.
8. [latex]\left\{11.4,9.3,7.2,5.1,3,…\right\}[/latex]
9. [latex]\left\{4,16,64,256,1024,…\right\}[/latex]
For the following exercises, write the first five terms of the arithmetic sequence given the first term and common difference.
10. [latex]{a}_{1}=-25[/latex] , [latex]d=-9[/latex]
11. [latex]{a}_{1}=0[/latex] , [latex]d=\frac{2}{3}[/latex]
For the following exercises, write the first five terms of the arithmetic series given two terms.
12. [latex]{a}_{1}=17,{a}_{7}=-31[/latex]
13. [latex]{a}_{13}=-60,{a}_{33}=-160[/latex]
For the following exercises, find the specified term for the arithmetic sequence given the first term and common difference.
14. First term is 3, common difference is 4, find the 5th term.
For the following exercises, find the first term given two terms from an arithmetic sequence.
19. Find the first term or [latex]{a}_{1}[/latex] of an arithmetic sequence if [latex]{a}_{6}=12[/latex] and [latex]{a}_{14}=28[/latex].
21. Find the first term or [latex]{a}_{1}[/latex] of an arithmetic sequence if [latex]{a}_{8}=40[/latex] and [latex]{a}_{23}=115[/latex].
23. Find the first term or [latex]{a}_{1}[/latex] of an arithmetic sequence if [latex]{a}_{11}=11[/latex] and [latex]{a}_{21}=16[/latex].
For the following exercises, find the specified term given two terms from an arithmetic sequence.
24. [latex]{a}_{1}=33[/latex] and [latex]{a}_{7}=-15[/latex]. Find [latex]{a}_{4}[/latex].
25. [latex]{a}_{3}=-17.1[/latex] and [latex]{a}_{10}=-15.7[/latex]. Find [latex]{a}_{21}[/latex].
For the following exercises, use the recursive formula to write the first five terms of the arithmetic sequence.
26. [latex]{a}_{1}=39;\text{ }{a}_{n}={a}_{n - 1}-3[/latex]
27. [latex]{a}_{1}=-19;\text{ }{a}_{n}={a}_{n - 1}-1.4[/latex]
For the following exercises, write a recursive formula for each arithmetic sequence.
28. [latex]{a}_{n}=\left\{40,60,80,…\right\}[/latex]
30. [latex]{a}_{n}=\left\{-1,2,5,…\right\}[/latex]
32. [latex]{a}_{n}=\left\{-15,-7,1,…\right\}[/latex]
33. [latex]{a}_{n}=\left\{8.9,10.3,11.7,…\right\}[/latex]
34. [latex]{a}_{n}=\left\{-0.52,-1.02,-1.52,…\right\}[/latex]
35. [latex]{a}_{n}=\left\{\frac{1}{5},\frac{9}{20},\frac{7}{10},…\right\}[/latex]
36. [latex]{a}_{n}=\left\{-\frac{1}{2},-\frac{5}{4},-2,…\right\}[/latex]
37. [latex]{a}_{n}=\left\{\frac{1}{6},-\frac{11}{12},-2,…\right\}[/latex]
For the following exercises, write a recursive formula for the given arithmetic sequence, and then find the specified term.
38. [latex]{a}_{n}=\left\{7\text{, }4\text{, }1\text{, }…\right\}[/latex]; Find the 17th term.
39. [latex]{a}_{n}=\left\{4\text{, }11\text{, }18\text{, }…\right\}[/latex]; Find the 14th term.
40. [latex]{a}_{n}=\left\{2\text{, }6\text{, }10\text{, }…\right\}[/latex]; Find the 12th term.
For the following exercises, use the explicit formula to write the first five terms of the arithmetic sequence.
41. [latex]{a}_{n}=24 - 4n[/latex]
42. [latex]{a}_{n}=\frac{1}{2}n-\frac{1}{2}[/latex]
For the following exercises, write an explicit formula for each arithmetic sequence.
43. [latex]{a}_{n}=\left\{3,5,7,…\right\}[/latex]
45. [latex]{a}_{n}=\left\{-5\text{, }95\text{, }195\text{, }…\right\}[/latex]
46. [latex]{a}_{n}=\left\{-17\text{, }-217\text{, }-417\text{,}…\right\}[/latex]
47. [latex]{a}_{n}=\left\{1.8\text{, }3.6\text{, }5.4\text{, }…\right\}[/latex]
48. [latex]{a}_{n}=\left\{-18.1,-16.2,-14.3,…\right\}[/latex]
49. [latex]{a}_{n}=\left\{15.8,18.5,21.2,…\right\}[/latex]
50. [latex]{a}_{n}=\left\{\frac{1}{3},-\frac{4}{3},-3\text{, }…\right\}[/latex]
51. [latex]{a}_{n}=\left\{0,\frac{1}{3},\frac{2}{3},…\right\}[/latex]
52. [latex]{a}_{n}=\left\{-5,-\frac{10}{3},-\frac{5}{3},\dots \right\}[/latex]
For the following exercises, find the number of terms in the given finite arithmetic sequence.
53. [latex]{a}_{n}=\left\{3\text{,}-4\text{,}-11\text{, }…\text{,}-60\right\}[/latex]
54. [latex]{a}_{n}=\left\{1.2,1.4,1.6,…,3.8\right\}[/latex]
55. [latex]{a}_{n}=\left\{\frac{1}{2},2,\frac{7}{2},…,8\right\}[/latex]
For the following exercises, determine whether the graph shown represents an arithmetic sequence.
For the following exercises, use the information provided to graph the first 5 terms of the arithmetic sequence.
58. [latex]{a}_{1}=0,d=4[/latex]
59. [latex]{a}_{1}=9;{a}_{n}={a}_{n - 1}-10[/latex]
60. [latex]{a}_{n}=-12+5n[/latex]
For the following exercises, follow the steps to work with the arithmetic sequence [latex]{a}_{n}=3n - 2[/latex] using a graphing calculator:
Press [MODE]
Select SEQ in the fourth line
Select DOT in the fifth line
Press [ENTER]
Press [Y=]
[latex]n\text{Min}[/latex] is the first counting number for the sequence. Set [latex]n\text{Min}=1[/latex]
[latex]u\left(n\right)[/latex] is the pattern for the sequence. Set [latex]u\left(n\right)=3n - 2[/latex]
[latex]u\left(n\text{Min}\right)[/latex] is the first number in the sequence. Set [latex]u\left(n\text{Min}\right)=1[/latex]
Press [2ND] then [WINDOW] to go to TBLSET
Set [latex]\text{TblStart}=1[/latex]
Set [latex]\Delta \text{Tbl}=1[/latex]
Set Indpnt: Auto and Depend: Auto
Press [2ND] then [GRAPH] to go to the TABLE
61. What are the first seven terms shown in the column with the heading [latex]u\left(n\right)\text{?}[/latex]
62. Use the scroll-down arrow to scroll to [latex]n=50[/latex]. What value is given for [latex]u\left(n\right)\text{?}[/latex]
63. Press [WINDOW]. Set [latex]n\text{Min}=1,n\text{Max}=5,x\text{Min}=0,x\text{Max}=6,y\text{Min}=-1[/latex], and [latex]y\text{Max}=14[/latex]. Then press [GRAPH]. Graph the sequence as it appears on the graphing calculator.
For the following exercises, follow the steps given above to work with the arithmetic sequence [latex]{a}_{n}=\frac{1}{2}n+5[/latex] using a graphing calculator.
64. What are the first seven terms shown in the column with the heading [latex]u\left(n\right)[/latex] in the TABLE feature?
65. Graph the sequence as it appears on the graphing calculator. Be sure to adjust the WINDOW settings as needed.
66. Give two examples of arithmetic sequences whose 4th terms are [latex]9[/latex].
67. Give two examples of arithmetic sequences whose 10th terms are [latex]206[/latex].
68. Find the 5th term of the arithmetic sequence [latex]\left\{9b,5b,b,\dots \right\}[/latex].
69. Find the 11th term of the arithmetic sequence [latex]\left\{3a - 2b,a+2b,-a+6b\dots \right\}[/latex].
70. At which term does the sequence [latex]\left\{5.4,14.5,23.6,…\right\}[/latex] exceed 151?
71. At which term does the sequence [latex]\left\{\frac{17}{3},\frac{31}{6},\frac{14}{3},…\right\}[/latex] begin to have negative values?
72. For which terms does the finite arithmetic sequence [latex]\left\{\frac{5}{2},\frac{19}{8},\frac{9}{4},…,\frac{1}{8}\right\}[/latex] have integer values?
73. Write an arithmetic sequence using a recursive formula. Show the first 4 terms, and then find the 31st term.
74. Write an arithmetic sequence using an explicit formula. Show the first 4 terms, and then find the 28th term.
2020-12-01 Solar Radio Science Highlights
Type II radio bursts, produced near the local plasma frequency and/or its harmonic by energetic electrons accelerated by shock waves moving outward through the inner heliosphere, have long been recognized as evidence of the origin and propagation of shock waves in the solar corona.
In this work, we analyze the early evolution of a coronal shock wave, associated with a prominence eruption, with the aim of investigating the properties of the compressed plasma through both radio and extreme ultraviolet (EUV) data.
Observations and Analysis
On 2014 October 30, a solar eruption occurred at the east limb in active region NOAA 12201 (S04E70) involving a C6.9 flare, a CME, and a type II radio burst starting at about 13:08 UT.
Analysis of Radio data
The type II radio burst, also observed by the Nancay RadioHeliograph (NRH), was rather complex, as evinced by inspecting the compound radio dynamic spectrum obtained by combining data from the Compound Astronomical Low-cost Low-frequency Instrument for Spectroscopy and Transportable Observatory (CALLISTO) station located in Birr, Ireland (BIR) and the USAF Radio Solar Telescope Network (RSTN) spectrometer located in San Vito, Italy (see Figure 1).
Figure 1 – Radio dynamic spectrum obtained from CALLISTO (mid and high frequencies) and RSTN (low frequencies).
The harmonic component was split into two sub-bands, a lower (L) and an upper (U) frequency component, most probably due to shock/streamer interactions (see the discussion in Mancuso et al. 2019). The further splitting of the upper harmonic band, visible in the time interval between 13:08.5 UT and 13:08.7 UT, is instead attributable to simultaneous radio emission occurring in the upstream (ahead) and downstream (behind) regions of the shock front. Under the above assumption, we calculated the compression ratio Xradio of the expanding front as:
\[X_{radio}=\frac{n_{e}^{D}}{ n_{e}^{U}}=\frac{f_{D}^{2}}{ f_{U}^{2}} \tag{1} \]
where $f_{D}$ and $f_{U}$ are the plasma emission frequencies from the downstream and upstream regions of the shock, respectively.
The calculated Xradio values lie between 1.1 and 1.4 in the temporal range [13:08:30 – 13:09:00] UT.
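As a purely illustrative example (the lane frequencies here are invented, not read from Figure 1): upstream and downstream lanes at, say, $f_{U}\simeq 95$ MHz and $f_{D}\simeq 105$ MHz would give $X_{radio}=(105/95)^{2}\simeq 1.22$, i.e. within the reported range.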
Analysis of Extreme Ultra-Violet data
EUV data from SDO/AIA (Figure 2, left panel) were used to estimate the temperature of the emitting plasma and to infer the density compression ratio, XEUV, from emission measure (EM) modeling.
Temporal variations of the observed EUV intensities related to the evolution of the electron density and ionization state (depending on temperature) of the plasma were used to infer the presence of the shock from the images.
Figure 2 – Left Panel: SDO/AIA running difference EUV images of the event at 171 Å (top), 193 Å (middle), and 211 Å (bottom). Right Panel: Temporal dependence of the intensity fluxes in the 171 Å, 193 Å, and 211 Å channels as measured in the red filled-dot region marked by a red arrow in the left panel.
The shock wave front was clearly detected in three of the SDO/AIA channels (171 Å, 193 Å, and 211 Å) and it was distinctly separated from the CME bubble represented by the observed expanding EUV front. Moreover, given the temperature response of each of the used AIA filters, the emitting plasma temperature was estimated in the range of 1.75 – 4.00 MK.
From the EM (Eq. 2), representing the amount of emitting material as a function of coronal plasma temperature along the line of sight (LOS), we calculated the plasma electron density and the compression ratio XEUV (Eq. 3; see Frassati et al. 2019 and Frassati 2020 for more details).
\[EM\simeq\int_{\rm LOS}n_{\rm e}^2\mathrm{d}L , \tag{2}\]
\[X_{\rm EUV} = \sqrt{\frac{EM_{\rm D}-EM_{\rm U}}{P_{\rm U}}+1}, \tag{3}\]
where EMU and EMD are the up- and downstream EMs and PU is the contribution to the pre-event EM from the coronal plasma located in the region that is compressed after the transit of the EUV front.
The calculated value XEUV ≈ 1.23 at 13:08:45 UT and for T = 2.5 – 3.0 MK is in good agreement with the value estimated from the above type II band-splitting analysis.
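For instance (illustrative numbers only, not the measured EMs): if the downstream excess were $EM_{D}-EM_{U}\simeq 0.51\,P_{U}$, Eq. 3 would give $X_{EUV}=\sqrt{1.51}\simeq 1.23$, the value quoted above.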
From the Differential Emission Measure, $DEM(T)=n_{e}^{2}(\frac{dT}{ds})^{-1}$, we inferred the peak upstream temperature TU ≈ 1.78 MK and, using the results from Mancuso et al. (2019) together with the Rankine-Hugoniot jump conditions, we estimated the post-shock temperature, TD ≈ 2.75 MK, under the usual hypothesis that the shock was perpendicular.
In this work, we analyzed a peculiar coronal event that occurred on 2014 October 30 in which both a type II radio burst and a CME-driven shock were identified.
The type II emission was observed during the lateral over-expansion phase of the CME bubble; in this respect, the intersection of coronal loops and streamers might be a fundamental factor in the formation and/or enhancement of the shock and the excitation of type II emission (see Mancuso et al. 2019). The presence of the type II radio burst in the low corona at the time and location where the EUV front was identified, together with the comparable compression ratios derived from the analysis of both radio and EUV data, indicates that the observed EUV front was, in fact, a CME-driven shock wave.
We are grateful to the SDO/AIA teams, the Radio Solar Telescope Network (RSTN), and the e-CALLISTO network for providing open data access.
The authors declare that they have no conflicts of interest.
Frassati, F., Susino, R., Mancuso, S., Bemporad, A.: 2019, Comprehensive Analysis of the Formation of a Shock Wave Associated with a Coronal Mass Ejection. Astrophys. J. 871, 212. DOI. ADS.
Frassati, F., Mancuso, S., Bemporad, A.: 2020, Estimate of Plasma Temperatures Across a CME-Driven Shock from a Comparison Between EUV and Radio Data. Sol. Phys. 295, 124. DOI: 10.1007/s11207-020-01686-0. ADS.
Mancuso, S., Frassati, F., Bemporad, A., Barghini, D.: 2019, Three-dimensional reconstruction of CME-driven shock-streamer interaction from radio and EUV observations: a different take on the diagnostics of coronal magnetic fields. Astron. Astrophys. 624, L2. DOI. ADS.
Full list of authors: Federica Frassati1, Salvatore Mancuso1 and Alessandro Bemporad1.
1 Istituto Nazionale di Astrofisica, Osservatorio Astrofisico di Torino, via Osservatorio 20, 10025 Pino Torinese, Italy
Coronal Mass Ejection (CME)
Type II solar radio bursts
Asian Journal of Mathematics
The fundamental group of reductive Borel–Serre and Satake compactifications
DOI: https://dx.doi.org/10.4310/AJM.2015.v19.n3.a4
Lizhen Ji (Department of Mathematics, University of Michigan, Ann Arbor, Mich., U.S.A.)
V. Kumar Murty (Department of Mathematics, University of Toronto, Toronto, Ontario, Canada)
Leslie Saper (Department of Mathematics, Duke University, Durham, North Carolina, U.S.A.)
John Scherk (Department of Computer and Mathematical Sciences, University of Toronto Scarborough, Toronto, Ontario, Canada)
Let $ \mathbf{G}$ be an almost simple, simply connected algebraic group defined over a number field $k$, and let $S$ be a finite set of places of $k$ including all infinite places. Let $X$ be the product over $v \in S$ of the symmetric spaces associated to $\mathbf{G}(k_v)$, when $v$ is an infinite place, and the Bruhat–Tits buildings associated to $\mathbf{G}(k_v)$, when $v$ is a finite place. The main result of this paper is to compute explicitly the fundamental group of the reductive Borel–Serre compactification of $\Gamma \setminus X$, where $\Gamma$ is an $S$-arithmetic subgroup of $\mathbf{G}$. In the case that $\Gamma$ is neat, we show that this fundamental group is isomorphic to $\Gamma / E \, \Gamma$, where $E \, \Gamma$ is the subgroup generated by the elements of $\Gamma$ belonging to unipotent radicals of $k$-parabolic subgroups. Analogous computations of the fundamental group of the Satake compactifications are made. It is noteworthy that calculations of the congruence subgroup kernel $C(S, \mathbf{G})$ yield similar results.
fundamental group, reductive Borel-Serre compactification, Bruhat-Tits buildings, congruence subgroup kernel
Primary 20F34, 22E40, 22F30. Secondary 14M27, 20G30.
Inverse Problems and Imaging
2022, Volume 16, Issue 2: 417-450. DOI: 10.3934/ipi.2021056
Small defects reconstruction in waveguides from multifrequency one-side scattering data
Éric Bonnetier 1, Angèle Niclas 2,*, Laurent Seppecher 2 and Grégory Vial 2
Institut Fourier, Université Grenoble Alpes, France
Institut Camille Jordan, École Centrale Lyon, France
* Corresponding author: Angèle Niclas
Early access: August 2021
Published: April 2022
Localization and reconstruction of small defects in acoustic or electromagnetic waveguides is of crucial interest in nondestructive evaluation of structures. The aim of this work is to present a new multi-frequency inversion method to reconstruct small defects in a 2D waveguide. Given one-side multi-frequency wave field measurements of propagating modes, we use a Born approximation to provide an $ \text{L}^2 $-stable reconstruction of three types of defects: a local perturbation inside the waveguide, a bending of the waveguide, and a localized defect in the geometry of the waveguide. This method is based on a mode-by-mode spatial Fourier inversion from the available partial data in the Fourier domain. Indeed, in the available data, some high and low spatial frequency information on the defect is missing. We overcome this issue using both a compact support hypothesis and a minimal smoothness hypothesis on the defects. We also provide a suitable numerical method for efficient reconstruction of such defects and we discuss its applications and limits.
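As a rough, generic illustration of the kind of regularized Fourier inversion described above (this is not the paper's algorithm (91); the grids, forward operator, and regularization weight below are our own choices), one can recover a compactly supported profile from band-limited Fourier samples by Tikhonov-regularized least squares:

```python
import numpy as np

# Toy setup: recover a compactly supported profile on [0.5, 1.5] from
# Fourier-domain samples available only on a finite frequency band.
x = np.linspace(0.5, 1.5, 201)           # spatial grid covering the assumed support
k = np.linspace(0.1, 50.0, 400)          # available spatial frequencies
dx = x[1] - x[0]

# Test profile of the same form as the one used in the figures
f_true = np.where((x > 0.8) & (x < 1.2), (x - 0.8) * (1.2 - x), 0.0)

# Forward map: discretized Fourier transform restricted to measured frequencies
A = np.exp(-1j * np.outer(k, x)) * dx
y = A @ f_true                           # synthetic measurements

# Tikhonov-regularized inversion for a real-valued unknown
lam = 1e-3
Ar = np.vstack([A.real, A.imag])
yr = np.concatenate([y.real, y.imag])
f_rec = np.linalg.solve(Ar.T @ Ar + lam * np.eye(x.size), Ar.T @ yr)

print("relative L2 error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
```

The missing low and high spatial frequencies make the unregularized normal matrix ill-conditioned, which is why both the compact support assumption (restricting x to a small interval) and the penalty term are needed.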
Inverse problem,
Helmholtz equation,
waveguides,
multi-frequency data,
Born approximation.
Mathematics Subject Classification: 35R30, 78A46.
Figure 1. Representation of the three types of defects: in $ (1) $ a local perturbation $ q $, in $ (2) $ a bending of the waveguide, in $ (3) $ a localized defect in the geometry of $ \Omega $. A controlled source $ s $ generates a wave field $ u^\text{inc}_k $. When it crosses the defect, it generates a scattered wave field $ u^s_k $. Both $ u^\text{inc}_k $ and $ u^s_k $ are measured on the section $ \Sigma $
Figure 2. Condition number of $ M_t^TM_t $ for different sizes of support and values of $ \omega_0 $. Here, $ X $ is the discretization of $ [1-r, 1+r] $ with $ 500r+1 $ points. The $ x $-axis represents the evolution of $ r $, and the $ y $-axis $ \text{cond}_2(M_t^TM_t) $. Each curve corresponds to value of $ \omega_0 $ as indicated in the left rectangle
Figure 3. Representation of a bend in a waveguide
Figure 4. Representation of a shape defect in a waveguide
Figure 5. Reconstruction of $ f(x) = (x-0.8)(1.2-x)\textbf{1}_{0.8\leq x\leq 1.2} $ for different values of $ \omega_1 $ using the discrete operator $ \gamma $ and the algorithm (91) with $ \lambda = 0.001 $. Here, $ X $ is the discretization of $ [0.5, 1.5] $ with $ 10\omega_1 $ points, and $ K $ is the discretization of $ [0.01, \omega_1] $ with $ 1000 $ points
Figure 6. $ \text{L}^2 $-error between $ f(x) = (x-0.8)(1.2-x)\textbf{1}_{0.8\leq x\leq 1.2} $ and its reconstruction $ f_{app} $ for different values of $ \omega_1 $ using the discrete operator $ \gamma $ and the algorithm (91) with $ \lambda = 0.001 $. Here, $ X $ is the discretization of $ [0.5, 1.5] $ with $ 10\omega_1 $ points, and $ K $ is the discretization of $ [0.01, \omega_1] $ with $ 1000 $ points
Figure 7. Reconstruction of $ f(x) = (x-0.8)(1.2-x)\textbf{1}_{0.8\leq x\leq 1.2} $ for different values of $ \omega_0 $ and $ r = 0.5 $ using the discrete operator $ \gamma $ and the algorithm (91) with $ \lambda = 0.001 $. Here, $ X $ is the discretization of $ [0.5, 1.5] $ with $ 251 $ points, and $ K $ is the discretization of $ [\omega_0, 50] $ with $ 1000 $ points
Figure 8. Reconstruction of $ f(x) = (x-0.8)(1.2-x)\textbf{1}_{0.8\leq x\leq 1.2} $ for different sizes of support $ r $ and $ \omega_0 = 3\pi $ using the discrete operator $ \gamma $ and the algorithm (91) with $ \lambda = 0.001 $. Here, $ X $ is the discretization of $ [1-r, 1+r] $ with $ 500r+1 $ points, and $ K $ is the discretization of $ [3\pi, 50] $ with $ 1000 $ points
Figure 9. Reconstruction of two different bends. The black lines represent the initial shape of $ \Omega $, and the red the reconstruction of $ \Omega $. In both cases, $ K $ is the discretization of $ [0.01, 40] $ with $ 100 $ points, and the reconstruction is obtained by (94). On the left, the initial parameters of the bend are $ (x_c, r, \theta) = (4, 10, \pi/12) $ and on the right, $ (x_c, r, \theta) = (2, 5, \pi/6) $
Figure 10. Reconstruction of a waveguide with two successive bends. The black lines represent the initial shape of $ \Omega $, and the red the reconstruction of $ \Omega $, slightly shifted for comparison purposes. In both cases, $ K $ is the discretization of $ [0.01, 40] $ with $ 100 $ points. The parameters of the two bends are $ (x_c^{(1)}, r^{(1)}, \theta^{(1)}) = (2, 10, \pi/30)) $ and $ (x_c^{(2)}, r^{(2)}, \theta^{(2)}) = (3.8, 8, -\pi/20)) $
Figure 11. Reconstruction of two shape defects. In black, the initial shape of $ \Omega $, and in red the reconstruction, slightly shifted for comparison purposes. In both cases, $ K $ is the discretization of $ [0.01, 70]\setminus \{[n\pi-0.2, n\pi+0.2], n\in \mathbb{N}\} $ with $ 300 $ points, $ X $ is the discretization of $ [3, 4.5] $ with $ 151 $ points and we use the algorithm (91) with $ \lambda = 0.08 $ to reconstruct $ s_0 $ and $ s_1 $. On the left, $ h(x) = \frac{5}{16}\textbf{1}_{3.2\leq x\leq 4.2}(x-3.2)^2(4.2-x)^2 $ and $ g(x) = -\frac{35}{16}\textbf{1}_{3.4\leq x\leq 4}(x-3.4)^2(4-x)^2 $. On the right, $ h(x) = \frac{125}{16}\textbf{1}_{3.7\leq x\leq 4.2}(x-3.7)^2(4.2-x)^2 $ and $ g(x) = \frac{125}{16}\textbf{1}_{3.4\leq x\leq 4}(x-3.4)^2(4-x)^2 $
Figure 12. Reconstruction of $ h_n $ for $ 0\leq n\leq 9 $, where $ h(x) = 0.05\textbf{1}_{\left|\left(\frac{x-4}{0.05}, \frac{y-0.6}{0.15}\right)\right|\leq 1}\left|\left(\frac{x-4}{0.05}, \frac{y-0.6}{0.15}\right)\right|^2 $. In blue, we represent $ h_n $ and in red the reconstruction of $ h_{n_{\text{app}}} $. In every reconstruction, $ K $ is the discretization of $ [0.01, 150] $ with $ 200 $ points, $ X $ is the discretization of $ [3.8, 4.2] $ with $ 101 $ points and we use the algorithm (91) with $ \lambda = 0.002 $ to reconstruct every $ h_n $
Figure 13. Reconstruction of an inhomogeneity $ h $, where $ h(x) = 0.05\textbf{1}_{\left|\left(\frac{x-4}{0.05}, \frac{y-0.6}{0.15}\right)\right|\leq 1}\left|\left(\frac{x-4}{0.05}, \frac{y-0.6}{0.15}\right)\right|^2 $. On the left, we represent the initial shape of $ h $, and on the right the reconstruction $ h_{\text{app}} $. Here, $ K $ is the discretization of $ [0.01, 150] $ with $ 200 $ points, $ X $ is the discretization of $ [3.8, 4.2] $ with $ 101 $ points and we use the algorithm (91) with $ \lambda = 0.002 $ to reconstruct every $ h_n $. We used $ N = 20 $ modes to reconstruct $ h $
Figure 14. Reconstruction of an inhomogeneity $ h $. From top to bottom, the initial representation of $ h $, the reconstruction $ h_{\text{app}} $ and the reconstruction $ h_{\text{app}} $ with the knowledge of the positivity of $ h $. Here, $ K $ is the discretization of $ [0.01, 150] $ with $ 200 $ points, $ X $ is the discretization of $ [3, 6] $ with $ 3001 $ points and we use the algorithm (91) with $ \lambda = 0.01 $ to reconstruct every $ h_n $. We used $ N = 20 $ modes to reconstruct $ h $
Table 1. Relative errors on the reconstruction of $ (x_c, r, \theta) $ for different bends. In each case, $ K $ is the discretization of $ [0.01, 40] $ with $ 100 $ points, and the reconstruction is obtained by (94)
$ (x_c, r, \theta) $ $ (2.5, 40, \pi/80) $ $ (4, 10, \pi/12) $ $ (2, 5, \pi/6) $
relative error on $ x_c $ $ 1.8\% $ $ 0\% $ $ 7.6\% $
relative error on $ r $ $ 3.0\% $ $ 7.5\% $ $ 23.8\% $
relative error on $ \theta $ $ 1.6\% $ $ 10.7\% $ $ 16.9\% $
Table 2. Relative errors on the reconstruction of $ h $ for different amplitudes $ A $. We choose $ h(x) = A\textbf{1}_{3\leq x\leq 5}(x-3)^2(5-x)^2 $ and $ g(x) = 0 $. In every reconstruction, $ K $ is the discretization of $ [0.01, 40]\setminus \{[n\pi-0.2, n\pi+0.2], n\in \mathbb{N}\} $ with $ 100 $ points, $ X $ is the discretization of $ [1, 7] $ with $ 601 $ points and we use the algorithm (91) with $ \lambda = 0.08 $ to reconstruct $ h' $
$ A $ $ 0.1 $ $ 0.2 $ $ 0.3 $ $ 0.5 $
$ \Vert h-h_{\text{app}}\Vert_{\text{L}^2( \mathbb{R})}/\Vert h\Vert_{\text{L}^2( \mathbb{R})} $ $ 8.82\% $ $ 10.41\% $ $ 15.12\% $ $ 54.99\% $
Non-destructive and distributed measurement of optical fiber diameter with nanometer resolution based on coherent forward stimulated Brillouin scattering
Zijie Hua 1, Dexin Ba 1, Dengwang Zhou 1,2,3, Yijia Li 1, Yue Wang 1, Xiaoyi Bao 4, Yongkang Dong 1,*
National Key Laboratory of Science and Technology on Tunable Laser, Harbin Institute of Technology, 150001 Harbin, China
Postdoctoral Research Station for Optical Engineering, School of Astronautics, Harbin Institute of Technology, Harbin 150001, China
Research Center for Space Optical Engineering, School of Astronautics, Harbin Institute of Technology, Harbin 150001, China
Fiber Optics Group, Department of Physics, University of Ottawa, Ottawa, K1N 6N5, Canada
Yongkang Dong ([email protected])
These authors contributed equally: Zijie Hua, Dexin Ba
Precise control and measurement of the optical fiber diameter are vital for a range of fields, such as ultra-high sensitivity sensing and high-speed optical communication. Nowadays, the measurement of fiber diameter relies on point measurement schemes such as microscopes, which suffer from a tradeoff between the resolution and field of view. Handling the fiber can irreversibly damage the fiber samples, especially when multi-point measurements are required. To overcome these problems, we have explored a novel technique in which the mechanical properties of fibers are reflected by forward stimulated Brillouin scattering (FSBS), from which the diameters can be demodulated via the acoustic dispersion relation. The distributed FSBS spectra with narrow linewidths were recorded via the optimized optomechanical time-domain analysis system using coherent FSBS, thereby achieving a spatial resolution of 1 m over a fiber length of tens of meters. We successfully obtained the diameter distribution of unjacketed test fibers with diameters of 125 μm and 80 μm. The diameter accuracy was verified by high-quality scanning electron microscope images. We achieved a diameter resolution of 3.9 nm, virtually independent of the diameter range. To the best of our knowledge, this is the first demonstration of non-destructive and distributed fiber diameter monitoring with nanometer resolution.
[1] Kawata, O. et al. A splicing and inspection technique for single-mode fibers using direct core monitoring. Journal of Lightwave Technology 2, 185-191 (1984).
[2] Michaud-Belleau, V. et al. Backscattering in antiresonant hollow-core fibers: over 40 dB lower than in standard optical fibers. Optica 8, 216-219 (2021).
[3] Belardi, W. & Knight, J. C. Hollow antiresonant fibers with reduced attenuation. Optics Letters 39, 1853-1856 (2014).
[4] Couny, F. , Benabid, F. & Light, P. S. Large pitch Kagome-structured hollow-core photonic crystal fiber. Optics Letters 31, 3574-3576 (2006).
[5] Poletti, F. , Petrovich, M. N. & Richardson, D. J. Hollow-core photonic bandgap fibers: technology and applications. Nanophotonics 2, 315-340 (2013).
[6] Tani, F. et al. Effect of anti-crossings with cladding resonances on ultrafast nonlinear dynamics in gas-filled photonic crystal fibers. Photonics Research 6, 84-88 (2018).
[7] Lloyd, S. W. , Digonnet, M. J. F. & Fan, S. H. Modeling coherent backscattering errors in fiber optic gyroscopes for sources of arbitrary line width. Journal of Lightwave Technology 31, 2070-2078 (2013).
[8] Arditty, H. J. & Lefevre, H. C. Fiber-optic gyroscopes. in New Directions in Guided Wave and Coherent Optics (eds Ostrowsky, D. B. & Spitz, E. ) (Dordrecht: Springer, 1984), 299-333.
[9] Poveda-Wong, L. et al. Fabrication of long period fiber gratings of subnanometric bandwidth. Optics Letters 42, 1265-1268 (2017).
[10] Brennan, J. Dispersion management with long-length fiber Bragg gratings. Proceedings of 2003 Optical Fiber Communications Conference, 2003. Atlanta, GA, USA: IEEE, 2003.
[11] Fan, J. T. et al. Video-rate imaging of biological dynamics at centimetre scale and micrometre resolution. Nature Photonics 13, 809-816 (2019).
[12] Lohmann, A. W. et al. Space–bandwidth product of optical signals and systems. Journal of the Optical Society of America A 13, 470-473 (1996).
[13] Wu, J. B. Study on the diameter measurement of optical fibers using the method of forward near-axis far-field interference. Proceedings of the IMTC/98 Conference Proceedings. IEEE Instrumentation and Measurement Technology Conference. Where Instrumentation is Going. St. Paul, MN, USA: IEEE, 1998, 1149-1152.
[14] Presby, H. M. Refractive index and diameter measurements of unclad optical fibers. Journal of the Optical Society of America 64, 280-284 (1974).
[15] Smithgall, D. H. , Watkins, L. S. & Frazee, R. E. High-speed noncontact fiber-diameter measurement using forward light scattering. Applied Optics 16, 2395-2402 (1977).
[16] Jasapara, J. et al. Accurate noncontact optical fiber diameter measurement with spectral interferometry. Optics Letters 28, 601-603 (2003).
[17] Ashkin, A. , Dziedzic, J. M. & Stolen, R. H. Outer diameter measurement of low birefringence optical fibers by a new resonant backscatter technique. Applied Optics 20, 2299-2303 (1981).
[18] Birks, T. A. , Knight, J. C. & Dimmick, T. E. High-resolution measurement of the fiber diameter variations using whispering gallery modes and no optical alignment. IEEE Photonics Technology Letters 12, 182-183 (2000).
[19] Sumetsky, M. & Dulashko, Y. Radius variation of optical fibers with angstrom accuracy. Optics Letters 35, 4006-4008 (2010).
[20] Alcusa-Sáez, E. P. et al. Time-resolved acousto-optic interaction in single-mode optical fibers: characterization of axial nonuniformities at the nanometer scale. Optics Letters 39, 1437-1440 (2014).
[21] Alcusa-Sáez, E. P. et al. Improved time-resolved acousto-optic technique for optical fiber analysis of axial non-uniformities by using edge interrogation. Optics Express 23, 7345-7350 (2015).
[22] Ohashi, M. , Shibata, N. & Shiraki, K. Fibre diameter estimation based on guided acoustic wave Brillouin scattering. Electronics Letters 28, 900-902 (1992).
[23] Zhao, Y. et al. Photoacoustic Brillouin spectroscopy of gas-filled anti-resonant hollow-core optical fibers. Optica 8, 532-538 (2021).
[24] Shelby, R. M. , Levenson, M. D. & Bayer, P. W. Guided acoustic-wave Brillouin scattering. Physical Review. B,Condensed Matter 31, 5244-5252 (1985).
[25] Bashan, G. et al. Optomechanical time-domain reflectometry. Nature Communications 9, 2991 (2018).
[26] Chow, D. M. et al. Distributed forward Brillouin sensor based on local light phase recovery. Nature Communications 9, 2990 (2018).
[27] Pang, C. et al. Opto-mechanical time-domain analysis based on coherent forward stimulated Brillouin scattering probing. Optica 7, 176-184 (2020).
[28] Brekhovskikh, L. & Goncharov, V. Elastic waves in solids. in Mechanics of Continua and Wave Dynamics (eds Brekhovskikh, L. & Goncharov, V.) (Berlin, Heidelberg: Springer, 1985), 55-74.
[29] Horiguchi, T. et al. Development of a distributed sensing technique using Brillouin scattering. Journal of Lightwave Technology 13, 1296-1302 (1995).
[30] Antman, Y. et al. Optomechanical sensing of liquids outside standard fibers using forward stimulated Brillouin scattering. Optica 3, 510-516 (2016).
[31] Diamandi, H. H. , London, Y. & Zadok, A. Opto-mechanical inter-core cross-talk in multi-core fibers. Optica 4, 289-297 (2017).
[32] Lu, C. S. et al. Circle detection by arc-support line segments. Proceedings of 2017 IEEE International Conference on Image Processing. Beijing, China: IEEE, 2017, 76-80.
[33] Hayashi, N. et al. Temperature coefficient of sideband frequency produced by polarized guided acoustic-wave Brillouin scattering in highly nonlinear fibers. Applied Physics Express 10, 092501 (2017).
[34] Tanaka, Y. & Ogusu, K. Temperature coefficient of sideband frequencies produced by depolarized guided acoustic-wave Brillouin scattering. IEEE Photonics Technology Letters 10, 1769-1771 (1998).
[35] Tu, X. B. et al. Vector brillouin optical time-domain analysis with heterodyne detection and IQ demodulation algorithm. IEEE Photonics Journal 6, 6800908 (2014).
[36] Zhou, D. W. et al. Slope-assisted BOTDA based on vector SBS and frequency-agile technique for wide-strain-range dynamic measurements. Optics Express 25, 1889-1902 (2017).
[37] Li, W. H. et al. Differential pulse-width pair BOTDA for high spatial resolution sensing. Optics Express 16, 21616-21625 (2008).
[38] Diakaridia, S. et al. Detecting cm-scale hot spot over 24-km-long single-mode fiber by using differential pulse pair BOTDA based on double-peak spectrum. Optics Express 25, 17727-17736 (2017).
Optomechanical measurement: distributed optical fiber geometry characterization
Scientists have developed a nondestructive and fully distributed optical fiber characterization technique that can provide guidance during optical fiber manufacturing. Conventional point-measurement techniques for fiber diameter, such as microscopy, cause irreversible damage to the test fibers and also suffer from the tradeoff between diameter resolution and field of view. Now, a team of researchers from China and Canada, led by Yongkang Dong from Harbin Institute of Technology in China, has developed an innovative technique that takes advantage of forward stimulated Brillouin scattering (FSBS). Opto-mechanical time domain analysis (OMTDA) is applied to record the distributed FSBS spectra of the fiber, from which the fiber diameter can be extracted. The work overcomes long-standing problems in fiber diameter measurement and facilitates monitoring of the fiber geometry along the axial direction.
Fiber cladding diameter is one of the most basic geometric parameters that must be controlled during optical fiber manufacturing. In optical fiber communication devices and systems, severe loss at splicing points can occur if the cladding is not precisely matched, and the problem worsens for multicore fibers because the alignment of off-center cores is more sensitive to differences in cladding diameter1. Anti-resonant hollow-core fibers (AR-HCFs) have great potential to replace conventional single-mode and multi-mode fibers in optical fiber communication owing to their excellent transmission performance, such as extremely low Rayleigh backscattering2, reduced attenuation3, suppressed nonlinear optical effects4, and broad guidance bandwidth4,5. The cladding diameters and inner 2D microstructures fundamentally determine the guidance performance of AR-HCFs6. Nonetheless, no technique exists for characterizing their geometric size in a distributed manner. The cladding diameter is also a vital parameter in optical fiber sensing. For example, the error of fiber-optic gyroscopes is partly due to the nonreciprocal effect induced by the non-uniformity of the fiber windings7,8, which is directly determined by the coating diameter and is in turn affected by the cladding diameter. Moreover, when the application involves, for example, cladding-guided modes, nonlinear effects under phase-matching conditions, or acoustic grating generation9,10, accurate control over the fiber diameter is essential.
Because of their spatial resolution and ubiquity, scanning electron microscopes (SEMs) and optical microscopes are widely employed in industry to image the cross-sections of fibers under test and thereby measure the fiber diameter. However, optical and electron microscopy are useful only for point measurements. Moreover, the measurements are destructive, as the fiber must be cut at the measurement locations, causing irreversible damage to the fiber. These conventional microscopy techniques also involve a trade-off between the resolution and the field of view (FOV) of the microscope11,12, which limits the resolution to approximately 100 nm for fiber diameters of approximately 125 μm. Non-destructive estimation of fiber diameter has been explored since the 1970s. These methods are based on the analysis of the interference fringe pattern produced by forward and backward scattering13-15, with a best precision of 250 nm over a fiber diameter range of 50−150 μm. To date, this so-called shadowing technique has been used in most practical commercial devices for estimating fiber diameter.
However, the diameter resolution achieved by the aforementioned methods is insufficient for demanding applications. Thus, methods such as spectral interferometry16 have been introduced to achieve a precision better than 10 nm. A more accurate measurement of the fiber cladding diameter can be realized by near-field resonance in backscattered light17. The spectrum of the resonances allows relative and absolute measurement of the cylinder diameter to an accuracy of ~1 part in $10^5$ for a fiber approximately 100 µm in diameter, which corresponds to an accuracy better than 1 nm. Another high-accuracy scheme exploits the properties of whispering gallery modes (WGMs) generated by an auxiliary microfiber, achieving a resolution of 1 part in $10^4$ and down to angstrom accuracy, as reported18,19.
To date, the schemes used for measuring fiber diameter are mostly based on point measurements. To measure the diameter across the entire length of the fiber, the location of the side illumination or the microfiber must be scanned axially, which is time-consuming and difficult to control. Unfortunately, few, if any, current methods can realize truly distributed diameter measurements along the entire fiber, leaving the fiber manufacturing process with high uncertainty. Recently, Alcusa-Sáez, et al20,21 have realized a quasi-distributed diameter measurement with a centimeter-scale spatial resolution and a limited fiber length based on a time-resolved acousto-optic technique using a flexural wave. However, the application of WGMs and acousto-optic techniques can only detect the relative diameter variations along the fiber, but are unable to obtain the absolute value of the diameter.
Forward stimulated Brillouin scattering (FSBS), which was first applied to monitor single-mode fiber diameters in 1992 (ref. 22), has recently been extended to the measurement of AR-HCF structure23. However, distributed monitoring of fiber diameters by FSBS is inherently difficult because of its forward characteristic24. Recently, two methods were reported to achieve distributed chemical sensing by FSBS, in which different backward scattering signals were inspected25,26. The reported axial spatial resolutions were tens of meters, which is inadequate for investigating fluctuations in the fiber diameter. To improve the spatial resolution, we introduced a novel optomechanical protocol called optomechanical time-domain analysis (OMTDA) for measuring the acoustic impedance of fibers and achieved an axial spatial resolution of 2 m27. The enhanced spatial resolution provided by the OMTDA system led us to believe that a fully distributed measurement of fiber diameters was possible. In this work, we accurately measured the axial non-uniformity caused by both industrial manufacturing and hydrofluoric acid etching along two unjacketed 30-meter single-mode fibers with distinct nominal cladding diameters of 125 μm and 80 μm. We achieved a diameter resolution of ~3.9 nm, and the results agreed well with SEM measurements. For the first time, this technology enables non-destructive, distributed, high-resolution measurement of optical fibers with ~nm precision.
Principle of FSBS-based diameter measurement
FSBS is a third-order optomechanical nonlinear effect in which two co-propagating laser beams interact with a transverse acoustic wave enhanced by laser-induced electrostriction.
Under ideal conditions, the cross-section of an ordinary single-mode fiber can be regarded as a perfect circle with diameter $ d $ . The precise resonant frequency of the mth-order radial acoustic mode R0m can be expressed as
$$ f_{0m} = \frac{V_L}{2\pi}\sqrt{k^2 + \frac{4y_m^2}{d^2}} \tag{1} $$
We denote $V_L = 5968\ \mathrm{m/s}$ as the longitudinal acoustic velocity, and $y_m$ is the mth-order root of the boundary equation of the circular cladding24.
The axial wave vector $k$ of the transverse acoustic wave is approximately zero. Thus, the resonant frequency $f_{0m}$ is inversely proportional to the fiber diameter $d$. To generate FSBS, the phase-matching condition requires that the axial phase velocity of the acoustic wave equal the group velocity of the light wave in the fiber. Near cutoff, the slope of the dispersion relation curve is approximately zero, implying that the phase velocity is effectively infinite, so the phase-matching condition of FSBS is satisfied automatically at the resonant frequency of a specific acoustic mode. Thus, the absolute diameter of the fiber can be determined by extracting the spectral information.
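The demodulation chain implied by Eq. 1 with $k \approx 0$, namely $d = V_L y_m/(\pi f_{0m})$, can be sketched in a few lines. This is a minimal sketch only: the shear velocity $V_T$ and the free-surface boundary equation $(1-\alpha^2)J_0(y)=\alpha^2 J_2(y)$, with $\alpha = V_T/V_L$, are textbook assumptions on our part, so the absolute diameter returned here may differ from the calibrated values reported below by a fraction of a percent.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, jv

V_L = 5968.0        # longitudinal acoustic velocity, m/s (from the text)
V_T = 3740.0        # shear acoustic velocity, m/s (assumed value for silica)
alpha2 = (V_T / V_L) ** 2

def boundary(y):
    # Free-surface condition for radial modes: (1 - a^2) J0(y) - a^2 J2(y) = 0
    return (1.0 - alpha2) * j0(y) - alpha2 * jv(2, y)

def root_m(m):
    # The m-th root lies near (m - 1/4) * pi; bracket it by half a period
    guess = (m - 0.25) * np.pi
    return brentq(boundary, guess - 0.5 * np.pi, guess + 0.5 * np.pi)

def diameter_from_resonance(f_0m_hz, m):
    # Invert Eq. 1 with k = 0: d = V_L * y_m / (pi * f_0m)
    return V_L * root_m(m) / (np.pi * f_0m_hz)

# Example: an R07 resonance near 322.6 MHz corresponds to a cladding
# diameter of roughly 125 um with these assumed constants.
print(diameter_from_resonance(322.6e6, 7) * 1e6, "um")
```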
However, the cross-section of an actual fiber is not perfectly circular. Thus, Eq. 1 is no longer a valid dispersion relation. Instead, the frequency and acoustic wave distribution $ u $ must be quantified by the more general elastic dynamic equation28:
$$ \rho \frac{\partial^2 u_i}{\partial t^2} - \left[ c_{ijkl} u_{kl} + \eta_{ijkl} \frac{\partial u_{kl}}{\partial t} \right]_j = \left[ \varepsilon_0 \varepsilon_{im} \varepsilon_{jn} p_{klmn} E_k^{(1)} E_l^{(2)*} \right]_j \tag{2} $$
where $\rho$, $c_{ijkl}$, $\eta_{ijkl}$, $\varepsilon$, and $p_{klmn}$ are the medium density, stiffness matrix, viscosity matrix, dielectric constant, and polarization tensor, respectively.
Considering that the deviation from a standard circle is generally slight, to simplify the physical model we assumed the cross section to be elliptical with a known non-circularity, defined as the ratio of the difference between the major- and minor-axis diameters to the average fiber diameter, which is used as a boundary condition for Eq. 2. After inspection, we found that the non-circularity was less than 0.006 for well-made fibers. Using finite element analysis, we calculated the displacement fields of R07 for non-circularities of 0 and 0.005, as illustrated in Fig. 1a and b. In general, the basic acoustic wave distributions present similar ring structures. However, the gradual destruction of the radial symmetry of the R07 mode distribution can be observed with increasing non-circularity. In particular, for non-zero non-circularity, the energy is concentrated more toward the minor axis of the ellipse compared to the ring structure of the acoustic wave in a perfectly circular geometry. Correspondingly, the dispersion relation reveals an overall shift toward higher frequency when the elliptical cross-section is considered, as depicted in Fig. 1c. In the following studies, we focused on two types of single-mode fibers with nominal diameters of 125 μm and 80 μm, respectively. Fig. 2 illustrates the simulated evolution of the demodulated diameter, based on the FSBS resonance frequency, for non-circular fiber cross sections. As the non-circularity increases, the geometric average diameter remains constant, while the diameter calculated from the FSBS spectrum exhibits a downward trend, indicating that the effect of the boundary shrinkage outweighs that of the expansion. Thus, a bias exists when FSBS is applied to fiber diameter measurement: at the largest non-circularity considered, the demodulated diameter is tens of nanometers smaller than the average of the major- and minor-axis diameters.
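As a numerical illustration of the non-circularity definition (numbers chosen for illustration only): major- and minor-axis diameters of 125.3 μm and 124.7 μm give a non-circularity of (125.3 − 124.7)/125.0 ≈ 0.005, the value used in Fig. 1b.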
Fig. 1 Displacement distribution of the R07 mode when the non-circularity of the fiber cross section is a 0 and b 0.005; c dispersion relation between frequency and diameter.
Fig. 2 Diameter extraction by FSBS under distinct non-circularities.
a 125-μm fiber; b 80-μm fiber.
To measure the distributed FSBS resonant frequency and calculate the diameter profile, we used an optimized OMTDA system, as illustrated in Fig. 3a. The core of the OMTDA is coherent FSBS, whose schematic diagram is shown in Fig. 3b. A long activation pulse containing beats of frequency $f_m$ around the FSBS resonant frequency $f_{0m}$ stimulates the R0m mode to a steady state. This is followed by a short probing pulse with two sidebands separated by the frequency difference $f_m$ for probing the resonant frequency. The probing pulse interacts with the pre-excited R0m mode, causing an intensity evolution of both sidebands. By recording the sidebands through the linear mapping relationship between the backward Brillouin gain of the detecting light and the pump intensity under a small-gain approximation27, the absolute value of the distributed diameter profile can be demodulated with the help of backward stimulated Brillouin scattering. The probing process coherently strengthens the activated acoustic wave and enhances the signal-to-noise ratio (SNR). The extra frequency shift $f_1$, which is loaded onto the activation pulse to differentiate it from the probing process, is a beneficial feature. Similar to a conventional BOTDA system29, the input light, produced by a narrow-linewidth laser source with a wavelength of 1550 nm and an output power of 100 mW, is divided into upper and lower branches via a 50/50 optical fiber coupler. The upper branch, containing 50% of the laser power, generates the activation and probing pulses involved in the FSBS process using an arbitrary waveform generator (AWG, Tek AWG70001A), which imposes the RF pulse signal onto the light wave through electro-optic modulator 1 (EOM1) under the carrier-suppressed condition. The RF pulse consists of two parts. The longer part (1.5 μs), which acts as the activation pulse, contains two frequencies, $f_1$ and $f_1+f_m$, and therefore adds four frequency components to the light wave. The influence of the activation pulse is completely removed in the subsequent acoustic wave probing process by adjusting the extra frequency $f_1$ to 1.5 GHz. The shorter 10 ns part of the RF pulse is a single-frequency pulse, which creates two frequency components of the light wave. Considering the phase sensitivity of FSBS, the shorter pulse is injected into the fiber under test (FUT) immediately following the longer pulse in the time sequence, ensuring zero phase difference between the two pulses at their junction and maximum energy transfer efficiency. The peak power of the pulses is amplified to 1.5 W via an erbium-doped fiber amplifier (EDFA) and injected into the FUT through port 1 of the circulator. Notably, a high-speed orthogonal polarization scrambler (PS) accompanied by polarization controller 2 (PC2) was used to smooth and stabilize the BOTDA traces, making them immune to polarization fading and fluctuation. Compared with the scheme in Ref. 27, which used a random PS, both the averaging times and the polarization noise were reduced; here, averaging was performed only 2000 times. In the lower branch, which originates from the other 50% port of the coupler, a microwave generator (MG, Agilent E8257D) applies a microwave signal with a frequency approximately equal to the Brillouin frequency shift of the FUT ($\nu_B$ ~11 GHz) to EOM2, which is also operated in the carrier-suppressed state, creating two sidebands on the light wave.
Next, optical filter 1 (OF1) is used to filter out the carrier, while the lower sideband with the Stokes component is retained. Then, ~80 μW of detecting light is injected into the FUT via an isolator (ISO) in the opposite direction. The sideband is adjusted to fall within the Brillouin amplification of the probing pulse. In contrast to the previous work, where the sweeping frequency of the microwave signal covered the Brillouin gain domain, we optimized the system by locking the frequency of the detecting light. This is appropriate because the temperature and strain in the fiber remain stable during the measurement of the fiber diameter. Specifically, the upper and lower sidebands of the probing pulse were recorded separately by locking the detecting light to distinct frequencies. When the microwave frequency was equal to $\nu_B + f_m/2$, the upper sideband was detected via the BSBS effect with the detecting light. Similarly, the detection results of FSBS on the lower sideband correspond to the microwave frequency $\nu_B - f_m/2$. Combining the optimization of frequency sweeping and polarization scrambling, we effectively reduced the measurement time by approximately 50 times, suppressed the interference caused by changes in temperature and system stability over time, and achieved a higher SNR along with a better spatial resolution of 1 m. The detecting light, containing the frequency-shift information, passes through the FUT and exits from port 3 of the circulator. Subsequently, OF2, with a narrow bandwidth of 3.7 GHz, is placed to remove additional noise such as the Rayleigh scattering signal and amplified spontaneous emission; its passband covers the components of the detecting light and rejects the reflected and Rayleigh-scattered signals of the activation and probing pulses. Finally, the output Brillouin signal is recorded by a photodetector with a bandwidth of 300 MHz and a data acquisition card with a sampling rate of 2 GS/s.
Fig. 3 a Experimental configuration for the optimized OMTDA system. EOM, electro-optic modulator; AWG, arbitrary-waveform generator; PC, polarization controller; PS, orthogonal polarization scrambler; MG, microwave generator; EDFA, erbium-doped fiber amplifier; CIR, circulator; FUT, fiber under test; OF, optical filter; ISO, optical isolator; PD, photodetector; Acq, data acquisition module. b Basic principle of the OMTDA.
FSBS spectral distribution
The cylindrical structure of the fibers can support multiple transverse acoustic modes, which can be divided into two categories: radial modes, R0m, and torsional-radial modes, TR2m, where m denotes the order of the acoustic mode. The purely radial mode, R0m, is fully symmetric and causes pure phase modulation of the light wave. It is considered the main source of FSBS in fibers with a circular cross section. Thus, we only consider the FSBS induced by the R0m modes and neglect the influence of TR2m. The strength of the FSBS involving the mth-order radial acoustic wave is determined by the overlap integral of the light wave and the acoustic wave30,31. In particular, both the optical and mechanical properties, such as the mode field diameter and the geometry of the fiber cross-section, influence the FSBS spectral line shape and frequency distribution. The intensity of each FSBS resonance is given by the accumulated energy transfer between the upper and lower frequencies of the probing pulse, $\ln[P_l(z)/P_u(z)]$. The simulated and measured FSBS distributions and intensities at the end of each FUT over the range from 100 MHz to 520 MHz are illustrated in Fig. 4. A near-perfect match between the simulation and the experimental measurements can be observed in the multiple discrete peaks that reveal FSBS resonances induced by distinct acoustic modes, apart from slight differences in relative peak intensity caused by differences in the optical and mechanical properties of the FUTs involved. The FSBS intensity distribution of the 80-μm fiber is sparser than that of the 125-μm fiber, because the larger frequency interval between the peaks corresponds to the inverse of the shorter round-trip time for sound waves propagating across the fiber. The frequency intervals are 47.97 MHz and 74.95 MHz for the fibers with diameters of 125 μm and 80 μm, respectively. Fig. 4 contains eight peaks representing the acoustic modes R03–R10 for the 125-μm fiber. However, the coverage for the 80-μm fiber shrinks to six peaks, whose acoustic mode orders range from 2 to 7 over the same frequency range. We chose one of the most prominent peaks within the distribution for further investigation. For the 125-μm fiber, the peak corresponding to R07 is the strongest, and its frequency is approximately 322 MHz. Although the peak for R06 in the 80-μm fiber is the strongest, we chose the peak corresponding to R07 so that the order of the involved acoustic mode would not deviate far from that chosen for the 125-μm fiber while maintaining sufficient intensity.
Fig. 4 FSBS frequency distribution and strength of different acoustic modes.
a Simulation of the FSBS distributions for 125-μm (blue circles and bars) and 80-μm (red circles and bars) fibers, based on acousto-optic integral. b Experimentally measured FSBS distributions and intensities for both fibers.
Diameter measurements of the 125-μm fiber
We first investigated the diameter measurements of the 125-μm fiber based on the FSBS resonance of R07, as mentioned above. For the initial FUT, the location-dependent normalized 3D map of the distributed FSBS spectra, obtained by sweeping the frequency from 321 MHz to 326 MHz with a fine step of 10 kHz, is shown in Fig. 5a. Owing to the optimization of the OMTDA system and the large acoustic impedance contrast at the air-cladding boundary, the FSBS spectrum exhibits an extremely narrow linewidth and a relatively high SNR, enabling high-resolution diameter measurements. The FUT was then etched at different locations (see Materials and Methods). The resulting 3D FSBS spectra are illustrated in Fig. 5b, with obvious frequency shifts in the three etched sections. We fitted the measured distribution spectra with a Lorentzian lineshape to obtain the central frequencies along the FUT before and after etching, as shown in Fig. 6a. The step-like frequency trace of the etched fiber coincides with the curve of the original fiber at the unetched positions. Representative spectra at the positions marked A, B, C, and D in Fig. 6a are presented in Fig. 6b, with central frequencies of 322.571, 324.397, 325.868, and 327.933 MHz, respectively. The linewidths of the spectral peaks differ slightly because non-uniform etching of the cladding surface causes the cross-section to deviate from a perfect circle, but the differences are too small to affect the accuracy of the fitting. The maximum linewidth was 0.671 MHz at B and the minimum was 0.473 MHz at A, degrading the fitting uncertainty only from 0.002 MHz to 0.003 MHz. The linewidths of the spectra at C and D were 0.537 MHz and 0.513 MHz, respectively.
Fig. 5 Distributed FSBS spectra of the a initial and b etched 125-μm fiber.
Fig. 6 a Fitted FSBS resonant frequency distribution along the 125-μm FUT. b FSBS spectrum at the axial locations 1.5, 5, 15, and 27 m along the FUT.
First, we assume that the fiber cross-section is a perfect circle. From the measured FSBS spectra, we demodulated the fiber diameter distribution within the sensing range using Eq. 1. Fig. 7 presents the absolute diameters of both the raw and etched fibers (Fig. 7a) and the relative diameter variation obtained from the difference between the two (Fig. 7b). The intrinsic diameter of the unetched fiber is approximately 125.3 μm. In addition, an intrinsic diameter fluctuation within 250 nm was observed, with a maximum diameter of 125.42 μm and a minimum of 125.18 μm; this fluctuation presumably originates from the fiber manufacturing process. At the etched regions, the diameters are smaller, as expected, and the curve of the relative diameter variation is essentially flat, reflecting uniform etching along the treated fiber lengths.
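Eq. 1 itself is not reproduced in this excerpt. As a hedged sketch of the demodulation step, the R0m resonance frequency can be treated, to first order, as inversely proportional to the cladding diameter and calibrated against a single reference point; the calibration pair below (the fitted R07 frequency at position A paired with the intrinsic 125.3-μm diameter) is an illustrative assumption, not the calibration actually used in Eq. 1.

```python
# Sketch of diameter demodulation from a fitted FSBS resonance frequency.
# Assumption: f_m is inversely proportional to the cladding diameter d for a given
# acoustic mode (f_m * d = const.), calibrated at an assumed reference point.
F_REF = 322.571e6   # Hz, fitted R07 frequency at position A (taken as unetched)
D_REF = 125.3e-6    # m, intrinsic diameter of the unetched 125-um fiber

def diameter_from_frequency(f_m_hz: float) -> float:
    """First-order demodulated cladding diameter (m) from resonance frequency (Hz)."""
    return D_REF * F_REF / f_m_hz

for label, f in (("A", 322.571e6), ("B", 324.397e6), ("C", 325.868e6), ("D", 327.933e6)):
    print(f"{label}: {diameter_from_frequency(f) * 1e6:.3f} um")
```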
Fig. 7 Results of distributed diameter measurements on the 125-μm fiber.
a Demodulated diameter distribution before and after etching and its comparison with the SEM results (A-F). b Diameter variations along the FUT. c Four representative images of the fiber cross-section at A, B, C, and E in a, captured by SEM.
In addition, we conducted a controlled experiment in which snapshots of fiber cross-sections taken by SEM were analyzed to verify the accuracy of the measurements obtained by our scheme (Fig. 7; Materials and Methods). For the 125-μm fiber samples, the SEM resolution reaches 159.1 nm. The dots and error bars in Fig. 7a represent the diameters and their uncertainties obtained by SEM using an arc-support line segment algorithm32. The SEM-based diameter determinations (red dots) are consistent with the measured FSBS curve within the uncertainty represented by the error bars. In general, the measurements conducted by the two methods agree with each other, verifying that our method enables distributed fiber diameter measurement with high accuracy.
Diameter measurements of 80-μm fiber
To demonstrate the ability of our scheme to conduct high-resolution distributed measurements on fibers with different nominal diameters, we performed an identical procedure on another fiber with a diameter of approximately 80 μm (see Materials and Methods). The normalized 3D maps of the distributed FSBS spectra of the unetched and etched fibers are shown in Fig. 8. The corresponding acoustic mode was R06, with a resonant frequency of approximately 423 MHz. Detailed information about the spectra is presented in Fig. 9, including the fitted central frequency (Fig. 9a) and the local spectra at the axial positions A, B, C, and D (Fig. 9b). The central frequencies of the four spectra were 423.393 MHz, 425.217 MHz, 426.707 MHz, and 427.566 MHz. Notably, the shortest etched section that is fully detectable is 1.5 m, confirming a spatial resolution of 1 m. Similar to the 125-μm fiber, the raw testing fiber also exhibits intrinsic diameter fluctuations, which lie within 100 nm.
Fig. 8 Distributed FSBS spectra of the a initial and b etched 80-μm fiber.
Fig. 9 a Fitted FSBS resonant frequency distribution along the 80-μm FUT. b FSBS spectrum at the FUT axial locations 5, 10, 19, and 26 m.
We further compared the demodulated diameter distribution with high-quality SEM image samples taken at each typical position (Fig. 10; Materials and Methods). The diameters derived by the arc-support line segment algorithm are plotted in Fig. 10a, and the FSBS measurements of both the raw and etched FUT are represented by solid lines. The basic consistency between the two measurement methods further confirms that our method can reliably measure fibers over different diameter ranges. Notably, the resolution of the SEM results for the 80-μm fiber improves to 106.0 nm because of the smaller FOV required, confirming the trade-off between resolution and FOV in microscopy. Furthermore, although the frequency shifts are comparable to those of the FSBS spectra of the etched 125-μm fiber, the shorter etching time produces smaller diameter changes along the FUT, as confirmed in Fig. 10b. The etch depths at B, D, and F were 352 nm, 678 nm, and 850 nm, respectively. Fig. 10c shows four examples of SEM images at the typical positions A, B, D, and F.
Fig. 10 Results of distributed diameter measurements on the 80-μm fiber.
a Demodulated diameter distribution before and after etching and its comparison with the SEM results (A-F). b Diameter variations along the FUT. c Four representative images of the fiber cross-section at A, B, D, and F in a captured using SEM.
High resolution of the FSBS measurement
The measurement of fiber diameter by coherent FSBS benefits from the high SNR, resulting in small uncertainties and high reliability.
As mentioned above, there is a trade-off between resolution and FOV when either optical or electron microscopes are used to measure the fiber diameter. Our scheme avoids this drawback by skipping the imaging process entirely and focusing on the spectral information of the FSBS instead. The ultra-narrow linewidth of the FSBS resonance leads to an extraordinarily high frequency resolution $\delta f_m$ of the FSBS spectra, which translates into a remarkable diameter resolution $\delta d = \delta f_m/(\partial f_m/\partial d)$. To indicate the diameter resolution experimentally, we measured the FSBS spectrum of the 125-μm fiber ten times with the fiber kept in a thermally insulated enclosure to eliminate temperature variation effects. The ten superimposed traces presented in Fig. 11 are highly similar. Taking the maximum of the location-wise standard deviations of the ten measurement sets as the diameter resolution of our system, we obtained 3.9 nm. Hence, our technique can detect diameter fluctuations below 10 nm, a level of precision that significantly outperforms traditional diameter estimation methods.
Fig. 11 Multiple sets of measurements of the diameter distributions for the fiber with a diameter of 125 μm.
Influence of temperature on accuracy
Finally, we investigated the effect of temperature on the accuracy of fiber diameter measurements using our scheme. During the aforementioned diameter measurements, we precisely maintained the ambient temperature at $298.9 \pm 0.1\ \mathrm{K}$. However, in practice, the temperature varies over a wide range and tends
to undermine the accuracy of the diameter measurement owing to the temperature dependence of the acoustic velocity. Similar to the case of backward Brillouin scattering, the temperature and acoustic velocity are linearly related, leading to a linear response of the FSBS resonant frequency33,34:
$$ {f_m}(T) = {f_m}({T_0}) + C_T^m(T - {T_0}) $$
where $ C_T^m = \partial f_m/\partial T $ denotes the temperature coefficient of the FSBS resonant frequency. However, when we demodulate the diameters from the FSBS frequencies, we assume that the longitudinal acoustic velocity remains constant at 5996 m s−1, which deviates from the actual velocity when the temperature changes. This deviation biases the demodulated diameters away from their actual values. Fig. 12 presents the measured effect of temperature on the FSBS spectra of the 125-μm fiber. We increased the fiber temperature from 293.15 K to 315.65 K in steps of 2.5 K using a thermostat and measured the FSBS spectrum at the fiber end at each temperature. The results are presented in Fig. 12a: a gradual blue shift was observed as the temperature increased. The Lorentz-fitted central frequency of each spectrum, plotted against temperature in Fig. 12b, exhibits the expected linear response, with a coefficient of 31.2 kHz K−1; the corresponding temperature coefficient of the demodulated diameter is 11.77 nm K−1. Fortunately, despite the distortion of the frequency distribution caused by temperature fluctuations, the effect can be compensated by cancelling out the influence of temperature on the acoustic velocity, which translates into a proportional influence on the FSBS frequency shift. The practical implementation of our scheme therefore requires the temperature to be recorded with a precision of 0.1 K.
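If the ambient temperature is logged, the correction can be folded directly into the demodulation. A minimal sketch (Python) is given below, continuing the first-order inverse-proportional demodulation sketched earlier; the reference temperature and calibration pair are assumptions for illustration, while the 31.2 kHz K−1 coefficient is the value reported above.

```python
# Sketch: refer the measured FSBS frequency back to a reference temperature
# before demodulating the diameter.
# Assumption: linear response f_m(T) = f_m(T0) + C_T * (T - T0), plus the first-order
# inverse relation between f_m and diameter used in the earlier sketch.
C_T = 31.2e3                         # Hz/K, temperature coefficient of the R07 resonance
T0 = 298.9                           # K, assumed reference temperature
F_REF, D_REF = 322.571e6, 125.3e-6   # assumed calibration point (frequency, diameter)

def diameter_temperature_corrected(f_measured_hz: float, temperature_k: float) -> float:
    """Demodulated diameter (m) after removing the temperature-induced frequency shift."""
    f_at_t0 = f_measured_hz - C_T * (temperature_k - T0)
    return D_REF * F_REF / f_at_t0

# An uncorrected 10 K rise would bias the diameter by roughly 12 nm/K * 10 K ~ 0.12 um;
# with the correction the demodulated value returns to the reference diameter:
print(diameter_temperature_corrected(322.571e6 + C_T * 10, T0 + 10) * 1e6)  # ~125.300
```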
Fig. 12 a FSBS resonance spectra at different temperatures. b Temperature coefficient of the FSBS resonant frequency.
In this work, fully distributed fiber diameter measurement is realized for the first time, to the best of our knowledge. We employed an optimized OMTDA system based on coherent FSBS to record the distributed FSBS spectra with high speed and enhanced SNR. Two fibers with nominal diameters of 125 μm and 80 μm were chosen as the testing fibers. Before etching, the two fibers exhibited intrinsic diameter fluctuations of approximately 250 nm and 100 nm, respectively. A comparison of the diameters measured by our scheme and by SEM after etching confirms the feasibility of our scheme. Two main drawbacks of conventional microscopy in determining fiber diameter are overcome by our method: the irreversible destruction caused by sample preparation and the trade-off between resolution and FOV. Our method, which relies on the spectral information of the FSBS resonance, achieves a resolution of a few nanometers for both the 125-μm and 80-μm FUTs. These merits will enable the fiber manufacturing industry to control the geometry of fibers during production with higher precision and accuracy.
Although the testing fibers used in the experiment were approximately 30 m long, the measurement distance could readily be extended to a few kilometers; the range is limited mainly by nonlinear effects and the reduction in FSBS efficiency with propagation distance. Moreover, the coverage of detectable objects can be expanded to special fibers such as multi-core fibers, AR-HCFs, and other custom-made waveguide structures23,31. This may fill the technical gap in the distributed measurement of the intrinsic parameters of optical fibers for optical fiber communication and sensing. In field applications where the coating must be retained, the linewidth of the FSBS spectrum tends to broaden to approximately 5 MHz, which limits the diameter resolution to a few tens of nanometers. In addition, several variant Brillouin optical time-domain analysis (BOTDA) configurations, such as vector BOTDA35,36 and differential pump pulse BOTDA37,38, are expected to be applied to OMTDA to further improve its performance, for example, by providing a much better spatial resolution.
Etching method
We applied a 20% hydrofluoric acid (HF) solution to etch specific segments of the fiber. The etching proceeds according to the reaction:
$$ \mathrm{SiO_2} + 4\,\mathrm{HF} \rightarrow \mathrm{SiF_4} + 2\,\mathrm{H_2O} \qquad (4) $$
Because the etching rate of HF on the fiber is constant under fixed conditions (temperature, concentration, and fiber material), stepped changes in the cladding diameter can be produced by varying the immersion time in HF.
For the diameter measurements of the 125-μm fiber, we chose a 35.5-m FUT, whose coating had been stripped in advance, as the sensing fiber. It was divided into seven segments, marked as segments 1–7 from start to end. Segments 2, 4, and 6 were immersed in the prepared HF solution for 1, 2, and 3 min, respectively, to create step changes in the fiber diameter for intuitive observation.
On the other hand, a 27-meter FUT was chosen for diameter measurements of the 80-μm fiber. It was subjected to the same HF etching process as the other fiber, with the etching time halved at each location.
SEM measurements
The scanning electron microscope used in the experiment was a Quanta 200FEG. For the fiber with a diameter of 125 μm, we took six segments from typical positions of the etched fiber, at axial positions 4.5, 10, 15, 22, 28, and 33 m, and imaged them with the SEM at 2000× magnification and an acceleration voltage of 20 kV.
For the fiber whose diameter was 80 μm, owing to the comparatively smaller FOV required, we used a larger SEM magnification (3000×), which ideally improves the resolution in the same proportion. Similarly, we took six segments from typical positions of the etched fiber, corresponding to the axial positions 7, 10, 14, 20, 22.5, and 25.5 m.
The precision of the image acquired by SEM can be expressed by:
$$ \Delta r_{\min} = \max\left(\Delta r_{\mathrm{SEM}},\ \frac{\mathrm{FOV}_x}{N_{\mathrm{pixel}}(x)},\ \frac{\mathrm{FOV}_y}{N_{\mathrm{pixel}}(y)}\right) $$
Here, $\Delta r_{\mathrm{SEM}}$ is the intrinsic resolution of the SEM, which is typically smaller (~5 nm) than the last two terms when the tested objects are optical fibers with diameters of approximately 100 μm. FOVx (FOVy) denotes the spatial coverage of the image in the x (y) direction, and Npixel(x) (Npixel(y)) is the number of pixels contained in the image in the x (y) direction. Thus, the resolution is determined by the number of pixels and the FOV.
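For reference, the pixel-limited terms of this expression can be evaluated numerically. In the sketch below (Python), the pixel count and the 2000× field of view are assumptions chosen only to reproduce the order of magnitude of the reported 159.1 nm and 106.0 nm values; the actual SEM image parameters are not stated in this excerpt.

```python
# Sketch: pixel-limited SEM resolution, Delta_r = FOV / N_pixel, with the FOV scaling
# inversely with magnification. Delta_r_SEM (~5 nm) is neglected as the smaller term.
N_PIXEL = 1024          # assumed number of pixels along the image width
FOV_2000X = 163.0e-6    # m, assumed field of view at 2000x magnification

def pixel_limited_resolution(magnification: float) -> float:
    fov = FOV_2000X * 2000.0 / magnification
    return fov / N_PIXEL

print(f"{pixel_limited_resolution(2000) * 1e9:.1f} nm")  # ~159 nm (125-um fiber images)
print(f"{pixel_limited_resolution(3000) * 1e9:.1f} nm")  # ~106 nm (80-um fiber images)
```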
This work was supported by the National Key Scientific Instrument and Equipment Development Project of China (2017YFF0108700), National Natural Science Foundation of China (62005067), National Postdoctoral Program for Innovative Talents (BX20200104), China Postdoctoral Science Foundation (2020M681088), and the Heilongjiang Postdoctoral Fund to pursue scientific research (LBH-Z20067).
YD proposed this idea and initiated the project. ZH, DB, and YL performed the mathematical analysis and experiments. YD drafted the manuscript. All authors participated in discussions during the drafting of the manuscript.
Data availability. The data underlying the results presented in this paper are not publicly available at this time, but may be obtained from the authors upon reasonable request.
The authors declare that they have no conflict of interest.
March 2011, 10(2): 459-478. doi: 10.3934/cpaa.2011.10.459
Free boundary problem for compressible flows with density--dependent viscosity coefficients
Ping Chen 1, Daoyuan Fang 1 and Ting Zhang 1
Department of Mathematics, Zhejiang University, Hangzhou 310027, China
Received May 2010 Revised September 2010 Published December 2010
In this paper, we consider the free boundary problem of the spherically symmetric compressible isentropic Navier--Stokes equations in $R^n (n \geq 1)$, with density--dependent viscosity coefficients. Precisely, the viscosity coefficients $\mu$ and $\lambda$ are assumed to be proportional to $\rho^\theta$, $0 < \theta < 1$, where $\rho$ is the density. We obtain the global existence, uniqueness and continuous dependence on initial data of a weak solution, with a Lebesgue initial velocity $u_0\in L^{4 m}$, $4m>n$ and $\theta<\frac{4m-2}{4m+n}$. We weaken the regularity requirement of the initial velocity, and improve some known results of the one-dimensional system.
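For orientation, the system referred to in the abstract, namely the compressible isentropic Navier--Stokes equations with viscosity coefficients proportional to $\rho^\theta$, takes roughly the following form before reduction to spherical symmetry and free-boundary coordinates (normalisation constants and the precise constitutive and boundary conditions are as specified in the paper):
$$ \begin{aligned} &\rho_t + \mathrm{div}(\rho u) = 0, \\ &(\rho u)_t + \mathrm{div}(\rho u\otimes u) + \nabla P(\rho) = \mathrm{div}\big(\mu(\rho)\,D(u)\big) + \nabla\big(\lambda(\rho)\,\mathrm{div}\,u\big), \\ &P(\rho) = A\rho^{\gamma}, \qquad \mu(\rho)\propto\rho^{\theta}, \qquad \lambda(\rho)\propto\rho^{\theta}, \qquad 0<\theta<1, \end{aligned} $$
where $D(u) = \tfrac{1}{2}(\nabla u + \nabla u^{T})$ denotes the deformation tensor.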
Keywords: Compressible Navier-Stokes equations, density-dependent viscosity coefficients.
Mathematics Subject Classification: Primary: 76D05, 35R35; Secondary: 35Q35, 76N1.
Citation: Ping Chen, Daoyuan Fang, Ting Zhang. Free boundary problem for compressible flows with density--dependent viscosity coefficients. Communications on Pure & Applied Analysis, 2011, 10 (2) : 459-478. doi: 10.3934/cpaa.2011.10.459
Role of renal function in risk assessment of target non-attainment after standard dosing of meropenem in critically ill patients: a prospective observational study
Lisa Ehmann1,2,
Michael Zoller3,
Iris K. Minichmayr1,2,
Christina Scharf3,
Barbara Maier4,
Maximilian V. Schmitt5,
Niklas Hartung1,6,
Wilhelm Huisinga6,
Michael Vogeser4,
Lorenz Frey3,
Johannes Zander4 &
Charlotte Kloft1
The Editorial to this article has been published in Critical Care 2017 21:283
Severe bacterial infections remain a major challenge in intensive care units because of their high prevalence and mortality. Adequate antibiotic exposure has been associated with clinical success in critically ill patients. The objective of this study was to investigate the target attainment of standard meropenem dosing in a heterogeneous critically ill population, to quantify the impact of the full renal function spectrum on meropenem exposure and target attainment, and ultimately to translate the findings into a tool for practical application.
A prospective observational single-centre study was performed with critically ill patients with severe infections receiving standard dosing of meropenem. Serial blood samples were drawn over 4 study days to determine meropenem serum concentrations. Renal function was assessed by creatinine clearance according to the Cockcroft and Gault equation (CLCRCG). Variability in meropenem serum concentrations was quantified at the middle and end of each monitored dosing interval. The attainment of two pharmacokinetic/pharmacodynamic targets (100%T>MIC, 50%T>4×MIC) was evaluated for minimum inhibitory concentration (MIC) values of 2 mg/L and 8 mg/L and standard meropenem dosing (1000 mg, 30-minute infusion, every 8 h). Furthermore, we assessed the impact of CLCRCG on meropenem concentrations and target attainment and developed a tool for risk assessment of target non-attainment.
Large inter- and intra-patient variability in meropenem concentrations was observed in the critically ill population (n = 48). Attainment of the target 100%T>MIC was merely 48.4% and 20.6%, given MIC values of 2 mg/L and 8 mg/L, respectively, and similar for the target 50%T>4×MIC. A hyperbolic relationship between CLCRCG (25–255 ml/minute) and meropenem serum concentrations at the end of the dosing interval (C8h) was derived. For infections with pathogens of MIC 2 mg/L, mild renal impairment up to augmented renal function was identified as a risk factor for target non-attainment (for MIC 8 mg/L, additionally, moderate renal impairment).
The investigated standard meropenem dosing regimen appeared to result in insufficient meropenem exposure in a considerable fraction of critically ill patients. An easy- and free-to-use tool (the MeroRisk Calculator) for assessing the risk of target non-attainment for a given renal function and MIC value was developed.
Clinicaltrials.gov, NCT01793012. Registered on 24 January 2013.
Severe infections remain a major issue in the intensive care unit (ICU) because of their high prevalence and high mortality rates among critically ill patients [1]. Hence, rational antibiotic therapy is especially important in this vulnerable population. Apart from an appropriate activity spectrum and early initiation of antibiotic therapy, a dosing regimen leading to adequate therapeutic antibiotic concentrations and exposure is crucial [2,3,4,5]. Adequate antibiotic exposure not only has been found to improve clinical success but also has been suggested to reduce resistance development [6, 7]. At the same time, pathophysiological changes in critically ill patients, including organ dysfunction or altered fluid balance, might substantially influence antibiotic concentrations and increase the risk of inadequate antibiotic exposure. As a second challenge, infections in these patients are often caused by pathogens with lower susceptibility (i.e., higher minimum inhibitory concentration [MIC]) than in other clinical settings [8,9,10,11].
Meropenem is a broad-spectrum carbapenem β-lactam antibiotic frequently used to treat severe bacterial infections in critically ill patients, such as those with severe pneumonia, complicated intra-abdominal infections, complicated skin and soft tissue infections, or sepsis [12]. For these indications, the approved standard dosing regimens for adults (intact renal function [RF]) include 500 mg or 1000 mg administered as short-term infusions every 8 h; for other indications, doses up to 2000 mg are recommended [12]. Meropenem is a hydrophilic molecule with very low plasma protein binding of approximately 2% [13]. It is excreted primarily via the kidney, predominantly by glomerular filtration but also by active tubular secretion [14]. Meropenem has been shown to be readily dialysable and effectively removed by haemodialysis [15,16,17]. As a β-lactam antibiotic, meropenem shows time-dependent activity; that is, its antibacterial activity is linked to the percentage of time that meropenem concentrations exceed the MIC value of a pathogen (%T>MIC) [18]. The attainment of the pharmacokinetic/pharmacodynamic (PK/PD) index %T>MIC has been associated with clinical success in patients treated with meropenem [19,20,21]. For example, Ariano et al. demonstrated that the probability of clinical response was 80% when %T>MIC was 76–100 in febrile neutropenic patients with bacteraemia but only 36% when %T>MIC was between 0 and 50 [20].
Previous studies have revealed large inter-patient variability in meropenem concentrations after standard dosing in critically ill patients [22,23,24], which resulted in inadequate meropenem exposure in a relevant fraction of patients [23, 25]. However, in most of these studies, only limited numbers of patients and/or rather homogeneous patient sub-groups have been investigated. Hence, the identified variability in meropenem exposure might not have adequately reflected a typically heterogeneous critically ill population. In previous analyses, RF has been shown to be a major cause of variability in meropenem exposure [23, 24, 26,27,28,29,30,31] and, as a consequence, to be influential on the attainment of specific target concentrations [25, 32, 33]. However, the impact of kidney function on target attainment has been assessed primarily for distinct RF classes but not yet in a coherent quantitative framework for a population covering the full spectrum of RF ranging from dialysis/severe renal impairment (RI) to augmented renal clearance.
The aims of this study were (1) to quantify inter- and intra-individual variability of meropenem serum concentrations in a heterogeneous critically ill population covering the full spectrum of RF classes after meropenem standard dosing, (2) to investigate the attainment of two different PK/PD targets, (3) to assess the impact of RF on meropenem exposure and consequently target attainment and (4) ultimately to develop an easy-to-use risk assessment tool allowing identification and quantification of the risk of target non-attainment for a particular patient on the basis of the patient's RF.
This prospective observational study was conducted at three ICUs within the Department of Anaesthesiology, University Hospital, LMU Munich, Germany. The study protocol (ClinicalTrials.gov identifier NCT01793012) was approved by the Institutional Review Board of the Medical Faculty of the LMU Munich, Germany. Criteria for inclusion comprised the presence of severe infection (confirmed or suspected by clinical assessment), age ≥ 18 years and therapy with meropenem (including possible de-escalation; clinical assessment independent from the study). Patients were excluded in case of a planned hospitalisation < 4 days or meropenem administration > 48 h prior to study start. Written informed consent to participate was obtained from all patients or their legal representatives. All patients received standard doses of meropenem as 30-minute infusions three times per day (see Additional file 1: Study design, Figure S1a). Multiple arterial blood samples were collected for the quantification of meropenem concentrations over a study period of 4 days. Intensive sample collection was performed during all three dosing intervals of study day 1 and during the first dosing interval of study days 2–4. An additional single minimum meropenem concentration (Cmin) sample before the next dose was collected for the third dosing interval of days 2 and 3. The planned sampling time points per intensively monitored dosing interval were as follows: 15 minutes, 30 minutes, 1.5 h, 4 h, and 8 h (directly before next dose; Cmin) after the start of infusion (see Additional file 1: Study design, Figure S1b). The exact sampling time points were recorded by the medical staff. In addition, patient-specific data such as diagnosis, demographics, disease scores and laboratory data (e.g., serum creatinine) were recorded during the study period. Creatinine clearance was estimated according to the Cockcroft and Gault equation (CLCRCG [34]) on the basis of daily measured serum creatinine (Jaffe assay):
$$ \mathrm{CLC}{\mathrm{R}}_{\mathrm{CG}}\left[\frac{\mathrm{ml}}{\min}\right]=\frac{\left(140-\mathrm{age}\ \left[\mathrm{years}\right]\right)\cdot \mathrm{body}\ \mathrm{weight}\left[\mathrm{kg}\right]}{72\cdot \mathrm{serum}\ \mathrm{creatinine}\left[\frac{\mathrm{mg}}{\mathrm{dl}}\right]}\cdot \left(0.85\ \mathrm{if}\ \mathrm{female}\right) $$
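For convenience, the same estimate is straightforward to compute programmatically; a minimal sketch (Python; the function name is illustrative):

```python
def clcr_cockcroft_gault(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float, female: bool) -> float:
    """Creatinine clearance (ml/minute) estimated by the Cockcroft and Gault equation."""
    clcr = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return clcr * 0.85 if female else clcr

# Example: the patient shown later in Fig. 4b of the MeroRisk Calculator
# (female, 60 years, 65 kg, serum creatinine 0.6 mg/dl)
print(round(clcr_cockcroft_gault(60, 65, 0.6, female=True), 1))  # ~102.3 ml/minute
```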
In addition, pathogens identified in specimens collected from the patients (between 3 days before and 3 days after the study period) were recorded.
Bioanalytical method for meropenem concentration
Blood samples were immediately sent to the Institute of Laboratory Medicine, University Hospital, LMU Munich and centrifuged. Serum samples were stored at −80 °C until total meropenem serum concentration was quantified by using a validated liquid chromatography-tandem mass spectrometry method described previously [35]. Briefly, sixfold deuterated meropenem was used as an internal standard, and validation revealed good analytical performance, with an inaccuracy of less than or equal to ± 4% relative error and imprecision ≤ 6% coefficient of variation (CV).
Variability of meropenem concentrations
To quantify inter- and intra-individual variability of meropenem serum concentrations, measured Cmin values were first analysed without regard to the actual heterogeneous sampling time points or administered doses. Inter-individual variability was evaluated by a summary statistical analysis of all available Cmin values; for description of intra-individual variability, the ratios of the maximum and minimum Cmin values \( \left(\frac{{\mathrm{C}}_{\min \_\max }}{{\mathrm{C}}_{\min \_\min }}\right) \) of all dosing intervals monitored within a patient were statistically summarised. Summary statistics included median, range, 95% CI and %CV.
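These summary metrics are simple to reproduce; a minimal sketch (Python with NumPy; the input values are placeholders, not study data):

```python
import numpy as np

def variability_metrics(cmin_values):
    """Median, range, %CV and the max/min ratio used to describe variability
    of Cmin values (the 95% CI is omitted in this sketch)."""
    c = np.asarray(cmin_values, dtype=float)
    return {
        "median": float(np.median(c)),
        "range": (float(c.min()), float(c.max())),
        "%CV": float(100.0 * c.std(ddof=1) / c.mean()),
        "max/min ratio": float(c.max() / c.min()),
    }

# Placeholder Cmin values (mg/L) of one hypothetical patient:
print(variability_metrics([2.1, 3.5, 1.8, 4.0, 2.9]))
```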
In order to exclude a potential impact of dose- and sampling time point-related variability on the meropenem minimum concentrations, dose-normalised meropenem concentrations (normalised to a dose of 1000 mg, assuming linear PK) at two specific time points (4 h [C4h] and 8 h [C8h] after infusion start) were calculated, and the variability was evaluated as described above. C4h and C8h values were determined by linear regression (if more than two data points) or linear interpolation (if two data points) of the logarithmised data in the declining phase of each concentration-time profile. In case of a coefficient of determination (R²) < 0.9, indicating two distinct phases in the declining part of the concentration-time profile, a separate linear interpolation/regression was performed for each of these phases.
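The interpolation described above amounts to a log-linear fit over the declining phase of each profile. A minimal sketch (Python with NumPy), assuming at least two samples in the declining phase and ignoring the two-phase special case; function and variable names are illustrative:

```python
import numpy as np

def dose_normalised_conc(t_target_h, times_h, concentrations_mg_l,
                         dose_mg, ref_dose_mg=1000.0):
    """Dose-normalised concentration at t_target_h (e.g., 4 h or 8 h), obtained from a
    log-linear fit/interpolation of the declining phase of one dosing interval
    (assumes linear pharmacokinetics for the dose normalisation)."""
    log_c = np.log(np.asarray(concentrations_mg_l, dtype=float))
    slope, intercept = np.polyfit(np.asarray(times_h, dtype=float), log_c, 1)
    return float(np.exp(intercept + slope * t_target_h) * ref_dose_mg / dose_mg)

# Example with made-up samples (time in h, concentration in mg/L) after a 1000-mg dose:
print(dose_normalised_conc(8.0, [1.5, 4.0], [20.0, 8.0], dose_mg=1000))
```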
Pharmacokinetic/pharmacodynamic target attainment
To evaluate the achievement of therapeutically adequate meropenem serum concentrations, PK/PD target attainment was assessed for a broad MIC range from 0.25 mg/L to 8 mg/L, with a special focus on MIC 2 mg/L and MIC 8 mg/L. These two values are common European Committee on Antimicrobial Susceptibility Testing (EUCAST) susceptible/intermediate (S/I) and intermediate/resistant (I/R) MIC breakpoints for relevant bacteria, such as Enterobacteriaceae, Pseudomonas spp. or Acinetobacter spp. [36]. The target 100%T>MIC (i.e., meropenem serum concentrations exceeding one times the MIC for the entire dosing interval) was selected because it has previously been shown to improve clinical cure and bacteriological eradication in patients with serious bacterial infections treated with β-lactam antibiotics [20, 37]. In accordance with other studies, 50%T>4×MIC (i.e., meropenem serum concentrations exceeding four times the MIC for half of the dosing interval) was chosen as a second target [38,39,40]. Owing to the negligible protein binding of meropenem (2%), total meropenem serum concentrations were used for all analyses [13, 41].
To evaluate the attainment of the targets 100%T>MIC and 50%T>4×MIC, the predicted C4h and C8h values of each dosing interval were evaluated regarding the achievement of the above-mentioned thresholds (one or four times the MIC breakpoints) for all patients not undergoing continuous renal replacement therapy (CRRT). Additionally, target attainment was evaluated for a dose of 2000 mg meropenem based on the extrapolated C4h and C8h values (assuming linear PK). Dosing was considered adequate if the target was attained in ≥ 90% of the monitored dosing intervals [41].
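In code, the per-interval evaluation reduces to two threshold checks; a sketch (Python; variable names are illustrative):

```python
def target_attained(c4h_mg_l: float, c8h_mg_l: float, mic_mg_l: float) -> dict:
    """Per-interval evaluation of both PK/PD targets for q8h dosing.
    100%T>MIC: the concentration must still exceed 1x MIC at the end of the
    interval (C8h, the lowest point of the declining profile).
    50%T>4xMIC: the mid-interval concentration (C4h) must exceed 4x MIC."""
    return {
        "100%T>MIC": c8h_mg_l > mic_mg_l,
        "50%T>4xMIC": c4h_mg_l > 4.0 * mic_mg_l,
    }

# Example: one interval with C4h = 12 mg/L and C8h = 3.1 mg/L against MIC 2 mg/L
print(target_attained(12.0, 3.1, 2.0))  # both targets attained
```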
Impact of renal function on meropenem exposure and target attainment
To investigate the impact of RF on meropenem exposure, CLCRCG was related to C4h and C8h values (at the patient level using the median individual CLCRCG of a patient, and at the sample level using single CLCRCG values). For non-CRRT patients, the relationship between CLCRCG and C8h values was quantified by weighted linear least-squares regression in double logarithmic scale \( \left( C_{8\mathrm{h}} = \alpha \cdot \frac{1}{\left(\mathrm{CLCR_{CG}}\right)^{\beta}} \right) \). For further details, see Additional file 2: Regression model for risk calculation.
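The regression itself can be reproduced with a weighted least-squares fit on log-transformed data; a minimal sketch (Python with NumPy; the data and weights below are placeholders, not the study data):

```python
import numpy as np

def fit_clcr_c8h(clcr_ml_min, c8h_mg_l, weights=None):
    """Fit C8h = alpha / CLCR**beta by (weighted) linear least squares in log-log space;
    returns (alpha, beta). np.polyfit applies the weights to the residuals before squaring."""
    x = np.log(np.asarray(clcr_ml_min, dtype=float))
    y = np.log(np.asarray(c8h_mg_l, dtype=float))
    slope, intercept = np.polyfit(x, y, 1, w=weights)   # slope = -beta, intercept = ln(alpha)
    return float(np.exp(intercept)), float(-slope)

# Placeholder data, for illustration of the call only:
alpha, beta = fit_clcr_c8h([30, 60, 90, 150], [20.0, 6.0, 2.5, 0.8])
print(alpha, beta)
```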
Target attainment at sample level was stratified by the following classes of RF or RI on the basis of CLCRCG [42,43,44]: severe RI 15–29 ml/minute, moderate RI 30–59 ml/minute, mild RI 60–89 ml/minute, normal RF 90–129 ml/minute and augmented RF ≥ 130 ml/minute. All analyses described here and previously were performed using the software R, version 3.3.2 (R Foundation for Statistical Computing, Vienna, Austria).
Risk assessment tool
A tool for the risk assessment of target non-attainment based on the RF was developed using Excel 2016 software with Visual Basic for Applications (Microsoft Corporation, Redmond, WA, USA). In the Excel tool, the quantified CLCRCG-C8h relationship for non-CRRT patients, the prediction interval around this relationship and the computation of the risk of target (100%T>MIC) non-attainment for given CLCRCG and MIC values were implemented. For further details, see Additional file 2: Regression model for risk calculation.
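Conceptually, the risk returned by the tool is the probability that C8h falls below the MIC, evaluated from the quantified CLCRCG-C8h relationship and its prediction interval. A simplified sketch (Python) is shown below; the power-law parameters are the values reported in the Results for non-CRRT patients, whereas the log-scale predictive standard deviation and the log-normal error model are assumptions for illustration and may differ from what is embedded in the MeroRisk Calculator.

```python
import math

# Sketch: probability that the end-of-interval concentration C8h falls below the MIC,
# assuming log-normally distributed prediction errors around the fitted power law
# C8h = alpha / CLCR**beta. SIGMA_LOG is an assumed predictive SD on the log scale.
ALPHA, BETA = 40363.0, 2.27   # fitted values reported in the Results (non-CRRT patients)
SIGMA_LOG = 0.5               # assumption, for illustration only

def risk_of_non_attainment(clcr_ml_min: float, mic_mg_l: float) -> float:
    mean_log_c8h = math.log(ALPHA) - BETA * math.log(clcr_ml_min)
    z = (math.log(mic_mg_l) - mean_log_c8h) / SIGMA_LOG
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

# Example: CLCRCG 102 ml/minute (the patient from Fig. 4b) and MIC 2 mg/L
print(round(risk_of_non_attainment(102.0, 2.0), 2))
```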
Patient characteristics
A total of 48 patients (27 male, 21 female) were included in the study (see Table 1). Of these patients, 83% suffered from sepsis, which was most frequently caused by pneumonia or peritonitis (75% or 20% of the sepsis patients, respectively). Pathogens detected in the patients comprised Enterobacteriaceae, non-fermenters (e.g., Pseudomonas spp.), Staphylococcus spp., Streptococcus spp., Enterococcus spp., Bacillus spp., Clostridium spp., Bacteroides spp., Mycoplasma spp., Candida spp. and Aspergillus spp. The patient group covered broad ranges of age (24–84 years), body mass index (16–49 kg/m2) and severity of illness (Acute Physiology and Chronic Health Evaluation II [APACHE II] score 11–42). RF determined by CLCRCG was highly variable, ranging from severely impaired to augmented RF (first study day 24.8–191 ml/minute). Seven patients received CRRT, and six patients underwent extracorporeal membrane oxygenation (ECMO). Twenty-eight patients were post-lung or post-liver transplant recipients.
Table 1 Patient characteristics on study day 1
Meropenem dosing and sampling
During the study period, patients were treated with 1000 mg (n patients = 47) or 2000 mg (n patients = 1) of meropenem administered as 30-minute infusions approximately every 8 h (median 8 h, 95% CI 6.94–9.19 h). A total of 1376 blood samples (median per patient 31) were taken during 349 dosing intervals (median per patient 8, range per patient 4–8). Of the measurements, 23.5% (n = 324) were Cmin samples, which were collected 7.92 h (median) after infusion start (95% CI 6.85–9.08 h). Very few serum concentrations (0.36% of data) revealed an implausible increase in the terminal part of the concentration-time profiles and were therefore excluded from the data analyses (red data points in Fig. 1).
Individual meropenem serum concentration-time profiles. Number above individual plot is patient identifier. Circles represent measured meropenem concentrations. Red circles represent meropenem concentrations excluded from analyses (0.36%; see text). Lines represent connection of consecutively sampled meropenem concentrations; that is, gaps represent non-monitored dosing intervals or missing planned meropenem concentration measurements
Large inter-individual variability was observed for both the observed Cmin values (see Fig. 2) and the calculated concentrations C8h and C4h (see Table 2). Whereas inter-individual variability in Cmin and C8h was particularly large, varying in both concentrations by up to a factor of approximately 1000 between patients, C4h values were slightly less variable (Cmin range 0.03–30.0 mg/L, 104 CV%; C8h range 0.0426–30.0 mg/L, 110 CV%; C4h range 0.933–43.3 mg/L, 69.9 CV%). Apart from inter-individual variability, large intra-individual variability was identified (see Table 2). Particularly Cmin (see Fig. 1) and C8h values showed large variability, with concentrations varying in median by twofold to more than tenfold within a patient (range of ratios \( \frac{{\mathrm{C}}_{\min \_\max }}{{\mathrm{C}}_{\min \_\min }} \): 1.3–10.9, range of ratios \( \frac{{\mathrm{C}}_{8\mathrm{h}\_\max }}{{\mathrm{C}}_{8\mathrm{h}\_\min }} \): 1.22–11.4). Intra-individual variability in C4h values was slightly lower, but the C4h values within a patient still varied up to more than fivefold (range of ratios \( \frac{{\mathrm{C}}_{4\mathrm{h}\_\max }}{{\mathrm{C}}_{4\mathrm{h}\_\min }} \): 1.10–5.47).
Meropenem serum concentrations vs. time after last dose (n = 48 patients). Dark blue/red circles represent concentrations of patients treated with 1000 mg/2000 mg meropenem. Light blue/orange circles represent measured meropenem serum concentration values at the end of the actual dosing interval among patients treated with 1000 mg/2000 mg meropenem
Table 2 Inter- and intra-individual variability of meropenem concentrations at specific time points
For infections in non-CRRT patients with pathogens of MIC 2 mg/L, both investigated targets were attained in approximately half of the dosing intervals monitored, with slightly higher attainment for the 50%T>4×MIC target (56%) than for the 100%T>MIC target (48%; see Table 3). When extrapolating the data to a dose of 2000 mg, target attainment was substantially higher, with 91% and 78% for the targets 50%T>4×MIC and 100%T>MIC, respectively (see Additional file 3: PK/PD target attainment, Table S2).
Table 3 Pharmacokinetic/pharmacodynamic target attainment for all patients not receiving continuous renal replacement therapy and stratified by renal function
Given an MIC of 8 mg/L, the target 100%T>MIC was attained only in about one-fifth of the monitored meropenem dosing intervals; attainment of the target 50%T>4×MIC was very low (7%; see Table 3). When extrapolating to a dose of 2000 mg, the attainment of 100%T>MIC was approximately twice as high as for a dose of 1000 mg (38.1% vs. 20.6%); the attainment of 50%T>4×MIC was even about four times as high (27.4% vs. 7.17%) (see Additional file 3: PK/PD target attainment, Table S2). For doses of 1000 mg and 2000 mg, target attainment for the full MIC range from 0.25 mg/L to 8 mg/L is summarised in Additional file 3: PK/PD target attainment.
In addition to the large inter- and intra-patient variability in meropenem exposure (i.e., C4h values [see Fig. 3a, y-axis] and C8h values [see Fig. 3b, y-axis]), large variability was also observed for RF, with representatives in all RF classes from severe RI to augmented RF (see Fig. 3, x-axes). In addition to the 41 non-CRRT patients, 7 CRRT patients were investigated. Whereas RF was stable (i.e., constant RF class) within the monitored study period for half of the patients (n = 24), RF of the other half changed between two (n patients = 21) or even three (n patients = 3) classes of RF. Already at the patient level, a strong dependency between median individual CLCRCG and C4h (see Fig. 3a1) and C8h (see Fig. 3b1) of the patients was found, interestingly also for the CRRT patients (see Fig. 3a2, b2). Also of note, in patients undergoing ECMO, meropenem concentrations were comparable with non-ECMO patients regarding their median individual CLCRCG. Moreover, within most of the individuals with changing RF, the same tendency of higher meropenem exposure for decreased RF was observed; for example, patient 34 had worsening of RF and at the same time increasing meropenem exposure across the 4 study days (see grey tick mark label in Fig. 3a1, b1). At the sample level (i.e., when relating all single CLCRCG values as a continuous variable to meropenem exposure [C8h]), a distinct relation was found, which was described by the hyperbolic function \( {\mathrm{C}}_{8\mathrm{h}}=40363\cdot \frac{1}{{\left(\mathrm{CLC}{\mathrm{R}}_{\mathrm{C}\mathrm{G}}\right)}^{2.27}} \) (see Fig. 3c; without C8h values of patient 36). Four C8h values of one patient (patient 36) were excluded from the regression because they were considerably larger than those of the remaining patients with similar RF; when including the four values of this patient, the predicted C8h values in the investigated CLCRCG range changed only negligibly for all metrics (quantified CLCRCG-meropenem exposure relationship, 95% CI, 95% prediction interval) (see Additional file 2: Regression model for risk calculation, Figure S2).
Relationship between meropenem serum concentration and creatinine clearance. Meropenem serum concentrations 4 h (C4h) (a1, a2) and 8 h (C8h) (b1, b2, c) after start of infusion in non-CRRT (a1, b1, c) and CRRT (a2, b2) patients vs. median individual CLCRCG (patient level; a, b) or vs. all single CLCRCG (sample level; c) of the patients. Tick mark of x-axis (a, b) represents median individual CLCRCG at time of determined C4h or C8h value. Bold tick mark labels (a, b) represent ECMO patients. Grey tick mark labels (a1, b1) represent patient example mentioned in "Impact of renal function on meropenem exposure and target attainment" section of main text. Coloured symbols (a-c) represent renal function class of a patient at time of determined C4h or C8h value. Shaped symbols (a, b) represent study day on which C4h or C8h value was determined. Dashed vertical lines/horizontal arrows (a-c) represent separation of renal function classes. Dashed horizontal lines (a-c) represent EUCAST MIC breakpoints for Enterobacteriaceae, Pseudomonas spp. or Acinetobacter spp. (S/I 2 mg/L, I/R 8 mg/L [36]). Data points labelled with 36 (c) represent four C8h values of patient 36. Black curve (c) represents quantified hyperbolic relationship between CLCRCG and C8h values, excluding data of patient 36. Abbreviations: CLCR CG Creatinine clearance estimated according to Cockcroft and Gault [34]; CRRT Continuous renal replacement therapy; C 4h Meropenem serum concentration at 4 h after infusion start; C 8h Meropenem serum concentration at 8 h after infusion start; ECMO Extracorporeal membrane oxygenation; EUCAST European Committee on Antimicrobial Susceptibility Testing; ID Patient identifier; I/R Intermediate/resistant; MIC Minimum inhibitory concentration; S/I Susceptible/intermediate
In non-CRRT patients, stratification of target attainment by the RF classes identified augmented RF to mild RI (CLCRCG > 130–60 ml/minute) as a risk factor for non-attainment of both targets (target attainment 0–46.2% for 100%T>MIC, 0–59.7% for 50%T>4×MIC) (see Table 3) for infections with pathogens of MIC 2 mg/L. Given an MIC of 8 mg/L, meropenem treatment resulted in reliable target attainment only in the presence of severe RI (CLCRCG 15–29 ml/minute); thus, already moderate RI (CLCRCG 30–59 ml/minute) was identified as a risk factor for target non-attainment (target attainment for moderate RI 51.4% for 100%T>MIC, 12.5% for 50%T>4×MIC).
The developed risk assessment tool, the MeroRisk Calculator (beta version), is provided as Additional file 4 and is compatible with Windows operating systems and Excel version 2010 and onwards. When opening the tool, the user might be asked to enable macros, enable content and add to trusted documents. The MeroRisk Calculator is an easy-to-use, three-step Excel spreadsheet (graphical user interface) which can be used to assess the risk of target non-attainment of the PK/PD index 100%T>MIC for non-CRRT patients (Fig. 4a). In step 1, the user provides either the CLCRCG of a patient or its determinants (sex, age, total body weight, serum creatinine concentration), which will then be used to calculate CLCRCG. In step 2, the user provides the MIC value of a determined or suspected infecting pathogen, which is used as the target meropenem concentration. In cases in which the MIC value is not available, no MIC value needs to be provided (for handling of blank MIC entry, see next step). In step 3, the MeroRisk Calculator computes the probability ("risk") of target non-attainment for the given CLCRCG and MIC value; if the MIC entry was left blank, the user then has the option to select a EUCAST MIC breakpoint for relevant bacteria [36]. The calculated risk (rounded to integer) of target non-attainment is displayed with the following three-colour coding system: green (≤10%), orange (>10% to < 50%) and red (≥50%). In addition, the tool provides a graphical illustration of the quantified CLCRCG-C8h relationship including the 95% prediction interval and predicts, on the basis of provided/calculated CLCRCG, the most likely concentration to which meropenem concentrations after multiple dosing will decline before the next dosing (C8h) (see Fig. 4b; for further details, see Additional file 2: Regression model for risk calculation, section 2).
Graphical user interface of the MeroRisk Calculator. a Display when opening the tool (i.e., without any entries). b Display after risk calculation for a specific patient: female, aged 60 years, body weight 65 kg, serum creatinine 0.6 mg/dl, infected with pathogen of MIC 2 mg/L. Abbreviations: CLCR CG Creatinine clearance estimated according to Cockcroft and Gault equation [34], CRRT Continuous renal replacement therapy, C 8h Meropenem serum concentration 8 h after infusion start, MIC Minimum inhibitory concentration
We found a strong relationship between RF and meropenem exposure and consequently PK/PD target attainment, and we developed a graphical user tool to predict the risk of target non-attainment under meropenem standard dosing based on an ICU patient's RF.
This work was focused on the analysis of the standard dosing regimen for meropenem (1000 mg administered as 30-minute infusions every 8 h) as the approved and still most frequently used dosing regimen in ICUs [12, 45]. To best represent the variety of different ICU patients, the analysis was based on extensively sampled data of a prospective observational study including a large number of patients with highly heterogeneous patient-specific factors from different ICUs, though at one single study centre.
We showed large inter-individual variability in meropenem exposure, which was in accordance with previous studies [22, 23]. The larger variability in concentrations of the late phase compared with the earlier phase of the concentration-time profile (variability: Cmin, C8h > C4h) suggested that PK variability was due to variability in drug elimination processes rather than in drug distribution. This finding is supported by population PK analyses that identified larger inter-individual variability on the PK parameter clearance than on volume of distribution [24, 28]. The relatively long observation period of 4 days and the large number of samples collected per patient in our study additionally enabled the quantification of intra-individual variability in meropenem exposure. Its large value led to the hypothesis that meropenem exposure is influenced by certain time-varying patient-specific factors, as confirmed in the present work by longitudinally measured CLCRCG.
Our PK/PD analysis demonstrated that meropenem standard dosing did not achieve the desired meropenem PK/PD targets 100%T>MIC and 50%T>4×MIC in a considerable fraction of patients. For pathogens of MIC 2 mg/L, which represents the upper limit of the susceptible range for many important bacteria [36], meropenem exposure was inadequate in every second dosing interval monitored. In line with our work, Carlier et al. found similar results for the target 100%T>MIC given the same MIC value (target attainment 55%) [25]. For infections with less susceptible bacteria of MIC 8 mg/L (I/R breakpoint [36]), which have been shown to commonly occur in ICUs [8, 9], target non-attainment was high, with four of five dosing intervals resulting in sub-therapeutic concentrations (target 100%T>MIC). The target attainment analysis with the two targets 100%T>MIC and 50%T>4×MIC revealed similar results. Of note, current knowledge on PK/PD targets for meropenem in heterogeneous ICU populations is limited, and a PK/PD target for this special patient population has not been derived yet. In relation to other PK/PD targets derived for meropenem in diverse clinical studies (e.g., 19.2%T>MIC and 47.9%T>MIC [21], 54%T>MIC [19] and 76–100%T>MIC [20]), the two PK/PD targets selected for our analysis were at the upper end (i.e., stricter). The selection of the higher targets seemed reasonable, given (1) limited knowledge on an adequate PK/PD target for heterogeneous ICU populations and (2) the high severity of illness (median APACHE II on the first study day of 27) and the high proportion of patients with transplants (~58%) in the evaluated population. Indeed, these targets have been reported to be commonly used in clinical practice for ICU patients [40]. However, owing to the limited knowledge of PK/PD targets in ICU patients, there is a crucial need to explore which PK/PD target is best related to clinical outcome in critically ill patients in a prospective clinical trial. Further analyses should also be aimed at investigating differences in PK/PD targets between, for example, different patient sub-groups (e.g., with vs. without transplants), different states of severity of illness or different types of infecting bacteria (gram-positive vs. gram-negative) in a sufficiently large number of patients.
In line with other studies, we identified RF determined by CLCRCG to influence meropenem exposure [26, 27, 29,30,31]. On the basis of the large number of longitudinally measured meropenem serum concentrations and CLCRCG values covering the full spectrum of RF classes, we were able to quantify a hyperbolic relationship between CLCRCG and meropenem exposure. The present study also included special patient groups such as CRRT and ECMO patients. For CRRT patients, authors of other publications identified measured CLCR determined via 24-h urine collection [28] or residual diuresis [46] as influencing factors on meropenem exposure, both requiring time-consuming urine collection. Although our analysis included a rather small number of CRRT patients, it revealed CLCRCG as a potential determinant of meropenem exposure which can be assessed more easily and quickly in clinical practice than RF markers determined via 24-h urine collection. This finding requires further investigation with a larger number of patients under a well-designed protocol. For the six ECMO patients, the relationship between CLCRCG and meropenem concentrations did not seem different from that of the remaining patients, suggesting that ECMO therapy did not have a strong impact on meropenem serum exposure. This is in line with findings reported by Donadello et al. showing no significant difference between the PK parameters of ECMO and control non-ECMO ICU patients [47].
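As an aside, the type of hyperbolic CLCRCG-C8h relationship described here can be quantified by ordinary nonlinear least squares. The sketch below does this on synthetic data; the functional form and all numbers are illustrative only and are not the study's regression (which is detailed in Additional file 2).

```python
# Illustrative only: fitting a hyperbolic model C8h = a / (CLCR + b) by nonlinear least
# squares on synthetic data. This mimics the type of relationship quantified in the study
# but uses made-up numbers, not the study data.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(clcr, a, b):
    return a / (clcr + b)

rng = np.random.default_rng(1)
clcr = rng.uniform(20, 250, size=80)                                   # synthetic CLCR_CG (ml/minute)
c8h = hyperbola(clcr, 300.0, 30.0) * rng.lognormal(0.0, 0.3, size=80)  # synthetic C8h (mg/L)

(a_hat, b_hat), _ = curve_fit(hyperbola, clcr, c8h, p0=(200.0, 20.0))
print(f"fitted relationship: C8h ~ {a_hat:.0f} / (CLCR + {b_hat:.0f})")
```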
The impact of RF on the target attainment was overall in accordance with the results of a recent publication by Isla et al. [33], in which the probability of attaining the target 100%T>MIC was analysed for three specific CLCRCG values: Target attainment was 51% for CLCRCG 35 ml/minute (vs. 51% in our study for CLCRCG range 30–59 ml/minute), 3% for CLCRCG 71 ml/minute (vs. 4.6%, 60–89 ml/minute) and 0% for CLCRCG 100 ml/minute (vs. 3.5%, 90–129 ml/minute) for an MIC 8 mg/L. Because the present study included patients covering the full spectrum of RF classes, additional investigation of target attainment in extreme RF classes (severe RI, augmented RF) was possible. For infections with bacteria of MIC 2 mg/L, augmented RF to mild RI was identified as a risk factor of target non-attainment; given bacteria of MIC 8 mg/L, moderate RI was an additional risk factor. These findings imply the need for dosing intensification in patients identified to be at risk of target non-attainment, such as by increasing the dose or by prolonging the infusion duration up to continuous infusion, which is currently under clinical investigation; whereas some previous studies have associated continuous infusion with improved clinical cure rates [48, 49], others have not shown a difference in clinical outcome when comparing continuous with intermittent dosing [50]. In this PK/PD analysis, the only patient group that reliably reached the PK/PD targets was the subgroup with severe RI. Notably, these patients also received 1000 mg meropenem every 8 h as 30-minute infusions and thus received higher doses than recommended in the summary of product characteristics (half of indicated dose every 12 h for patients with CLCRCG 10–25 ml/minute [12]).
To enable the practical application of the quantified relationship between RF and meropenem exposure and consequently target attainment, we developed a risk assessment tool in commonly available and familiar software (see Additional file 4: MeroRisk Calculator, beta version). This easy-to-use Excel tool allows assessment of the risk of target non-attainment for non-CRRT patients displaying RF within a broad range (25–255 ml/minute) and receiving standard dosing of meropenem (1000 mg every 8 h as 30-minute infusions). We implemented the risk of target non-attainment of meropenem depending on creatinine clearance according to the Cockcroft and Gault equation (CLCRCG [34]) and not depending on creatinine clearance determined by 24-h urine collection (CLCRUC [51]), because CLCRCG can be assessed more easily in clinical practice, and the relationship between CLCRUC and meropenem exposure was not better than between CLCRCG and meropenem exposure (see Additional file 2: Figure S3). To apply the tool, the user needs to provide only the CLCRCG or its determinants (i.e., sex, age, total body weight and the routinely determined laboratory value serum creatinine). In addition, the MIC value of a bacterium determined or suspected in the investigated patient needs to be provided. Should MIC values not be available, the user has the option to select an MIC breakpoint for important pathogens from the EUCAST database. Because only a limited number of patients with augmented RF or severe RI were included in this analysis, the uncertainty of the CLCRCG-meropenem exposure relationship implemented in the MeroRisk Calculator is higher for the extremes of the RF spectrum. Furthermore, the user of the tool needs to keep in mind that in addition to CLCRCG, other factors might influence meropenem exposure. To visualise the prediction uncertainty (i.e., uncertainty in the CLCRCG-meropenem exposure relationship combined with the variability in C8h values) of the calculated meropenem C8h value for a patient's CLCRCG, the prediction interval around the CLCRCG-meropenem exposure relationship is additionally provided in the risk assessment tool. Of particular note, using the MeroRisk Calculator does not require the measurement of a meropenem concentration of a patient. If meropenem concentrations are available for a patient, therapeutic drug monitoring is encouraged to aid therapeutic decision making [52]. The current beta version of the MeroRisk Calculator is intended to be used in the setting of clinical research and training. As a next step, comprehensive prospective validation of the risk calculator in a clinical research setting is warranted.
Our PK/PD analysis demonstrated large inter- as well as intra-patient variability in meropenem serum exposure after standard dosing in critically ill patients. Standard dosing was likely to result in sub-therapeutic meropenem exposure in a considerable fraction of critically ill patients, especially when assuming infections caused by less susceptible bacteria commonly encountered in these patients. CLCRCG was identified as a vital clinical determinant of meropenem exposure and consequently target attainment. In the future, the newly developed risk assessment tool as a graphical user interface (see Additional file 4: MeroRisk Calculator) might, if all requirements are met, be beneficial in clinical practice for therapeutic decision making. An ICU patient's risk of target non-attainment, given his/her RF and the MIC value of the infecting pathogen, would already be accessible when no meropenem concentration measurement is available, such as prior to the start of antibiotic therapy. Our findings indicate that dosing intensification might be needed, depending on a patient's RF and the susceptibility of the infecting pathogen, and that optimised dosing regimens should be further investigated with respect to increased clinical benefit and reduced development of resistance.
BMI: Body mass index
C4h: Meropenem serum concentration 4 h after infusion start
CLCRCG: Creatinine clearance estimated according to Cockcroft and Gault equation
CLCRUC: Creatinine clearance determined by 24-h urine collection
Cmin: Minimum meropenem concentration
CRP: C-reactive protein
CVVH: Continuous venovenous haemofiltration
CVVHD: Continuous venovenous haemodialysis
CVVHDF: Continuous venovenous haemodiafiltration
Cx: Meropenem serum concentrations at specific time points
EUCAST: European Committee on Antimicrobial Susceptibility Testing
I/R: Intermediate/resistant
MIC: Minimum inhibitory concentration
PD: Pharmacodynamic(s)
PK: Pharmacokinetic(s)
S/I: Susceptible/intermediate
SOFA: Sepsis-related Organ Failure Assessment
%T>MIC: Percentage of time that drug concentration exceeds the minimum inhibitory concentration
%T>4×MIC: Percentage of time that drug concentration exceeds four times the minimum inhibitory concentration
Kempker JA, Martin GS. The changing epidemiology and definitions of sepsis. Clin Chest Med. 2016;37:165–79.
Levy Hara G, Kanj S, Pagani L, Abbo L, Endimiani A, Wertheim HFL, et al. Ten key points for the appropriate use of antibiotics in hospitalised patients: a consensus from the Antimicrobial Stewardship and Resistance Working Groups of the International Society of Chemotherapy. Int J Antimicrob Agents. 2016;48:239–46.
Kumar A. Early antimicrobial therapy in severe sepsis and septic shock. Curr Infect Dis Rep. 2010;12:336–44.
Harbarth S, Garbino J, Pugin J, Romand JA, Lew D, Pittet D. Inappropriate initial antimicrobial therapy and its effect on survival in a clinical trial of immunomodulating therapy for severe sepsis. Am J Med. 2003;115:529–35.
MacArthur RD, Miller M, Albertson T, Panacek E, Johnson D, Teoh L, et al. Adequacy of early empiric antibiotic treatment and survival in severe sepsis: experience from the MONARCS trial. Clin Infect Dis. 2004;38:284–8.
Roberts JA, Paul SK, Akova M, Bassetti M, De Waele JJ, Dimopoulos G, et al. DALI: defining antibiotic levels in intensive care unit patients: are current β-lactam antibiotic doses sufficient for critically ill patients? Clin Infect Dis. 2014;58:1072–83.
Tam VH, Schilling AN, Neshat S, Poole K, Melnick DA, Coyle EA. Optimization of meropenem minimum concentration/MIC ratio to suppress in vitro resistance of Pseudomonas aeruginosa. Antimicrob Agents Chemother. 2005;49:4920–7.
Valenza G, Seifert H, Decker-Burgard S, Laeuffer J, Morrissey I, Mutters R; COMPACT Germany Study Group. Comparative Activity of Carbapenem Testing (COMPACT) study in Germany. Int J Antimicrob Agents. 2012;39:255–8.
Cohen J. Confronting the threat of multidrug-resistant Gram-negative bacteria in critically ill patients. J Antimicrob Chemother. 2013;68:490–1.
Roberts JA, Abdul-Aziz MH, Lipman J, Mouton JW, Vinks AA, Felton TW, et al. Individualised antibiotic dosing for patients who are critically ill: challenges and potential solutions. Lancet Infect Dis. 2014;14:498–509.
De Paepe P, Belpaire FM, Buylaert WA. Pharmacokinetic and pharmacodynamic considerations when treating patients with sepsis and septic shock. Clin Pharmacokinet. 2002;41:1135–51.
Datapharm. Meronem IV 500 mg & 1 g. Updated 9 Mar 2017. https://www.medicines.org.uk/emc/medicine/11215. Accessed 26 Jun 2017.
Craig WA. The pharmacology of meropenem, a new carbapenem antibiotic. Clin Infect Dis. 1997;24 Suppl 2:S266–75.
Shibayama T, Sugiyama D, Kamiyama E, Tokui T, Hirota T, Ikeda T. Characterization of CS-023 (RO4908463), a novel parenteral carbapenem antibiotic, and meropenem as substrates of human renal transporters. Drug Metab Pharmacokinet. 2007;22:41–7.
Christensson BA, Nilsson-Ehle I, Hutchison M, Haworth SJ, Oqvist B, Norrby SR. Pharmacokinetics of meropenem in subjects with various degrees of renal impairment. Antimicrob Agents Chemother. 1992;36:1532–7.
Roberts DM, Liu X, Roberts JA, Nair P, Cole L, Roberts MS, et al. A multicenter study on the effect of continuous hemodiafiltration intensity on antibiotic pharmacokinetics. Crit Care. 2015;19:84.
Roehr AC, Frey OR, Koeberer A, Fuchs T, Roberts JA, Brinkmann A. Anti-infective drugs during continuous hemodialysis - using the bench to learn what to do at the bedside. Int J Artif Organs. 2015;38:17–22.
Drusano GL. Prevention of resistance: a goal for dose selection for antimicrobial agents. Clin Infect Dis. 2003;36 Suppl 1:S42–50.
Li C, Du X, Kuti JL, Nicolau DP. Clinical pharmacodynamics of meropenem in patients with lower respiratory tract infections. Antimicrob Agents Chemother. 2007;51:1725–30.
Ariano RE, Nyhlén A, Donnelly JP, Sitar DS, Harding GKM, Zelenitsky SA. Pharmacokinetics and pharmacodynamics of meropenem in febrile neutropenic patients with bacteremia. Ann Pharmacother. 2005;39:32–8.
Crandon JL, Luyt C, Aubry A, Chastre J, Nicolau DP. Pharmacodynamics of carbapenems for the treatment of Pseudomonas aeruginosa ventilator-associated pneumonia: associations with clinical outcome and recurrence. J Antimicrob Chemother. 2016;71:2534–2537.
Mattioli F, Fucile C, Del Bono V, Marini V, Parisini A, Molin A, et al. Population pharmacokinetics and probability of target attainment of meropenem in critically ill patients. Eur J Clin Pharmacol. 2016;72:839–48.
Tsai D, Stewart P, Goud R, Gourley S, Hewagama S, Krishnaswamy S, et al. Optimising meropenem dosing in critically ill Australian Indigenous patients with severe sepsis. Int J Antimicrob Agents. 2016;48:542–6.
Jaruratanasirikul S, Thengyai S, Wongpoowarak W, Wattanavijitkul T, Tangkitwanitjaroen K, Sukarnjanaset W, et al. Population pharmacokinetics and Monte Carlo dosing simulations of meropenem during the early phase of severe sepsis and septic shock in critically ill patients in intensive care units. Antimicrob Agents Chemother. 2015;59:2995–3001.
Carlier M, Carrette S, Roberts JA, Stove V, Verstraete A, Hoste E, et al. Meropenem and piperacillin/tazobactam prescribing in critically ill patients: does augmented renal clearance affect pharmacokinetic/pharmacodynamic target attainment when extended infusions are used? Crit Care. 2013;17:R84.
Kees MG, Minichmayr IK, Moritz S, Beck S, Wicha SG, Kees F, et al. Population pharmacokinetics of meropenem during continuous infusion in surgical ICU patients. J Clin Pharmacol. 2016;56:307–15.
Goncalves-Pereira J, Silva NE, Mateus A, Pinho C, Povoa P. Assessment of pharmacokinetic changes of meropenem during therapy in septic critically ill patients. BMC Pharmacol Toxicol. 2014;15:21.
Isla A. Population pharmacokinetics of meropenem in critically ill patients undergoing continuous renal replacement therapy. Clin Pharmacokinet. 2008;47:173–80.
Roberts JA, Kirkpatrick CMJ, Roberts MS, Robertson TA, Dalley AJ, Lipman J. Meropenem dosing in critically ill patients with sepsis and without renal dysfunction: intermittent bolus versus continuous administration? Monte Carlo dosing simulations and subcutaneous tissue distribution. J Antimicrob Chemother. 2009;64:142–50.
Alobaid AS, Wallis SC, Jarrett P, Starr T, Stuart J, Lassig-Smith M, et al. Effect of obesity on the population pharmacokinetics of meropenem in critically ill patients. Antimicrob Agents Chemother. 2016;60:4577–84.
Minichmayr IK, Roberts JA, Frey OR, Roehr AC, Kloft C, Brinkmann A. Development of a dosing nomogram for continuous infusion meropenem in critically ill patients based on a validated population pharmacokinetic model. J Antimicrob Chemother. 2017 [manuscript submitted for publication].
Crandon JL, Ariano RE, Zelenitsky SA, Nicasio AM, Kuti JL, Nicolau DP. Optimization of meropenem dosage in the critically ill population based on renal function. Intensive Care Med. 2011;37:632–8.
Isla A, Canut A, Arribas J, Asín-Prieto E, Rodríguez-Gascón A. Meropenem dosing requirements against Enterobacteriaceae in critically ill patients: influence of renal function, geographical area and presence of extended-spectrum β-lactamases. Eur J Clin Microbiol Infect Dis. 2016;35:511–9.
Cockcroft DW, Gault MH. Prediction of creatinine clearance from serum creatinine. Nephron. 1976;16:31–41.
Zander J, Maier B, Suhr A, Zoller M, Frey L, Teupser D, et al. Quantification of piperacillin, tazobactam, cefepime, meropenem, ciprofloxacin and linezolid in serum using an isotope dilution UHPLC-MS/MS method with semi-automated sample preparation. Clin Chem Lab Med. 2015;53:781–91.
European Committee on Antimicrobial Susceptibility Testing (EUCAST). Breakpoint tables for interpretation of MICs and zone diameters. Version 7.0. 2017. http://www.eucast.org/fileadmin/src/media/PDFs/EUCAST_files/Breakpoint_tables/v_7.1_Breakpoint_Tables.pdf. Accessed 26 Jun 2017.
McKinnon PS, Paladino JA, Schentag JJ. Evaluation of area under the inhibitory curve (AUIC) and time above the minimum inhibitory concentration (T > MIC) as predictors of outcome for cefepime and ceftazidime in serious bacterial infections. Int J Antimicrob Agents. 2008;31:345–51.
Taccone FS, Laterre PF, Dugernier T, Spapen H, Delattre I, Wittebole X, et al. Insufficient β-lactam concentrations in the early phase of severe sepsis and septic shock. Crit Care. 2010;14:R126.
Jamal JA, Mat-Nor MB, Mohamad-Nor FS, Udy AA, Wallis SC, Lipman J, et al. Pharmacokinetics of meropenem in critically ill patients receiving continuous venovenous haemofiltration: a randomised controlled trial of continuous infusion versus intermittent bolus administration. Int J Antimicrob Agents. 2015;45:41–5.
Wong G, Brinkman A, Benefield RJ, Carlier M, De Waele JJ, El Helali N, et al. An international, multicentre survey of β-lactam antibiotic therapeutic drug monitoring practice in intensive care units. J Antimicrob Chemother. 2014;69:1416–23.
European Medicines Agency (EMA). Guideline on the use of pharmacokinetics and pharmacodynamics in the development of antibacterial medicinal products. 21 Jul 2016. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2016/07/WC500210982.pdf. Accessed 26 Jun 2017.
European Medicines Agency (EMA). Guideline on the evaluation of the pharmacokinetics of medicinal products in patients with decreased renal function. 20 Feb 2014. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2014/02/WC500162133.pdf. Accessed 26 Jun 2017.
Food and Drug Administration. Guidance for industry: pharmacokinetics in patients with impaired renal function — study design, data analysis, and impact on dosing and labeling. Mar 2010. https://www.fda.gov/downloads/drugs/guidances/ucm204959.pdf. Accessed 26 Jun 2017.
Udy AA, Baptista JP, Lim NL, Joynt GM, Jarrett P, Wockner L, et al. Augmented renal clearance in the ICU. Crit Care Med. 2014;42:520–7.
Tabah A, de Waele J, Lipman J, Zahar JR, Cotta MO, Barton G, et al. The ADMIN-ICU survey: a survey on antimicrobial dosing and monitoring in ICUs. J Antimicrob Chemother. 2015;70:2671–7.
Ulldemolins M, Soy D, Llaurado-Serra M, Vaquer S, Castro P, Rodríguez AH, et al. Meropenem population pharmacokinetics in critically ill patients with septic shock and continuous renal replacement therapy: influence of residual diuresis on dose requirements. Antimicrob Agents Chemother. 2015;59:5520–8.
Donadello K, Antonucci E, Cristallini S, Roberts JA, Beumier M, Scolletta S, et al. β-Lactam pharmacokinetics during extracorporeal membrane oxygenation therapy: a case-control study. Int J Antimicrob Agents. 2015;45:278–82.
Abdul-Aziz MH, Sulaiman H, Mat-Nor MB, Rai V, Wong KK, Hasan MS, et al. Beta-Lactam Infusion in Severe Sepsis (BLISS): a prospective, two-centre, open-labelled randomised controlled trial of continuous versus intermittent β-lactam infusion in critically ill patients with severe sepsis. Intensive Care Med. 2016;42:1535–45.
Dulhunty JM, Roberts JA, Davis JS, Webb SAR, Bellomo R, Gomersall C, et al. Continuous infusion of β-lactam antibiotics in severe sepsis: a multicenter double-blind, randomized controlled trial. Clin Infect Dis. 2013;56:236–44.
Dulhunty JM, Roberts JA, Davis JS, Webb SAR, Bellomo R, Gomersall C, et al. A multicenter randomized trial of continuous versus intermittent β-lactam infusion in severe sepsis. Am J Respir Crit Care Med. 2015;192:1298–305.
Levey AS, Inker LA. Assessment of glomerular filtration rate in health and disease: a state of the art review. Clin Pharmacol Ther. 2017;102:405–19.
Wicha SG, Kees MG, Solms A, Minichmayr IK, Kratzer A, Kloft C. TDMx: a novel web-based open-access support tool for optimising antimicrobial dosing regimens in clinical routine. Int J Antimicrob Agents. 2015;45:442–4.
Knaus WA. APACHE II: a severity of disease classification system. Crit Care Med. 1985;13:818–29.
Vincent JL, Moreno R, Takala J, Willatts S, De Mendonça A, Bruining H, et al. The SOFA (Sepsis-related Organ Failure Assessment) score to describe organ dysfunction/failure. Intensive Care Med. 1996;22:707–10.
This study was supported by a Mérieux research grant (Institut Mérieux, Lyon, France). The design, collection, analysis and interpretation of data, as well as the writing and publication of the manuscript, were done by the authors without participation or influence from the funding source.
The datasets generated and/or analysed during the present study are not publicly available, but they are available from the corresponding author on reasonable request.
Department of Clinical Pharmacy and Biochemistry, Institute of Pharmacy, Freie Universitaet Berlin, Kelchstrasse 31, 12169, Berlin, Germany
Lisa Ehmann, Iris K. Minichmayr, Niklas Hartung & Charlotte Kloft
Graduate Research Training Program PharMetrX, Berlin/Potsdam, Germany
Lisa Ehmann & Iris K. Minichmayr
Department of Anaesthesiology, University Hospital, LMU Munich, Munich, Germany
Michael Zoller, Christina Scharf & Lorenz Frey
Institute of Laboratory Medicine, University Hospital, LMU Munich, Munich, Germany
Barbara Maier, Michael Vogeser & Johannes Zander
Institute of Pharmacy and Molecular Biotechnology, University of Heidelberg, Heidelberg, Germany
Maximilian V. Schmitt
Institute of Mathematics, Universitaet Potsdam, Potsdam, Germany
Niklas Hartung & Wilhelm Huisinga
MZ, CS, MV, LF and JZ designed the clinical study. MZ, CS and JZ conducted the clinical study. BM, JZ and MV performed assays. LE, IKM and CK designed data analysis. LE and CK analysed data. LE, MVS, NH and CK developed the tool. LE, MZ, IKM, JZ and CK discussed results. LE drafted the manuscript. LE, MZ, IKM, CS, BM, MVS, NH, WH, MV, LF, JZ and CK commented on and approved the manuscript. All authors read and approved the final manuscript.
Correspondence to Charlotte Kloft.
Ethics approval and consent were obtained from the Institutional Review Board of the Medical Faculty of the LMU Munich, Germany (registration number 428-12). Written informed consent to participate was obtained from all patients or their legal representatives.
WH declares receiving research grants from an industry consortium (AbbVie Deutschland GmbH & Co. KG, Boehringer Ingelheim Pharma GmbH & Co. KG, Grünenthal GmbH, F. Hoffmann-La Roche Ltd, Merck KGaA and SANOFI). CK declares receiving research grants from an industry consortium (AbbVie Deutschland GmbH & Co. KG, Boehringer Ingelheim Pharma GmbH & Co. KG, Grünenthal GmbH, F. Hoffmann-La Roche Ltd, Merck KGaA and SANOFI) as well as research grants from the Innovative Medicines Initiative-Joint Undertaking (DDMoRe) and Diurnal Ltd. The other authors declare that they have no competing interests.
Johannes Zander and Charlotte Kloft share senior authorship.
Study design.pdf. (PDF 170 kb)
Regression model for risk calculation.pdf. (PDF 475 kb)
PK/PD target attainment.pdf. (PDF 65 kb)
MeroRisk Calculator.xltm. (XLTM 330 kb)
Ehmann, L., Zoller, M., Minichmayr, I.K. et al. Role of renal function in risk assessment of target non-attainment after standard dosing of meropenem in critically ill patients: a prospective observational study. Crit Care 21, 263 (2017). https://doi.org/10.1186/s13054-017-1829-4
β-Lactam
Pharmacokinetics/Pharmacodynamics
Target attainment | CommonCrawl |
Room 1L11
Lockhart Hall
Dr. Jean-Marie De Koninck
Department of Mathematics and Statistics, Laval University, Québec City
Title: THE SECRET LIFE OF MATHEMATICS
Abstract: Your doctor tells you that you have tested positive for a serious disease and he also tells you the test is reliable in 98% of the cases; should you be worried? Can math be useful in eliminating traffic jams? Why do airline companies practice overbooking and pretend that it is for your own good? In soccer, how important is it to score the first goal? Why hockey coaches should pull their goalie much earlier than they usually do. These are some of the topics that illustrate the importance of mathematics in our daily lives and that Jean-Marie De Koninck will cover in his talk «The Secret Life of Mathematics».
Jean-Marie De Koninck has been a researcher and professor of mathematics at Université Laval for more than forty years and is well known to the scientific community for his work in analytic number theory. He is the author of 15 books and 150 peer reviewed articles in scientific journals. He is now Professor Emeritus. Professor De Koninck has also hosted his own science outreach television show "C'est mathématique!", broadcasted on the French-Canadian channel (Canal Z) and later on TFO (Télévision française de l'Ontario). In 2005, he created the Sciences and Mathematics in Action (SMAC) program whose purpose is to excite kids about science and mathematics. He is well known by the general public as the founder of Operation Red Nose, a road safety operation involving over 55,000 volunteers across Canada. He was also very active in the media during the ten years he acted as President of the Table québécoise de la sécurité routière. He is now a member of the Board for the Société de l'assurance automobile du Québec. Many have also seen him as a color-commentator for nationally televised swim events.
Wednesday, Jan. 30 12:30 pm - 1:20pm
Room 3M69
Dr. Mahmoud Torabi, Associate Professor of Bio/statistics, Department of Community Health Sciences, University of Manitoba
Title: Spatial Modeling of Disease Mapping: An Introduction
In traditional statistics, we frequently assess the effects of exposure on health outcomes through regression analysis. Such analysis can take many forms: linear, Poisson, and logistic regression are perhaps the most familiar. The same models can be used in spatial analysis after we adapt them to incorporate our ideas about neighborhood relationships and spatially correlated error terms.
In this talk, I will review some basic regression models (normal and possibly non-normal data) assuming that the observations are independent from each other. I extend these basic models with relaxing the assumption of independent errors and study possible spatial pattern of error terms. I will show some real data analyses through the talk.
Friday, Sept 28 12:30 Room 1L12
Dr. Erica Moodie, Department of Epidemiology, Biostatistics, & Occupational Health, McGill University
Title: An introduction to causal inference in statistics
Abstract: Statistical causal inference is a framework that is used to try to discover the structure of the data and eliminate any spurious explanations for an observed association. A particular challenge in causal inference is the issue of confounding, which arises in nonexperimental studies or when there is non-compliance in a randomized trial. In this seminar, I will give a brief history of causality and an introduction to some fundamental principles in causal inference in statistics.
12:30 to 1:20 pm
Room 1L11 Dr. Christian Léger,
Title: Statistics or How Making Sense of Data is the New Gold Rush!
Abstract: Data are everywhere in our lives. Their abundance is sometimes breathtaking. But these nuggets are useless unless we can make sense of them. This is why statistician/data scientist often comes up on top of any survey of high paying, high demand jobs! In this lecture, I will give an overview of the importance of statistics. Through examples, we will see different areas where statistics plays a major role. We will also see the importance of bias and variance in uncovering meaning in the data. As the field is always evolving, some current research topics will also be presented.
Room 3M67 Manon Stipulanti,
University of Liège
Title: Pascal-like triangles: base $2$ and beyond
Abstract: The Pascal triangle and the corresponding Sierpi\'nski gasket are well-studied objects.
They exhibit self-similarity features and have connections with dynamical systems, cellular automata, number theory and automatic sequences
in combinatorics on words. The link between those two objects is well-known and can be understood in the following way.
Consider the intersection of the lattice $\mathbb{N}^2$ with the region $[0,2^n]\times [0,2^n]$. Then the first $2^n$ rows and columns
of the usual Pascal triangle $(\binom{m}{k}\bmod{2})_{m,k< 2^n}$ provide a coloring of this lattice: the square on the mth row and kth
column is colored in white (resp. black) if $\binom{m}{k} \equiv 0 \bmod{2}$ (resp. $\binom{m}{k} \equiv 1 \bmod{2}$).
If we normalize this compact set by a homothety of ratio $1/2^n$, we get a sequence of compact subsets of $[0,1]\times [0,1]$ converging,
for the Hausdorff distance, to the Sierpi\'nski gasket when $n$ tends to infinity.
In a work in collaboration with Julien Leroy and Michel Rigo (University of Liège), we extend this convergence to a generalized Pascal triangle
by considering the binary expansions of integers and the binomial coefficients of finite words. More precisely, a finite word is simply a finite sequence
of letters belonging to a finite set called the alphabet. In combinatorics on words, one can introduce the binomial coefficient $\binom{u}{v}$ of
two finite words $u$ and $v$ which is the number of times v occurs as a subsequence of u (meaning as a ``scattered'' subword).
This concept naturally extends the binomial coefficient of two integers.
Related to this triangle P2, we also define the sequence $(S_2(n))_{n\ge 0}$ that counts the number of positive entries on each row of P2.
This sequence exhibits a strong structure: it is palindromic between powers of 2. This suggests that it is 2-regular in the sense of Allouche and Shallit.
Finally, its summatory function has a particular behavior that is worth studying in details.
We also extend those results to the Zeckendorf numeration system using Fibonacci numbers and, more recently, to any Parry--Bertrand numeration system.
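As a concrete illustration of the binomial coefficient of two words used above, the short dynamic program below counts the occurrences of $v$ as a scattered subword of $u$; it is only an illustrative implementation, not part of the work described in the abstract.

```python
# Binomial coefficient of two words: number of occurrences of v as a scattered subword of u,
# computed by a standard dynamic program scanning u once.

def word_binomial(u: str, v: str) -> int:
    n = len(v)
    # dp[j] = number of ways to see v[:j] as a scattered subword of the prefix of u read so far
    dp = [0] * (n + 1)
    dp[0] = 1
    for a in u:
        for j in range(n, 0, -1):      # update right-to-left so each letter of u is used once
            if v[j - 1] == a:
                dp[j] += dp[j - 1]
    return dp[n]

print(word_binomial("ababab", "ab"))   # 6
print(word_binomial("0110", "01"))     # 2
print(word_binomial("aaaa", "aa"))     # 6 = C(4, 2), recovering the integer binomial coefficient
```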
Wed, March 21
Room 1L07 Dr. Shakhawat Hossain
Title: Shrinkage estimation method of exponentiated Weibull regression model for time-to-event data
Abstract: In this talk, we consider the exponentiated Weibull model, which includes as special cases the Weibull, log-logistic, and log-normal distributions. This model is broadly used to model time-to-event data in many studies, and the primary focus of such analyses is to find the relationship between the time-to-event outcome and the covariates. This leads to a regression model that may have many covariates, some of which may not be significantly related to the survival time. In that case we use some auxiliary or non-sample information on the insignificant covariates in the unrestricted model to produce a restricted model. The shrinkage estimators optimally combine the unrestricted and restricted model estimators and outperform the maximum likelihood estimator (MLE) under quadratic loss. Asymptotic properties of these estimators, including biases and risks, will be discussed. A simulation study is conducted to assess the performance of the proposed estimators with respect to the unrestricted MLE, incorporating varying sample sizes, different hazard shapes, and percentages of censored observations. Estimators will be compared based on bias, risk, and mean squared prediction error. The relevance of the proposed estimators will be illustrated with two real data sets. This is joint work with Shahedul Khan, University of Saskatchewan.
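For readers who want to experiment with the distribution itself, SciPy ships the exponentiated Weibull family as scipy.stats.exponweib (shape parameters a and c; a = 1 recovers the ordinary Weibull). The sketch below simulates right-censored data and maximises a toy censored log-likelihood; the parameter values are made up, and the model is far simpler than the regression and shrinkage estimators discussed in the talk.

```python
# Toy exponentiated Weibull example with right censoring (illustrative only).
import numpy as np
from scipy import stats
from scipy.optimize import minimize

a, c, scale = 1.5, 2.0, 3.0                       # made-up "true" parameters
rng = np.random.default_rng(0)
t = stats.exponweib.rvs(a, c, scale=scale, size=200, random_state=rng)
censor = rng.uniform(0, 6, size=200)              # administrative censoring times
time = np.minimum(t, censor)
event = (t <= censor).astype(float)               # 1 = event observed, 0 = right-censored

def neg_loglik(log_params):
    # work on the log scale so all three parameters stay positive
    a_, c_, s_ = np.exp(log_params)
    logf = stats.exponweib.logpdf(time, a_, c_, scale=s_)   # contribution of observed events
    logS = stats.exponweib.logsf(time, a_, c_, scale=s_)    # contribution of censored times
    return -np.sum(event * logf + (1.0 - event) * logS)

fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
print("approximate MLE of (a, c, scale):", np.exp(fit.x))

# sanity check of the special case: with a = 1 the model is the ordinary Weibull
print(np.allclose(stats.exponweib.cdf(2.0, 1.0, c, scale=scale),
                  stats.weibull_min.cdf(2.0, c, scale=scale)))
```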
Dr. Micah McCurdy
Calling all hockey fans
Ever wonder if a certain hockey player is hurting or helping your favourite hockey team?
Well, Micah McCurdy might be able to sort that out through math, data and statistics. McCurdy is a mathematician who makes pictures to try and help the public understand hockey. He will speak at UWinnipeg on Isolating Individual Player Threat in the NHL on Tuesday March 6, 2018 at 4:00 pm in Room 1L11, Lockhart Hall. McCurdy uses data to measure results about hockey that can also help you do the math for your team.
This lecture is free and open to the public and is part of the Math and Stats Lecture Series.
Title: Isolating Individual Player Threat in the NHL
Abstract: To tease apart which players are helping their teams and which are hurting, we turn, perhaps predictably, to regression. Somewhat less predictably, we quantify our observations of team performance in a function space whose elements measure shot fluxes - rates of shot generation from a given location. This lets us capture an aspect of shot quality as well as shot quantity. We obtain estimates of individual player impact on 5v5 offence and defence, isolated from the impact of their teammates, their starting shift position on the ice, and the score environment in which they are deployed. Along the way, we obtain a possibly novel and definitely simple closed form for a certain combinatorial regression.
"I intend to make all of my research available to the public, for free," shares McCurdy. "I find working in this way to be immensely satisfying and I do not aspire to a team position, especially not if it requires removing my public work from the internet, as we have seen in a number of prominent cases."
McCurdy lives in Halifax and is employed mostly by the public who subscribe to his website, hockeyviz.com He is also employed intermittently by Saint Mary's University, where he teaches undergrads. He also works with NHL teams.
Fri, Feb 9
12:30 - 1:20 pm
Dr. Narad Rampersad
UWinnipeg Department of
Title: Critical exponents of balanced words
Abstract: This talk is about two fundamental concepts in combinatorics on words: balance and repetition. A word w is "balanced" if, for every pair u,v of subwords of w of the same length, and every letter a, the number of a's in u and v differ by at most 1. We are interested in what kinds of repetitions are avoidable/unavoidable in such words. We measure repetitions by their "exponent": the exponent of a word is the ratio of its length to its period. Over a binary alphabet the class of infinite aperiodic balanced words is identical to the well-studied class of Sturmian words. The repetitions in Sturmian words are well-understood. In particular, there is a formula for the critical exponent (supremum of exponents e such that x^e is a subword for some word x) of a Sturmian word. It is known that the Fibonacci word has the least critical exponent over all Sturmian words and this value is (5+sqrt(5))/2. However, little is known about the critical exponents of balanced words over larger alphabets. We show that the least critical exponent among ternary balanced words is 2+sqrt(2)/2 and we construct a balanced word over a four-letter alphabet with critical exponent (5+sqrt(5))/4. This is joint work with J. Shallit and E. Vandomme.
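To make the two definitions above concrete, here is a small brute-force implementation (illustrative only, not the speakers' code): a balance check over all pairs of equal-length factors, and the largest exponent attained by a factor of a finite word.

```python
# Brute-force helpers for the two notions in the abstract: balance and exponent.

def is_balanced(w: str) -> bool:
    """Balanced: for every pair of equal-length factors, each letter's counts differ by at most 1."""
    n = len(w)
    for length in range(1, n + 1):
        factors = {w[i:i + length] for i in range(n - length + 1)}
        for a in set(w):
            counts = [f.count(a) for f in factors]
            if max(counts) - min(counts) > 1:
                return False
    return True

def critical_exponent(w: str) -> float:
    """Largest exponent |u|/p over all factors u of w, where p is the smallest period of u."""
    best = 1.0
    n = len(w)
    for i in range(n):
        for j in range(i + 1, n + 1):
            u = w[i:j]
            p = next(p for p in range(1, len(u) + 1)
                     if all(u[k] == u[k + p] for k in range(len(u) - p)))
            best = max(best, len(u) / p)
    return best

print(is_balanced("0100101001001010010"))  # True: a prefix of the (Sturmian) Fibonacci word
print(critical_exponent("010010"))         # 2.0, witnessed e.g. by the square 00
```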
Tues, Dec. 5
Room 1L06 Dr. Mohammad Jafari Jozani
Department of Statistics,
Title: Towards more efficient and less expensive follow up analysis of bone mineral density in large cohort studies
Abstract: We develop a new methodology for analyzing upper and/or lower quantiles of the distribution of bone mineral density using quantile regression. Nomination sampling designs are used to obtain more representative samples from the tails of the underlying distribution. We propose new check functions to incorporate the rank information of nominated samples in the estimation process. Also, we provide an alternative approach that translates estimation problems with nominated samples to corresponding problems under simple random sampling (SRS). Strategies are given to choose proper nomination sampling designs for a given population quantile. We implement our results to a large cohort study in Manitoba to analyze quantiles of bone mineral density using available covariates. We show that in some cases, methods based on nomination sampling designs require about one tenth of the sample used in SRS to estimate the lower or upper tail conditional quantiles with comparable mean squared errors. This is a dramatic reduction in time and cost compared with the usual SRS approach.
This talk is based on a work in collaboration with Ayilara Olawale Fatai and Bill Leslie.
Fri, October 6
Room 1L07 Dr. Vaclav Linek
Title: Tilings and Skolem Sequences
A Skolem sequence of order n is a sequence of the numbers 1, 2, ..., n each occurring twice, where the two occurrences of each number j are exactly j positions apart (so there are j - 1 symbols between the two j's). Thus, S = 4, 1, 1, 3, 4, 2, 3, 2 is a Skolem sequence of order 4: the 1s are one position apart, the 2s are two positions apart, the 3s are three positions apart, and the 4s are four positions apart. Similarly, S = 3, 4, 5, 3, 2, 4, 2, 5, 1, 1 is a Skolem sequence of order 5, and S = 1, 1, 3, 4, _ , 3, 2, 4, 2 is a variant: a split Skolem sequence of order 4 with a hole in the middle. Skolem sequences are used to construct combinatorial designs and are of interest on their own. Many parametrized families of these sequences have appeared over the years. We will give a unifying conceptual treatment of these parametrizations as tilings. (Joint work with B. Stevens and S. Mor).
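A direct way to see the defining property is to check it by machine; the snippet below (illustrative only) verifies the two ordinary Skolem sequences given above.

```python
# Check the Skolem property from the definition: each value j in 1..n appears exactly twice,
# and its two positions are exactly j apart.

def is_skolem(seq):
    n = len(seq) // 2
    if sorted(seq) != sorted(list(range(1, n + 1)) * 2):
        return False
    for j in range(1, n + 1):
        first = seq.index(j)
        second = seq.index(j, first + 1)
        if second - first != j:
            return False
    return True

print(is_skolem([4, 1, 1, 3, 4, 2, 3, 2]))          # True: the order-4 example above
print(is_skolem([3, 4, 5, 3, 2, 4, 2, 5, 1, 1]))    # True: the order-5 example above
```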
Wed, April 12
ROOM 2M77 Dr. Scott Rodney
Department of Mathematics, Physics and Geology,
Cape Breton University
Title: Poincare's Inequality and Neumann Problems
Recently, my group has devoted much time to the development of an axiomatic framework that gives continuity of weak solutions to a large class of quasilinear PDE in divergence form with rough coefficients. In this talk I will begin with a general discussion of sufficient conditions. I will then focus on a new result giving an equivalence between the validity of a weighted Poincar\'e inequality and the existence of a weak solution to a Neumann problem for a matrix weighted $p$-Laplacian. That is, for $1\leq p<\infty$ and a $p/2$-integrable $n\times n$-valued matrix function $Q(x)$ on a bounded open subset $E$ of $\mathbb{R}^n$, we will consider weak solutions of $$\Delta_p u = \mathrm{div}\left(\left|\sqrt{Q(x)}\,\nabla u(x)\right|^{p-2}Q(x)\,\nabla u(x)\right) = \left|f(x)\right|^{p-2}f(x)$$ in $E$, where $f$ is assumed to belong only to a weighted $L^p$ class.
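For orientation, the weighted Poincar\'e inequality in question has, schematically (the precise hypotheses, exponents and function spaces are as in the speaker's framework, so this is only the shape of the estimate), the form $$\left(\int_E |u(x) - u_E|^p \, dx\right)^{1/p} \;\leq\; C \left(\int_E \left|\sqrt{Q(x)}\,\nabla u(x)\right|^p dx\right)^{1/p},$$ where $u_E$ denotes the average of $u$ over $E$.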
Room 1C16A
Dr. Shannon Ezzat
Dept. of Mathematics and Statistics
Title: Pi
Most students know the mathematical fact that pi cannot be expressed as a ratio of whole numbers. However, very few students know why this fact is true. We will show why this well-known result is indeed true using a proof by contradiction.
Wednesday March 8th, 12:10 - 12:50 PM
Carol Shields Auditorium, 2nd floor of the Millennium Library Sohail Khan
Title: "Let's Quantify the Chances: Probability Theory and Its More Practical Uses"
http://wpl.winnipeg.ca/library/pdfs/posters/skywalkwinter2017.pdf
Wed., March 15
Room 1L04, Lockhart Hall,
UWinnipeg Dr. Sanjoy Sinha
Title: Joint modeling of longitudinal and time-to-event data
In many clinical studies, subjects are measured repeatedly over a fixed period of time. Longitudinal measurements from a given subject are naturally correlated. Linear and generalized linear mixed models are widely used for modeling the dependence among longitudinal outcomes. In addition to the longitudinal data, we often collect time-to-event data (e.g., recurrence time of a tumor) from the subjects. When multiple outcomes are observed from a given subject with a clear dependence among the outcomes, a natural way of analyzing these outcomes and their associations would be the use of a joint model. I will discuss a likelihood approach for jointly analyzing the longitudinal and time-to-event data. The method is useful for dealing with left-censored covariates often observed in clinical studies due to the limit of detection. The finite-sample properties of the proposed estimators will be discussed using results from a Monte Carlo study. An application of the proposed method will be presented using a large clinical dataset of pneumonia patients obtained from the Genetic and Inflammatory Markers of Sepsis (GenIMS) study.
Room 1L06, Lockhart Hall, UWinnipeg
Dr. Karen Meagher
Dept. of Mathematics and Statistics, University of Regina
TITLE: "Cocliques in Derangement Graphs"
The derangement graph of a group G is the Cayley graph on G whose connection set is the set of all derangements in G (the elements with no fixed points). The eigenvalues of the derangement graph can be calculated using the irreducible characters of the group. The eigenvalues can give information about the graph; I am particularly interested in applying Hoffman's ratio bound to bound the size of the cocliques in the derangement graph. This bound can also be used to obtain information about the structure of the maximum cocliques. I will present a few conjectures about the structure of the cocliques; this work is attempting to find a version of the Erdos-Ko-Rado theorem for permutations.
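For reference, Hoffman's ratio bound mentioned above states, in its standard form, that for a $d$-regular graph on $n$ vertices with least eigenvalue $\tau$, every coclique $S$ satisfies $$|S| \;\leq\; \frac{n}{1 - d/\tau} \;=\; \frac{n(-\tau)}{d - \tau},$$ and equality in this bound carries structural information about the maximum cocliques, which is what the talk exploits.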
Room 2C13, Centennial Hall, UWinnipeg Dr. Anna Stokke
Title: Lattice path proofs for Jacobi-Trudi formulas
Schur functions, which play an important role in symmetric function theory and in the representation theory of the general linear group, can be defined in terms of semistandard Young tableaux. The Jacobi-Trudi identity expresses a Schur function as a determinant involving certain homogeneous symmetric functions. Gessel and Viennot gave a proof of the Jacobi-Trudi identity using non-intersecting lattice paths. I will discuss Gessel and Viennot's proof as well as new proofs for symplectic and orthosymplectic Jacobi-Trudi identities.
This talk will be accessible to undergraduate students in mathematics.
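For reference, the classical identity in question states that for a partition $\lambda = (\lambda_1, \ldots, \lambda_\ell)$ the Schur function $s_\lambda$ satisfies $$s_\lambda = \det\left(h_{\lambda_i - i + j}\right)_{1 \leq i, j \leq \ell},$$ where $h_k$ is the complete homogeneous symmetric function of degree $k$, with the conventions $h_0 = 1$ and $h_k = 0$ for $k < 0$. (This is the standard statement; the symplectic and orthosymplectic analogues are the subject of the talk.)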
Room 2C13, Centennial Hall, UWinnipeg Jeff Babb Title: Multivariate statistical analysis: using R software to assess multivariate normality and to draw inferences based upon Hotelling's T2 statistic
Many inference procedures in multivariate statistical analysis are based upon the multivariate normal (MVN) distribution and Hotelling's T2 statistic. This talk will discuss the multivariate normal distribution, outline an approach for assessing multivariate normality, and examine procedures which utilize Hotelling's T2 statistic to draw inferences about a mean vector and the difference in mean vectors. Examples using R software will be provided.
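The talk's examples use R; as a rough Python counterpart (simulated data and standard formulas only, not a reproduction of those examples), a one-sample Hotelling's T2 test can be sketched as follows.

```python
# One-sample Hotelling's T^2 test on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
X = rng.multivariate_normal(mean=[0.2, -0.1, 0.0], cov=np.eye(3), size=40)  # n x p data matrix
mu0 = np.zeros(3)                      # hypothesised mean vector

n, p = X.shape
xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)            # unbiased sample covariance matrix
d = xbar - mu0
T2 = n * d @ np.linalg.solve(S, d)     # Hotelling's T^2 = n (xbar - mu0)' S^{-1} (xbar - mu0)

F = (n - p) / (p * (n - 1)) * T2       # exact F transformation under multivariate normality
pval = stats.f.sf(F, p, n - p)
print(f"T^2 = {T2:.3f}, F = {F:.3f}, p-value = {pval:.4f}")
```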
Room 2C13, Centennial Hall, UWinnipeg Dr. Lucas Mol
Title: A family of patterns with reversal with interesting avoidance properties
A pattern p is a word over letters called variables. An instance of p is the image of p under some nonerasing morphism. A word w is said to avoid p if it contains no instance of p. A pattern p is called k-avoidable if there are infinitely many words over an alphabet of size k that avoid p. We say that p is avoidable if it is k-avoidable for some k and unavoidable otherwise. The avoidability index of an avoidable pattern p is the least number k such that p is k-avoidable. The question of whether there are avoidable patterns of index greater than 5 remains open. Additionally, there are relatively few known examples of patterns of index 4 or 5, and all known examples are quite long and complex.
Recently, work has been done on patterns with reversal, in which the reversal or mirror image of variables is allowed. An instance of a pattern with reversal p is the image of p under some nonerasing morphism which respects this reversal. The avoidability index of patterns with reversal is then defined as above for patterns. We present an infinite family of patterns with reversal whose avoidability indices are bounded between 4 and 5. These patterns with reversal are much simpler than the previously known patterns of index 4 or 5.
Room 2C13, Centennial Hall, UWinnipeg Dr. Narad Rampersad
Title: Decidable properties of automatic sequences
A k-automatic sequence is a sequence (of integers or just symbols) that can be generated by a finite automaton in the following sense:
Each state of the automaton has an associated output and the n-th term of the sequence is obtained as the output of the state reached by the automaton after reading the digits of n written in base k. The prototypical example is the 2-automatic Thue-Morse sequence, whose n-th term is equal to the sum of the binary digits of n modulo 2.
Some classical work of Buchi gives an equivalent definition of k-automatic sequences in terms of a certain extension of Presburger arithmetic. This extension remains decidable and in recent years many researchers (notably Shallit) have used the decidability of this theory to give entirely computerized proofs of many combinatorial properties of automatic sequences. For instance, a classical combinatorial property of the Thue-Morse sequence is that it does not contain the same sequence of terms three times in a row. This is an example of a combinatorial property that is provable by these automated techniques. We give a survey of this approach and mention some recent new results that have been proven by means of such techniques.
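As a concrete illustration of the cube-freeness property mentioned above (a brute-force check on a prefix, not the automated decision procedure itself), one can generate the Thue-Morse sequence directly from its digit-sum definition:

```python
# The Thue-Morse sequence and a brute-force cube check on a prefix (illustrative only).

def thue_morse(n: int) -> int:
    """n-th term: sum of the binary digits of n, modulo 2."""
    return bin(n).count("1") % 2

w = "".join(str(thue_morse(n)) for n in range(500))

def has_cube(w: str) -> bool:
    """Does w contain a factor of the form xxx for some nonempty x?"""
    n = len(w)
    for i in range(n):
        for L in range(1, (n - i) // 3 + 1):
            if w[i:i + L] == w[i + L:i + 2 * L] == w[i + 2 * L:i + 3 * L]:
                return True
    return False

print(has_cube(w))  # False: no block occurs three times in a row in this prefix
```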
Room 2M74 Manitoba Hall Statistics Canada Information Session
Have you considered a career where you could…
Develop your technical, analytical and managerial skills in a stimulating and professional environment;
Benefit from a training and development program with varied assignments; and
Have excellent prospects for advancement?
To find out more…
For additional information on opportunities for employment as a mathematical statistician with Statistics Canada, you can consult our recruitment web site at www.statcan.gc.ca/MArecruitment or contact us by e-mail at [email protected].
Apply online at http://jobs.gc.ca (from September 21st to October 13th)
Room 1L07 Dr. Brett Stevens, Professor
Title: Constructing covering arrays from the unions of hypergraphs
Covering arrays are generalizations of orthogonal arrays which have applications to reliability testing. Since repetition of coverage is permitted, one common method of construction is to vertically concatenate arrays until all $t$-tuples of columns are covered. This corresponds to taking the union of several hypergraphs to produce a complete $t$-uniform hypergraph. We survey constructions of this form. We start with the Roux-type constructions. Then we examine arrays created from linear feedback shift registers. In the case of strength 3 this construction is equivalent to showing that the union of the projective linear independence hypergraph and one isomorphic image of itself is the complete 3-uniform hypergraph. We also show some examples of this method for higher strength. We close with a family of hypergraphs constructed from ordered orthogonal arrays (t,m,s-nets) that may be useful to consider for this construction method and ask if the union of two or more isomorphic copies yields a complete hypergraph.
Dr. Mostafa Nasri
Title: Equilibrium Problems: Solution Techniques and Applications
The main topic of this talk is to introduce the equilibrium problem in the context of optimization and its certain properties. The equilibrium problem provides a unified framework for a large family of problems such as complementarity problems, fixed point problems, minimization problems, Nash games, variational inequality problems and vector minimization problems. Although a large number of solution algorithms have been developed for this problem, there is still a wide scope for improvement and a need for extensive additional research in this realm. In particular, efficient and convergent algorithms for solving such problems are still being sought. With above motivations, proximal point algorithms are proposed for solving the equilibrium problem and their convergence properties are studied. Considering these proximal point algorithms, computer-amenable algorithms, called augmented Lagrangian algorithms, are developed for solving the same problem whose feasible sets are defined by convex inequalities. It is also shown that these algorithms can be extended to Banach spaces. Moreover, real-world problems are addressed for which the presented algorithms are applicable.
Wed., February 3
12:30 - 1:20pm
Room 2C15
Max Bennett
Title: How to count braids
In this talk I will introduce braids and cover a few interesting combinatorial properties that they exhibit, including an enumeration result of Albenque and Nadeau. Examining this leads to a solution to the word problem on braids.
Very little prerequisite knowledge is necessary, but some familiarity with group theory would help.
12:30 to 1:20
Dr. Shakhawat Hossain
SHRINKAGE ESTIMATION FOR GENERALIZED LINEAR MIXED MODELS
In this paper, we consider the pretest, shrinkage, and penalty estimation procedures in the generalized linear mixed model when it is conjectured that some of the regression parameters are restricted to a linear subspace. We develop the statistical properties of the pretest and shrinkage estimation methods, which include asymptotic distributional biases and risks. We show that the pretest and shrinkage estimators have a significantly higher relative efficiency than the classical estimator. Furthermore, we consider the penalty estimator: LASSO (Least Absolute Shrinkage and Selection Operator), and numerically compare its relative performance with that of the other estimators. A series of Monte Carlo simulation experiments are conducted with different combinations of inactive predictors and the performance of each estimator is evaluated in terms of the simulated mean squared error. The study shows that the shrinkage and pretest estimators are comparable to the LASSO estimator when the number of inactive predictors in the model is relatively large. The estimators under consideration are applied to a real data set to illustrate the usefulness of the procedures in practice.
This is joint work with Trevor Thompson.
Room 1L04 Michael Pawliuk
Former U of W Honours student in Mathematics
PhD Candidate, University of Toronto
In 1992, Hrushovski gave a positive answer to the following question: "If the enemy gives you a finite graph G, and an isomorphism f of two of its induced subgraphs, is there a larger finite graph G' that contains G and for which f extends to an automorphism on all of G'?" This has consequences for amenability of the automorphism group of the countably infinite random graph.

The question is still interesting if you replace the word "graph" with "metric space", "tournament" or "complete n-partite directed graph". We will present a construction due to Mackey in the 1960s, which we adapted to give a positive answer to the question of Hrushovski for tournaments, and many other classes of directed graphs.
This is joint work with Marcin Sabok (McGill).
SEMINAR MOVED TO JANUARY 2016
changed to December 4
Dr. Ortrud Oellermann
PROGRESS ON THE OBERLY-SUMNER CONJECTURE
For a given graph property P we say a graph G is locally P if the open neighbourhood of every vertex induces a graph that has property P. Oberly and Sumner (1979) conjectured that every connected, locally k-connected, K_{1,k+2}-free graph of order at least 3 is hamiltonian. They proved their conjecture for k=1, but it has not been settled for any k at least 2. We define a graph to be k-P_3-connected if for any pair of nonadjacent vertices u and v there exist at least k distinct u-v paths of order 3 each. We make progress toward proving the Oberly-Sumner conjecture by showing that every connected, locally k-P_3-connected, K_{1,k+2}-free graph of order at least 3 is hamiltonian and, in fact, fully cycle extendable.
This is joint work with S. van Aardt, M. Frick, J. Dunbar and J.P. de Wet.
Dr. James Currie
Binary patterns with reversal
The study of words avoiding patterns is a major theme in combinatorics on words, explored by Thue and others. The reversal map is also a basic notion in combinatorics on words, and it is therefore natural that recently work has been done on patterns with reversals. Shallit recently asked whether the number of binary words avoiding xxx^R grows polynomially with length, or exponentially. The surprising answer (by C. and Rampersad) is `Neither`. As Adamczewski has observed, this implies that the language of binary words avoiding xxx^R is not context-free - a result which has so far resisted proof by standard methods.
Basic questions about patterns with reversal have not yet been addressed. In this talk, we completely characterize the k-avoidability of an arbitrary binary pattern with reversal. This is a direct (and natural) generalization of the work of Cassaigne characterizing k-avoidability for binary patterns without reversal, and involves a blend of classical results and new constructions.
This is joint work with Philip Lafrance.
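To make the growth question above concrete, the following brute-force sketch (illustrative only, and exponential in the length, so only tiny lengths are feasible) counts the binary words avoiding the pattern xxx^R:

```python
# Count binary words of each small length that avoid x x x^R
# (a block, the same block again, then its reversal).
from itertools import product

def contains_xxxR(w: str) -> bool:
    n = len(w)
    for i in range(n):
        for L in range(1, (n - i) // 3 + 1):
            x = w[i:i + L]
            if w[i + L:i + 2 * L] == x and w[i + 2 * L:i + 3 * L] == x[::-1]:
                return True
    return False

for n in range(1, 15):
    count = sum(1 for bits in product("01", repeat=n)
                if not contains_xxxR("".join(bits)))
    print(n, count)
```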
Wednesday, Oct 7
Bryan Penfound
Connecting the High School Pre-calculus Curriculum with Higher Education
Recently Bryan has developed an online pre-calculus review workshop for first-year students entering Calculus at the University of Winnipeg. The online workshop is divided into five main content areas, each with several online videos, problem sets, and diagnostic quizzes. The purpose of this session is to connect with high school pre-calculus teachers and to encourage the use of the online workshop as a student and teacher reference.
The Tarry-Escott Problem
The Tarry-Escott Problem is the following: Given a "degree" k, find two distinct lists of integers {a_1,...,a_s} and {b_1,...,b_s} that satisfy
a_1 + a_2 + ... + a_s = b_1 + b_2 + ... + b_s
a_1^2 + a_2^2 + ... + a_s^2 = b_1^2 + b_2^2 + ... + b_s^2
...
a_1^k + a_2^k + ... + a_s^k = b_1^k + b_2^k + ... + b_s^k
In 1851 Prouhet gave a solution for all k that requires lists of length 2^k. By a counting argument one can show (non-constructively) that there is a solution using lists of size only k(k+1)/2 + 1, but the numbers are (potentially) huge. Suppose we restrict the a_i and b_i to be in {1,...,m}. Borwein, Erdelyi, and Kos showed that there is no solution for degree k > (16/7)sqrt(m) + 5. The goal of the talk is to give the proof of this result. Remarkably, this bound implies (by a non-trivial argument) the following result on words: any word of length m is uniquely determined by the multiset of its (scattered) subsequences of length at most floor((16/7)sqrt(m) + 5).
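For a concrete small instance of Prouhet's construction with k = 2: splitting {0, 1, ..., 7} according to the Thue-Morse sequence gives the lists {0, 3, 5, 6} and {1, 2, 4, 7} of length 2^2 = 4, and indeed
0 + 3 + 5 + 6 = 1 + 2 + 4 + 7 = 14
0^2 + 3^2 + 5^2 + 6^2 = 1^2 + 2^2 + 4^2 + 7^2 = 70,
while the sums of cubes differ (368 versus 416), so these lists solve the problem for degree 2 and no higher.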
Thursday, June 4 10:00 to 11:30 am
Jeff Babb, Department of Mathematics and Statistics, University of Winnipeg
Continuing Colloquium Series on R Software
ABSTRACT: Cluster analysis and minimum spanning trees are useful techniques for exploring multivariate data and assessing ways to group multivariate observations. This talk will consider distance measures, four agglomerative hierarchical clustering methods (single linkage, complete linkage, average linkage, Ward linkage), related graphics and diagnostics (dendrogram, cophenetic matrix, cophenetic correlation), and minimum spanning trees. Examples of using R statistical software for performing cluster analysis and obtaining a minimum spanning tree will be provided.
Friday, February 6, 12:30 - 1:20pm
Room 4C84
Robert Borgersen, Department of Mathematics, University of Manitoba
TITLE: Progress Towards a Mathematics Placement Test at the University of Manitoba
ABSTRACT: A mathematics placement test is, in general, a test that attempts to measure a student's current competence in a number of mathematical abilities, and based on their current skills ''place'' them into only those classes for which they achieve a minimum level in all of the prerequisite abilities. The goal is to catch students who require remediation before they waste resources on a course they are not ready for. In this talk, I will discuss recent progress towards developing such a test at the University of Manitoba, opportunities such a test could provide, promising results we have had, and challenges we see on the horizon. There will be time for those in attendance to provide their thoughts, input, and opinions on the project.
Friday, February 27 12:30
Room 4M47 Theatre B
Dr. Jeffrey Rosenthal, Department of Statistics, University of Toronto
TITLE: "From Lotteries to Polls to Monte Carlo"
ABSTRACT: This talk will use randomness and probability to answer such questions as: Just how unlikely is it to win the lottery jackpot? If you flip 100 coins, how close will the number of heads be to 50? How many dying patients must be saved to show that a new medical drug is effective? Why do strange coincidences occur so often? If a poll samples 1,000 people, how accurate are the results? How did statistics help to expose the Ontario Lottery Retailer Scandal? If two babies die in the same family without apparent cause, should the parents be convicted of murder? Why do casinos always make money, even though gamblers sometimes win and sometimes lose? And how is all of this related to Monte Carlo Algorithms, an extremely popular and effective method for scientific computing? No mathematical background is required to attend. Jeffrey Rosenthal is an award-winning professor in the Department of Statistics at the University of Toronto. He received his BSc from the University of Toronto at the age of 20, and his PhD in Mathematics from Harvard University at the age of 24. His book for the general public, Struck by Lightning: The Curious World of Probabilities, was published in sixteen editions and ten languages, and was a bestseller in Canada. This led to numerous media and public appearances, and to his work exposing the Ontario lottery retailer scandal. Dr. Rosenthal has also dabbled as a computer game programmer, musical performer, and improvisational comedy performer, and is fluent in French. His web site is www.probability.ca
Friday, January 16 12:30
Room 3C30
Dr. Karen Gunderson, Heilbronn Institute for Mathematical Research, University of Bristol
TITLE: "Friendship hypergraphs"
ABSTRACT: For $r \ge 2$, an $r$-uniform hypergraph is called a \emph{friendship $r$-hypergraph} if every set $R$ of $r$ vertices has a unique `friend' - a vertex $x \notin R$ with the property that for each subset $A \subseteq R$ of size $r-1$, the set $A \cup \{x\}$ is a hyperedge. In the case $r = 2$, the Friendship Theorem of Erd\H{o}s, R\'{e}nyi and S\'{o}s states that the only friendship graphs are `windmills'; a graph consisting of triangles with a single common vertex. For $r \geq 3$, there exist infinite classes of friendship $r$-hypergraphs, not necessarily uniquely defined. These types of hypergraphs belong to a family that generalises the notion of a Steiner system, since in an $r$-uniform Steiner system, every set of $r-1$ vertices has a unique friend. In this talk, I shall give some background on these types of hypergraphs and describe new results on both upper and lower bounds on the size of friendship hypergraphs. Joint work with Natasha Morrison (Oxford) and Jason Semeraro (Bristol).
2014 Seminars
Monday, November 17 12:30
Dr. Ortrud Oellermann,
The University of Winnipeg
TITLE: "Reconstruction Problems in Graphs"
ABSTRACT: We say that a graph can be reconstructed from partial information about its structure if the graph can be uniquely determined from this information. We begin by giving an overview of graph reconstruction problems. In the second part of the talk we consider the problem of reconstructing a graph from its digitally convex sets; where a set of vertices S is digitally convex if every vertex, whose closed neighbourhood is contained in S, also belongs to S. (New results are joint work with P. Lafrance and T. Pressey)
Monday, Oct 27 12:30
Room 3M64
Trevor Thomson, NSERC Summer Research Student
TITLE: Efficient Estimation for Time Series Following GLMs
ABSTRACT: In this talk, I will discuss the shrinkage and pretest estimation methods for time series following a generalized linear model with binary or count data when it is conjectured that some of the regression parameters may be reduced to a subspace. In particular, I examine these estimators for possible improvements in estimation and forecasting when there are many predictors in the linear models. The statistical properties of the pretest and shrinkage estimators, including asymptotic distributional biases and risks, are developed. The results show that the shrinkage estimators have a significantly higher relative efficiency than the maximum partial likelihood estimator if the shrinkage dimension exceeds two, and that the risk of the pretest estimator depends on the validity of the subspace of associated parameters. A Monte Carlo simulation experiment is conducted for different combinations of inactive covariates and the performance of each estimator is evaluated in terms of the simulated relative mean squared error. The proposed methods are applied to a real data set to illustrate the usefulness of the procedures in practice.
Friday, April 25 12:30pm in Room 3M60
Dr. Azer Akhmedov, Mathematics Department, North Dakota State University
TITLE: "On the Hamiltonicity of Some Vertex Transitive Graphs"
ABSTRACT: Lovasz has conjectured that every vertex transitive graph contains a Hamiltonian path. Another version of this conjecture states that every vertex transitive graph is Hamiltonian (contains a Hamiltonian cycle) unless it is isomorphic to one of the following 5 graphs: the complete graph K_2, the Petersen graph, the Coxeter graph, and two other graphs obtained from the Petersen and Coxeter graphs by truncation.
Lovasz's Conjecture is wide open. A weaker Kneser conjecture states that a certain class of vertex transitive graphs are Hamiltonian. This claim has been verified in some special but significant cases (by Ya-Chen and Furedi), although in its full version, the conjecture is still open.
The Hamiltonicity problem of graphs turns out to be interesting also in music theory as a way of generating musical morphologies. We have studied the Hamiltonicity problem for several graphs which are interesting to music theorists. Some of these graphs are vertex transitive, and some are closely related to Kneser graphs. In the talk, I'll present a brief introduction to Hamiltonian graphs and mention several popular Hamiltonicity problems in graph theory. Then I'll discuss the major ideas of the proof. This is joint work with composer Michael Winter.
Friday, March 14 in 1L11, 12:30-1:20
Dr. Randall Pyke, Simon Fraser University
Fractals: A New (and Better) Way of Looking at the World
Fractals are complicated geometric shapes that have captured the imagination of mathematicians for years, and more recently the larger public. It was the pioneering work of the mathematician Benoit Mandelbrot, beginning in the 1970s, that brought fractal geometry out from the remote corners of abstract mathematics into the mainstream. In this talk we will discuss what fractals are, how they are created, and some of their applications in areas outside of mathematics. We will also drift into the Julia and Mandelbrot sets.
Friday, March 14 in 2C13, 10:30-11:20
Dr. Randall Pyke, Simon Fraser University
FACULTY PRESENTATION
The Dynamics of Solitons
Solitons are localized solutions of nonlinear wave equations and appear in many areas of application. Trying to understand their remarkable properties (robustness) has led to major advances in the theory of nonlinear partial differential equations and to their uses in areas such as solid-state electronics and nonlinear optics. I will introduce solitons and their close relatives, solitary waves, with examples and numerical experiments, and illustrate some methods for studying them.
Thursday, March 13 in 3M69, 2:30-3:45
Dr. Randall Pyke, Simon Fraser University
MATH/STAT & PHYSICS STUDENTS PRESENTATION
The Remarkable Theorem of Emmy Noether.
In 1918 Emmy Noether proved a theorem relating symmetries of a differential equation with conservation laws for solutions of the equation. It made precise what was up to then folklore in physics and is now the cornerstone in the modern theory of symmetries of differential equations. We will discuss this theorem by first introducing the calculus of variations, a powerful method in physics and differential equations and a major tool in modern analysis.
Wednesday February 5, 12:30pm in Room 4M46
Dr. Gerald Cliff, University of Alberta
TITLE: "The groups of invertible and symplectic matrices"
I will first consider when a matrix can be inverted without switching rows. Then I will define symplectic matrices, which are somewhat analogous to orthogonal matrices. I will see which row switches are symplectic. This leads to the Weyl group of the symplectic group. I will assume the audience has no familiarity with symplectic matrices or Weyl groups.
Tin cast bronze
BrO3Ts7S5N
for valves employed in sea water, water vapour
Temperature, °C | $$E\cdot 10^{9}$$, $$MPa$$ | $$\alpha\cdot 10^{6}$$, $$K^{-1}$$ | $$\varkappa$$, $$\frac{W}{m\cdot K}$$ | $$\rho$$, $$\frac{kg}{m^3}$$ | $$c\cdot 10^{-3}$$, $$\frac{J}{kg\cdot K}$$
20 | 0.84 | – | 62.8 | 8700 | –
100 | – | 16.71 | – | – | 364.3
Mechanical properties at 20 °C
Size, mm
$$\sigma _{U}$$, $$MPa$$
$$\sigma_{Y}$$, $$MPa$$
$$\sigma_{0.5}$$, $$MPa$$
Casting in sandy form 175 8
Casting in sandy form 205 176 5
Brinell hardness number
Value, HBW
Casting and technological parameters
melting point, °C
coefficient of friction with lubrication
coefficient of friction without lubrication
GOST 613-79
Description of physical characteristics
$$E\cdot 10^{9}$$ $$MPa$$ Elastic modulus
$$\alpha\cdot 10^{6}$$ $$K^{-1}$$ Coefficient of thermal (linear) expansion (range 20°C–T)
$$\varkappa$$ $$\frac{W}{m\cdot K}$$ Coefficient of thermal conductivity of the material
$$\rho$$ $$\frac{kg}{m^3}$$ The density of the material
$$c\cdot 10^{-3}$$ $$\frac{J}{kg\cdot K}$$ The specific heat of the material (range 20°C–T)
Description of mechanical properties
$$\sigma_{Y}$$ $$MPa$$ Yield strength
$$\sigma_{0.5}$$ $$MPa$$ Tensile stress required to produce a total elongation of 0.5%
$$\sigma _{U}$$ $$MPa$$ Ultimate tensile strength
Description of the casting and technological parameters
melting point °C The temperature at which a solid crystalline body makes the transition to the liquid state and vice versa
Comparative phase imaging of live cells by digital holographic microscopy and transport of intensity equation methods
Jeremy M. Wittkopp, Ting Chean Khoo, Shane Carney, Kai Pisila, Shahab J. Bahreini, Kate Tubbesing, Supriya Mahajan, Anna Sharikova, Jonathan C. Petruccelli, and Alexander Khmaladze
Jeremy M. Wittkopp,1 Ting Chean Khoo,1 Shane Carney,1 Kai Pisila,1 Shahab J. Bahreini,1 Kate Tubbesing,1,2 Supriya Mahajan,3 Anna Sharikova,1 Jonathan C. Petruccelli,1 and Alexander Khmaladze1,*
1Department of Physics, SUNY University at Albany, 1400 Washington Avenue, Albany, NY 12222, USA
2Department of Molecular and Cellular Physiology, Albany Medical College, 47 New Scotland Avenue, Albany, NY 12208, USA
3Department of Medicine, SUNY University at Buffalo, 875 Ellicott Street, Buffalo, NY 14203, USA
*Corresponding author: [email protected]
Jonathan C. Petruccelli https://orcid.org/0000-0001-8543-3398
https://doi.org/10.1364/OE.385854
Jeremy M. Wittkopp, Ting Chean Khoo, Shane Carney, Kai Pisila, Shahab J. Bahreini, Kate Tubbesing, Supriya Mahajan, Anna Sharikova, Jonathan C. Petruccelli, and Alexander Khmaladze, "Comparative phase imaging of live cells by digital holographic microscopy and transport of intensity equation methods," Opt. Express 28, 6123-6133 (2020)
Original Manuscript: December 12, 2019
Revised Manuscript: January 26, 2020
Manuscript Accepted: January 28, 2020
We describe a microscopic setup implementing phase imaging by digital holographic microscopy (DHM) and transport of intensity equation (TIE) methods, which allows the results of both measurements to be quantitatively compared for either live cell or static samples. Digital holographic microscopy is a well-established method that provides robust phase reconstructions, but requires a sophisticated interferometric imaging system. TIE, on the other hand, is directly compatible with bright-field microscopy, but is more susceptible to noise artifacts. We present results comparing DHM and TIE on a custom-built microscope system that allows both techniques to be used on the same cells in rapid succession, thus permitting the comparison of the accuracy of both methods.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Digital Holographic Microscopy (DHM) is an established real-time, high resolution quantitative phase contrast imaging technique. It is based on the interferometric detection of phase changes of a light wave that passes through, or is reflected from, a microscopic sample. The intensity of light passing through a nearly transparent sample, such as a cell, changes little, but the light slows down inside the sample in proportion to its index of refraction, resulting in a significant phase change. DHM converts these phase changes into intensity variations, which are recorded by an imaging detector. Since the phase change indicates a change in the optical path length, or optical thickness, a height profile of the sample can be deduced. It is, therefore, especially advantageous in mapping the height profile of mostly transparent samples ("pure phase" objects), such as individual cells grown in a monolayer on a transparent surface [1–8].
Other useful properties of digital holographic microscopic imaging are the ability to numerically focus on a surface of the sample from a single image frame, and the ability to digitally correct for various imperfections in the imaging system. For example, the curvature mismatch between the reference and object beams can be numerically compensated [1,2,9–14] and the process for such numerical compensation can be automated (as discussed below), significantly simplifying the operation of a digital holographic microscope.
DHM uses a high-resolution digital camera to acquire a hologram. After the intensity information is recorded, it is encoded as an array of numbers representing the intensity of the optical field. To extract the phase map of the cell from the hologram, it is necessary to numerically propagate the optical field along the direction perpendicular to the hologram plane, until the object is in focus. In this work, this is done using the angular spectrum method. By calculating the final complex optical field and extracting the phase, the optical thickness map of the cells is created. Coupled with many phase unwrapping algorithms, DHM is becoming a routine method for inspection of microstructures [15] and biological systems on a nanometer scale [8].
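As a rough illustration of this propagation step (a minimal sketch, not the in-house software used here; the input field, the refocus distance, and the sample-plane pixel pitch are placeholders), the angular spectrum method can be implemented with two FFTs:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by a distance z using the angular
    spectrum method (wavelength, dx and z in the same length units)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)    # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Placeholder usage: refocus a field extracted from a hologram and take its phase.
field = np.exp(1j * 0.5 * np.random.rand(256, 256))           # placeholder field
refocused = angular_spectrum_propagate(field, wavelength=633e-9,
                                       dx=0.17e-6, z=5e-6)    # z is a placeholder
phase_map = np.angle(refocused)                               # wrapped phase
```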
DHM allows simultaneous measurement of thickness of individual cells and monitoring of many cells within the field of view with nearly real-time resolution. In principle, the speed at which the digital holograms are acquired is limited only by the frame rate of the camera, although numerical propagation and, in some cases, software-based phase unwrapping may require substantial processing power, especially for larger images of samples with complicated height profiles.
Transport of intensity equation (TIE) phase imaging is, like DHM, a high resolution computational imaging technique capable of near-real-time 3D surface and thickness reconstructions [16–18]. However, the TIE offers several advantages as compared to DHM and other interferometric techniques. Rather than using the interferometric combination of multiple beams of light, it relies simply on free-space propagation of a single beam, which is itself a form of self-interference, to render phase measurable. While DHM requires significant spatial stability, i.e. that the interfering beams remain aligned to within fractions of a wavelength (hundreds of nanometers) during the measurement, the TIE requires much less, only that the single beam be held stationary with respect to the camera's pixels (typically several microns).
For pure phase specimens, in addition to the knowledge of the mean wavelength of the illumination, the camera pixel size and system magnification, the TIE requires only a single defocused, bright-field image acquired at a known defocus position. Since all intensity variations in the defocused image are due to the phase of the specimen, this can be used to reconstruct the specimen's phase profile. For non-transparent specimens, the TIE requires instead two images with different levels of defocus in order to separate phase- and absorption-induced intensity variations. For robustness, however, three images are typically acquired: one in focus and two defocused symmetrically about the specimen plane [19]. Phase is then retrieved by solving a second order linear elliptic partial differential equation using the acquired images as input. This can be performed in near-real-time on modern computers using fast Fourier transform (FFT) algorithms [20]. The retrieved phase is naturally unwrapped and can readily measure variations of optical thickness greater than one wavelength.
Another advantage of the TIE over interferometric methods is its natural compatibility with commercial bright-field microscopes. Defocus can be implemented rapidly via a motion controller, either utilizing the microscope's own focus control or by shifting the position of the imaging camera. Alternatively, real-time imaging can be obtained by utilizing beam splitters [21,22], a programmed phase mask [23], or a tunable lens [24] to acquire the defocused images without physical defocusing. The TIE is also significantly more robust than DHM to reduced spatial and temporal coherence of the light source, and is compatible with most sources of illumination including LED and broadband incandescent sources [25–27]. Additionally, since it is compatible with spatially partially coherent illumination, TIE phase microscopy has been shown to produce phase reconstructions with resolution exceeding the coherent diffraction limit [28]. The TIE has been demonstrated to produce useful quantitative phase reconstructions of biological samples in conventional microscopes [29–31].
Since the TIE is a less well-established technique, TIE microscopes are typically validated by using static, known phase targets such as binary phase masks or microlens arrays [21,23–25,29,31]. While TIE is capable of producing accurate reconstructions of such targets, these may not provide an accurate assessment of its capability at retrieving the phase of live cells. As such, having a "gold standard" in the form of the better-established DHM phase image to serve as a reference yields a useful comparison. While earlier studies have provided comparisons of DHM and TIE [32,33], none to our knowledge has attempted to compare the two methods quantitatively in imaging live cells.
2. Digital holographic and transport of intensity equation phase imaging
2.1 Automated Fourier filter selection and curvature correction for digital holographic microscopy
In order to generate an accurate phase profile from the raw data collected from the camera, one needs to numerically compensate a curvature mismatch between the reference and object beams [1,2] that is present in a holographic setup. If the mismatch is not properly compensated, the resulting phase image of a flat surface will be curved. We developed an automated iterative method combining the curvature compensation (discussed in Ref. [2]) with the simultaneous Fourier filter adjustments.
The reconstruction of a hologram by the angular spectrum method [7,8,34] involves the selection of a first order diffraction peak in the Fourier space [see Fig. 1(a)]. The algorithm begins by taking an initial guess of the location of the peak based on the location of the brightest point away from the central 0th order maximum. It then centers itself around this maximum, and rejects the signals beyond a certain radius [see Fig. 1(b)]. The algorithm then uses the location of this bright point as the center of the Fourier space. The phase reconstruction, shown in Fig. 1(c), is curved due to the curvature mismatch between the reference and object beams. The position of the Fourier filter is adjusted so that the center of curvature is in the center of the phase image [and the center of the phase image is as free from phase discontinuities as possible – see Fig. 1(d)]. The spherical curvature correction is applied next, resulting in a flatter phase image [Fig. 1(e)]. Finally, adjusting the location of the projection of the center of curvature on the CCD matrix (x_o and y_o in Ref. [2]) produces the image shown in Fig. 1(f), which is free from curvature.
Fig. 1. The stages of the automatic Fourier filter placement and curvature correction (see Ref. [2] for more detail): (a) the result of the angular spectrum transform applied to the hologram of USAF target, group 6, acquired by camera; (b) filtering of the first diffraction order; (c) phase reconstruction with curvature present; (d) phase reconstruction after the initial filter repositioning; (e) the results of the curvature correction; (f) final curvature adjustment.
While this iterative process may take 30–60 seconds (depending on the image size and initial guesses), it does not require any user input throughout its run. It also only needs to be applied once for a particular setup setting. While the curvature needs to be slightly adjusted for a different microscope objective, the most practical application for this process is to analyze larger images through the tiling of multiple smaller images. In such a case, each region of a larger field of view will have a slightly different curvature mismatch, and the smaller images can be taken sequentially and automatically adjusted for curvature. The results can be compiled into a larger full image and compared to a larger field TIE image.
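A minimal sketch of the automatic first-order filter selection described above is given below; the zero-order exclusion radius, the filter radius, and the recentering step are illustrative assumptions rather than the parameters of the actual software.

```python
import numpy as np

def filter_first_order(hologram, dc_block=20, filter_radius=60):
    """Select the +1 diffraction order in the hologram spectrum and return
    the corresponding complex object field (carrier frequency removed)."""
    ny, nx = hologram.shape
    cy, cx = ny // 2, nx // 2
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))

    # Suppress the central zero-order region before searching for the peak.
    Y, X = np.ogrid[:ny, :nx]
    mag = np.abs(spectrum).copy()
    mag[(Y - cy) ** 2 + (X - cx) ** 2 < dc_block ** 2] = 0

    # Initial guess: the brightest remaining point is taken as the +1 order.
    py, px = np.unravel_index(np.argmax(mag), mag.shape)

    # Keep a circular region around the peak and recenter it so that the
    # off-axis carrier is removed in the reconstructed field.
    keep = (Y - py) ** 2 + (X - px) ** 2 < filter_radius ** 2
    centered = np.roll(spectrum * keep, shift=(cy - py, cx - px), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(centered))
```

The returned complex field can then be numerically propagated and curvature-corrected as described in Ref. [2].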
2.2 Transport of intensity equation phase imaging
The TIE relates the variation of intensity I along the z axis to the gradient of the phase ϕ in the transverse x-y plane through the partial differential equation
(1) $$\frac{\partial I(\mathbf{x},z)}{\partial z} = -\frac{\lambda}{2\pi M}\,\nabla_{\mathbf{x}} \cdot \left[ I(\mathbf{x},z)\,\nabla_{\mathbf{x}}\phi(\mathbf{x},z) \right],$$
where $\mathbf{x}=(x,y)$ is the coordinate in the transverse (detector) plane, λ is the beam's wavelength, M is the microscope's magnification, and $\nabla_{\mathbf{x}}$ is the gradient operator in the transverse plane.
In practice, the derivatives are estimated by finite differences, with transverse derivatives taken between pixels on the camera and the axial derivative being a finite difference of two images acquired with the system focused at axial positions ±Δz, as illustrated in Fig. 2.
Fig. 2. TIE microscopy performed by defocusing the system about the sample plane.
Additionally, and without loss of generality, if z = 0 is taken to be the central plane about which these images are acquired, the TIE has the finite-difference form
(2) $$\frac{I(\mathbf{x},\Delta z) - I(\mathbf{x},-\Delta z)}{2\Delta z} = -\frac{\lambda}{2\pi M}\,\nabla_{\mathbf{x}} \cdot \left[ I(\mathbf{x},0)\,\nabla_{\mathbf{x}}\phi(\mathbf{x},0) \right].$$
In this work, this differential equation is solved through defining an auxiliary function [16] whose gradient is assumed to be equal to the product of the in-focus intensity and transverse phase gradient, $\nabla_{\mathbf{x}}\Theta = I(\mathbf{x},0)\,\nabla_{\mathbf{x}}\phi(\mathbf{x},0)$. This leads to a pair of Poisson's equations
(3) $$\nabla_{\mathbf{x}}^{2}\Theta = -\frac{2\pi M}{\lambda}\,\frac{I(\mathbf{x},\Delta z) - I(\mathbf{x},-\Delta z)}{2\Delta z},$$
(4) $$\nabla_{\mathbf{x}}^{2}\phi(\mathbf{x},0) = \nabla_{\mathbf{x}} \cdot \left[ \frac{\nabla_{\mathbf{x}}\Theta}{I(\mathbf{x},0)} \right].$$
Phase is retrieved by solving these Poisson's equations through use of the fast Fourier transform (FFT) [20]. This has the advantage of speed over other numerical methods for directly solving Eq. (2). Note also that solving the TIE requires that the value of the phase or its normal derivative be defined on the boundary of the region of interest, and this is typically not known a priori. The FFT assumes periodic boundary conditions, which will hold in the case of periodic objects and work well in the case of isolated objects. For aperiodic objects with non-trivial phase structure near the edges of the region of interest, the FFT can result in large artifacts near the boundaries. This can be somewhat alleviated by periodic tiling of the measured intensities [35]. Methods have been proposed to measure the boundary values directly by introducing hard-edged apertures in the field of view [20,36,37], but these require precise manufacturing, alignment and measurement.
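A compact sketch of this FFT-based solution is shown below (assuming periodic boundary conditions and uniform detector pixel spacing; the handling of the zero-frequency term and the parameter names are illustrative, not those of the software used for the results in this paper):

```python
import numpy as np

def tie_solve(I_minus, I_0, I_plus, dz, wavelength, M, px):
    """Recover phase from under-, in-, and over-focus images by solving the
    TIE Poisson equations (3) and (4) with FFTs (periodic boundaries)."""
    ny, nx = I_0.shape
    fx = np.fft.fftfreq(nx, d=px)                   # detector-plane frequencies
    fy = np.fft.fftfreq(ny, d=px)
    FX, FY = np.meshgrid(fx, fy)
    lap = -4.0 * np.pi ** 2 * (FX ** 2 + FY ** 2)   # Fourier symbol of the Laplacian
    lap[0, 0] = 1.0                                 # placeholder; DC term is set below

    def inv_laplacian(f):
        F = np.fft.fft2(f) / lap
        F[0, 0] = 0.0                               # mean of the solution is arbitrary
        return np.real(np.fft.ifft2(F))

    # Eq. (3): Poisson equation for the auxiliary function Theta.
    dIdz = (I_plus - I_minus) / (2.0 * dz)
    theta = inv_laplacian(-2.0 * np.pi * M / wavelength * dIdz)

    # Eq. (4): divergence of (grad Theta / in-focus intensity), then invert again.
    gy, gx = np.gradient(theta, px)
    div = np.gradient(gx / I_0, px, axis=1) + np.gradient(gy / I_0, px, axis=0)
    return inv_laplacian(div)                       # phase in the in-focus plane
```

For nearly transparent samples, the in-focus intensity in Eq. (4) is sometimes approximated by its mean value, which avoids amplifying noise where the measured intensity is low.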
As mentioned before, TIE is also robust under conditions of low spatial and temporal coherence of the light source, and is compatible with most sources of illumination including LED and broadband incandescent sources, provided that the mean wavelength is used in Eqs. (1) and (2) and the specimen is not strongly dispersive. The use of partially coherent light can essentially eliminate coherent diffraction noise in the reconstructed phase.
The performance of a TIE system depends strongly on the details of the illumination employed [27,38,39]. In a microscope with Kohler illumination, the shape and size of the condenser aperture controls the spatial coherence of the system, and the resolution of TIE is degraded due to blurring of the defocused images by convolution with the scaled condenser aperture function. On the other hand, the in-focus resolution is limited by the maximum spatial frequency of the sample that can be captured by the microscope objective. Larger condenser apertures improve the in-focus resolution by allowing the objective to capture higher spatial frequencies of the specimen. Varying the size of the condenser aperture therefore trades off in-focus versus defocused resolution, and since TIE relies on solving a differential equation using both in- and defocused images, the condenser aperture used must be considered carefully to optimize the reconstructed phase. For example, it has been recently demonstrated that dynamic control of the illumination can be used to combine the strengths of both large and small condenser aperture TIE imaging [28]. By dividing the total exposure time in half and acquiring two images with half of the exposure each, one with a large and one with a small condenser aperture, the reconstruction can produce high resolution phase reconstructions while reducing noise [38]. Additionally, some of the computational steps of the reconstruction can be performed optically, further reducing noise [39]. In order to use a single source to boost the in-focus resolution at all spatial frequencies, while not degrading the defocused resolution due to convolution with a broad condenser aperture, off-axis illumination from a single, annular aperture can be employed [28]. If this annular ring is chosen such that the NA of the condenser illumination is matched to the NA of the objective, the TIE resolution should be approximately twice the resolution of in-focus, coherent imaging, i.e. a minimum resolvable feature size of 0.61λ/NA, as opposed to 1.22λ/NA. Since swapping annuli in the condenser aperture to match each objective is not necessarily practical, similar results can be obtained by replacing the microscope's illumination system with an LED array, provided an annular set of LEDs is used to illuminate the specimen, and the angle of illumination from each LED through the objective is matched to the NA of the objective.
Finally, it is worth mentioning that the TIE has two drawbacks compared to DHM. The first is that it requires multiple images with axial scanning between the images, which takes time and requires defocus control precise to the order of the Rayleigh range of the objective, Δz ≈ λ/NA². In principle, if the samples were uniformly attenuating, only a single defocused image would be needed. However, we find that even for weakly attenuating samples, the reconstructions are most robust with three acquired images (under-, over- and in-focus). Although not adopted here, this drawback can be overcome with a variety of methods to eliminate mechanical scanning in acquiring defocused images [21–24]. A second drawback is the TIE's sensitivity to low-spatial-frequency noise [40]. Low spatial frequency noise corresponds to large, slowly varying phase features. Since light is expected to be refracted only weakly by such features in the specimen, only very subtle changes in intensity are expected upon small defocus, and these changes are easily corrupted by noise. Moreover, low-spatial-frequency components of the noise in a measurement are likely to be misinterpreted by the TIE as phase signatures. TIE phase reconstructions are therefore subject to large, slowly-varying background features due to noise. Although a variety of methods have been proposed to alleviate this problem, they either rely on acquiring additional images at more defocus positions [41–45] (and thus longer and more complicated data acquisition), or on computational methods, which use assumptions on the specimen structure to preferentially eliminate noise [46,47] (which limits the class of objects that can be reconstructed). So as not to limit the applicability of our system, and to maintain rapid data acquisition, we do not use any of these noise reduction methods here.
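For a rough sense of the scales involved, the characteristic defocus and resolution figures follow directly from the expressions above; the numerical aperture in this sketch is an assumed value for illustration, not a parameter quoted in this text:

```python
wavelength = 466e-9        # TIE illumination wavelength (blue LED), in meters
NA = 0.4                   # assumed objective numerical aperture (illustrative)

rayleigh_range = wavelength / NA ** 2        # characteristic defocus scale, ~2.9 um
res_coherent = 1.22 * wavelength / NA        # in-focus coherent limit, ~1.4 um
res_matched_na = 0.61 * wavelength / NA      # matched-NA (annular) TIE limit, ~0.7 um
```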
3. Experimental
Figure 3 shows the combined DHM and TIE optics setup. A Thorlabs He-Ne Laser (Model HNL008R, λ=633 nm) was used for digital holographic measurements, and a blue LED 2D array (λ=466 nm) was used for TIE. For the DHM measurements, the beam splitter BS1 separated the laser light into the object and reference beams. Plano-convex lenses L1 and L2 expanded the object beam. The object beam was reflected by mirror M1 to pass through the sample. The image was formed by the microscope objective OBJ1. The reference beam was reflected by M2, and expanded by L3 and L4. The object and the reference beams were recombined by the beam splitter BS2 and the interference pattern was formed on the camera. The curvature mismatch between the reference and the object beams was compensated numerically using customized in-house software.
Fig. 3. Combined DHM and TIE setup (see text for details).
To perform TIE measurements, an Adafruit 32 × 32 RGB LED array controlled by an Arduino UNO was programmed to project modifiable illumination patterns using the blue LED sources. Light emitted by this array was passed through the sample and focused by objective lens OBJ1 onto the camera. An annular pattern of LEDs was used so that the mirror M1 did not significantly obstruct the illumination; it remained in place during TIE measurements. The objective was focused sequentially on three planes: one on either side of focus, and one in focus. The optical train of the Olympus IX73 inverted microscope and Thorlabs CS505MU CMOS camera were used to capture images. The objective turret was equipped with a motorized motion controller, enabling precise axial defocus positioning. In-focus, under-focused, and over-focused images were supplied to an FFT-based TIE solver in order to reconstruct the image.
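The annular pattern itself is simple to generate; the sketch below builds a Boolean on/off mask for a 32 × 32 panel (the ring radii are illustrative assumptions, and the Arduino-side code that drives the panel is not shown):

```python
import numpy as np

def annular_pattern(size=32, r_inner=10.0, r_outer=13.0):
    """Boolean on/off mask of LEDs forming an annulus on a size x size panel."""
    c = (size - 1) / 2.0                     # geometric center of the panel
    y, x = np.ogrid[:size, :size]
    r = np.sqrt((x - c) ** 2 + (y - c) ** 2)
    return (r >= r_inner) & (r <= r_outer)

mask = annular_pattern()
# Each True entry corresponds to one blue LED switched on; the dark interior
# of the ring is what allows mirror M1 to remain in place without
# significantly obstructing the illumination.
```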
As a test to quantitatively compare both DHM and TIE techniques, a sample with known properties was analyzed. A Benchmark Technologies Quantitative Phase Target (n = 1.52) with an etch depth of 100 nm [see Fig. 4(a)] was imaged, and the phase information was reconstructed by both methods. The results of the 3D reconstruction performed by TIE can be seen in Fig. 4(b). Here, we converted the phase into physical thickness t [see Eq. (5) below].
Fig. 4. Comparison of a spoke target thickness measurements using DHM and TIE. The bright-field in-focus image of the spoke target is shown in (a), the TIE reconstruction of the optical thickness is shown in (b); (c) and (d) show the cross-section of the same spoke for TIE and DHM respectively.
The phase $\phi(\mathbf{x})$ depends quantitatively on the wavelength of the light illuminating the sample. Scaling the retrieved phase by a wavelength-dependent pre-factor yields a quantity that is invariant across wavelengths, since it depends only on the sample thickness t, its index of refraction n_f, and the index of refraction of the surrounding medium n_0. The optical path difference (OPD) is
(5) $$OPD = \frac{\lambda}{2\pi}\,\phi(\mathbf{x}) = (n_{f} - n_{o})\,t.$$
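In code, Eq. (5) amounts to a simple rescaling of the reconstructed phase map; in the sketch below, n_f = 1.52 is the quoted index of the phase target and the surrounding medium is assumed to be air (n_0 = 1):

```python
import numpy as np

def phase_to_thickness(phase, wavelength, n_f=1.52, n_0=1.0):
    """Convert a phase map (radians) to physical thickness using Eq. (5)."""
    opd = wavelength / (2.0 * np.pi) * phase     # optical path difference
    return opd / (n_f - n_0)                     # physical thickness t
```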
The cross-section of the reconstructed thickness is shown for TIE in Fig. 4(c) and DHM in Fig. 4(d). It is not clear how uniform the thickness of each step of the phase target actually is, as we could not measure the structure independently from DHM and TIE. The disagreement in the profile step size (0.1 micron for DHM, 0.08 micron for TIE) is well within the range of oscillations seen in DHM and the low-frequency background observed in TIE.
Next, we used DHM and TIE to evaluate the optical thickness profile of a live human epithelial cheek cell (Fig. 5). Our combined DHM/TIE setup allowed the imaging of the same cell without the need to move the sample.
Fig. 5. Comparison of a cheek cell optical thickness measurements using DHM (left-hand side) and TIE (right-hand side). The images in (a) and (b) show the off-axis view; (c) and (d) show on-axis view. Vertical axis represents optical thickness. The scale for the horizontal (x and y) axes is in pixels (pixel size is 0.17×0.17 µm2). The cross-section shown as the red solid horizontal line in (c) and the black dashed horizontal line in (d) is plotted in (e), and the cross-section shown as the black solid diagonal line in (c) and the red dashed diagonal line in (d) is plotted in (f).
The results of the single-cell imaging are shown in Fig. 5, where 3D off-axis [Figs. 5(a) and 5(b)] and on-axis [Figs. 5(c) and 5(d)] views of the reconstructed optical thickness are shown.
Figures 5(e) and 5(f) show two cross-sections through the reconstructed optical thickness. As can be seen, the two methods yielded quantitatively similar optical thickness profile for the cell. The observed difference in cell profiles could be due to the fact that the cell has moved between the DHM and TIE measurements (the time to switch between sources, acquire camera frames, move the camera to the new defocus distance, etc.).
Finally, Table 1 shows the RMS error between TIE and DHM phase profiles for each of the line profiles in Figs. 4 and 5. Ongoing improvements to the DHM/TIE system and software will eliminate the time between DHM and TIE acquisitions, so that even measurements of rapid cellular dynamics may be compared between these two techniques.
Table 1. RMS Error between TIE and DHM
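The RMS figure of merit itself can be computed directly from two co-registered line profiles; a minimal sketch (the profile arrays are placeholders) is:

```python
import numpy as np

def rms_error(profile_a, profile_b):
    """Root-mean-square difference between two co-registered line profiles."""
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    return np.sqrt(np.mean((a - b) ** 2))
```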
We developed and tested a combined digital holographic microscopy (DHM) and transport of intensity equation (TIE) microscopic phase imaging system for 3D data acquisition, and performed a comparison between the two techniques. The use of a combined "hybrid" setup allowed the implementation of both methods in rapid succession for static targets and live cells. Quantitative agreement between the methods was found for both static and live samples.
Further, we presented a method to automate the selection and positioning of the Fourier domain filter and wavefront curvature correction, simplifying the processing of DHM images. This is particularly important for handling a succession of related images (assembling a larger field of view, or a time series).
Ongoing work includes modifying the experimental setup to perform simultaneous data collection with both the DHM and TIE systems. This can be achieved using dichroic mirrors, taking advantage of the difference in illumination wavelength between the two systems. This combined system will also be used for the analysis of multiple cells and monitoring changes induced by various chemical treatments. Lastly, the availability of simultaneous DHM measurements for validation will enable us to rapidly improve the TIE reconstruction algorithms for live cells by providing an accurate reference phase for comparison.
In the future, we intend to use both DHM and TIE together, and demonstrate that TIE offers a practical tool capable of extracting cell volume from live biological specimens with minimal adjustments. Such a robust system will be used by biological scientists without specialized optical engineering training in a variety of cell imaging applications.
National Institute on Drug Abuse (RO1 DA047410-01); New York State Department of Health (DOH01-C33920GG-3450000); Peter T. Rowley Breast Cancer Scientific Research Projects (C33920GG).
1. A. Khmaladze, A. Restrepo-Martínez, M. K. Kim, R. Castañeda, and A. Blandón, "Simultaneous Dual-Wavelength Reflection Digital Holography Applied to the Study of the Porous Coal Samples," Appl. Opt. 47(17), 3203–3210 (2008). [CrossRef]
2. A. Khmaladze, M. K. Kim, and C.-M. Lo, "Phase Imaging of Cells by Simultaneous Dual-Wavelength Reflection Digital Holography," Opt. Express 16(15), 10900–10911 (2008). [CrossRef]
3. A. Khmaladze, T. Epstein, and Z. Chen, "Phase Unwrapping by Varying the Reconstruction Distance in Digital Holographic Microscopy," Opt. Lett. 35(7), 1040–1042 (2010). [CrossRef]
4. A. Khmaladze, R. L. Matz, C. Zhang, T. Wang, M. M. Banaszak Holl, and Z. Chen, "Dual-Wavelength Linear Regression Phase Unwrapping in Three-Dimensional Microscopic Images of Cancer Cells," Opt. Lett. 36(6), 912–914 (2011). [CrossRef]
5. A. Khmaladze, R. L. Matz, J. Jasensky, E. Seeley, M. M. Banaszak Holl, and Z. Chen, "Dual-Wavelength Digital Holographic Imaging with Phase Background Subtraction," Opt. Eng. 51(5), 055801 (2012). [CrossRef]
6. A. Khmaladze, R. L. Matz, T. Epstein, J. Jasensky, M. M. Banaszak Holl, and Z. Chen, "Cell Volume Changes During Apoptosis Monitored in Real Time Using Digital Holographic Microscopy," J. Struct. Biol. 178(3), 270–278 (2012). [CrossRef]
7. A. Sharikova, E. Quaye, J. Y. Park, M. C. Maloney, H. Desta, R. Thiyagarajan, K. L. Seldeen, N. U. Parikh, P. Sandhu, A. Khmaladze, B. R. Troen, S. A. Schwartz, and S. D. Mahajan, "Methamphetamine Induces Apoptosis of Microglia via the Intrinsic Mitochondrial-Dependent Pathway," J. Neuroimmune Pharmacol. 13(3), 396–411 (2018). [CrossRef]
8. L. Y. D'Brant, H. Desta, T. C. Khoo, A. Sharikova, S. D. Mahajan, and A. Khmaladze, "Methamphetamine-induced apoptosis in glial cells examined under marker-free imaging modalities," J. Biomed. Opt. 24(4), 046503 (2019). [CrossRef]
9. P. Ferraro, S. De Nicola, A. Finizio, G. Coppola, S. Grilli, C. Magro, and G. Pierattini, "Compensation of the inherent wave front curvature in digital holographic coherent microscopy for quantitative phase-contrast imaging," Appl. Opt. 42(11), 1938–1946 (2003). [CrossRef]
10. T. Colomb, E. Cuche, F. Charrière, J. Kühn, N. Aspert, F. Montfort, P. Marquet, and C. Depeursinge, "Automatic procedure for aberration compensation in digital holographic microscopy and applications to specimen shape compensation," Appl. Opt. 45(5), 851–863 (2006). [CrossRef]
11. T. Colomb, F. Montfort, J. Kühn, N. Aspert, E. Cuche, A. Marian, F. Charrière, S. Bourquin, P. Marquet, and C. Depeursinge, "Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy," J. Opt. Soc. Am. A 23(12), 3177–3190 (2006). [CrossRef]
12. F. Montfort, F. Charrière, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, "Purely numerical compensation for microscope objective phase curvature in digital holographic microscopy: influence of digital phase mask position," J. Opt. Soc. Am. A 23(11), 2944–2953 (2006). [CrossRef]
13. E. Cuche, P. Marquet, and C. Depeursinge, "Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms," Appl. Opt. 38(34), 6994–7001 (1999). [CrossRef]
14. J. Min, B. Yao, P. Gao, B. Ma, S. Yan, F. Peng, J. Zheng, T. Ye, and R. Rupp, "Wave-front curvature compensation of polarization phase-shifting digital holography," Optik 123(17), 1525–1529 (2012). [CrossRef]
15. T. C. Khoo, A. Sharikova, and A. Khmaladze, "Dual wavelength digital holographic imaging of layered structures," Opt. Commun. 458, 124793 (2020). [CrossRef]
16. M. R. Teague, "Deterministic phase retrieval: a green's function solution," J. Opt. Soc. Am. 73(11), 1434–1441 (1983). [CrossRef]
17. N. Streibl, "Phase imaging by the transport equation of intensity," Opt. Commun. 49(1), 6–10 (1984). [CrossRef]
18. A. Barty, K. A. Nugent, D. M. Paganin, and A. Roberts, "Quantitative Optical Phase Microscopy," Opt. Lett. 23(11), 817–819 (1998). [CrossRef]
19. T. E. Gureyev, Y. I. Nesterets, D. M. Paganin, A. Pogany, and S. W. Wilkins, "Linear Algorithms for Phase Retrieval in the Fresnel Region. 2. Partially Coherent Illumination," Opt. Commun. 259(2), 569–580 (2006). [CrossRef]
20. T. E. Gureyev and K. A. Nugent, "Phase retrieval with the transport-of-intensity equation II. Orthogonal series solution for nonuniform illumination," J. Opt. Soc. Am. A 13(8), 1670–1682 (1996). [CrossRef]
21. C. Zuo, Q. Chen, W. Qu, and A. Asundi, "Noninterferometric single-shot quantitative phase microscopy," Opt. Lett. 38(18), 3538–3541 (2013). [CrossRef]
22. X. Tian, W. Yu, X. Meng, A. Sun, L. Xue, C. Liu, and S. Wang, "Real-time quantitative phase imaging based on transport of intensity equation with dual simultaneously recorded field of view," Opt. Lett. 41(7), 1427–1430 (2016). [CrossRef]
23. W. Yu, X. Tian, X. He, X. Song, L. Xue, C. Liu, and S. Wang, "Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method," Appl. Phys. Lett. 109(7), 071112 (2016). [CrossRef]
24. C. Zuo, Q. Chen, W. Qu, and A. Asundi, "High-speed transport-of-intensity phase microscopy with an electrically tunable lens," Opt. Express 21(20), 24060–24075 (2013). [CrossRef]
25. J. C. Petruccelli, L. Tian, and G. Barbastathis, "The transport of intensity equation for optical path length recovery using partially coherent illumination," Opt. Express 21(12), 14430–14441 (2013). [CrossRef]
26. T. E. Gureyev, A. Roberts, and K. A. Nugent, "Phase retrieval with the transport-of-intensity equation: matrix solution with use of Zernike polynomials," J. Opt. Soc. Am. A 12(9), 1932–1941 (1995). [CrossRef]
27. A. M. Zysk, R. W. Schoonover, P. S. Carney, and M. A. Anastasio, "Transport of intensity and spectrum for partially coherent fields," Opt. Lett. 35(13), 2239–2241 (2010). [CrossRef]
28. C. Zuo, J. Sun, J. Li, J. Zhang, A. Asundi, and Q. Chen, "High-resolution transport-of-intensity quantitative phase microscopy with annular illumination," Sci. Rep. 7(1), 7654 (2017). [CrossRef]
29. S. S. Kou, L. Waller, G. Barbastathis, P. Marquet, C. Depeursinge, and C. J. R. Sheppard, "Quantitative phase restoration by direct inversion using the optical transfer function," Opt. Lett. 36(14), 2671–2673 (2011). [CrossRef]
30. P. K. Poola and R. John, "Label-free nanoscale characterization of red blood cell structure and dynamics using single-shot transport of intensity equation," J. Biomed. Opt. 22(10), 106001 (2017). [CrossRef]
31. Y. Li, C. Ma, J. Zhang, J. Zhong, K. Wang, T. Xi, and J. Zhao, "Quantitative phase microscopy for cellular dynamics based on transport of intensity equation," Opt. Express 26(1), 586–593 (2018). [CrossRef]
32. C. Zuo, Q. Chen, and A. Asundi, "Comparison of Digital Holography and Transport of Intensity for Quantitative Phase Contrast Imaging," Fringe 2013, 137–142 (2013). [CrossRef]
33. B. Rappaz, B. Breton, E. Shaffer, and G. Turcatti, "Digital Holographic Microscopy: A Quantitative Label-Free Microscopy Technique for Phenotypic Screening," Comb. Chem. High Throughput Screening 17(1), 80–88 (2014). [CrossRef]
34. C. J. Mann, L. F. Yu, C.-M. Lo, and M. K. Kim, "High-resolution quantitative phase-contrast microscopy by digital holography," Opt. Express 13(22), 8693–8698 (2005). [CrossRef]
35. V. V. Volkov, Y. Zhu, and M. De Graef, "A new symmetrized solution for phase retrieval using the transport of intensity equation," Micron 33(5), 411–416 (2002). [CrossRef]
36. F. Roddier, "Curvature sensing and compensation: a new concept in adaptive optics," Appl. Opt. 27(7), 1223–1225 (1988). [CrossRef]
37. C. Zuo, Q. Chen, and A. Asundi, "Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform," Opt. Express 22(8), 9220 (2014). [CrossRef]
38. T. Chakraborty and J. C. Petruccelli, "Source diversity for transport of intensity phase imaging," Opt. Express 25(8), 9122–9137 (2017). [CrossRef]
39. T. Chakraborty and J. C. Petruccelli, "Optical convolution for quantitative phase retrieval using the transport of intensity equation," Appl. Opt. 57, A134–A141 (2018). [CrossRef]
40. D. Paganin, A. Barty, P. McMahon, and K. Nugent, "Quantitative phase-amplitude microscopy. III. The Effects of Noise," J. Microsc. 214(1), 51–61 (2004). [CrossRef]
41. L. Waller, L. Tian, and G. Barbastathis, "Transport of Intensity phase-amplitude imaging with higher order intensity derivatives," Opt. Express 18(12), 12552–61 (2010). [CrossRef]
42. R. Bie, X.-H. Yuan, M. Zhao, and L. Zhang, "Method for estimating the axial intensity derivative in the TIE with higher order intensity derivatives and noise suppression," Opt. Express 20(7), 8186 (2012). [CrossRef]
43. S. Zheng, B. Xue, W. Xue, X. Bai, and F. Zhou, "Transport of intensity phase imaging from multiple noisy intensities measured in unequally spaced planes," Opt. Express 20(2), 972 (2012). [CrossRef]
44. C. Zuo, Q. Chen, Y. Yu, and A. Asundi, "Transport-of-intensity phase imaging using Savitzky-Golay differentiation filter–theory and applications," Opt. Express 21(5), 5346–5362 (2013). [CrossRef]
45. Z. Jingshan, R. A. Claus, J. Dauwels, L. Tian, and L. Waller, "Transport of Intensity phase imaging by intensity spectrum fitting of exponentially spaced defocus planes," Opt. Express 22(9), 10661 (2014). [CrossRef]
46. A. Kostenko, K. J. Batenburg, H. Suhonen, S. E. Offerman, and L. J. van Vliet, "Phase retrieval in in-line x-ray phase contrast imaging based on total variation minimization," Opt. Express 21(1), 710–723 (2013). [CrossRef]
47. L. Tian, J. C. Petruccelli, and G. Barbastathis, "Nonlinear diffusion regularization for transport of intensity phase imaging," Opt. Lett. 37(19), 4131–4133 (2012). [CrossRef]
Anastasio, M. A.
Aspert, N.
Asundi, A.
Bai, X.
Banaszak Holl, M. M.
Barbastathis, G.
Barty, A.
Batenburg, K. J.
Bie, R.
Blandón, A.
Bourquin, S.
Breton, B.
Carney, P. S.
Castañeda, R.
Chakraborty, T.
Charrière, F.
Chen, Q.
Claus, R. A.
Colomb, T.
Cuche, E.
D'Brant, L. Y.
Dauwels, J.
De Graef, M.
De Nicola, S.
Depeursinge, C.
Desta, H.
Epstein, T.
Ferraro, P.
Finizio, A.
Gao, P.
Grilli, S.
Gureyev, T. E.
He, X.
Jasensky, J.
Jingshan, Z.
John, R.
Khmaladze, A.
Khoo, T. C.
Kim, M. K.
Kostenko, A.
Kou, S. S.
Kühn, J.
Li, J.
Li, Y.
Liu, C.
Lo, C.-M.
Ma, B.
Ma, C.
Magro, C.
Mahajan, S. D.
Maloney, M. C.
Mann, C. J.
Marian, A.
Marquet, P.
Matz, R. L.
McMahon, P.
Meng, X.
Min, J.
Montfort, F.
Nesterets, Y. I.
Nugent, K.
Nugent, K. A.
Offerman, S. E.
Paganin, D.
Paganin, D. M.
Parikh, N. U.
Park, J. Y.
Peng, F.
Petruccelli, J. C.
Pierattini, G.
Pogany, A.
Poola, P. K.
Qu, W.
Quaye, E.
Rappaz, B.
Restrepo-Martínez, A.
Roberts, A.
Roddier, F.
Rupp, R.
Sandhu, P.
Schoonover, R. W.
Schwartz, S. A.
Seeley, E.
Seldeen, K. L.
Shaffer, E.
Sharikova, A.
Sheppard, C. J. R.
Song, X.
Streibl, N.
Suhonen, H.
Sun, A.
Sun, J.
Teague, M. R.
Thiyagarajan, R.
Tian, L.
Tian, X.
Troen, B. R.
Turcatti, G.
van Vliet, L. J.
Volkov, V. V.
Waller, L.
Wang, K.
Wang, T.
Wilkins, S. W.
Xi, T.
Xue, B.
Xue, L.
Xue, W.
Yan, S.
Yao, B.
Ye, T.
Yu, L. F.
Yu, W.
Yu, Y.
Yuan, X.-H.
Zhang, J.
Zhang, L.
Zhao, J.
Zhao, M.
Zheng, J.
Zheng, S.
Zhong, J.
Zhou, F.
Zhu, Y.
Zuo, C.
Zysk, A. M.
Appl. Opt. (6)
Comb. Chem. High Throughput Screening (1)
J. Biomed. Opt. (2)
J. Microsc. (1)
J. Neuroimmune Pharmacol. (1)
J. Opt. Soc. Am. (1)
J. Opt. Soc. Am. A (4)
J. Struct. Biol. (1)
Sci. Rep. (1)
Equations on this page are rendered with MathJax. Learn more.
(1) $$ \frac{\partial I(\mathbf{x},z)}{\partial z} = -\frac{\lambda}{2\pi M}\,\nabla_{\mathbf{x}}\cdot\left[I(\mathbf{x},z)\,\nabla_{\mathbf{x}}\phi(\mathbf{x},z)\right] $$
(2) $$ \frac{I(\mathbf{x},\Delta z)-I(\mathbf{x},-\Delta z)}{2\Delta z} = -\frac{\lambda}{2\pi M}\,\nabla_{\mathbf{x}}\cdot\left[I(\mathbf{x},0)\,\nabla_{\mathbf{x}}\phi(\mathbf{x},0)\right] $$
(3) $$ \nabla_{\mathbf{x}}^{2}\Theta = -\frac{2\pi M}{\lambda}\,\frac{I(\mathbf{x},\Delta z)-I(\mathbf{x},-\Delta z)}{2\Delta z} $$
(4) $$ \nabla_{\mathbf{x}}^{2}\phi(\mathbf{x},0) = \nabla_{\mathbf{x}}\cdot\left[\frac{\nabla_{\mathbf{x}}\Theta}{I(\mathbf{x},0)}\right] $$
(5) $$ \mathrm{OPD} = \frac{\lambda}{2\pi}\,\phi(\mathbf{x}) = (n_f - n_o)\,t $$
RMS Error between TIE and DHM (µm):
Spoke target, Figs. 4(c) and 4(f): 0.04
Cheek cell, horizontal line, Fig. 5(e): 0.14
Cheek cell, vertical line, Fig. 5(f): 0.09
BMC Bioinformatics
Scalable analysis of Big pathology image data cohorts using efficient methods and high-performance computing strategies
Tahsin Kurc (1, corresponding author), Xin Qi (2, 8), Daihou Wang (3), Fusheng Wang (1, 4), George Teodoro (1, 5), Lee Cooper (6), Michael Nalisnik (6), Lin Yang (7), Joel Saltz (1) and David J. Foran (2, 8)
BMC Bioinformatics 2015, 16:399
© Kurc et al. 2015
We describe a suite of tools and methods that form a core set of capabilities for researchers and clinical investigators to evaluate multiple analytical pipelines and quantify sensitivity and variability of the results while conducting large-scale studies in investigative pathology and oncology. The overarching objective of the current investigation is to address the challenges of large data sizes and high computational demands.
The proposed tools and methods take advantage of state-of-the-art parallel machines and efficient content-based image searching strategies. The content based image retrieval (CBIR) algorithms can quickly detect and retrieve image patches similar to a query patch using a hierarchical analysis approach. The analysis component based on high performance computing can carry out consensus clustering on 500,000 data points using a large shared memory system.
Our work demonstrates that efficient CBIR algorithms and high performance computing can be leveraged for efficient analysis of large microscopy images to meet the challenges of clinically salient applications in pathology. These technologies enable researchers and clinical investigators to make more effective use of the rich informational content contained within digitized microscopy specimens.
Examination of the micro-anatomic characteristics of normal and diseased tissue is important in the study of many types of disease. The evaluation process can reveal new insights as to the underlying mechanisms of disease onset and progression and can augment genomic and clinical information for more accurate diagnosis and prognosis [1–3]. It is highly desirable in research and clinical studies to use large datasets of high-resolution tissue images in order to obtain robust and statistically significant results. Today a whole slide tissue image (WSI) can be obtained in a few minutes using a state-of-the-art scanner. These instruments provide complex auto-focusing mechanisms and slide trays, making it possible to automate the digitization of hundreds of slides with minimal human intervention. We expect that these advances will facilitate the establishment of WSI repositories containing thousands of images for the purposes of investigative research and healthcare delivery. An example of a large repository of WSIs is The Cancer Genome Atlas (TCGA) repository, which contains more than 30,000 tissue images that have been obtained from over 25 different cancer types.
As it is impractical to manually analyze thousands of WSIs, researchers have turned their attention towards computer-aided methods and analytical pipelines [4–12]. The systematic analysis of WSIs is both computationally expensive and data intensive. A WSI may contain billions of pixels. In fact, imaging a tissue specimen at 40x magnification can generate a color image of 100,000x100,000 pixels in resolution and close to 30GB in size (uncompressed). A segmentation and feature computation pipeline can take a couple of hours to process an image on a single CPU-core. It will generate on average 400,000 segmented objects (nuclei, cells) while computing large numbers of shape and texture features per object. The analysis of the TCGA datasets (over 30,000 images) would require 2–3 years on a workstation and generate 12 billion segmented nuclei and 480 billion features in a single analysis run. If a segmented nucleus were represented by a polygon of 5 points on average and the features were stored as 4-byte floating point numbers, the memory and storage requirements for a single analysis of 30,000 images would be about 2.4 Terabytes. Moreover, because many analysis pipelines are sensitive to input parameters, a dataset may need to be analyzed multiple times while systematically varying the operational settings to achieve optimized results.
These computational and data challenges and those that are likely to emerge as imaging technologies gain further use and adoption, require efficient and scalable techniques and tools to conduct large-scale studies. Our work contributes a suite of methods and software that implement three core functions to quickly explore large image datasets, generate analysis results, and mine the results reliably and efficiently. These core functions are:
Function 1: Content-based search and retrieval of images and image regions of interest from an image dataset
This function enables investigators to find images of interest based not only on image metadata (e.g., type of tissue, disease, imaging instrument), but also on image content and image-based signatures. We have developed an efficient content-based image search and retrieval methodology that can automatically detect and return those images (or sub-regions) in a dataset that exhibit the most similar computational signatures to a representative, sample image patch.
A growing number of applications now routinely utilize digital imaging technologies to support investigative research and routine diagnostic procedures. This trend has resulted in a significant need for efficient content-based image retrieval (CBIR) methods. CBIR has been one of the most active research areas in a wide spectrum of imaging informatics fields [13–25]. Several domains stand to benefit from the use of CBIR including education, investigative basic and clinical research, and the practice of medicine. CBIR has been successfully utilized in applications spanning radiology [16, 23, 26, 27], pathology [21, 28–30], dermatology [31, 32] and cytology [33–35]. Several successful CBIR systems have been developed for medical applications since the 1980's. Some of these systems utilize simple features such as color histograms [36], shape [16, 34], texture [18, 37], or fuzzy features [19] to characterize the content of images while allowing higher level diagnostic abstractions based on systematic queries [16, 37–39]. The recent adoption and popularity of case-based reasoning and evidence-based medicine [40] has created a compelling need for more reliable image retrieval strategies to support diagnostic decisions. In fact, a number of state-of-the-art CBIR systems have been designed to support the processing of queries across imaging modalities [16, 21, 23–25, 27, 28, 41–44].
Drawing from the previous work, our research and development effort has made several significant contributions in CBIR. To summarize, our team has developed (1) a library of image processing methods for performing automated registration, segmentation, feature extraction, and classification of imaged specimens; (2) data management and query capabilities for archiving imaged tissues and organizing imaging results; (3) algorithms and methods for automatically retrieving imaged specimens based upon similarities in computational signatures and correlated clinical data, including metadata describing the specified tissue and physical specimen; and (4) components for analyses of imaged tissue samples across multi-institutional environments. These algorithms, tools and components have been integrated into a software system, called ImageMiner, that supports a range of tissue related analyses in clinical and investigative oncology and pathology [45–49].
Function 2: Computing quantitative features on images by segmenting objects, such as nuclei and cells, and computing shape and texture features for the delineated structures
This function gleans quantitative information about morphology of an imaged tissue at the sub-cellular scales. We have developed a framework that utilizes CPUs and GPUs in a coordinated manner. The framework enables image analysis pipelines to exploit the hybrid architectures of modern high-performance computing systems for large-scale analyses.
The use of CPU-GPU equipped computing systems is a growing trend in the high performance computing (HPC) community [50]. Efficient utilization of these machines is a significant problem that necessitates new techniques and software tools that can optimize the scheduling of application operations to CPUs and GPUs. This challenge has motivated several programming languages and runtime systems [51–62] and specialized libraries [63]. Ravi et al. [60, 61] proposed compiler techniques coupled with runtime systems for execution of generalized reductions in CPU-GPU machines. Frameworks such as DAGuE [59] and StarPU [54] support regular linear algebra applications on CPU-GPU machines and implement scheduling strategies that prioritize computation of tasks in the critical path of execution. The use of HPC systems in Biomedical Informatics research is an increasingly important topic, which has been the focus of several recent research initiatives [46, 48, 57, 58, 64–70]. These efforts include GPU-accelerated systems and applications [71–78].
A major concentration for our team has been the development of algorithms, strategies and tools that facilitate high-throughput processing of large-scale WSI datasets. We have made several contributions in addressing and resolving several key issues: (1) Some image analysis operations have regular data access and processing patterns, which are suitable for parallelization on a GPU. However, data access and processing patterns in some operations, such as morphological reconstruction and distance transform operations in image segmentation, are irregular and dynamic. We have developed a novel queue-based wavefront propagation approach to speed up such operations on GPUs, multi-core CPUs, and combined CPU-GPU configurations [79]; (2) Image analysis applications can be implemented as hierarchical data flow pipelines in which coarse-grain stages are divided into fine-grain operations. We have developed a runtime to support composition and execution of hierarchical data flow pipelines on machines with multi-core CPUs and multiple GPUs [80, 81]. Our experiments showed significant performance gains over single program or coarse-grain workflow implementations; (3) Earlier work in mapping and scheduling of application operations onto CPUs and GPUs primarily targeted regular operations. Operations in image analysis pipelines can have high variability with respect to GPU acceleration, and their performance depends on input data. Hence, high throughput processing of datasets on hybrid machines requires more dynamic scheduling of operations. We have developed a novel priority-queue based approach that takes into account the variability in GPU acceleration of data processing operations to make better scheduling decisions [80, 81]. This seemingly simple priority-queue data structure and the associated scheduling algorithm (which we refer to as the performance aware scheduling technique) have shown significant performance improvements. We have combined this scheduling approach with other optimizations such as data locality aware task assignment, data prefetching, and overlapping of data copy operations with computations in order to reduce data copy costs. (4) We have integrated all of these optimizations into an operational framework.
Function 3: Storing and indexing computed quantitative features in a database and mining them to classify images and subjects
This function enables interrogation and mining of large volumes of analysis results for researchers to look for patterns in image data and correlate these signatures with profiles of clinical and genomic data types. We have developed methods that leverage HPC and Cloud database technologies to support indexing and querying large volumes of data. Clustering algorithms use heuristics for partitioning a dataset into groups and are shown to be sensitive to input data and initialization conditions. Consensus and ensemble clustering approaches aim to address this problem by combining results from multiple clustering runs and/or multiple algorithms [82–89]. This requires significant processing power and large memory space when applied to large numbers of segmented objects. For example, when classification and correlation analyses are carried out on an ensemble of segmented nuclei, the number of delineated objects of interest may rise well into the millions. We introduce a parallel implementation of consensus clustering on a shared-memory cluster system to scale the process to large numbers of data elements for image analysis applications.
Figure 1 illustrates the three functions we have introduced in Section 1 using a high-level image analysis example. In this scenario, an investigator is interested in studying relationships between the properties of nuclei and clinical and molecular data for tissues that exhibit features similar to those of Gleason grade 5 prostate cancer tissue images (see Section 2.1 for a description of those features). Function 1: The investigator searches for images in a dataset of WSIs based not only on image metadata, e.g., prostate cancer tissue, but also based on whether an image contains patches that are similar to a given image query patch. The image query patch is a small image tile containing representative tissue with the target Gleason grade. In Section 2.1, we describe the methodologies to perform quick, reliable CBIR. We presented a high-throughput parallelization approach for CBIR algorithms in earlier work [49, 90]. Function 2: The output from Function 1 is a set of images and imaged regions that exhibit architecture and staining characteristics which are consistent with the input query provided by the investigator. In order to extract refined morphological information from the images, the investigator composes an analytical pipeline consisting of segmentation and feature computation operations. The segmentation operations detect nuclei in each image and extract their boundaries. The feature computation operations compute shape and texture features, such as area, elongation, and intensity distribution, for each segmented nucleus. The investigator runs this pipeline on a distributed memory cluster to quickly process the image set. We describe in Section 2.2 a framework to take advantage of high performance computing systems, in which computation nodes have multi-core CPUs and one or more GPUs, for high throughput processing of a large number of WSIs. Function 3: The analysis of images can generate a large number of features –the number of segmented nuclei in a single analysis run can be hundreds of millions for a dataset of a few thousand images. The investigator stores the segmentation results in a high performance database for further analysis. The next step after the segmentation and feature computation process is to execute a classification algorithm to cluster the segmentation results into groups and look for correlations between this grouping and the grouping of data based on clinical and molecular information (e.g., gene mutations or patient survival rates). A consensus clustering approach may be preferable to obtain robust clustering results [91]. In Section 2.3, we present a consensus clustering implementation for shared-memory parallel machines.
Functions supported by methods and tools described in this paper. Starting from a dataset of whole slide images, a researcher can employ methods in Function 1 to select a subset of images based on image content. If an image has a patch that is similar to the query patch, the image is selected for processing. The selected set of images is then processed through analysis pipelines in Function 2. In the figure, the analysis pipeline segments nuclei in each image and computes a set of shape and texture features. The segmented nuclei and their features are loaded into a database for future analyses in Function 3. In addition, the researcher runs a clustering algorithm to cluster nuclei and patients into groups to look at correlations with groupings based on clinical and genomic data. Clustering, more specifically consensus clustering, requires significant memory and computation power. Using the methods described in this paper, the researcher can employ a shared-memory system to perform consensus clustering.
Function 1. Efficient methodology to search for images and image regions: hierarchical CBSIR
This function facilitates the retrieval of regions of interest within an image dataset. These regions have similar color, morphology or structural patterns to those of the query image patch. In this way, investigators and physicians can select a set of images based on their contents in addition to any corresponding metadata such as the type of tissue and image acquisition device utilized. In this function, given a query patch, each image within the data set is scanned systematically in the x and y directions to detect each image patch exhibiting the patterns of the query patch [90]. For each candidate image patch, a CBIR algorithm is applied to check if the image patch is similar to the query patch. Existing CBIR libraries are mostly focused on natural or computer vision image retrieval. Our CBIR algorithm is primarily designed to support the interrogation and assessment of imaged histopathology specimens.
This approach is based on a novel method called hierarchical annular feature (HAF) and a three-stage search scheme. It provides several benefits: (1) scale and rotation invariance; (2) capacity to capture spatial configuration of image local features; and (3) suitability for hierarchical searching and parallel sub-image retrieval.
Execution of CBIR process
The CBIR process is executed in three stages: hierarchical searching, refined searching and mean-shift clustering. The hierarchical searching stage is an iterative process that discards those candidates exhibiting signatures inconsistent with the query. This is done in a step-wise fashion in each iteration. The stage begins by calculating the image features of the inner (first) central bin of the candidate patches and comparing them with those of the query patch. Based on the level of dissimilarity between the query patch and the patch being tested, it removes a certain percentage of candidates after the first iteration. For the second iteration, it calculates the image features from the second central bin only, and further eliminates a certain percentage of candidates by computing the dissimilarity with the features of the query patch from the two inner bins. At the end of this stage, the final candidates are those that have passed all prior iterations. These are the candidates that are most similar to the query patch. The final results are further refined by computing image features from 8 equally divided segments of each annular bin. To rank the candidates in each step, dissimilarity between the query's and the candidate patches' features is defined as their Euclidean distance. The hierarchical searching procedure can greatly reduce the time complexity, because it rejects a large portion of candidates in each iteration. The number of candidates moving to the next step is significantly reduced by rejecting the obvious negative candidates. In the refined searching stage, each annular bin is equally divided into 8 segments, and the image features in each segment are computed and combined to generate one single feature vector. Due to the very limited number of candidates passing the hierarchical searching stage, this refined process is not particularly time-consuming. In the last stage, a mean-shift clustering is applied to generate the final searching results.
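The pruning loop at the heart of the hierarchical searching stage can be summarized with a short sketch. The code below is a simplified, sequential illustration only; the type and function names are placeholders rather than the actual implementation, and it assumes each candidate patch already carries one feature vector per annular bin. Each iteration adds the next bin's contribution to the accumulated Euclidean distance and discards a fixed fraction of the worst-ranked candidates.

#include <algorithm>
#include <vector>

// One candidate patch: per-bin feature vectors plus its running distance to the query.
struct Candidate {
    std::vector<std::vector<float>> binFeatures;  // binFeatures[k] = feature vector of annular bin k
    double distance = 0.0;                        // accumulated squared distance to the query
};

static double squaredDistance(const std::vector<float>& a, const std::vector<float>& b) {
    double d = 0.0;
    for (size_t i = 0; i < a.size(); ++i) { double t = a[i] - b[i]; d += t * t; }
    return d;
}

// Compare one more annular bin per round and drop the worst 'dropFraction' of the
// remaining candidates; the survivors go on to the refined-search stage.
std::vector<Candidate> hierarchicalSearch(std::vector<Candidate> candidates,
                                          const std::vector<std::vector<float>>& queryBins,
                                          double dropFraction = 0.5) {
    for (size_t bin = 0; bin < queryBins.size() && !candidates.empty(); ++bin) {
        for (Candidate& c : candidates)
            c.distance += squaredDistance(c.binFeatures[bin], queryBins[bin]);
        std::sort(candidates.begin(), candidates.end(),
                  [](const Candidate& a, const Candidate& b) { return a.distance < b.distance; });
        size_t keep = std::max<size_t>(1, static_cast<size_t>(candidates.size() * (1.0 - dropFraction)));
        candidates.resize(keep);  // reject the obvious negatives before touching the next bin
    }
    return candidates;
}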
Description of the Computation of Hierarchical Annular Features (HAF)
A given image is segmented into several concentric annular bins with equal radial intervals, as shown in Fig. 2. Next, the image feature of each bin is computed, and then all the image features are concatenated to form a single vector, which we call the hierarchical annular feature (HAF). With HAF, the discriminative power of each image patch descriptor is significantly improved compared with traditional image features extracted from the whole image. For medical images, it is very likely that image patches containing different structures have quite similar intensity distributions as a whole, yet exhibit different HAF signatures.
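As a rough illustration of how a hierarchical annular feature might be assembled, the following sketch uses a per-bin gray-level histogram as a stand-in for the bin feature; the actual per-bin features used in this work are the texture and structural features described below, and all names and parameter choices here are illustrative assumptions.

#include <cmath>
#include <vector>

// Build a hierarchical annular feature (HAF) for a square gray-scale patch:
// split the patch into 'numBins' concentric rings of equal radial width,
// compute a normalized intensity histogram per ring, and concatenate them.
std::vector<float> computeHAF(const std::vector<unsigned char>& patch,
                              int width, int height, int numBins, int histSize = 16) {
    std::vector<float> haf(static_cast<size_t>(numBins) * histSize, 0.0f);
    std::vector<int> counts(numBins, 0);
    const double cx = 0.5 * (width - 1), cy = 0.5 * (height - 1);
    const double maxR = std::sqrt(cx * cx + cy * cy) + 1e-9;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            double r = std::sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy));
            int bin = std::min(numBins - 1, static_cast<int>(numBins * r / maxR));
            int level = patch[y * width + x] * histSize / 256;
            haf[bin * histSize + level] += 1.0f;
            ++counts[bin];
        }
    }
    for (int b = 0; b < numBins; ++b)            // normalize each ring's histogram
        for (int l = 0; l < histSize; ++l)
            if (counts[b] > 0) haf[b * histSize + l] /= counts[b];
    return haf;
}

Because the bins are concentric, rotating the patch leaves each pixel in the same ring, which is one way to see why the descriptor tolerates rotation.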
Content-based image search using hierarchical annular features (HAF). In the first stage of the search operation, an iterative process is carried out in which a percent of candidate images or image patches are discarded in each iteration. At the end of this stage, successful candidates are refined in the second stage of processing
For the study reported in this paper, we focused on a representative application in prostate cancer. Prostate cancer is the second leading cause of cancer death among men in the U.S., with over 230,000 men diagnosed annually. Gleason scoring is the standard method for stratifying prostate cancer from onset of malignancy through advanced disease. Gleason grade 3 typically consists of infiltrative well-formed glands, varying in size and shape. Grade 4 consists of poorly formed, fused or cribriform glands. Grade 5 consists of solid sheets or single cells with no glandular formation. In recognition of the complexity of the different Gleason grades, we utilized image features at two different resolutions to capture the characteristics of the underlying pathology of an ensemble of digitized prostate specimens. At low magnification (10X), texture features are extracted to identify those regions with different textural variance. At high magnification (20X), structural features are characterized. This strategy takes advantage of sampling patches from the whole-slide image while generating feature quantification at two different resolutions. This approach effectively minimizes the computation time while maintaining a high level of discriminatory performance. Section 3.1 provides the details of the steps taken to achieve automated analysis of imaged prostate histology specimens for the purposes of performing computer-assisted Gleason grading.
Texture features
The Gabor filter is widely used because of its capacity to capture image texture characteristics at multiple directions and resolutions. In our work, we use 5 scales and 8 directions to build the Gabor filter set. The mean and variance extracted from each filtered image are used to build an 80-dimensional feature vector.
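The following sketch shows one plausible way to build such a descriptor with OpenCV, which the pipeline already uses elsewhere. The kernel parameters (wavelength per scale, sigma-to-wavelength ratio, aspect ratio) are assumptions chosen for illustration, not the values used in this work.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// 5 scales x 8 orientations, mean + variance per response => 80-dimensional vector.
std::vector<double> gaborTextureFeatures(const cv::Mat& grayImage) {
    std::vector<double> features;
    cv::Mat src;
    grayImage.convertTo(src, CV_32F, 1.0 / 255.0);
    const int numScales = 5, numOrientations = 8;
    for (int s = 0; s < numScales; ++s) {
        double lambda = 4.0 * (s + 1);             // wavelength per scale (illustrative choice)
        double sigma = 0.56 * lambda;              // common sigma/lambda ratio (assumption)
        int ksize = 2 * static_cast<int>(3 * sigma) + 1;
        for (int o = 0; o < numOrientations; ++o) {
            double theta = CV_PI * o / numOrientations;
            cv::Mat kernel = cv::getGaborKernel(cv::Size(ksize, ksize), sigma, theta,
                                                lambda, 0.5 /*gamma*/, 0.0 /*psi*/, CV_32F);
            cv::Mat response;
            cv::filter2D(src, response, CV_32F, kernel);
            cv::Scalar mean, stddev;
            cv::meanStdDev(response, mean, stddev);
            features.push_back(mean[0]);
            features.push_back(stddev[0] * stddev[0]);  // variance of the filter response
        }
    }
    return features;  // 5 scales * 8 directions * 2 statistics = 80 values
}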
Structural features
In addition to the spatial structural differences, the density of nuclei on glands within the tissue samples increases during the course of disease progression, which is reflected in the assignment of higher Gleason grades. In the case of Grade 5 images, however, only the nuclei and cytoplasm are evident with no clear glandular formation. The algorithms that we developed perform color segmentation to classify each pixel as nuclear, lumen, cytoplasm or stroma. Figure 3 illustrates the glandular nuclei (labeled with green crosses) and stromal region nuclei (labeled with red crosses).
Detected glandular nuclei (rendered with green cross) and stromal region nuclei (labeled with red-crosses) during the content-based image search and retrieval process
Function 2. High throughput computation of quantitative measures on hybrid CPU-GPU systems
In most pathology imaging applications it is necessary to process image sets using a pipeline that performs image segmentation and a series of computational stages to generate the quantitative image features used in the analysis. In this paper, we introduce a nuclear segmentation pipeline. In the segmentation stage, nuclei in the image are automatically detected and their boundaries are extracted. The feature computation stage computes shape and texture features including area, elongation and intensity distribution for each segmented nucleus. These features may then be processed in a classification stage to cluster nuclei, images and patients into groups. We provide a detailed description of the high-performance computing approaches used for managing and clustering segmented nuclei in Section 2.3. In this section, we present approaches to enable high throughput processing of a large set of WSIs in a pipeline of image segmentation and feature computation stages. The goal is to reduce the execution times of the pipeline from days to hours and from weeks to days, when hundreds or thousands of images are to be analyzed.
Modern high performance computing systems with nodes of multi-core CPUs and co-processors (i.e., multiple GPUs and/or Intel Xeon Phi's) offer substantial computation capacity and distributed memory space. We have devised a framework, which integrates a suite of techniques, optimizations, and a runtime system, to take advantage of such systems to speed up processing of large numbers of WSIs [81]. Our framework implements several optimizations at multiple levels of an analytical pipeline. This includes optimizations in order to execute irregular pipeline operations on GPUs efficiently and scheduling of multi-level pipelines of operations to CPUs and GPUs in coordination to enable rapid processing of a large set of WSIs. The runtime system is designed to support high-throughput processing through a combined bag-of-tasks and dataflow computation pattern. We have chosen this design because of the characteristics of WSI data and WSI analysis pipelines as we describe below.
A possible strategy for speeding up segmentation of nuclei on a multi-core CPU is to parallelize each operation to run on multiple CPU cores. The processing of an image (or image tile) is partitioned across multiple cores. When a nucleus is detected, the segmentation of the nucleus is also carried out on multiple cores. This parallelization strategy is generally more applicable for a data set that has a relatively small number of large objects, because synchronization and data movement overheads can be amortized. A nucleus, however, occupies a relatively small region (e.g., 8x8 pixels) compared with the entire image (which can be 100Kx100K pixels). We performed micro-benchmarks on CPUs and GPUs to investigate performance with respect to atomic operations, random data accesses, etc. For instance, in a benchmark for atomic operations using all the cores available on the GPU and on the CPU, the GPU was able to execute close to 20 times more operations than the CPU. When we evaluated the performance of random memory data reads, the GPU was again faster -- as it performed up to 895 MB/s vs. 305 MB/s for the multi-core CPU in reading operations. These benchmarks showed that the GPU is more suitable than the CPU for finer-grain parallelization in our image analysis pipelines and that a coarser grain parallelization would be better for the CPU. While a nucleus is small, an image may contain hundreds of thousands to millions of nuclei. Moreover, we target datasets with hundreds to thousands of images. For execution on CPUs, an image level parallelization with a bag-of-tasks execution model will be more suitable for these datasets than a parallelization strategy that partitions the processing of a nucleus across multiple CPU-cores. Thus, in our framework, each image in a dataset is partitioned into rectangular tiles. Each CPU core is assigned an image tile and the task of segmenting all the nuclei in that image tile – we have not developed multi-core CPU implementations of operations in this work for this reason. Multiple image tiles are processed concurrently on multiple cores, multiple CPUs, and multiple nodes, as well as on multiple GPUs when a node has GPUs. This approach is effective because even medium size datasets with a few hundred images can have tens of thousands of image tiles and tens of millions of nuclei.
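The tile-level, bag-of-tasks strategy chosen for the CPU side can be illustrated with a minimal OpenMP sketch. The Tile and TileResult types and the per-tile pipeline stub below are hypothetical placeholders, not the framework's actual interfaces; the point is only that each tile is an independent task pulled by idle cores in a demand-driven fashion.

#include <omp.h>
#include <vector>

struct Tile { int x0, y0, width, height; };          // placeholder for a 4Kx4K image tile
struct TileResult { int tileIndex; int nucleiCount; };

// Stub standing in for the actual segmentation + feature computation pipeline.
TileResult segmentAndComputeFeatures(const Tile& tile, int index) {
    return TileResult{index, 0};
}

// Bag-of-tasks over image tiles: one tile per core at a time, dynamic scheduling
// so that faster cores simply pull more tiles.
std::vector<TileResult> processTiles(const std::vector<Tile>& tiles) {
    std::vector<TileResult> results(tiles.size());
    #pragma omp parallel for schedule(dynamic, 1)
    for (int i = 0; i < static_cast<int>(tiles.size()); ++i)
        results[i] = segmentAndComputeFeatures(tiles[i], i);
    return results;
}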
For execution on GPUs, we have leveraged existing implementations from the OpenCV library [63] and other research groups [92, 93] whenever possible. When no efficient implementations were available, we developed them in-house [79–81]. In our case, one of the difficulties in using a GPU efficiently is that many operations in the segmentation stage are irregular and, hence, harder to execute on a GPU. Data access patterns in these operations are irregular (random), because only active elements are accessed, and those elements are determined dynamically during execution. For instance, several operations, such as Reconstruct to Nucleus, Fill Holes, and Pre-Watershed, are computed using a flood fill scheme proposed by Vincent [92]. This scheme in essence implements an irregular wavefront propagation in which the active elements are the pixels in the wavefronts. We have designed and implemented an efficient hierarchical parallel queue to support execution of such irregular operations on a GPU. While the maintenance of this queue is much more complex than having a sequential CPU-based queue, comparisons of our implementations to the state-of-the-art implementations show that our approach is very efficient [79]. Other operations in the segmentation stage, such as Area Threshold and Black and White Label, are parallelized using a Map-Reduce pattern. They rely on the use of atomic instructions, which may become a bottleneck on GPUs and multi-core CPUs. Operations in the feature computation stage are mostly regular and have a high computing intensity. As such, they are more suited for efficient execution on GPUs and expected to attain higher GPU speedups. We have used CUDA for all of the GPU implementations. The list of the operations in our current implementation is presented in Tables 1 and 2. The sources of the CPU and GPU implementations are presented in their respective columns of the tables.
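A sequential sketch of the queue-based wavefront propagation idea, here applied to gray-scale morphological reconstruction by dilation, conveys the irregular access pattern discussed above. This simplified CPU version seeds the queue with every pixel for clarity and does not attempt to reproduce the hierarchical parallel queue used in the GPU implementation.

#include <algorithm>
#include <queue>
#include <vector>

// Gray-scale morphological reconstruction by dilation of 'mask' from 'marker'
// (marker <= mask element-wise), computed with queue-based wavefront propagation:
// only pixels whose value may still change are kept in the queue.
void morphReconstruct(std::vector<unsigned char>& marker,
                      const std::vector<unsigned char>& mask,
                      int width, int height) {
    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    std::queue<int> active;
    for (int i = 0; i < width * height; ++i) active.push(i);  // seed: every pixel is initially active
    while (!active.empty()) {
        int p = active.front(); active.pop();
        int px = p % width, py = p / width;
        for (int k = 0; k < 4; ++k) {
            int qx = px + dx[k], qy = py + dy[k];
            if (qx < 0 || qx >= width || qy < 0 || qy >= height) continue;
            int q = qy * width + qx;
            unsigned char candidate = std::min(marker[p], mask[q]);
            if (candidate > marker[q]) {          // the wavefront advances into q
                marker[q] = candidate;
                active.push(q);
            }
        }
    }
}

Because only pixels on the advancing wavefront re-enter the queue, the amount of work tracks the number of active pixels rather than the full image size, which is what makes the access pattern irregular and data-dependent.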
The list of operations in the segmentation and feature computation stages and the sources of the CPU and GPU versions
Segmentation Computations
CPU Implementation
GPU Implementation
Red Blood Cell Detection (RBC Detection)
Vincent [92] and OpenCV
Morphological Open (Morph. Open)
Reconstruct to Nucleus (ReconToNuclei)
Vincent [92]
Area Threshold
Fill Holes
Pre-Watershed
Vincent [92], and OpenCV for distance transformation
Körbes [93]
Black and White Label (BWLabel)
We used Vincent's implementation of the morphological reconstruction operation in several segmentation operations. "Implemented" indicates our own implementation of the respective operation
Feature Computations
Computed Features
CPU and GPU Implementation
Pixel Statistics
Histogram Calculation
Mean, Median, Min, Max, 25 %, 50 %, and 75 % quartile
Gradient Statistics
Gradient and Histogram Calculation
Haralick
Normalization pixel values and Co-occurrence matrix
Inertia, Energy, Entropy, Homogeneity, Max prob, Cluster shade, Prominence
Canny and Sobel
Canny area, Sobel area
OpenCV (Canny), Implemented (Sobel)
Morphometry
Pixel counting, Dist. among points, Area and Perimeter, Fitting ellipse, Bounding box, Convex hull, Connected components, Area, Perimeter, Equivalent diameter, Compactness, Major/Minor axis length, Orientation, Eccentricity, Aspect ratio, Convex area, Euler number
Even though not all operations (e.g., irregular operations) in an analysis pipeline map perfectly to GPUs, most modern high performance machines come with one or more GPUs as co-processing units. Our goal is to leverage this additional processing capacity as efficiently as possible. We should note that the runtime system of our framework does not use only GPUs on a machine. As we shall describe below, it coordinates the scheduling of operations to CPU cores and GPUs to harvest the aggregate computation capacity of the machine. Each image tile is assigned to an idle CPU core or GPU; multiple CPU cores and GPUs process different input tiles concurrently.
The runtime system employs a Manager-Worker model (Fig. 5(a)) to implement the combined bag-of-tasks and dataflow pattern of execution. There is one Manager, and each node of a parallel machine is designated as a Worker. The processing of a single image tile is formulated as a two-level coarse-grain dataflow pattern. Segmentation and feature computation stages are the first level, and the operations invoked within each stage constitute the second level (Fig. 4). The segmentation stage itself is organized into a dataflow graph. The feature computation stage is implemented as a bag-of-tasks, i.e., multiple feature computation operations can be executed concurrently on segmented objects. This hierarchical organization of an analysis pipeline into computation stages and finer-grain operations within each stage is critical to the efficient use of nodes with CPUs and GPUs, because it allows for more flexible assignment of finer-grain operations to processing units (CPU cores or GPUs) and, hence, better utilization of available processing capacity.
Pipeline for segmenting nuclei in a whole slide tissue image, and computing a feature vector of characteristics per nucleus. The input to the pipeline is an image or image tile. The output is a set of features for each segmented nucleus. The segmentation stage consists of a pipeline of operations that detect nuclei and extract the boundary of each nucleus – please see Tables 1 and 2 for the full names of the operations. Each segmented nucleus is processed in the feature computation stage to compute a set of shape and texture features. The features include circularity, area, mean gradient of intensity (please see Tables 1 and 2 for the types and names of the features) as shown in the figure
The Manager creates instances of the segmentation and feature computation stages, each of which is represented by a stage task: (image tile, processing stage), and records the dependencies between the instances to enforce correct execution. The stage tasks are scheduled to the Workers using a demand-driven approach. When a Worker completes a stage task, it requests more tasks from the Manager, which chooses one or more tasks from the set of available tasks and assigns them to the Worker. A Worker may ask for multiple tasks from the Manager in order to keep all the computing devices on a node busy. Local Worker Resource Manager (WRM) (Fig. 5(b)) controls the CPU cores and GPUs used by a Worker. When the Worker receives a stage task, the WRM instantiates the finer-grain operations comprising the stage task. It dynamically creates operation tasks, represented by a tuple (input data, operation), and schedules them for execution as it resolves the dependencies between the operations – operations in the segmentation stage form a pipeline and operations in the feature computation stage depend on the output of the last operation in the segmentation stage.
Strategy for high throughput processing of images. (a) Execution on multiple nodes (left) is accomplished using a Manager-Worker model, in which stage tasks are assigned to Workers in a demand-driven fashion. A stage task is represented as a tuple of (stage name, data). The stage name may be "segmentation" in which case data will be an image tile, or it may be "feature computation" in which case data will be a mask representing segmented nuclei in an image tile and the image tile itself. The runtime system schedules stage tasks to available nodes while enforcing dependencies in the analysis pipeline and handles movement of data between stages. A node may be assigned multiple stage tasks. (b) A stage task scheduled to a Worker (right) is represented as a dataflow of operations for the segmentation stage and a set of operations for the feature computation stage. These operations are scheduled to CPU cores and GPUs by the Worker Resource Manager (WRM). The WRM uses the priority queue structure (shown as the "sorted by speedup" rectangle in the figure) to dynamically schedule a waiting operation to an available computing device.
The set of stage tasks assigned to a Worker may create many operation tasks. The primary problem is to map operation tasks to available CPU cores and GPUs efficiently to fully utilize the computing capacity of a node. Our runtime system addresses this mapping and scheduling problem in two ways. First, it makes use of the concept of function variants [51, 94]. A function variant represents multiple implementations of a function with the same signature – the same function name and the same input and output parameters. In our case, the function variant corresponds to the CPU and GPU implementations of each operation. When an operation has only one variant, the runtime system can restrict the assignment of the operation to the appropriate type of computing device. Second, the runtime system executes a performance aware scheduling strategy to more effectively schedule operations to CPUs and GPUs for the best aggregate analysis pipeline performance. Several recent efforts [51–54] have worked on the problem of partitioning and mapping tasks between CPUs and GPUs for applications in which all operations have similar GPU speedups. In our case, operations in the segmentation and feature computation phases have diverse computation and data access patterns. As a result, the amount of acceleration on a GPU varies across the operations. We have developed a task scheduling strategy, called Performance Aware Task Scheduling (PATS), which assigns tasks to CPU cores or GPUs based on an estimate of each task's GPU speedup and on the computational loads of the CPUs and GPUs [80, 81]. The scheduler employs a demand-driven approach in which devices (CPU-cores and GPUs) request tasks as they become idle. It uses a priority queue of operation tasks, i.e., (data element, operation) tuples, sorted based on the expected amount of GPU acceleration of each tuple. New task tuples are inserted into the queue such that the queue remains sorted (see Fig. 5(b)). When a CPU core or a GPU becomes idle, one of the tuples from the queue is assigned to the idle device. If the idle device is a CPU core, the tuple with the minimum estimated speedup value is assigned to the CPU core. If the idle device is a GPU, the tuple with the maximum estimated speedup is assigned to the GPU. The priority queue structure allows for dynamic assignment of tasks to appropriate computing devices with a small maintenance overhead. Moreover, PATS relies on the order of tasks in the queue rather than the accuracy of the speedup estimates of individual tasks. As long as inaccuracy in speedup estimates is not large enough to affect the task order in the queue, PATS will correctly choose and map tasks to computing devices.
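The core of the PATS policy can be sketched as a thread-safe priority queue keyed by the estimated GPU speedup of each waiting (data element, operation) tuple. The task type and member names below are hypothetical, and the real runtime additionally handles data locality, prefetching and asynchronous copies; the sketch only shows the two-ended pop that gives GPUs the highest-speedup tuples and CPU cores the lowest.

#include <iterator>
#include <map>
#include <mutex>
#include <string>

struct OperationTask {
    std::string operationName;   // e.g., "ReconToNuclei" (placeholder)
    int tileId;                  // the image tile this operation works on
};

// Priority queue keyed by estimated GPU speedup: idle GPUs pop from the high end,
// idle CPU cores pop from the low end (the essence of the PATS policy).
class PatsQueue {
public:
    void push(double estimatedGpuSpeedup, const OperationTask& task) {
        std::lock_guard<std::mutex> lock(mutex_);
        tasks_.emplace(estimatedGpuSpeedup, task);
    }
    bool popForGpu(OperationTask& out) { return popEnd(/*highest=*/true, out); }
    bool popForCpu(OperationTask& out) { return popEnd(/*highest=*/false, out); }

private:
    bool popEnd(bool highest, OperationTask& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (tasks_.empty()) return false;
        auto it = highest ? std::prev(tasks_.end()) : tasks_.begin();
        out = it->second;
        tasks_.erase(it);
        return true;
    }
    std::multimap<double, OperationTask> tasks_;  // kept sorted by estimated GPU speedup
    std::mutex mutex_;
};

Note that correctness of the device assignment depends only on the relative order of the estimates, which matches the observation above that PATS tolerates inaccuracy in individual speedup values.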
The cost of data transfer between the CPU and the GPU reduces the benefits of using the GPU. We have extended the base scheduler to facilitate data reuse. In addition to the extension for data reuse, we have implemented pre-fetching and asynchronous data copy to further reduce data transfer overheads [95].
Function 3. Managing and mining quantitative measures
After nuclei have been segmented and their features computed, a research study will require storage and management of the results for future analyses. It will employ machine learning and classification algorithms in order to look for relationships between tissue specimens and correlations of image features with genomic and clinical data. In this section we provide an overview of our work to address challenges in managing large volumes of segmented objects and features and in carrying out consensus clustering of large volumes of results.
We leverage emerging database technologies and high performance computing systems in order to scale data management capabilities to support large scale analysis studies.
We have developed a detailed data model to capture and index complex analysis results along with metadata about how the results were generated [96]. This data model represents essential information about images, markups, annotations, and provenance. Image data components capture image reference, resolution, magnification, tissue type, disease type as well as metadata about image acquisition parameters. For image markups, in addition to basic geometric shapes (such as points, rectangles, and circles), polygons and polylines as well as irregular image masks are also supported. Annotations could be human observations, machine generated features and classifications. Annotations can come with different measurement scales and have a large range of data types, including scalar values, arrays, matrixes, and histograms. Comparisons of results from multiple algorithms and/or multiple human observers require combinations of metadata and spatial queries on large volumes of segmentations and features. The data model is supported by a runtime system which is implemented on a relational database system for small-to-moderate scale deployments (e.g., image datasets containing up to a hundred images) and on a Cloud computing framework for large scale deployments (involving thousands of images and large numbers of analysis runs) [97]. Both these implementations enable a variety of query types, ranging from metadata queries such as "Find the number of segmented objects whose feature f is within the range of a and b" to complex spatial queries such as "Which brain tumor nuclei classified by observer O and brain tumor nuclei classified by algorithm P exhibit spatial overlap in a given whole slide tissue image" and "What are the min, max, and average values of distance between nuclei of type A as classified by observer O".
Consensus clustering on large shared-memory systems
Clustering is a common data mining operation [98]. Clustering algorithms employ heuristics and are sensitive to input parameters. A preferred mechanism is the consensus clustering approach to reduce sensitivity to input data and clustering parameters and obtain more reproducible results [91]. In consensus clustering, multiple runs of clustering algorithms on a dataset are combined to form the final clustering results. This process is computationally expensive and requires large memory space when applied to a large number of objects and features.
Our implementation is based on the method proposed by Monti et al. [91] and consists of the following main steps: sampling, base clustering, construction of a consensus matrix, clustering of the consensus matrix, and mapping. The sampling step extracts N data points (i.e., nuclei and cells) from the entire dataset. In the second step, a clustering algorithm, e.g., K-means [99, 100], is executed M times with different initial conditions. The third step constructs a consensus matrix from the M runs. The consensus matrix is an NxN matrix. The value of element (i,j) indicates the number or percentage of the clustering runs in which the two data points i and j were in the same cluster. In the fourth step, the consensus matrix is clustered to produce the final clustering result. The matrix is conceptually treated as a dataset of N data points, in which each data point has N dimensions. That is, each row (or column) of the matrix is viewed as a data point, and the row (or the column) values correspond to the values of the N-dimensional vector of the data point. The last step maps the data points that were not selected in the sampling step to the final set of clusters. Each data point is mapped to the center of the closest cluster.
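A compact, dense sketch of the consensus-matrix step makes the definition concrete; the shared-memory implementation described below additionally compresses rows and parallelizes over them. Given M base-clustering label vectors over the same N sampled points, entry (i,j) is the fraction of runs in which points i and j fell into the same cluster.

#include <vector>

// Build an N x N consensus matrix from M base clustering runs.
// labels[m][i] is the cluster id assigned to data point i in run m.
std::vector<std::vector<float>> buildConsensusMatrix(const std::vector<std::vector<int>>& labels) {
    const size_t M = labels.size();
    const size_t N = M ? labels[0].size() : 0;
    std::vector<std::vector<float>> consensus(N, std::vector<float>(N, 0.0f));
    for (size_t m = 0; m < M; ++m)
        for (size_t i = 0; i < N; ++i)
            for (size_t j = 0; j < N; ++j)
                if (labels[m][i] == labels[m][j]) consensus[i][j] += 1.0f;
    for (size_t i = 0; i < N; ++i)
        for (size_t j = 0; j < N; ++j)
            consensus[i][j] /= static_cast<float>(M);   // fraction of runs that co-clustered i and j
    return consensus;
}

The dense form takes O(N^2) memory and O(M N^2) time, which is exactly why large N (e.g., 500,000 sampled nuclei) calls for the compressed, parallel construction described next.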
We focused on the base clustering, consensus matrix construction and consensus matrix clustering steps, since they are the most expensive. We use a publicly available parallel k-means algorithm [101] as our base clustering algorithm. In our implementation, the base clustering step is executed M times consecutively with one instance of the k-means algorithm using all of the available cores at each run. To construct the consensus matrix, the rows of the matrix are partitioned evenly among CPU cores. To compute a row i of the consensus matrix, the CPU core to which row i is mapped reads the base clustering results, which are stored in the shared memory, computes the number of times data points (i,j) are in the same cluster, and updates row i. The consensus matrix is a sparse matrix. When all M base clustering runs have been processed, row i is compressed using run length encoding to reduce memory usage. Multiple rows of the consensus matrix are computed concurrently. The clustering of the consensus matrix is carried out using the parallel k-means algorithm. We have modified the k-means implementation so that it works with compressed rows.
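The row-parallel construction and run-length encoding described above can be sketched as follows. OpenMP stands in here for whatever threading layer the actual implementation uses, and the (value, run length) pair layout is a plausible encoding rather than the exact format of the implementation.

#include <omp.h>
#include <cstdint>
#include <utility>
#include <vector>

// One consensus-matrix row compressed as (value, run length) pairs.
using RleRow = std::vector<std::pair<float, uint32_t>>;

// Rows are independent, so they are distributed over the available cores;
// each row is built densely, then immediately run-length encoded to save memory.
std::vector<RleRow> buildCompressedConsensus(const std::vector<std::vector<int>>& labels) {
    const int M = static_cast<int>(labels.size());
    const int N = M ? static_cast<int>(labels[0].size()) : 0;
    std::vector<RleRow> rows(N);
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < N; ++i) {
        std::vector<float> row(N, 0.0f);
        for (int m = 0; m < M; ++m)
            for (int j = 0; j < N; ++j)
                if (labels[m][i] == labels[m][j]) row[j] += 1.0f / M;
        RleRow encoded;
        int j = 0;
        while (j < N) {                      // run-length encode the (mostly sparse) row
            float v = row[j];
            int run = 1;
            while (j + run < N && row[j + run] == v) ++run;
            encoded.emplace_back(v, static_cast<uint32_t>(run));
            j += run;
        }
        rows[i] = std::move(encoded);
    }
    return rows;
}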
We present an evaluation of the methods described in Section 2 using real image datasets. The results are organized into three subsections with respect to the three core functions.
Function 1: CBIR performance: speed and accuracy
In this section we present an experimental evaluation of the CBIR speed and accuracy performance using a dataset of prostate cancer images. To avoid the pitfalls of developing the tools in isolation and then later evaluating them in a clinical setting, we work closely with oncologists and pathologists and test and optimize performance throughout the course of the development cycle.
In the first phase of the project we utilized the TMA (tissue microarray) analysis tools to investigate the effect of therapeutic starvation on prostate cancer by quantifying Beclin 1 staining characteristics. Mixed sets of new TMAs were prepared with an antibody for Beclin 1 and antibodies for the androgen co-factor MED1, high-molecular-weight keratin 34BE12, p63, and alpha-methylacyl-CoA racemase (AMACR, P504S).
To validate the proposed CBIR algorithm, we tested it on a dataset of 530 prostate histopathology images. The dataset was collected from 28 whole slide imaging (WSI) cases from the University of Iowa School of Medicine and the University of Pittsburgh Medical Center (UPMC), with a pixel resolution of 4096x4096 at 20X optical magnification and 2048x2048 at 10X optical magnification. In consideration of the average query patch size needed to represent a prostate gland at the given magnifications, we use 5 bins in the HAF feature and a 50 % overlap during the hierarchical searching, so that the HAF feature captures enough content information about the underlying pathology while keeping the computational cost within a reasonable range.
To test the performance of the algorithm on prostate images of different Gleason grades, we conducted our experiments using randomly selected query patches of Gleason grade 3, 4 and 5. Figure 6 shows representative examples for prostate Gleason grade 3 (a), 4 (b) and 5 (c) query images retrieval results respectively.
An example set of prostate Gleason grade query patches (left) and sets of matching image patches in a given set of images (right). a Gleason grade 3 query patch and matching image regions. b Gleason grade 4 query patch and matching image regions. c Gleason grade 5 query patch and matching image regions
To further evaluate the accuracy of the CBIR algorithm, the recall rate from the top 100 retrieved patches was calculated. We define the recall rate as
$$ recallRate=\frac{\text{Total relevant results retrieved}}{\text{All relevant patches existing in the top-}N\text{ range}} $$
We define a retrieved result as "relevant" if it has the same Gleason grade as the query image. We use the top 100 as the calculation range. The average recall rate curves of Gleason grades 3, 4 and 5 are shown in Fig. 7.
Average recall rate curves of Gleason grades 3, 4 and 5, respectively
Function 2: high performance computation of quantitative measures
The methods and tools to support Function 2 were evaluated on a distributed memory parallel machine called Keeneland [50]. Each Keeneland node has two 6-core Intel X5660 CPUs, 24GB RAM, and 3 NVIDIA M2090 GPUs. The nodes are connected using a QDR Infiniband switch. The image datasets used in the evaluation had been obtained in brain tumor studies [2]. Each image was partitioned into tiles of 4K × 4K pixels. The experiments were repeated 3 times; the standard deviation was not higher than 2 %. The speedups were calculated based on the single CPU core versions of the operations. The CPU codes were compiled using "gcc 4.1.2" with the "-O3" optimization as well as the vectorization option to let the compiler auto-vectorize regular operations, especially in the feature computation phase. The GPU codes were compiled using CUDA 4.0. The OpenCV 2.3.1 library was used for the operations based on OpenCV. Our codes are publicly available as Git repositories.
Performance of GPU-enabled operations
The first set of experiments evaluates the performance of the segmentation and feature computation pipeline when the sizes of image tiles are varied. We want a tile size that results in a large number of tiles (high throughput concurrent execution across nodes and computing devices) and that leads to good speedup on a GPU. Figure 8(a) presents the execution times of the pipeline with the CPU operations and GPU-enabled operations when the image tile size is varied for an input image of 16Kx16K pixels. Figure 8(b) presents the speedup on the GPU in each configuration. We observed that tile size has little impact on the CPU execution times, but the GPU execution times decrease with larger tiles as a consequence of the larger amount of parallelism available that leads to better GPU utilization. The better GPU utilization is a result of the smaller share of total execution time spent on GPU kernel launch and synchronization costs, and of higher data transfer rates with larger data. The analysis pipeline involves dozens of kernels, some of which are computationally inexpensive operations, such as data type transformations (from 4-byte integers to 1-byte characters), setting matrix memory to a value (memset), or device-to-device data copies. The cost of launching such kernels is high when processing small image tiles. The kernel for type transformations, for instance, takes about 77 us and 864 us for 1Kx1K and 4Kx4K tiles, respectively. An operation processing a 4Kx4K image region in 1Kx1K tiles needs to call the kernel 16 times, which takes 1232 us, compared to once when the same region is processed in 4Kx4K tiles. For the memset and device-to-device kernels, a single kernel call costs more or less the same for 1Kx1K and 4Kx4K tiles, making the situation worse. These performance issues are also observed in kernels with higher execution times, such as Reconstruct to Nucleus (ReconToNuclei), Fill Holes, Pre-Watershed and Watershed. The ReconToNuclei kernel, for instance, takes about 41 ms and 348 ms for 1Kx1K and 4Kx4K tiles, respectively. Processing a 4Kx4K image region in 1Kx1K tiles would result in a time cost of 656 ms. In addition to fewer kernel calls, larger image tiles lead to a lower probability of thread collision. This in turn reduces the amount of serialization during atomic memory updates during the execution of a kernel. These results are consistent with other studies [102, 103] in which similar CPU/GPU relative performance trends were observed as data sizes increase. As is shown in Figs. 8(a) and (b), the 4Kx4K tile size attains the best performance (tile sizes higher than 4Kx4K did not show significant performance gains). Hence, we used 4Kx4K tiles in the rest of the experiments.
Performance improvements with the GPU-based version of the segmentation and feature computation pipeline. a Application execution according to tile size. b Application speedup according to tile size. c Speedup of internal operations of the application using 4Kx4K image tiles. d Percentage of the application execution time consumed per operation using 4Kx4K image tiles
Figures 8(c) and (d) present the amount of GPU acceleration for operations in the segmentation and feature computation steps and their weight to the execution of the entire pipeline, respectively. The amount of acceleration varies significantly among the operations because of their different computation patterns. The segmentation operations are mostly irregular and, as such, are likely to attain lower speedups on the GPU compared with operations with more regular data access and processing patterns. The operations that perform a flood fill execution strategy (ReconToNuclei, Fill Holes, Pre-Watershed), for instance, perform irregular data access. As described in Section 2.2, we implemented a hierarchical parallel queue data structure to improve the performance of such operations on GPUs. These operations attained similar levels of acceleration. The Pre-Watershed operation achieved slightly higher performance improvements because it is more compute intensive. AreaThreshold and BWLabel (Black and White Label), for instance, rely on the use of atomic instructions. Atomic instructions on a GPU may be costly in cases in which threads in a warp (group of threads that execute the same instruction in a lock-step) try to access the same memory address, because data accesses are serialized. This explains the low speedups attained by these operations. The operations in the feature computation phase are more regular and compute intensive. Those operations benefit from the high data throughput of a GPU for regular data access as well as the high computation capacity of the GPU.
We profiled the operations in the segmentation and feature computation steps using the NVIDIA nvprof tool to measure their efficiency with respect to the use of the GPU stream multiprocessors (sm_efficiency) and the number of instructions executed per cycle (IPC). We chose these metrics because most of the operations in the targeted analytical pipelines execute integer operations or a mixture of integer and floating-point operations. As is presented in Table 3, the efficiency of the operation kernels is high (over 93 %), as is expected from kernels that are well designed for GPU execution. Among the kernels, regular operations, such as Red Blood Cell Detection and feature computation, achieved higher efficiency, while irregular operations, such as those based on the flood-fill scheme (see Section 2.2), tend to have a slightly lower efficiency. This is expected, since the flood-fill scheme only touches pixels if they contribute to wave propagation. In this scheme, there will be few active pixels (those in the wavefront that are stored in a queue) towards the end of execution. As a result, some stream multiprocessors (SMs) have no work assigned to them. The IPC metric is higher for operations that achieve higher speedup values, as expected. For floating-point operations, the maximum IPC is 2, whereas it may be higher for integer-based operations. Also, memory-bound operations tend to have smaller IPC values. Note that the feature computation step includes several operations that mix integer and floating-point instructions. Note also that, although useful in our evaluation, the reported metrics may not be useful for comparing two different algorithms -- for instance, there are implementations of the flood-fill scheme that are regular (they perform raster-/anti-raster scan passes on the image). These implementations will show higher values for the reported metrics while resulting in higher (worse) execution times as compared with our implementations.
Profiling information of the pipeline operations using NVIDIA nvprof tool
Columns: Pipeline Operation, sm_efficiency, IPC. Operations listed: Red Blood Cell (RBC) Detection, Morphological Open, Reconstruct to Nucleus, Black and White Label, Features Computation.
We collected sm_efficiency and IPC metrics, which are the percentage of time at least one warp is active on a multiprocessor averaged over all multiprocessors on the GPU and the instructions executed per cycle, respectively
Cooperative execution on CPUs and GPUs
These experiments assess the performance impact when CPU cores and GPUs are cooperatively used on a computation node. Two versions of the pipeline were used in the experiments: (i) 1L refers to the version in which the operations are bundled together and each stage executes using either the CPU or the GPU; (ii) 2L is the version expressed as a hierarchical pipeline with the individual operations in a stage exposed for scheduling. Two scheduling strategies were compared: (i) First Come, First Served (FCFS), which does not take performance variability into account, and (ii) PATS. PATS uses the speedups presented in Fig. 8(c) for scheduling decisions.
The results obtained using three randomly selected images are presented in Fig. 9(a). In configurations where the CPU cores and GPUs are used together, 3 CPU cores manage the 3 GPUs, leaving 9 CPU cores for computation. In these experiments each GPU and each CPU core receives an image tile for processing via our scheduling strategies; as such, all of the available CPU cores are used during execution. As is shown in the figure, the performance of the analysis pipeline improves by 80 % when the GPUs are used compared to when only CPUs are used. In the 1L version of the application, PATS is not able to make better decisions than FCFS, because all operations in a stage are executed as a single coarse-grain task. The 2L version using PATS improved the performance by more than 30 %. This performance gain is a result of the ability of PATS to maximize system utilization by assigning each task to the most appropriate device. As shown in Table 4, PATS mapped most of the tasks with high GPU speedups to GPUs and the ones with lower speedups to CPUs. FCFS, however, scheduled 58 % of the tasks to the GPUs and 42 % to the CPUs regardless of the GPU speedup of an operation. Figure 9(b) shows the performance impact of the data locality conscious task assignment (DL) and data prefetching (Prefetching) optimizations. The 2L version of the pipeline was used in these experiments, because its data transfer cost to execution ratio is higher compared to the 1L version. The data transfer cost in the 1L version is a small fraction of the execution time, because instances of the stages are scheduled as coarse-grain tasks. The optimizations improved the performance of the analysis pipeline by 10 % (when FCFS is used for task scheduling) and 7 % (when PATS is used for task scheduling). The gains are smaller with PATS because PATS may decide that it is better to download the operation results back to the CPU in order to map another operation to the GPU.
Performance of the segmentation and feature computation steps with different versions: multi-core CPU, multi-GPU, and cooperative CPU-GPU. The CPU-GPU version also evaluates the composition of the application as a single-level coarse-grained pipeline (1L), in which all stage operations are executed as a single task, and as a hierarchical pipeline (2L), in which fine-grained operations in a stage are exported to the runtime system. Additionally, the FCFS and PATS scheduling strategies are used to assign tasks to CPUs and GPUs. a Performance of multi-core CPU, multi-GPU and cooperative CPU-GPU versions of the application. b Improvements with data locality (DL) mapping and asynchronous data copy optimizations
Percent of tasks assigned to CPU and GPU according to the scheduling policy
Columns: Scheduling policy and, for each pipeline operation (Red Blood Cell Detection, …), the percentage of tasks assigned to the CPU and to the GPU.
While FCFS assigns all pipeline operations to the CPU and the GPU in similar proportions, PATS preferentially assigns to the GPU those operations that attain higher speedups on this device. Thus, PATS better utilizes the hybrid system
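To make the difference between the two scheduling policies concrete, the sketch below illustrates the core idea behind a performance-aware assignment in the spirit of PATS: an idle GPU takes the queued task with the highest estimated GPU speedup, while an idle CPU core takes the one with the lowest. This is a simplified illustration, not the scheduler implemented in the runtime system; the task names and speedup values are hypothetical.

```python
import heapq

# Hypothetical (operation, estimated GPU speedup) pairs, loosely in the spirit of Fig. 8(c).
tasks = [("ReconToNuclei", 10.5), ("FillHoles", 7.3), ("AreaThreshold", 1.4),
         ("BWLabel", 2.0), ("PreWatershed", 11.2), ("Features", 14.8)]

# Two heaps over the same ready queue: GPUs pop the largest speedup,
# CPUs pop the smallest. 'done' marks tasks already taken by the other heap.
max_heap = [(-s, name) for name, s in tasks]
min_heap = [(s, name) for name, s in tasks]
heapq.heapify(max_heap)
heapq.heapify(min_heap)
done = set()

def next_for_gpu():
    """An idle GPU receives the ready task with the highest estimated speedup."""
    while max_heap:
        s, name = heapq.heappop(max_heap)
        if name not in done:
            done.add(name)
            return name, -s
    return None

def next_for_cpu():
    """An idle CPU core receives the ready task with the lowest estimated speedup."""
    while min_heap:
        s, name = heapq.heappop(min_heap)
        if name not in done:
            done.add(name)
            return name, s
    return None

# Simulate one scheduling round: one idle GPU and one idle CPU core request work.
print("GPU gets:", next_for_gpu())   # highest-speedup task, e.g. ('Features', 14.8)
print("CPU gets:", next_for_cpu())   # lowest-speedup task, e.g. ('AreaThreshold', 1.4)
```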
Execution on multiple nodes
These experiments evaluate the scalability of the analysis pipeline when multiple nodes of the machine are used. The evaluation was carried out using 340 Glioblastoma brain tumor WSIs, which were partitioned into a total of 36,848 4Kx4K tiles. The input data tiles were stored as image files on the Lustre file system. The results represent end-to-end execution of the analysis pipeline, which includes the overhead of reading input data. The execution times, as the number of nodes is varied from 8 to 100, are shown in Fig. 10(a). All implementations of the analysis pipeline achieved good speedup as more nodes were added. The performance improvements with the cooperative use of CPUs and GPUs as compared to the CPU-only executions were on average 2.45x with PATS and 1.66x with FCFS. The performance gains with the cooperative use of CPUs and GPUs are significant across the board. The analysis pipeline with CPU + GPU and PATS is at least 2.1x faster than the version that uses 12 CPU cores only. Figure 10(b) presents the throughput (tiles/s) with respect to the number of nodes. On 100 nodes with 1200 CPU cores and 300 GPUs, the entire set of 36,848 tiles was processed in less than four minutes.
Multi-node execution of the Segmentation and Feature Computation in a strong-scaling experiment. a Execution times. b Throughput in number of tiles processed per second
Function 3: performance of consensus clustering implementation
The performance evaluation of the consensus clustering implementation was carried out on a state-of-the-art shared-memory system, called Nautilus. The Nautilus system is an SGI Altix UV 1000 funded by the National Science Foundation to provide resources for data analysis, remote visualization, and method development. It consists of 1024 CPU cores and 4 TB of global shared memory accessible through a job scheduling system. In the experiments we used a dataset with 200 million nuclei and 75 features per nucleus. We created a sample of 500,000 data points by randomly selecting nuclei from all the image tiles such that each image tile contributed the same number of nuclei – if an image tile had fewer nuclei than necessary, all of the nuclei in that image tile were added to the sampled dataset. The execution times of the base clustering, consensus matrix construction, and final clustering phases of the consensus clustering process are shown in Fig. 11. The number of clusters was set to 200 in the experiments. As is seen from the figure, the execution times of all the phases decrease as more CPU cores are added – speedup values of 2.52, 2.76, and 2.82 are achieved on 768 cores compared to 256 cores. The memory consumption on 768 cores was about 1.1 TB, including the space required for the data structures used by the k-means algorithm.
Execution times of three phases (base clustering runs, consensus matrix construction, and final clustering) in the consensus clustering process. The number of samples is 500,000. The base clustering runs and the final clustering are set to generate 200 clusters. The number of CPU cores is varied from 256 to 768. Note that the y-axis is logarithmic scale
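As a rough illustration of the three phases timed above (base clustering runs, consensus matrix construction, and final clustering), the sketch below runs several base k-means clusterings on resampled data, accumulates a co-association (consensus) matrix, and extracts a final clustering from it with hierarchical linkage. It is a minimal, small-scale sketch on synthetic data using scikit-learn and SciPy; the shared-memory parallel implementation evaluated here is far more elaborate, and all sizes and parameter values in the sketch are illustrative rather than those of the experiments.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))          # toy stand-in for the sampled nuclear features
n_runs, k_base, k_final = 10, 8, 4     # illustrative, not the paper's 200 clusters

# Phase 1: base clustering runs on resampled subsets of the data.
labels_per_run = []
for r in range(n_runs):
    idx = rng.choice(len(X), size=len(X), replace=True)
    km = KMeans(n_clusters=k_base, n_init=5, random_state=r).fit(X[idx])
    labels_per_run.append(km.predict(X))   # assign every point for the consensus step

# Phase 2: consensus (co-association) matrix -- the fraction of runs in which
# two points fall into the same base cluster.
consensus = np.zeros((len(X), len(X)))
for labels in labels_per_run:
    consensus += (labels[:, None] == labels[None, :])
consensus /= n_runs

# Phase 3: final clustering on the consensus matrix, treating
# (1 - consensus) as a pairwise distance.
dist = squareform(1.0 - consensus, checks=False)
final_labels = fcluster(linkage(dist, method="average"), t=k_final, criterion="maxclust")
print("final cluster sizes:", np.bincount(final_labels)[1:])
```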
Grading cancer specimens is a challenging task and can be ambiguous for some cases exhibiting characteristics within the various stages of progression ranging from low grade to high. Innovations in tissue imaging technologies have made it possible for researchers and clinicians to collect high-resolution whole slide images more efficiently. These datasets contain rich information that can complement information from gene expression, clinical, and radiology image datasets to better understand the underlying biology of disease onset and progression and to improve the diagnosis and grading process. However, the size of the datasets and compute-intensive pipelines necessary for analysis create barriers to the use of tissue image data. In our work we have identified three core functions to support more effective use of large datasets of tissue images in research and clinical settings. These functions are implemented through a suite of efficient methods as well as runtime frameworks that target modern high performance computing platforms.
The capacity to search and compare the morphology and staining characteristics across imaged specimens or within a given tissue sample is extremely valuable for assisting investigators and physicians who are charged with staging and classifying tissue samples. The methods of Function 1 (CBIR) enable this capacity. They can be used to compare image-based feature signatures of unclassified imaged specimens with the profiles of a set of "gold-standard" cases and to enable automatic retrieval of those samples exhibiting the most similar morphological and staining characteristics – in the case of prostate cancers, to deliver the computed Gleason score and confidence interval to the individual seeking support. Likewise, investigators can provide a representative sample within a given imaged specimen and use the methods to quickly detect and locate other sub-regions throughout the specimen that exhibit similar signatures. Our team is currently building an ImageMiner portal for diverse histopathology image analysis and applications, which includes medical image segmentation, CBIR, and registration. Upon completion the portal will be made available as open source to the research and clinical communities.
The methods and tools of Functions 2 and 3 are critical to building the capacity for analyses with very large tissue image datasets. Our work has demonstrated that high data processing rates can be achieved on modern HPC systems with CPU-GPU hybrid nodes. This is made possible by employing techniques that take into account variation in GPU performance of individual operations and implement data reuse and data prefetching optimizations. Shared memory systems provide a viable platform with large memory space and computing capacity for the classification stage when it is applied on segmented objects.
The experiments for the high performance computing software tools used datasets publicly available from The Cancer Genome Atlas repository (https://tcga-data.nci.nih.gov/tcga/). The source codes for the analysis pipelines in these experiments are released as a public open source through the following links: https://github.com/SBU-BMI/nscale and https://github.com/SBU-BMI/region-templates.
The work presented in this manuscript is focused on the development of software tools and methods. We have used publicly available datasets and de-identified datasets approved by the Institutional Review Boards for the respective grants: 5R01LM011119-05, 5R01LM009239-07, and 1U24CA180924-01A1.
This work is not a prospective study involving human participants.
The datasets used in the high performance computing experiments are publicly available from The Cancer Genome Atlas repository (https://tcga-data.nci.nih.gov/tcga/).
http://nvidia.com/cuda
https://github.com/SBU-BMI/nscale
https://github.com/SBU-BMI/region-templates
This work was funded in part by HHSN261200800001E from the NCI, 1U24CA180924-01A1 from the NCI, 5R01LM011119-05 and 5R01LM009239-07 from the NLM, and CNPq. This research used resources provided by the XSEDE Science Gateways program under grant TG-ASC130023, the Keeneland Computing Facility at the Georgia Institute of Technology, supported by the NSF under Contract OCI-0910735, and the Nautilus system at the University of Tennessee's Center for Remote Data Analysis and Visualization supported by NSF Award ARRA-NSF-OCI-0906324.
TK, GT, MN, FW designed the high performance computing and data management components and carried out experiments for performance evaluation. XQ, DW, LY developed the content based image retrieval methodologies. XQ, DW, LY, LC provided image analysis expertise and provided codes used for image analysis. JS and DF supervised the overall effort. All authors read and approved the final manuscript.
Department of Biomedical Informatics, Stony Brook University, Stony Brook, USA
Department of Pathology & Laboratory Medicine, Rutgers -- Robert Wood Johnson Medical School, New Brunswick, USA
Department of Electrical and Computer Engineering, Rutgers University, New Brunswick, USA
Department of Computer Science, Stony Brook University, Stony Brook, USA
Department of Computer Science, University of Brasilia, Brasília, Brazil
Department of Biomedical Informatics, Emory University, Atlanta, USA
Department of Biomedical Engineering, University of Florida, Gainesville, USA
Rutgers Cancer Institute of New Jersey, New Brunswick, USA
Saltz J, Kurc T, Cooper L, Kong J, Gutman D, Wang F, et al.. Multi-Scale, Integrative Study of Brain Tumor: In Silico Brain Tumor Research Center. Proceedings of the Annual Symposium of American Medical Informatics Association 2010 Summit on Translational Bioinformatics (AMIA-TBI 2010), San Francisco, LA 2010.Google Scholar
Cooper LAD, Kong J, Gutman DA, Wang F, Cholleti SR, Pan TC, et al. An integrative approach for in silico glioma research. IEEE Trans Biomed Eng. 2010;57(10):2617–21.View ArticlePubMedPubMed CentralGoogle Scholar
Cooper LAD, Kong J, Gutman DA, Wang F, Gao J, Appin C, et al. Integrated morphologic analysis for the identification and characterization of disease subtypes. J Am Med Inform Assoc. 2012;19(2):317–23.View ArticlePubMedPubMed CentralGoogle Scholar
Beroukhim R, Getz G, Nghiemphu L, Barretina J, Hsueh T, Linhart D, et al. Assessing the significance of chromosomal aberrations in cancer: methodology and application to glioma. Proc Natl Acad Sci U S A. 2007;104(50):20007–12.View ArticlePubMedPubMed CentralGoogle Scholar
Filippi-Chiela EC, Oliveira MM, Jurkovski B, Callegari-Jacques SM, da Silva VD, Lenz G. Nuclear morphometric analysis (NMA): screening of senescence, apoptosis and nuclear irregularities. PLoS ONE. 2012;7(8):e42522.View ArticlePubMedPubMed CentralGoogle Scholar
Gurcan MN, Pan T, Shimada H, Saltz J. Image Analysis for Neuroblastoma Classification: Segmentation of Cell Nuclei. In: 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2006. p. 4844–7.Google Scholar
Han J, Chang H, Fontenay GV, Spellman PT, Borowsky A, Parvin B. Molecular bases of morphometric composition in Glioblastoma multiforme. In: 9th IEEE International Symposium on Biomedical Imaging (ISBI '12): 2012. IEEE: 1631-1634.Google Scholar
Kothari S, Osunkoya AO, Phan JH, Wang MD: Biological interpretation of morphological patterns in histopathological whole-slide images. In: The ACM Conference on Bioinformatics, Computational Biology and Biomedicine: 2012. ACM: 218-225.Google Scholar
Phan J, Quo C, Cheng C, Wang M. Multi-scale integration of-omic, imaging, and clinical data in biomedical informatics. IEEE Rev Biomed Eng. 2012;5:74–87.View ArticlePubMedGoogle Scholar
Cooper L, Kong J, Wang F, Kurc T, Moreno C, Brat D et al.. Morphological Signatures and Genomic Correlates in Glioblastoma. In: IEEE International Symposium on Biomedical Imaging: From Nano to Macro: 2011; Beijing, China. 1624-1627.Google Scholar
Kong J, Cooper L, Sharma A, Kurc T, Brat D, Saltz J. Texture Based Image Recognition in Microscopy Images of Diffuse Gliomas With Multi-Class Gentle Boosting Mechanism. Dallas: The 35th International Conference on Acoustics, Speech, and Signal Processing (ICASSP); 2010. p. 457–60.Google Scholar
Kong J, Sertel O, Boyer KL, Saltz JH, Gurcan MN, Shimada H. Computer-assisted grading of neuroblastic differentiation. Arch Pathol Lab Med. 2008;132(6):903–4.PubMedGoogle Scholar
Gudivada VN, Raghavan VV: Content-based image retrieval system. Computer 1995:18-21.Google Scholar
Flickener M, Sawhney H, Niblack W, Ashley J, Huang Q, Dom B, et al. Query by image and video content: the qbic system. Computer. 1995;28(9):23–32.View ArticleGoogle Scholar
Smith JR, Chang SF. Visualseek: A Fully Automated Content-Based Image Query System. In: Proceeding of the Fourth ACM Internation Multimedia Conference and Exhibition. 1996. p. 87–98.Google Scholar
Tagare HD, Jaffe CC, Duncan J. Medical image databases: a content-based retrieval approach. J Am Med Inform Assoc. 1997;4:184–98.View ArticlePubMedPubMed CentralGoogle Scholar
Smeulders AWM, Worring M, Santini S, Gupta A, Jainh R. Content-based image retrieval at the end of early years. IEEE Trans Pattern Anal Machine Intel. 2000;22:1349–80.View ArticleGoogle Scholar
Wang J, Li J, Wiederhold G. Simplicity: semantics-sensitive integrated matching for picture libraries. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001;23:947–63.View ArticleGoogle Scholar
Chen Y, Wang J. A region-based fuzzy feature matching approach to content-based image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002;24:1252–67.View ArticleGoogle Scholar
Chang E, Goh K, Sychay G, Wu G. CBSA: content-based soft annotation for multimodal image retreival using bayes point machines. IEEE Transations on Circuits and Systems for Video Technology. 2003;13:26–38.View ArticleGoogle Scholar
Zheng L, Wetzel AW, Gilbertson J, Becich MJ. Design and analysis of a content-based pathology image retrieval system. IEEE Trans Inf Technol Biomed. 2003;7(4):245–55.Google Scholar
Muller H, Michoux N, Bandon D, Geissbuhler A. A review of content-basd image retrieval systems in medical applicaitons - clinical benefits and future directions. Int J Med Inform. 2004;73:1–23.View ArticlePubMedGoogle Scholar
Lehmann TM, Guld MO, Deselaeers T, Keysers D, Schubert H, Spitzer K, et al. Automatic categorization of medical images for content-based retrieval and data mining. Comput Med Imaging Graph. 2005;29:143–55.View ArticlePubMedGoogle Scholar
Lam M, Disney T, Pham M, Raicu D, Furst J, Susomboon R. Content-based image retrieval for pulmonary computed tomography nodule images. Proc SPIE 6516, Medical Imaging 2007: PACS and Imaging Informatics, 65160 N 2007, 6516.Google Scholar
Rahman MM, Antani SK, Thoma GR. A learning-based similarity fusion and filtering approach for biomedical image retrieval using SVM classificaiton and relevance feedback. IEEE Trans Inf Technol Biomed. 2011;15(4):640–6.View ArticlePubMedGoogle Scholar
Thies C, Malik A, Keysers D, Kohnen M, Fischer B, Lehmann TM. Hierarchical feature clustering for content-based retrieval in medical image databases. Proc SPIE. 2003;5032:598–608.View ArticleGoogle Scholar
El-Naqa I, Yang Y, Galatsanos NP, NIshikawa RM, Wernick MN. A similarity learning approach to content-based image retrieval: application to digital mammography. IEEE Trans Med Imaging. 2004;23:1233–44.View ArticlePubMedGoogle Scholar
Akakin HC, Gurcan MN. Content-based microscopic image retrieval system for multi-image queries. IEEE Trans Inf Technol Biomed. 2012;16:758–69.View ArticlePubMedPubMed CentralGoogle Scholar
Zhang Q, Izquierdo E. Histology image retrieval in optimized multifeature spaces. IEEE Journal of Biomedical and Health Informatics. 2013;17:240–9.View ArticlePubMedGoogle Scholar
Tang HL, Hanka R, Ip HH. Histology image retrieval based on semantic content analysis. IEEE Trans Inf Technol Biomed. 2003;7:26–36.View ArticlePubMedGoogle Scholar
Schmidt-Saugenon P, Guillod J, Thiran JP. Towards a computer-aided diagnosis system for pigmented skin lesions. Comput Med Imag Graphics. 2003;27:65–78.View ArticleGoogle Scholar
Sbober A, Eccher C, Blanzieri E, Bauer P, Cristifolini M, Zumiani G, et al. A multiple classifier system for early melanoma diagnosis. Artifical Intel Med. 2003;27:29–44.View ArticleGoogle Scholar
Meyer F. Automatic screening of cytological specimens. Comput Vis Graphics Image Proces. 1986;35:356–69.View ArticleGoogle Scholar
Mattie MEL, Staib ES, Tagare HD, Duncan J, Miller PL. Content-based cell image retrieval using automated feature extraction. J Am Med Informatics Assoc. 2000;7:404–15.View ArticleGoogle Scholar
Beretti S, Bimbo AD, Pala P. Content-Based Retrieval of 3D Cellular Structures. In: Proceeding of the 2nd International Conference on Multimedica and Exposition, IEEE Computer Society. 2001. p. 1096–9.Google Scholar
Pentland A, Picard RW, Sclaroff S. Phtobook: tools for content-based manipulation of image databases. Int J Comput Vis. 1996;18:233–45.View ArticleGoogle Scholar
Lehmann TM, Guld MO, Thies C, Fischer B, Spitzer K, Keysers D, et al. Content-based image retrieval in medical applications. Methods Inf Med. 2004;4:354–60.Google Scholar
Cox IJ, Miller ML, Omohundro SM, Yianilos PN. Target Testing and the Picchunter Multimedica Retrieval System. Advances in Digital Libraries. Washington: Library of Congress; 1996. p. 66–75.Google Scholar
Carson C, Belongies S, Greenspan H, Malik J. Region-Based Image Querying. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. 1997. p. 42–51.Google Scholar
Bui AAT, Taira RK, Dionision JDN, Aberle DR, El-Saden S, Kangarloo H. Evidence-based rediology. Acad Radiol. 2002;9:662–9.View ArticlePubMedGoogle Scholar
Qi X, Wang D, Rodero I, Diaz-Montes J, Gensure RH, Xing F, et al. Content-based histopathology image retrieval using Comet Cloud. BMC Bioinformatics. 2014;15:287. doi:10.1186/1471-2105-1115-1287.View ArticlePubMedPubMed CentralGoogle Scholar
Kong J, Cooper LAD, Wang F, Gutman DA, Gao J, Chisolm C, et al. Integrative, multimodal analysis of glioblastoma using tcga molecular data, pathology images and clinical outcomes. IEEE Trans Biomed Eng. 2011;58:3469–74.View ArticlePubMedPubMed CentralGoogle Scholar
Cavallaro A, Graf F, Kriegel H, Schubert M, Thoma M. Reion of Interest Queries in CT Scans. In: Proceedings of the 12th Internatinal Conference on Advances in Spatial and Temporal Databases. 2011. p. 65–73.Google Scholar
Naik J, Doyle S, Basavanhally A, Ganesan S, Feldman MD, Tomaszwski JE, et al. A boosted distance metric: application to content based image retrieval and classification of digitized histopathology. Proceedings of SPIE Medical Imaging. 2009;7260:1–4.Google Scholar
Chen W, Schmidt C, Parashar M, Reiss M, Foran DJ. Decentralized data sharing of tissue microarrays for investigative research in oncology. Cancer Informat. 2006;2:373–88.Google Scholar
Yang L, Chen W, Meer P, Salaru G, Feldman MD, Foran DJ. High throughput analysis of breast cancer specimens on the grid. Med Image Comput Assist Interv. 2007;10(1):617–25.Google Scholar
Yang L, Tuzel O, Chen W, Meer P, Salaru G, Goodell LA, et al. PathMiner: a web-based tool for computer-assisted diganostics in pathology. IEEE Trans Inf Technol Biomed. 2009;13(3):291–9.View ArticlePubMedPubMed CentralGoogle Scholar
Foran DJ, Yang L, Chen W, Hu J, Goodell LA, Reiss M, et al. ImageMiner: a software system for comparative analysis of tissue microarrays using content-based image retrieval, high-performance computing, and grid technology. J Am Med Inform Assoc. 2011;18(4):403–15.View ArticlePubMedPubMed CentralGoogle Scholar
Qi X, Kim H, Xing F, Parashar M, Foran DJ, Yang L. The analysis of image feature robustness using CometCloud. Journal of Pathology Informatics. 2012;3.Google Scholar
Vetter JS, Glassbrook R, Dongarra J, Schwan K, Loftis B, McNally S, et al. Keeneland: bringing heterogeneous GPU computing to the computational science community. Computing in Science and Engineering. 2011;13(5):90–5.View ArticleGoogle Scholar
Linderman MD, Collins JD, Wang H, Meng TH. Merge: a programming model for heterogeneous multi-core systems. SIGPLAN Notices. 2008;43(3):287–96.View ArticleGoogle Scholar
Diamos GF, Yalamanchili S. Harmony: An Execution Model and Runtime for Heterogeneous Many-Core Systems. In: Proceedings of the 17th International Symposium on High Performance Distributed Computing, vol. 1383447. Boston: ACM; 2008. p. 197–200.Google Scholar
Luk C-K, Hong S, Kim H. Qilin: Exploiting Parallelism on Heterogeneous Multiprocessors With Adaptive Mapping. In: Proceedings of the 42nd Annual IEEE/ACM International Symposium on Microarchitecture, vol. 1669121. New York: ACM; 2009. p. 45–55.Google Scholar
Augonnet C, Thibault S, Namyst R, Wacrenier P-A. StarPU: a unified platform for task scheduling on heterogeneous multicore architectures. Concurr Comput : Pract Exper. 2011;23(2):187–98.View ArticleGoogle Scholar
Teodoro G, Oliveira RS, Sertel O, Gurcan MN, Jr. WM, Çatalyürek ÜV, Ferreira R: Coordinating the use of GPU and CPU for improving performance of compute intensive applications. In: CLUSTER: 2009; New Orleans, Louisiana. conf/cluster/TeodoroOSGMCF09: IEEE: 1-10.Google Scholar
Sundaram N, Raghunathan A, Chakradhar ST: A framework for efficient and scalable execution of domain-specific templates on GPUs. In: Proceedings of the 2009 IEEE International Symposium on Parallel & Distributed Processing: 2009. 1587427: IEEE Computer Society: 1-12.Google Scholar
Teodoro G, Hartley TDR, Catalyurek U, Ferreira R: Run-time optimizations for replicated dataflows on heterogeneous environments. In: Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing: 2010; Chicago, Illinois. 1851479: ACM: 13-24.Google Scholar
Teodoro G, Hartley TD, Catalyurek UV, Ferreira R. Optimizing dataflow applications on heterogeneous environments. Clust Comput. 2012;15(2):125–44.View ArticleGoogle Scholar
Bosilca G, Bouteiller A, Herault T, Lemarinier P, Saengpatsa NO, Tomov S, Dongarra JJ: Performance Portability of a GPU Enabled Factorization with the DAGuE Framework. In: Proceedings of the 2011 IEEE International Conference on Cluster Computing: 2011. 2065710: IEEE Computer Society: 395-402.Google Scholar
Ravi VT, Ma W, Chiu D, Agrawal G. Compiler and Runtime Support for Enabling Generalized Reduction Computations on Heterogeneous Parallel Configurations. In: Proceedings of the 24th ACM International Conference on Supercomputing, vol. 1810106. Tsukuba, Ibaraki, Japan: ACM; 2010. p. 137–46.View ArticleGoogle Scholar
Huo X, Ravi VT, Agrawal G. Porting irregular reductions on heterogeneous CPU-GPU configurations. In: Proceedings of the 18th International Conference on High Performance Computing, vol. 2192618. Bangalore: IEEE Computer Society; 2011. p. 1–10.Google Scholar
Lee S, Min S-J, Eigenmann R: OpenMP to GPGPU: a compiler framework for automatic translation and optimization. In: Proceedings of the 14th ACM SIGPLAN symposium on Principles and practice of parallel programming: 2009; Raleigh, NC, USA. 1504194: ACM: 101-110.Google Scholar
Bradski G, Kaehler A. Learning OpenCV: Computer vision with the OpenCV library: O'Reilly. 2008.Google Scholar
Kahn MG, Weng C. Clinical research informatics: a conceptual perspective. J Am Med Inform Assoc. 2012;19:36–42.View ArticleGoogle Scholar
Carriero N, Osier MV, Cheung K-H, Miller PL, Gerstein M, Zhao H, et al. Case Report: A High Productivity/Low Maintenance Approach to High-performance Computation for Biomedicine: Four Case Studies. J Am Med Inform Assoc. 2005;12(1):90–8.View ArticlePubMedPubMed CentralGoogle Scholar
Lindberg DAB, Humphrey BL. High-performance computing and communications and the national information infrastructure: New opportunities and challenges. J Am Med Inform Assoc. 1995;2(3):197.View ArticlePubMedPubMed CentralGoogle Scholar
Huang Y, Lowe HJ, Klein D, Cucina RJ. Improved identification of noun phrases in clinical radiology reports using a high-performance statistical natural language parser augmented with the UMLS specialist lexicon. J Am Med Inform Assoc. 2004;12(3):275–85.View ArticleGoogle Scholar
Kaspar M, Parsad NM, Silverstein JC. An optimized web-based approach for collaborative stereoscopic medical visualization. J Am Med Inform Assoc. 2013;20(3):535–43.View ArticlePubMedGoogle Scholar
Yang L, Chen W, Meer P, Salaru G, Goodell LA, Berstis V, et al. Virtual microscopy and grid-enabled decision support for large-scale analysis of imaged pathology specimens. Trans Info Tech Biomed. 2009;13(4):636–44.View ArticleGoogle Scholar
Eliceiri KW, Berthold MR, Goldberg IG, Ibanez L, Manjunath BS, Martone ME, et al. Biological imaging software tools. Nat Meth. 2012;9(7):697–710.View ArticleGoogle Scholar
Fang Z, Lee JH. High-throughput optogenetic functional magnetic resonance imaging with parallel computations. J Neurosci Methods. 2013;218(2):184–95.View ArticlePubMedPubMed CentralGoogle Scholar
Wang Y, Du H, Xia M, Ren L, Xu M, Xie T, et al. A hybrid CPU-GPU accelerated framework for fast mapping of high-resolution human brain connectome. PLoS ONE. 2013;8(5):e62789.View ArticlePubMedPubMed CentralGoogle Scholar
Webb C, Gray A. Large-scale virtual acoustics simulation at audio rates using three dimensional finite difference time domain and multiple graphics processing units. J Acoust Soc Am. 2013;133(5):3613.View ArticleGoogle Scholar
Hernández M, Guerrero GD, Cecilia JM, García JM, Inuggi A, Jbabdi S, et al. Accelerating fibre orientation estimation from diffusion weighted magnetic resonance imaging using GPUs. PLoS ONE. 2013;8(4), e61892.View ArticlePubMedPubMed CentralGoogle Scholar
Hu X, Liu Q, Zhang Z, Li Z, Wang S, He L, et al. SHEsisEpi, a GPU-enhanced genome-wide SNP-SNP interaction scanning algorithm, efficiently reveals the risk genetic epistasis in bipolar disorder. Cell Res. 2010;20(7):854–7.View ArticlePubMedGoogle Scholar
Sertel O, Kong J, Shimada H, Catalyurek UV, Saltz JH, Gurcan MN. Computer-aided prognosis of neuroblastoma on whole-slide images: classification of stromal development. Pattern Recogn. 2009;42(6):1093–103.View ArticleGoogle Scholar
Ruiz A, Sertel O, Ujaldon M, Catalyurek U, Saltz JH, Gurcan M. Pathological Image Analysis Using the GPU: Stroma Classification for Neuroblastoma. In: IEEE International Conference on Bioinformatics and Biomedicine: 2007; Fremont, CA. 78-88.Google Scholar
Hartley TDR, Catalyurek U, Ruiz A, Igual F, Mayo R, Ujaldon M: Biomedical image analysis on a cooperative cluster of GPUs and multicores. In: Proceedings of the 22nd annual international conference on Supercomputing: 2008; Island of Kos, Greece. 1375533: ACM: 15-25.Google Scholar
Teodoro G, Pan T, Kurc TM, Kong J, Cooper LAD, Saltz JH. Efficient irregular wavefront propagation algorithms on hybrid CPU–GPU machines. Parallel Comput. 2013;39(4–5):189–211.View ArticlePubMedPubMed CentralGoogle Scholar
Teodoro G, Kurc TM, Pan T, Cooper LAD, Jun K, Widener P et al..: Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems. In: Proceedings of the IEEE 26th International Parallel & Distributed Processing Symposium: 21-25 May 2012 2012; Shanghai, China. 1093-1104.Google Scholar
Teodoro G, T. Pan, T. M. Kurc, J. Kong, L. A. Cooper, N. Podhorszki, et al.. High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms. In: the 27th IEEE International Parallel and Distributed Processing Symposium (IPDPS): May 20-24 2013; Boston, Massachusetts, USA. May 20-24. 24: 103 - 114.Google Scholar
Asur S, Ucar D, Parthasarathy S. An ensemble framework for clustering protein-protein interaction networks. Bioinformatics. 2007;23(13):i29–40.View ArticlePubMedGoogle Scholar
Forero P, Cano A, Giannakis G. Consensus Based k-Means Algorithm for Distributed Learning Using wireless sensor networks. Signal and Info Process, Sedona, AZ: Proc Workshop on Sensors; 2008.Google Scholar
Hore P, Hall LO, Goldgof DB. A scalable framework for cluster ensembles. Pattern Recogn. 2009;42(5):676–88.View ArticleGoogle Scholar
Iam-on N, Garrett S. LinkCluE: a MATLAB package for link-based cluster ensembles. J Stat Softw. 2010;36(9):1–36.View ArticleGoogle Scholar
Luo DJ, Ding C, Huang H, Nie FP. Consensus Spectral Clustering in Near-Linear Time. IEEE 27th International Conference on Data Engineering (ICDE 2011). 2011. p. 1079–90.View ArticleGoogle Scholar
Minaei-Bidgoli B, Topchy A, Punch W. A Comparison of Resampling Methods for Clustering Ensembles. International Conference on Machine Learning; Models, Technologies and Application (MLMTA04). 2004. p. 939–45.Google Scholar
Strehl A, Ghosh J. Cluster Ensembles - A Knowledge Reuse Framework for Combining Partitionings. In: Proceedings of Eighteenth National Conference on Artificial Intelligence (AAAI-02)/Fourteenth Innovative Applications of Artificial Intelligence Conference (IAAI-02). 2002. p. 93–8.Google Scholar
Zhang J, Yang Y, Wang H, Mahmood A, Huang F. Semi-Supervised Clustering Ensemble Based on Collaborative Training. In: Nguyen L, Wang G, Grzymala-Busse J, Janicki R, Hassanien A, Yu H, editors. Rough Sets and Knowledge Technology, ser Lecture Notes in Computer Science. 7414th ed. Berlin Heidelberg: Springer; 2012. p. 450–5.View ArticleGoogle Scholar
Yang L, Qi X, Xing F, Kurc T, Saltz J, Foran DJ. Parallel content-based sub-image retrieval using hierarchical searching. Bioinformatics. 2014;30(7):996–1002.View ArticlePubMedGoogle Scholar
Monti S, Tamayo P, Mesirov J, Golub T. Consensus clustering: a resampling-based method for class discovery and visualization of gene expression microarray data. Mach Learn. 2003;52(1):91–118.View ArticleGoogle Scholar
Vincent L. Morphological grayscale reconstruction in image analysis: applications and efficient algorithms. IEEE Trans Image Process. 1993;2(2):176–201.View ArticlePubMedGoogle Scholar
Körbes A, Vitor GB, Lotufo RA, Ferreira JV. Advances on Watershed Processing on GPU Architecture. In: Proceedings of the 10th International Conference on Mathematical Morphology and its Applications to Image and Signal Processing, vol. 2023072. Verbania-Intra: Springer; 2011. p. 260–71.View ArticleGoogle Scholar
Millstein T. Practical predicate dispatch. SIGPLAN Notices. 2004;39:345–464.View ArticleGoogle Scholar
Jablin TB, Prabhu P, Jablin JA, Johnson NP, Beard SR, August DI. Automatic CPU-GPU Communication Management and Optimization. In: Proceedings of the 32nd ACM SIGPLAN Conference on Programming Language Design and Implementation, vol. 1993516. San Jose: ACM; 2011. p. 142–51.View ArticleGoogle Scholar
Wang F, Kong J, Cooper L, Pan T, Kurc T, Chen W, et al. A data model and database for high-resolution pathology analytical image informatics. J Pathol Inform. 2011;2:32.View ArticlePubMedPubMed CentralGoogle Scholar
Wang F, Kong J, Gao J, Cooper LA, Kurc T, Zhou Z, et al. A high-performance spatial database based approach for pathology imaging algorithm evaluation. J Pathol Inform. 2013;4:5.View ArticlePubMedPubMed CentralGoogle Scholar
Hartigan J. Clustering Algorithms. Hoboken: Wiley; 1975.Google Scholar
Forgy EW. Cluster Analysis of Multivariate Data - Efficiency vs Interpretability of Classifications. Biometrics. 1965;21(3):768.Google Scholar
Lloyd SP. Least-Squares Quantization in Pcm. IEEE Trans Inf Theory. 1982;28(2):129–37.View ArticleGoogle Scholar
Parallel k-means data clustering package [http://users.eecs.northwestern.edu/~wkliao/Kmeans. Access date: Nov, 2015
Volkov V, Demmel JW. Benchmarking GPUs to tune dense linear albebra. International Conference for High Performance Computing, Networking, Storage and Analysis, Supercomputing. 2008;2008:499–509.Google Scholar
Tomov S, Dongarra J, Baboulin M. Towards dense linear algebra for hybrid GPU accelerated many core systems. Parallel Comput. 2010;36(5-6):232–40.View ArticleGoogle Scholar
Coronary stent as a tubular flow heater in magnetic resonance imaging
Stanislav Vrtnik1,2,
Magdalena Wencka3,
Andreja Jelen1,2,
Hae Jin Kim4 &
Janez Dolinšek1,2
Journal of Analytical Science and Technology volume 6, Article number: 1 (2015)
A coronary stent is an artificial metallic tube, inserted into a blocked coronary artery to keep it open. In magnetic resonance imaging (MRI), a stented person is irradiated by the radio-frequency electromagnetic pulses, which induce eddy currents in the stent that produce Joule (resistive) heating. The stent in the vessel is acting like a tubular flow heater that increases the temperature of the vessel wall and the blood that flows through it, representing a potential hazard for the stented patient.
Heating of a metallic coronary stent in MRI was studied theoretically and experimentally. An analytical theoretical model of the stent as a tubular flow heater, based on the thermodynamic law of heat conduction, was developed. The model enables to calculate the time-dependent stent's temperature during the MRI examination, the increase of the blood temperature passing through the stent and the distribution of the temperature in the vessel wall surrounding the stent. The model was tested experimentally by performing laboratory magnetic resonance heating experiments on a non-inserted stainless-steel coronary stent in the absence of blood flow through it. The model was then used to predict the temperature increase of the stainless-steel coronary stent embedded in a coronary artery in the presence of blood flow under realistic MRI conditions.
The increases of the stent's temperature and of the blood temperature were found to be minute, of the order of several tenths of a degree, because the blood flow efficiently cools the stent owing to the much larger heat capacity of the blood as compared to the heat capacity of the stent. However, should the stent in the vessel become partially re-occluded due to the restenosis problem, where the blood flow through the stent is reduced, the stent's temperature may become dangerously high.
In the normal situation of a fully open (unoccluded) stent, the increases of the stent temperature and of the blood temperature exiting the stent were found to be minute, less than 1°C, showing that the blood flow efficiently cools the stent. However, should the problem of restenosis occur, where the blood flow through the stent is reduced, there is a risk of hazardous heating.
In medicine, a stent is an artificial "tube" inserted into a natural passage/conduit in the body to prevent, or counteract, a disease-induced, localized flow constriction. The most widely known stent use is in the coronary arteries (Hubner 1998), by employing a bare-metal stent, a drug-eluting stent or occasionally a covered stent. Coronary stents are inserted during a percutaneous coronary intervention (PCI) procedure to keep the blocked coronary arteries open. Stents are also applied to the urinary tract (Yachia 1998), where ureteral stents are used to ensure patency of a ureter, which may be compromised, for example, by a kidney stone. Prostatic stents (Yachia 1998) are needed if a man is unable to urinate due to an enlarged prostate. Stents are also used in a variety of vessels aside from the coronary arteries and as a component of peripheral artery angioplasty.
A coronary stent is a tubular metal mesh, attached initially in its collapsed form onto the outside of a balloon catheter. In the angioplasty procedure, the physician threads the stent through the lesion in the vessel and expands the balloon, which deforms the stent to its expanded size, matching the undeformed vessel diameter. After removal of the deflated balloon, the framework of the stent remains in direct contact with the vessel wall (Figure 1), where it is overgrown by the endothelial tissue in the course of subsequent months. The coronary stent remains permanently inserted in the vessel, representing a metallic object firmly incorporated into the human body for life.
Schematic presentation of the coronary stent in the vessel.
Stents can be assembled from a range of metallic alloys, including stainless steel, nitinol (nickel–titanium) and cobalt–chromium. The presence of a metallic object in the body brings up several safety issues in Magnetic Resonance Imaging (MRI) diagnostics. The two most important are the influence of magnetic forces on the implanted stent (Ahmed and Shellock 2001; Jost and Kumar 1998; Scott and Pettigrew 1994; Shellock and Shellock 1999; Strohm et al. 1999; Kagetsu and Litt 1991; Woods 2007; Lopič et al. 2013) and the stent heating by the radio-frequency (rf) electromagnetic pulses (Shellock and Morisoli 1994; Nyenhuis et al. 2005; Shellock 2011). In MRI, a stented person is irradiated by the rf pulses, which induce eddy currents in the stent that produce Joule (resistive) heating of the stent and its surroundings. During the MRI, the stent in the vessel is acting much like a tubular flow heater that increases the temperature of the vessel wall and the blood that flows through it. Due to the possible heat-induced protein coagulation and the formation of blood clots, stent heating in MRI deserves careful attention. In this paper, we present an analytical theoretical model of the stent as a tubular flow heater in the MRI examination. The model is based on the thermodynamic law of heat conduction and enables us to calculate the time-dependent stent temperature during the MRI examination, the increase of the blood temperature passing through the stent and the distribution of the temperature in the vessel wall surrounding the stent. We have tested the model experimentally by performing laboratory magnetic resonance rf heating experiments on a non-inserted stainless-steel coronary stent in the absence of blood flow through it and good matching to the theory was found. The model was then used to predict the temperature increase of the stainless-steel coronary stent embedded in a coronary artery in the presence of blood flow through it under realistic MRI conditions. The results indicate that the increase of the stent's temperature and the blood temperature are minute, of the order of several tenths of a degree, because the blood flow efficiently cools the stent due to a much larger heat capacity of the blood as compared to the heat capacity of the stent. However, should the problem of restenosis occur, where the stent in the vessel becomes partially re-occluded and the flow reduces, there is a risk of hazardous heating and the stent's temperature may become dangerously high.
Stent description and characterization
Our experiments were performed on a commercial balloon-expandable coronary stent (HORUSS HDS 1625, International Biomedical Systems, Trieste). The stent was fabricated of a surgical stainless steel, having the nominal length of 16 mm, the nominal diameter of 2.5 mm and the nominal pressure for the expansion of 8 atm. A small part of the stent was cut away for the physical property determination of the material, whereas the large part (12 mm length) was used for the rf heating experiments. The photograph of the employed stent is shown in Figure 2a.
The shape and chemical composition of the employed stent. (a) A photograph of the stent in its collapsed form. (b) A SEM secondary-electron image of the stent's wire of 200–μm diameter. (c) Chemical composition (in weight %) of the stent obtained by EDS analysis, displayed as a histogram.
The stent's material was surgical stainless steel type 316LVM (where "L" stands for Low Carbon and "VM" denotes Vacuum Melted), using a wire of 200 μm diameter (Figure 2b). The 316LVM is an austenitic steel, which is widely used for implants because it never develops surface rust and shows superior resistance to constant salt water exposure. According to the international standards (ASM International Handbook Committee 1990), the elemental composition of 316LVM is up to 0.03 wt.% C, up to 2.0% Mn, 2.0 – 3.0% Mo, up to 1.0% Si, 16.0 – 18.0% Cr, 10.0 – 14.0% Ni, up to 0.16% N, and about 65% Fe as the majority element. We determined the particular chemical composition of our investigated stent by energy-dispersive X-ray spectroscopy (EDS) using a scanning electron microscope (SEM) Supra VP 35 Zeiss (Carl Zeiss AG, Oberkochen, Germany). The resulting chemical composition (in weight %) is displayed as a histogram in Figure 2c. The excessively high carbon concentration (2.94%) is an artifact; it originates from contamination of the SEM by hydrocarbons (a known problem in SEM microscopy). Apart from the carbon, all other elements are within the specifications for the 316LVM steel.
Since the induction of the eddy currents by the rf field is strong in metallic alloys of low electrical resistivity, we have measured the electrical resistivity ρ of the stent's wire. The direct current (dc) resistivity was determined in the temperature interval from 2 to 360 K by a standard four-contact method using a Quantum Design PPMS (Physical Property Measurement System). The dc resistivity is usually a good approximation to the frequency-dependent (ac) resistivity ρ(ω) up to microwave frequencies, so that it can be considered valid also in the radio-frequency range of the MRI experiments. The result is shown in Figure 3, where it is evident that ρ(T) exhibits a positive temperature coefficient with values in the range ρ ≈ 100 – 150 μΩcm, typical of moderately electrically conducting alloys. At the body temperature of 37°C, the resistivity amounts to \( \rho_{37^{\circ}\mathrm{C}} = 146\ \mu\Omega\,\mathrm{cm} \), which is low enough that a significant induction-heating effect may be expected under the rf-pulse irradiation.
Temperature-dependent electrical resistivity of the 316LVM stainless-steel stent material. At the body temperature of 37°C (marked by an arrow), the resistivity amounts to \( \rho_{37^{\circ}\mathrm{C}} = 146\ \mu\Omega\,\mathrm{cm} \).
To verify the analytical model of the stent as a tubular flow heater presented in this paper, the specific heat \( c_s \) of the stent must be known. We have determined \( c_s \) experimentally by using a Quantum Design PPMS. The graph of the temperature-dependent \( c_s \) in the interval from 2 to 370 K is presented in Figure 4, showing that at 37°C, \( c_s = 0.46\ \mathrm{J/gK} \).
Specific heat \( c_s \) of the stent in the temperature interval from 2 to 370 K. At the body temperature of 37°C (marked by an arrow), \( c_s = 0.46\ \mathrm{J/gK} \).
Rf heating experiments
Rf heating experiments of the stent were conducted in a standard 4.7 T vertical-bore NMR spectrometer operating at the proton resonance frequency \( \nu_0(^1\mathrm{H}) = 200\ \mathrm{MHz} \). An AMT 300 W rf power transmitter was used. To monitor the temperature of the stent, a platinum Pt100 resistor was glued to its outer surface by a thermally conducting varnish (GE/IMI 7031). The stent with the sensor was wrapped up into a Teflon foil of 1.0 mm total thickness. The output of the sensor was connected to a data logger (PT-104 PT 100 Converter, Pico Technology, Cambridgeshire, UK) that digitized the signal and enabled us to follow the time-dependent increase of the stent's temperature under the rf irradiation.
Analytical model of the stent as a tubular flow heater in MRI
We approximate the actual stent's geometry of a cylindrical metal mesh by a homogeneous thin cylinder of the length \( l \) and the radius \( R \) (Figure 5). The mass of the stent is \( m_s \) and its specific heat at constant pressure is \( c_s \). The stent is surrounded by the vessel wall of cylindrical shape with the wall thickness \( d \) and the thermal conductivity \( \lambda \). The stent's temperature is \( T_s \), whereas the temperature at the outer surface of the vessel is \( T_0 \) (taken roughly as the body temperature 37°C). The blood mass flow \( \varPhi_b = dm_b/dt \) (where \( m_b \) is the mass of the blood) enters the stent at the body temperature \( T_0 \) and exits at an elevated temperature \( T_b \). Due to the heart pulsing, the blood flow is pulsed, but we approximate it by a stationary flow \( \varPhi_b = \rho_b S_s \overline{v}_b \), where \( \rho_b \) is the blood density, \( S_s = \pi R^2 \) is the stent's cross section and \( \overline{v}_b \) is the average blood velocity through the stent. The specific heat at constant pressure of the blood is \( c_b \).
Schematic presentation of the stent as a tubular flow heater in the MRI examination. In the model, the cylindrical metal mesh geometry of the stent is approximated by a thin homogeneous cylinder of the radius \( R \) and length \( l \). The surrounding vessel wall of the thickness \( d \) is assumed to be cylindrical as well. A blood flow \( \varPhi_b \) is passing through the stent.
The rf electromagnetic pulses during the MRI examination supply the stent with the rf power \( P_{rf} \), which transforms into the heat \( dQ = P_{rf}\,dt \) via the Joule heating by the induced eddy currents. A part of this heat is released through the vessel wall. For a cylindrical vessel, the thermal power (heat flow) \( P_v \) through the wall amounts to (Halliday et al. 2005)
$$ {P}_v=\frac{\lambda 2\pi l}{ \ln \left(1+d/R\right)}\left({T}_s-{T}_0\right). $$
The blood flow through the stent takes away the thermal power
$$ {P}_b={\varPhi}_b{c}_b\left({T}_b-{T}_0\right). $$
The heat imbalance \( (P_{rf} - P_v - P_b)\,dt \) increases the stent's temperature by \( dT_s \),
$$ \left({P}_{rf}-{P}_v-{P}_b\right)\;dt={m}_s{c}_sd{T}_s. $$
Using Equations 1 and 2, we rewrite Equation 3 in the form
$$ {P}_{rf}-\frac{\lambda 2\pi l}{ \ln \left(1+d/R\right)}\left({T}_s-{T}_0\right)-{\varPhi}_b{c}_b\left({T}_b-{T}_0\right)={m}_s{c}_s\frac{d{T}_s}{dt}, $$
which contains two unknown variables, \( T_s(t) \) and \( T_b(t) \). To proceed, we assume that the time-dependence of the blood temperature \( T_b \) follows the time-dependence of the stent's temperature \( T_s \), though \( T_b \) may be lower than \( T_s \),
$$ {T}_b-{T}_0=\varepsilon \left({T}_s-{T}_0\right), $$
with 0 ≤ ε ≤ 1. Equation 4 is then cast into the form
$$ \frac{d{T}_s}{dt}+\alpha \left({T}_s-{T}_0\right)=\frac{P_{rf}}{m_s{c}_s}, $$
where
$$ \alpha =\frac{1}{m_s{c}_s}\left(\frac{\lambda 2\pi l}{ \ln \left(1+d/R\right)}+\varepsilon\;{\varPhi}_b{c}_b\right). $$
The solution of Equation 6 is
$$ {T}_s={T}_0+\varDelta T\left(1-{e}^{-\alpha\;t}\right), $$
where \( \Delta T = P_{rf}/(\alpha\, m_s c_s) \). The stent reaches the new steady-state temperature \( T_s(t \to \infty) = T_0 + \Delta T \) exponentially with the time constant \( \alpha^{-1} \) after the start of the rf pulsing in the MRI examination. Using the definition of \( \alpha \), the increase of the stent's temperature can also be written in the form
$$ \varDelta T=\frac{P_{rf}}{\frac{\lambda 2\pi l}{ \ln \left(1+d/R\right)}+\varepsilon {\varPhi}_b{c}_b}. $$
The time-dependent stent's temperature given by Equation 8 is shown in Figure 6a.
Theoretical stent's temperature in the MRI examination. (a) The time-dependent stent's temperature \( T_s(t) \) given by Equation 8. The start of the rf irradiation is marked by an arrow on the time axis. The time-dependent blood temperature \( T_b(t) \) exiting the stent has the same form, except that the steady-state temperature increase is reduced to \( \varepsilon \Delta T \) with \( 0 \le \varepsilon \le 1 \). (b) Radial distribution of the temperature \( T(r) \) within the vessel wall surrounding the stent (\( 0 \le r \le d \)) in the steady state, as given by Equation 11. At the contact surface to the stent, the vessel wall heats up to the stent temperature, \( T(r=0) = T_s \), whereas at the vessel's outer surface, the temperature drops to the body temperature, \( T(r=d) = T_0 \), via a \( \ln(1+r/R) \) radial dependence.
According to the above model, the blood temperature \( T_b(t) \) follows the stent's temperature \( T_s(t) \), as expressed by Equation 5. The efficiency of the heat transfer from the stent to the blood is given by the empirical factor \( \varepsilon \), which assumes a value between 1 and 0, depending on the details of the vessel and the blood flow through it. \( \varepsilon \) should be determined experimentally for a particular vessel and the type of inserted stent. The time-dependent blood temperature obeys the equation \( T_b(t) = T_0 + \varepsilon\,(T_s(t) - T_0) \), where \( T_s(t) \) is given by Equation 8. The \( T_s(t) \) curve shown in Figure 6a, scaled by a factor \( \varepsilon \), is thus valid for the blood temperature \( T_b(t) \) as well. After the steady state is reached during the irradiation, the increase of the blood temperature passing the stent is \( \varepsilon \Delta T \), with \( \Delta T \) given by Equation 9.
The radial distribution of the temperature \( T(r) \) within the vessel wall surrounding the stent (\( 0 \le r \le d \)) in the steady state is obtained by assuming a stationary radial heat flow through the wall, so that \( P_v \) as given by Equation 1 is constant at any radial distance \( r \) from the stent. The condition
$$ {P}_v=\frac{\lambda 2\pi l\left({T}_s-T(r)\right)}{ \ln \left(1+r/R\right)} = const. $$
yields the T(r) dependence
$$ T(r)={T}_s-\frac{P_v \ln \left(1+r/R\right)}{\lambda 2\pi l}, $$
which is shown in Figure 6b. At the contact surface to the stent, the vessel wall heats up to the stent temperature, \( T(r=0) = T_s \), whereas at the vessel's outer surface, the temperature drops to the body temperature, \( T(r=d) = T_0 \), via a \( \ln(1+r/R) \) radial dependence.
In the above analytical model of the stent as a tubular flow heater, some simplifications were used, mostly to ensure mathematical tractability of the calculation. Two of them deserve to be discussed in more detail. In the model, the vessel wall surrounding the stent is considered to be cylindrical of the thickness \( d \), where the temperature at the contact surface to the stent is assumed to equal the stent temperature \( T_s \), whereas at the outer surface it equals the body temperature \( T_0 \). The vessel wall thickness was not specified, but for the coronary arteries and the coronary stents it is reasonable to assume the inequality \( d < R \), where \( R \) is the radius of the stent. This approximation assumes a very powerful cooling process in the body and an excessive temperature gradient within the vessel wall. In fact, human temperature regulation is not powerful enough to justify this assumption. In reality, the temperature in the tissue surrounding the stent will drop to the body temperature over a distance of the order of several \( R \), thus considerably larger than the vessel wall thickness. In the case where the vessel and the tissue behind it have similar thermal conductivity \( \lambda \) values (a reasonable assumption), this discrepancy can be removed by simply taking a larger \( d \) value in Equations 1 to 9. Since \( d \) always appears in a logarithmic function of the form \( \ln(1+d/R) \), which is located in the denominator of the expression for \( \Delta T \) (Equation 9), a larger \( d \) will reduce the heat flow \( P_v \) through the vessel wall. Due to the logarithmic dependence on \( d \), the changes are relatively weak (e.g., for a thickness increase from \( d = R \) to \( d = 10R \), the function \( \ln(1+d/R) \) increases by a factor of 3.4 only). A smaller \( P_v \) will increase the stent's steady-state temperature rise \( \Delta T \), thus subjecting the inner side of the vessel to higher temperatures and also heating the blood flowing through the stent more strongly. The heating effect is thus increased under the assumption that the temperature of the tissue surrounding the stent drops to the "unperturbed" body temperature at a distance considerably larger than the stent's radius \( R \).
The second simplification is the assumption that the blood temperature follows the stent temperature, as expressed by Equation 5. This assumption has enabled the elimination of one unknown variable (the blood temperature \( T_b(t) \)) from the calculation and kept the model simple and analytically tractable. The assumption required the introduction of the empirical "heat-transfer efficiency factor" \( \varepsilon \) (with \( 0 \le \varepsilon \le 1 \)), describing the efficiency of the heat transfer from the stent to the blood. While it is intuitively plausible to relate the blood temperature to the stent temperature by Equation 5, the factor \( \varepsilon \) is not well defined, as the stent-to-blood heat transfer efficiency will depend on the details of the vessel, the type and geometry of the inserted stent and the blood flow through it. Weaker heat transfer (\( \varepsilon \to 0 \)) will increase the risk of high temperatures developed in the stent, as the stent is not giving up the heat to the blood. Good heat transfer can generally be expected for longer stents and lower blood velocity.
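To give a feeling for the magnitudes predicted by Equations 8 and 9, the sketch below evaluates the steady-state temperature increase ΔT, the time constant α⁻¹ and the blood temperature rise εΔT for one set of parameter values. Apart from the stent's specific heat \( c_s = 0.46\ \mathrm{J/gK} \) measured above and the nominal stent dimensions, every numerical value (tissue thermal conductivity, vessel wall thickness, blood properties, absorbed rf power, heat-transfer efficiency ε) is an assumption chosen only for illustration.

```python
import numpy as np

# Worked example of Equations 8-9. All parameter values below are illustrative
# assumptions, except c_s (measured) and the nominal stent dimensions.
lam   = 0.5          # W/(m K)  assumed thermal conductivity of the vessel wall
l     = 16e-3        # m        stent length (nominal value of the studied stent)
R     = 1.25e-3      # m        stent radius (nominal diameter 2.5 mm)
d     = 0.5e-3       # m        assumed vessel wall thickness
rho_b = 1060.0       # kg/m^3   assumed blood density
v_b   = 0.15         # m/s      assumed average blood velocity
c_b   = 3600.0       # J/(kg K) assumed blood specific heat
m_s   = 30e-6        # kg       assumed stent mass
c_s   = 460.0        # J/(kg K) measured specific heat of the 316LVM stent
P_rf  = 0.1          # W        assumed rf power absorbed by the stent
eps   = 0.5          # --       assumed stent-to-blood heat-transfer efficiency

Phi_b  = rho_b * np.pi * R**2 * v_b                  # blood mass flow through the stent
G_wall = lam * 2 * np.pi * l / np.log(1 + d / R)     # wall conductance, from Eq. 1
alpha  = (G_wall + eps * Phi_b * c_b) / (m_s * c_s)  # rate constant of Eq. 8
dT     = P_rf / (G_wall + eps * Phi_b * c_b)         # steady-state rise, Eq. 9

t   = np.linspace(0.0, 5.0 / alpha, 50)
T_s = 37.0 + dT * (1 - np.exp(-alpha * t))           # Eq. 8, stent temperature in deg C

print(f"time constant 1/alpha        = {1/alpha*1e3:.1f} ms")
print(f"steady-state stent rise dT   = {dT*1e3:.0f} mK")
print(f"steady-state blood rise e*dT = {eps*dT*1e3:.0f} mK")
print(f"T_s after {t[-1]*1e3:.0f} ms = {T_s[-1]:.3f} deg C")
```

With these assumed values the predicted rise is of the order of a few tens of millikelvin and the stent equilibrates within tens of milliseconds, consistent with the statement that the blood flow term dominates the cooling.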
Rf heating experiments on a coronary stent in the absence of blood flow
In our experimental study of the temperature increase of the stainless-steel coronary stent subjected to irradiation by the rf pulses, the stent with the attached Pt100 thermometer was placed into the rf coil of the NMR probe head and inserted into the magnet. A typical MRI rf pulse sequence contains one or more pulses that are usually shaped in the time domain, where a truncated sinc and a Gaussian shape are most commonly employed (Callaghan 1991). Instead of using shaped pulses, our pulse sequence was composed of a train of 50 rectangular pulses of τ = 20 μs length each (1 ms total pulse duration), separated by 5 μs and repeated with the repetition rate t 0 = 100 ms (Figure 7), yielding the duty cycle δ D = 0.01. A shaped rf pulse in a realistic MRI experiment has a similar duration and repetition rate. The power of the rf pulses was varied during the experiment and the temperature rise of the stent was monitored at different power levels. Here it is worth mentioning that the actual shape of the rf pulses is relatively unimportant for the heating effect; what matters is the average rf power delivered by the pulses to the stent.
The rf pulse sequence used in the stent heating experiment. A train of 50 rectangular rf pulses of τ = 20μs duration each, separated by 5 μs was continuously repeated with the repetition rate of t 0 = 100 ms.
The irradiation of the stent started by switching on the pulse sequence of Figure 7 at a given moment of time and then continuously repeating it at a selected transmitter rf output power level. This resulted in a rapid initial growth of the stent temperature, which saturated to a constant plateau after some time, once the balance between the incoming rf energy and the outgoing heat due to thermal conduction was achieved. In order to minimize heat losses by thermal conduction to the surrounding air, the rf probe head with the stent was placed into an Oxford continuous-flow cryostat CF 1200 (Oxford Instruments, Abingdon, Oxfordshire, UK), where the air could be evacuated down to a pressure of 0.2 bar. After a steady-state temperature was reached at a given rf power level, the transmitter power was increased and the stent's new steady-state temperature was recorded. In the following we present the stent's temperature increase as a function of the average rf power over the pulse sequence repetition time t 0, defined as \( \overline{P}=\left(1/{t}_0\right){\displaystyle {\int}_0^{t_0}P(t)\kern0.5em dt}={\delta}_D{P}_{tr}, \) where P tr is the transmitter power (e.g., \( \overline{P}\kern0.5em = 3\ \mathrm{W} \) for the full transmitter power P tr = 300 W and the duty cycle δ D = 0.01). The time-dependent stent's temperature under the rf irradiation at different average power levels \( \overline{P}\kern0.5em = 0.1,\ 0.3,\ 1,\ \mathrm{and}\ 3\ \mathrm{W} \) is shown in Figure 8a. The initial stent's temperature was 19.5°C. We observe that at each power level, the rapid initial increase of the temperature slows down with time and reaches a steady state in about 100 s. For the highest average rf power of \( \overline{P}\kern0.5em = 3\ \mathrm{W}, \) the stent reached an astonishingly high temperature of 52.5°C (an increase by as much as ΔT = 33°C from the initial temperature before the irradiation). The values of the steady-state temperature increase ΔT, as a function of \( \overline{P} \), are given in Table 1. The detailed shape of the stent's time-dependent temperature T(t) under irradiation by the rf pulses for the initial power level of \( \overline{P} = 0.1\ \mathrm{W} \) (enclosed in a dashed box in Figure 8a) is shown expanded in Figure 8b, where an exponential increase is obvious. We have also checked the increase of the stent's temperature in an air environment at the ambient pressure of 1 bar, where the stent's heat supplied by the rf irradiation is taken away more efficiently by thermal convection to the surrounding air. The highest temperature of the stent in the 1 bar air atmosphere for \( \overline{P}\kern0.5em = 3\ \mathrm{W} \) dropped to 48°C, as compared to 52.5°C in an identical experiment in the reduced 0.2 bar atmosphere, showing that thermal convection to the air takes away a considerable amount of heat from the stent.
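The average-power bookkeeping used above is easy to verify directly from the pulse-train parameters; the following minimal sketch (values taken from the text) reproduces the duty cycle and the quoted average power levels:

```python
# Pulse-train parameters from the text
n_pulses = 50       # rectangular pulses per train
tau      = 20e-6    # pulse length, s
t_0      = 100e-3   # repetition time, s

duty_cycle = n_pulses * tau / t_0              # delta_D
print(f"duty cycle delta_D = {duty_cycle:.3f}")  # -> 0.010

# Average power P_bar = delta_D * P_tr for several transmitter powers
for P_tr in (10, 30, 100, 300):                # W
    print(f"P_tr = {P_tr:3d} W  ->  P_bar = {duty_cycle * P_tr:.1f} W")
```

For the full transmitter power of 300 W this gives the quoted average of 3 W.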
Experimental stent's temperature under the rf irradiation. (a) The time-dependent stent's temperature under irradiation at different average power levels \( \overline{P}\kern0.5em = 0.1,\ 0.3,\ 1,\ \mathrm{and}\ 3\ \mathrm{W} \) (denoted on the graph). The initial stent temperature was 19.5°C. At each higher power level, the temperature reached a new steady-state value, ΔT above the initial temperature. The dashed box encloses the time-dependent temperature increase of the stent for the initial average power \( \overline{P}\kern0.5em = 0.1\ \mathrm{W} \) and is shown expanded in panel (b). The experimental data in panel (b) (thin black curve) were reproduced theoretically by the exponential function of Equation 8 (thick green curve) with the fit parameters given in the text. The irradiation by the rf pulses began at the point on the time axis marked by an arrow.
Table 1 The temperature increase ΔT of the stent as a function of the rf power \( \overline{P} \)
The above results indicate the possibility of hazardous heating of the stent in the MRI examination by as much as ΔT ≈ 30°C. It is, however, important to emphasize that this conclusion is based on experiments with no blood flow through the stent. We discuss in the following how this conclusion changes under blood flow conditions.
Comparison between the theory and experiment
The stent's temperature T s (t) in our rf heating experiment of Figure 8b was reproduced theoretically by Equation 8. An excellent fit (thick green curve in Figure 8b) was obtained using the parameters ΔT = 3.1°C and the time constant α − 1 = 45 s, confirming that T s reaches the new steady-state value exponentially in time. The experimental ΔT and α − 1 values were compared to the theoretical values, calculated from Equations 9 and 7. In the absence of the blood flow, ΔT = P rf /[λ2π l/ln(1 + d/R)]. The role of the vessel wall in the experiment was played by the Teflon jacket of thickness d = 1 mm and thermal conductivity (at 25°C) λ = 0.25 W/mK around the stent. Taking the employed \( {P}_{rf}=\overline{P}=0.1\ \mathrm{W}, \) and using the stent's geometrical parameters l = 12 mm and R = 1.25 mm, we obtain ΔT = 3.1°C, a value that matches the experimental one perfectly. The parameter α = [λ2π l/ln(1 + d/R)]/m s c s was calculated by using the stent mass m s = 21.7 mg and the specific heat value at 37°C c s = 0.46 J/gK, yielding α − 1 = 0.3 s. The theoretical time constant α − 1 is a factor of 150 smaller than the experimental one, so that the theory predicts a much faster increase of the stent temperature than observed experimentally. This discrepancy can be understood by noticing that in our experiment, a Pt100 sensor was rigidly attached to the stent and was heated up together with it. The thermal conductivity of the Pt100 ceramic housing is much lower than that of the metallic stent, so that the combined stent–Pt100 system reached the new steady-state temperature in a considerably longer time than it would be reached by the metallic stent alone.
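Both quoted numbers follow directly from the stated parameters; the sketch below is a minimal check, using the no-flow expression ΔT = P rf /[λ2π l/ln(1 + d/R)] and α = [λ2π l/ln(1 + d/R)]/m s c s given above (all values are those from the text):

```python
import math

# No-blood-flow experiment: the Teflon jacket plays the role of the vessel wall
P_rf = 0.1       # average rf power, W
lam  = 0.25      # thermal conductivity of Teflon, W/(m K)
l    = 12e-3     # stent length, m
R    = 1.25e-3   # stent radius, m
d    = 1.0e-3    # jacket thickness, m
m_s  = 21.7e-6   # stent mass, kg
c_s  = 460.0     # specific heat of the stent, J/(kg K)

G = lam * 2 * math.pi * l / math.log(1 + d / R)   # heat conductance, W/K

dT        = P_rf / G        # steady-state temperature increase, K
alpha_inv = m_s * c_s / G   # theoretical time constant, s

print(f"dT = {dT:.1f} K")              # -> about 3.1 K
print(f"1/alpha = {alpha_inv:.2f} s")  # -> about 0.3 s
```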
Application of the model to the coronary stent in the presence of blood flow
In order to predict the temperature increase of the stainless-steel coronary stent embedded in a coronary artery in vivo in a realistic MRI examination, we calculate the parameter ΔT for our investigated stent in the presence of the blood flow. We take the geometrical parameters of the as-fabricated stent l = 16 mm and R = 1.25 mm. For the coronary artery, we take the following order-of-magnitude estimates: the wall thickness d = 1 mm and the thermal conductivity λ = 1 W/mK (this value was estimated from the reported thermal conductivity of the human skin plus fat, which amounts to 0.73 W/mK at 36°C, and the thermal conductivity of muscles, which amounts to 1.91 W/mK at 36°C (Ducharme and Tikuisis 1991)). For the blood flow through the stent we take a typical volume flow through a coronary artery Φ bV = 2 ml/s (Spaan 1991). Since the blood density is ρ b = 1.06 g/cm3, this yields the blood mass flow Φ b = ρ b Φ bV ≈ 2 g/s. The specific heat of the blood is c b = 3.78 J/gK. For the stent-to-blood heat-transfer efficiency parameter we take an ad hoc value ε = 0.5. P rf is arbitrarily taken as 3 W (recall that at this rf power, the stent heated up by as much as ΔT = 33°C in the absence of the blood flow, as shown in Figure 8a).
Using the above parameter values, the increase of the stent's temperature was calculated from Equation 9 to be only ΔT = 0.8°C, whereas the blood temperature increases by εΔT = 0.4°C. This ΔT value is minute compared to the case where there is no blood flow through the stent. In the absence of the blood flow (setting Φ b = 0 in Equation 9), the increase of the stent's temperature would be considerable, ΔT = 17.6°C. The reason for the smallness of the ΔT value in the presence of the blood flow becomes evident by inspecting the denominator of Equation 9, which contains two terms. The first term λ2π l/ln(1 + d/R) = 0.17 W/K originates from the heat flow through the vessel wall, whereas the second term εΦ b c b = 3.78 W/K originates from the heat taken away by the blood flow. The second term is much larger than the first one, εΦ b c b /[λ2π l/ln(1 + d/R)] = 22, so that the blood flow efficiently cools the stent in the MRI examination, owing to the much larger heat capacity of the blood, C b = m b c b , as compared to the heat capacity of the stent, C s = m s c s (where the ratio of the specific heats is c b /c s = 8.2). The estimated increase of the stainless-steel coronary stent temperature by ΔT = 0.8°C during MRI in vivo is thus small enough to be considered harmless to the human body.
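The numbers above can be reproduced with a few lines; the sketch below uses Equation 9 in the form ΔT = P rf /[λ2π l/ln(1 + d/R) + εΦ b c b ], as inferred from the two denominator terms discussed in the text, with the parameter values listed in the previous paragraph:

```python
import math

# In vivo parameters quoted in the text
P_rf  = 3.0       # average rf power, W
lam   = 1.0       # vessel wall thermal conductivity, W/(m K)
l     = 16e-3     # stent length, m
R     = 1.25e-3   # stent radius, m
d     = 1.0e-3    # vessel wall thickness, m
eps   = 0.5       # stent-to-blood heat-transfer efficiency
Phi_b = 2.0e-3    # blood mass flow, kg/s
c_b   = 3780.0    # specific heat of blood, J/(kg K)

def delta_T(flow):
    """Steady-state stent temperature increase (Equation 9, as inferred above)."""
    wall  = lam * 2 * math.pi * l / math.log(1 + d / R)  # W/K, about 0.17
    blood = eps * flow * c_b                             # W/K, 3.78 at full flow
    return P_rf / (wall + blood)

print(f"with blood flow:    dT = {delta_T(Phi_b):.1f} K")  # about 0.8 K
print(f"without blood flow: dT = {delta_T(0.0):.1f} K")    # about 17.5 K (17.6 K in the text, within rounding)
```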
The problem of restenosis
The above result of a minute increase of the stent's temperature due to the efficient cooling by the blood flow applies to the normal situation, where the stent's cross section is large enough to enable the rated (unrestricted) blood flow through a coronary vessel, i.e., the stent is fully "open". However, the problem of restenosis is sometimes encountered after the stent insertion in the course of months or years, where the stent becomes partially re-occluded and the flow reduces. Consequently, the reduced flow is no longer capable of efficiently cooling the stent in the MRI examination of a patient with a partially re-occluded stent, and a risk of hazardous heating of the stent's surroundings may appear. Defining the blood flow reduction factor \( x=\left(1-{\varPhi}_b/{\varPhi}_b^0\right)\times 100 \) (in %) with \( {\varPhi}_b^0 \) denoting the unrestricted flow, we are able to predict from Equation 9 the temperature increase ΔT of the stent for an arbitrarily reduced blood flow. The x = 0% value corresponds to the fully open stent (no flow reduction), whereas x = 100% corresponds to the fully blocked stent (100% flow reduction). For the calculation we took the same stent and vessel parameters as before (recall that for these parameters, the model yielded ΔT = 0.8°C for the fully open stent and ΔT = 17.6°C for the fully blocked stent). The graph of the stent's temperature in the body, T s = 37°C + ΔT, as a function of the blood flow reduction, is shown in Figure 9. We observe that for flow reductions between 0 and 90%, the stent's temperature increase is relatively small (T s increases from 37.8°C at 0% reduction to 42.5°C at 90%), whereas T s increases drastically for high flow reductions between 90% and 100% (from 42.5°C at 90% to 54.6°C at 100% reduction). This behavior is again a consequence of the much larger heat capacity of the blood as compared to the stent, demonstrating that even a substantially diminished blood flow through the stent is still able to cool it efficiently. In contrast, dangerously high temperatures develop in the body for stent occlusions close to 100%.
The problem of restenosis in MRI. Theoretical stent's steady-state temperature in the body, T s = 37°C + ΔT, under the rf irradiation is presented as a function of the blood flow reduction due to restenosis, where ΔT was calculated from Equation 9 (see text). The blood flow reduction is defined as \( x=\left(1-{\varPhi}_b/{\varPhi}_b^0\right)\times 100 \) (in %) with \( {\varPhi}_b^0 \) denoting the unrestricted flow. The x = 0 % value corresponds to the fully open stent (no flow reduction), whereas x = 100 % corresponds to the fully blocked stent (100 % flow reduction).
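The curve of Figure 9 can be regenerated from Equation 9 by scaling the blood mass flow with the reduction factor x; the sketch below (same parameter values and the same inferred form of Equation 9 as above) reproduces the quoted endpoints to within rounding:

```python
import math

# Same stent, vessel and blood parameters as in the fully open case
P_rf, lam, l, R, d = 3.0, 1.0, 16e-3, 1.25e-3, 1.0e-3
eps, c_b           = 0.5, 3780.0
Phi_b0             = 2.0e-3   # unrestricted blood mass flow, kg/s

wall = lam * 2 * math.pi * l / math.log(1 + d / R)   # W/K

for x in (0, 50, 90, 95, 99, 100):                   # flow reduction in %
    Phi_b = Phi_b0 * (1 - x / 100)
    dT    = P_rf / (wall + eps * Phi_b * c_b)
    print(f"x = {x:3d} %  ->  T_s = {37 + dT:.1f} degC")
# x = 0 % gives about 37.8 degC, x = 90 % about 42.5 degC,
# and x = 100 % about 54.5-54.6 degC, as in Figure 9.
```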
In the MRI examination, the patient is irradiated by the rf pulses typically for about 20 – 30 minutes, which represents the time during which the stent "heater" is switched on. The long heater switch-on time, combined with temperatures high enough for heat-induced protein coagulation and the formation of blood clots, represents a risk of hazardous heating in the stent's surroundings during the MRI of a stented patient with a high stent occlusion between 90% and 100%. Restenosis thus represents a potentially hazardous heating problem in MRI.
We have investigated the heating of a metallic coronary stent in an MRI examination. The experimental results presented in this paper are valid for the particular geometrical parameters (length and diameter) of the employed stent, the 316LVM stainless-steel material, the proton resonance frequency of 200 MHz (corresponding to the 4.7 T magnetic field) and average rf irradiation powers up to 3 W. For the highest average rf power of 3 W employed in our experiments on the stent in the absence of blood flow through it, the stent heated up by as much as ΔT = 33°C from the initial temperature before the irradiation. The experimental and theoretical methodologies applied in this study are, however, suitable to investigate other types of stents with different geometries, fabricated from different metallic alloys (e.g., nickel–titanium and cobalt–chromium), and other rf irradiation strengths.
The metallic coronary stent acting as a tubular flow heater in the MRI examination was also modeled theoretically, by considering that the stent receives energy from the rf electromagnetic field and heats up the surrounding vessel and the blood flowing through it. The analytical model successfully reproduced the exponential increase of the stent's temperature during our rf heating experiments in the absence of blood flow through the stent. The model was then used to predict the increase of the temperature of the stainless-steel coronary stent embedded in a coronary artery in the presence of blood flow through it, mimicking a realistic in vivo MRI examination. In the normal situation of a fully open (unoccluded) stent, the increase of the stent's temperature as well as the increase of the temperature of the blood exiting the stent were found to be minute, less than 1°C, so that the blood flow efficiently cools the stent. This is a consequence of the much larger heat capacity of the blood as compared to the heat capacity of the stent. However, should the problem of restenosis occur with time after the stent insertion, where the stent in the vessel becomes partially re-occluded and the flow reduces, there is a risk of hazardous heating. The temperature of the occluded stent may become high enough to enable protein coagulation and the formation of blood clots in the stent's surroundings.
Ahmed S, Shellock FG (2001) Magnetic resonance imaging safety: implications for cardiovascular patients. J Cardiovasc Magn Reson 3:171–182
ASM International Handbook Committee (1990) Metals handbook, vol 1, 10th edn. ASM International Handbook Committee, Ohio Park
Callaghan PT (1991) Principles of nuclear magnetic resonance microscopy. Clarendon Press, Oxford, p 100
Ducharme MB, Tikuisis P (1991) In vivo thermal conductivity of the human forearm tissues. J Appl Physiol 70:2682–2690
Halliday D, Resnick R, Walker J (2005) Fundamentals of Physics, 7th edn. John Wiley & Sons, New York, p 493
Hubner PJB (1998) Guide to coronary angioplasty and stenting. Harwood Academic Publishers, Amsterdam
Jost C, Kumar V (1998) Are current cardiovascular stents MRI safe? J Invas Cardiol 10:477–479
Kagetsu ND, Litt AW (1991) Important considerations in measurement of attractive forces on metallic implants in MR imaging. Radiology 179:505–508
Lopič N, Jelen A, Vrtnik S, Jagličić Z, Wencka M, Starc R, Blinc A, Dolinšek J (2013) Quantitative determination of magnetic force on a coronary stent in MRI. J Magn Reson Imaging 37:391–397
Nyenhuis JA, Park SM, Kamondetdacha R, Amjad A, Shellock FG, Rezai A (2005) MRI and implanted medical devices: basic interactions with an emphasis on heating. IEEE Trans Device Mat Rel 5:467–478
Scott NA, Pettigrew RI (1994) Absence of movement of coronary stents after placement in a magnetic resonance imaging field. Am J Cardiol 73:900–901
Shellock FG (2011) Reference manual for magnetic resonance safety, implants and devices. Biomedical Research Publishing Group, Los Angeles, pp 246–251
Shellock FG, Morisoli SM (1994) Ex vivo evaluation of ferromagnetism, heating, and artifacts for heart valve prostheses exposed to a 1.5 Tesla MR system. J Magn Reson Imaging 4:756–758
Shellock FG, Shellock VJ (1999) Metallic stents: evaluation of MR imaging safety. Am J Roentgenol 173:543–547
Spaan JAE (1991) Coronary blood flow: mechanics, distribution and control. Kluwer Academic Publishers, Dordrecht
Strohm O, Kivelitz D, Gross W, Schulz-Menger J, Liu X, Hamm B, Dietz D, Friedrich MG (1999) Safety of implantable coronary stents during 1H-magnetic resonance imaging at 1.0 and 1.5 T. J Cardiovasc Magn Reson 1:239–245
Woods TO (2007) Standards for medical devices in MRI: present and future. J Magn Reson Imaging 26:1186–1189
Yachia D (1998) Stenting the urinary system. Isis Medical Media, Oxford
Jožef Stefan Institute, Jamova 39, SI-1000, Ljubljana, Slovenia
Stanislav Vrtnik, Andreja Jelen & Janez Dolinšek
Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI-1000, Ljubljana, Slovenia
Institute of Molecular Physics, Polish Academy of Sciences, Smoluchowskiego 17, PL-60-179, Poznań, Poland
Magdalena Wencka
Division of Materials Science, Korea Basic Science Institute, Daejeon, 305-333, Republic of Korea
Hae Jin Kim
Stanislav Vrtnik
Andreja Jelen
Janez Dolinšek
Correspondence to Janez Dolinšek.
SV carried out the stent heating experiments. MW carried out the physical-property measurements of the stent's material. AJ carried out the scanning electron microscopy experiments and the EDS compositional analysis. HJK participated in the application of the theoretical model to the experimental data. JD conceived of the study, developed the analytical model of a stent as a tubular flow heater, and performed the coordination. All authors read and approved the final manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
Vrtnik, S., Wencka, M., Jelen, A. et al. Coronary stent as a tubular flow heater in magnetic resonance imaging. J Anal Sci Technol 6, 1 (2015). https://doi.org/10.1186/s40543-014-0041-2
Coronary stent
Radiofrequency field heating effect
Modeling biomedical systems
MRI safety
Modifying the definition of action
Why can't magic and physics coexist peacefully? After reading a lot of physics and math, I think the problem may come from the principle of least action. So I edited the definition of action in order to make magic and physics coexist peacefully.
https://en.wikipedia.org/wiki/Principle_of_least_action
Definition of action in my world: $$ S=\int_{t_1}^{t_2} L(\mathbf{q,\dot{q},t})\;dt + \frac{\int_{t_1}^{t_2} M(\mathbf{r-q,\dot{r}-\dot{q},t})\;dt}{1+|\int_{t_1}^{t_2} L(\mathbf{q,\dot{q},t})\;dt|} $$ The first term is the definition of action in the real world. The second term is my addition: $$ \int_{t_1}^{t_2} M(\mathbf{r-q,\dot{r}-\dot{q},t})\;dt $$ It is the total magical power needed to change a body's trajectory.
$\mathbf{r}$ is the new position (after the magic influence).
$\dot{\mathbf{r}}$ is the new velocity (after the magic influence). $$ 1+|\int_{t_1}^{t_2} L(\mathbf{q,\dot{q},t})\;dt| $$ is used to suppress the influence of magic in daily life.
The "1+" is used to prevent values between zero and one from appearing in the denominator.
The absolute value is used to prevent negative values.
Finally, the magical power needs to have the unit $J^2 s$ as well. lol
Is there any contradiction (physics or math) caused by this edited definition of action? (I know that many equations will change, such as F=ma, but it is ok if there is no contradiction.)
Response to Comments:
I didn't plan to define "magical power" using some well-known physics concept, such as energy. I want to make "magical power" into something new, so its unit $ J^2 s$ is the only explanation. :p
Magic in my world also affects quantum, but the calculation of quantum field theory is difficult, I need to spend more time on it.
magic physics mathematics
fairytale
$\begingroup$ I think a great deal more context is needed here? Is magic simply a form of energy? Does it only affect newtonian physics? $\endgroup$
– knowads
For one thing, action has units, so you can't add $1$ to it as you do in the denominator. But that's easily worked around by replacing the $1$ with a constant $h_M$.
From there, let me simplify your expression by writing the modified action as $$S = S^{(L)} + \frac{S^{(M)}}{h_M + \lvert S^{(L)}\rvert}$$ According to the stationary action principle, you need the variation $\delta S = 0$ for physical paths. Variation basically follows the same rules as derivatives, so $$\delta S = \delta S^{(L)} + \frac{\delta S^{(M)}}{h_M + \lvert S^{(L)}\rvert} \pm \frac{S^{(M)}}{(h_M + \lvert S^{(L)}\rvert)^2}\delta S^{(L)}$$ The $\pm$ sign, resulting from your use of the absolute value, is already a clue that this is going to get fairly complicated.
I won't bother to explain the sign choice in detail, but it does bring up the related issue of a sort of "gauge dependence" of your action: the fact that adding a constant to the action changes the equations of motion. In standard Lagrangian mechanics, you can add any value representable as $f(t_2) - f(t_1)$, for some differentiable function $f$, to the action with no effect on the underlying physics. This property is essential in deriving the Euler-Lagrange equations.
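For reference, the standard argument (textbook Lagrangian mechanics, not specific to the modified action) is that under $$ L \;\to\; L + \frac{d}{dt}f(q,t), \qquad S \;\to\; S + f(q(t_2),t_2) - f(q(t_1),t_1), $$ and since the endpoints $q(t_1)$ and $q(t_2)$ are held fixed in the variation, $\delta S$ is unchanged and the Euler-Lagrange equations are untouched. In the modified action, the same shift also enters $\lvert S^{(L)}\rvert$ in the denominator, which is exactly where that invariance is lost.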
In your case, however, significant changes could result from adding that kind of term. That sign choice, for instance, depends on whether $S^{(L)}$ is positive or negative. So let's do the following:
Take some path $\mathcal{P}$ such that $S^{(L)}[\mathcal{P}] < 0$.
Define $s = \lvert S^{(L)}[\mathcal{P}]\rvert$.
Define $$f(t) = \frac{2st}{t_2^{[\mathcal{P}]} - t_1^{[\mathcal{P}]}}$$ and add $f(t_2) - f(t_1) = 2s$ to the action, which changes its value from negative to positive.
Now you've flipped the sign of the action, and also flipped the sign in the variation formula, which means the quantity that needs to be set to zero is different.
What this means is that the invariance to total derivatives which is present in standard Lagrangian mechanics does not exist in your framework. By itself, that isn't a dealbreaker; the magic action still presumably has stationary paths. But good luck finding them without being able to use variational calculus on the general action. You won't have the Euler-Lagrange equations to work with, and unless you get really lucky, there won't be any general way to find the paths that set the variation to zero. This makes it a very frustrating (at best) and possibly practically useless framework.
And of course, this doesn't rule out the possibility of contradictions. It just makes them harder to identify, along with everything else.
David Z
$\begingroup$ That means a better modification of action is $S=S^{(L)}+S^{(M)}+f(t)$ ? $\endgroup$
– fairytale
$\begingroup$ There would be no point in doing that, because then you're just back to the regular action principle. Or, it's like splitting the action into two parts that you label $S^{(L)}$ and $S^{(M)}$. (You can't have $f(t)$ because $t$ is not a parameter to the action.) $\endgroup$
– David Z
$\begingroup$ Then what is the better choice? $S=S^{(L)}+S^{(M)}+2s$ ? Sorry I get confused again. lol $\endgroup$
$\begingroup$ The point is you can't really change the way the stationary action principle works without making it exceedingly difficult (perhaps impossible) to work with. You can, however, add terms to the action to change the physics. For instance, $S = S^{(0)} + S^{(M)}$, where $S^{(0)}$ is the regular action of Newtonian mechanics and $S^{(M)}$ is your magical addition. This is equivalent to making $L^{(0)} + M$ your Lagrangian. $\endgroup$
$\begingroup$ Can the least action principle apply to systems that have non-conservative forces? $\endgroup$
Cosine angle sum identity
$(1).\,\,$ $\cos{(a+b)}$ $\,=\,$ $\cos{a}\cos{b}$ $-$ $\sin{a}\sin{b}$
$(2).\,\,$ $\cos{(x+y)}$ $\,=\,$ $\cos{x}\cos{y}$ $-$ $\sin{x}\sin{y}$
Let $a$ and $b$ be two variables that denote two angles. The sum of the two angles is written as $a+b$, which is a compound angle. The cosine of the compound angle $a$ plus $b$ is expressed as $\cos{(a+b)}$ in trigonometry.
The cosine of sum of angles $a$ and $b$ is equal to the subtraction of the product of sines of both angles $a$ and $b$ from the product of cosines of angles $a$ and $b$.
$\cos{(a+b)}$ $\,=\,$ $\cos{a} \times \cos{b}$ $-$ $\sin{a} \times \sin{b}$
This mathematical equation is called the cosine angle sum trigonometric identity in mathematics.
The cosine angle sum identity is used in two different ways in trigonometry.
The cosine of sum of two angles is expanded as the subtraction of the product of sines of angles from the product of cosines of angles.
$\implies$ $\cos{(a+b)}$ $\,=\,$ $\cos{(a)}\cos{(b)}$ $-$ $\sin{(a)}\sin{(b)}$
The subtraction of the product of sines of angles from the product of cosines of angles is simplified as the cosine of sum of two angles.
$\implies$ $\cos{(a)}\cos{(b)}$ $-$ $\sin{(a)}\sin{(b)}$ $\,=\,$ $\cos{(a+b)}$
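As a quick check of the identity with particular angles (chosen here only for illustration), take $a = 60^\circ$ and $b = 30^\circ$. Then $\cos{(a+b)}$ $\,=\,$ $\cos{90^\circ}$ $\,=\,$ $0$, and the right-hand side gives $\cos{60^\circ}\cos{30^\circ}$ $-$ $\sin{60^\circ}\sin{30^\circ}$ $\,=\,$ $\dfrac{1}{2} \times \dfrac{\sqrt{3}}{2}$ $-$ $\dfrac{\sqrt{3}}{2} \times \dfrac{1}{2}$ $\,=\,$ $0$, in agreement with the identity.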
The angle sum identity for the cosine function can be expressed in several forms, but the following are the most commonly used.
$(3).\,\,$ $\cos{(\alpha+\beta)}$ $\,=\,$ $\cos{\alpha}\cos{\beta}$ $-$ $\sin{\alpha}\sin{\beta}$
Learn how to derive the cosine of angle sum trigonometric identity by a geometric method in trigonometry.
Leucine alters immunoglobulin A secretion and inflammatory cytokine expression induced by lipopolysaccharide via the nuclear factor-κB pathway in intestine of chicken embryos
S. Q. Liu, L. Y. Wang, G. H. Liu, D. Z. Tang, X. X. Fan, J. P. Zhao, H. C. Jiao, X. J. Wang, S. H. Sun, H. Lin
Journal: animal / Volume 12 / Issue 9 / September 2018
Published online by Cambridge University Press: 22 December 2017, pp. 1903-1911
The mammalian target of rapamycin (mTOR) has been shown to be involved in lipopolysaccharide (LPS)-induced immune responses in many mammalian cells. Here, we suggest that the mTOR pathway is involved in the intestinal inflammatory responses evoked by LPS treatment in chicken embryos. The intestinal tissue from specific-pathogen-free chick embryos was cultured in the presence of LPS for 2 h. Secretory immunoglobulin A (sIgA) concentrations, messenger RNA (mRNA) expression of cytokines, and protein levels of nuclear factor-κB (NF-κB), mitogen-activated protein kinase (MAPK), mTOR and p70 ribosomal S6 kinase (p70S6K) were determined. The results showed that LPS treatment increased sIgA concentrations in a dose-dependent manner. The mRNA levels of interleukin (IL)-6, IL-8, IL-10, tumor necrosis factor-α and Toll-like receptor (TLR) 4 were upregulated by LPS treatment (P<0.05). Lipopolysaccharide increased the phosphorylation of Jun N-terminal kinase (JNK), p38 MAPK and NF-κB (P<0.05) while decreasing the phosphorylation level of mTOR (P<0.05). Supplementation of leucine at doses of 10, 20 and 40 mM dose-dependently decreased sIgA production. Leucine supplementation at 40 mM restored the phosphorylation levels of mTOR and p70S6K while suppressing the phosphorylation level of NF-κB (P<0.05) and partially down-regulating the phosphorylation of p38 MAPK and JNK. The transcription of IL-6 was significantly decreased by leucine supplementation. These results suggested that leucine could alleviate LPS-induced inflammatory responses by down-regulating the NF-κB signaling pathway and evoking the mTOR/p70S6K signaling pathway, which may be involved in the regulation of the intestinal immune system in chicken embryos.
Risperidone-induced topological alterations of anatomical brain network in first-episode drug-naive schizophrenia patients: a longitudinal diffusion tensor imaging study
M. Hu, X. Zong, J. Zheng, J. J. Mann, Z. Li, S. P. Pantazatos, Y. Li, Y. Liao, Y. He, J. Zhou, D. Sang, H. Zhao, J. Tang, H. Chen, L. Lv, X. Chen
Journal: Psychological Medicine / Volume 46 / Issue 12 / September 2016
Published online by Cambridge University Press: 24 June 2016, pp. 2549-2560
It remains unclear whether the topological deficits of the white matter network documented in cross-sectional studies of chronic schizophrenia patients are due to chronic illness or to other factors such as antipsychotic treatment effects. To answer this question, we evaluated the white matter network in medication-naive first-episode schizophrenia patients (FESP) before and after a course of treatment.
We performed a longitudinal diffusion tensor imaging study in 42 drug-naive FESP at baseline and then after 8 weeks of risperidone monotherapy, and compared them with 38 healthy volunteers. Graph theory was utilized to calculate the topological characteristics of brain anatomical network. Patients' clinical state was evaluated using the Positive and Negative Syndrome Scale (PANSS) before and after treatment.
Pretreatment, patients had relatively intact overall topological organizations, and deficient nodal topological properties primarily in prefrontal gyrus and limbic system components such as the bilateral anterior and posterior cingulate. Treatment with risperidone normalized topological parameters in the limbic system, and the enhancement positively correlated with the reduction in PANSS-positive symptoms. Prefrontal topological impairments persisted following treatment and negative symptoms did not improve.
During the early phase of antipsychotic medication treatment there are region-specific alterations in white matter topological measures. Limbic white matter topological dysfunction improves with positive symptom reduction. Prefrontal deficits and negative symptoms are unresponsive to medication intervention, and prefrontal deficits are potential trait biomarkers and targets for negative symptom treatment development.
Allelic variations in the soluble starch synthase II gene family result in changes of grain quality and starch properties in rice (Oryza sativa L.)
X. Y. FAN, M. GUO, R. D. LI, Y. H. YANG, M. LIU, Q. ZHU, S. Z. TANG, M. H. GU, R. G. XU, C. J. YAN
Journal: The Journal of Agricultural Science / Volume 155 / Issue 1 / January 2017
Soluble starch synthase II (SSII) plays an important role in the biosynthesis of starch and in rice it consists of three isoforms encoded by SSII-1, SSII-2 and SSII-3. However, the genetic effects of various SSII alleles on grain quality have not been systematically characterized. In the present study, the japonica alleles on the SSII-1, SSII-2 and SSII-3 (SSIIa) loci from a japonica cultivar, Suyunuo, were respectively introgressed by molecular marker-assisted selection into a typical indica cultivar, Guichao2, through successive backcrossing, generating three sets of near-isogenic lines (NILs). Grain quality and starch property analysis showed that NIL-SSII-3j exhibited significant decreases in the following parameters: amylose content, average granule size, and setback viscosity and consistency; but increases in peak viscosity, hot paste viscosity, gelatinization temperature and relative crystallinity. Moreover, the proportion of short amylopectin chains and the branching degree also increased when compared with those of NIL-SSII-3i (Guichao2). Similar effects were observed in NIL-SSII-1j, and certain alterations in the fine structure of starch (granule size) were revealed. However, NIL-SSII-2j did not exert a significant effect on grain quality and starch properties. In brief, among the SSII gene family, functional diversity occurred on SSII-1 and SSII-3, and not on SSII-2. Therefore, it appears that more attention should be directed to the SSII-1 and SSII-3 loci for improving the eating and cooking quality of rice.
Two-year follow-up of a Chinese sample at clinical high risk for psychosis: timeline of symptoms, help-seeking and conversion
T. H. Zhang, H. J. Li, K. A. Woodberry, L. H. Xu, Y. Y. Tang, Q. Guo, H. R. Cui, X. H. Liu, A. Chow, C. B. Li, K. D. Jiang, Z. P. Xiao, L. J. Seidman, J. J. Wang
Journal: Epidemiology and Psychiatric Sciences / Volume 26 / Issue 3 / June 2017
Background.
Chinese psychiatrists have gradually started to focus on those who are deemed to be at 'clinical high-risk (CHR)' for psychosis; however, it is still unknown how often those individuals identified as CHR from a different country background than previously studied would transition to psychosis. The objectives of this study are to examine baseline characteristics and the timing of symptom onset, help-seeking, or transition to psychosis over a 2-year period in China.
The presence of CHR was determined with the Structured Interview for Prodromal Syndromes (SIPS) at the participants' first visit to the mental health services. A total of 86 (of 117) CHR participants completed the clinical follow-up of at least 2 years (73.5%). Conversion was determined using the criteria of presence of psychotic symptoms (in SIPS). Analyses examined baseline demographic and clinical predictors of psychosis and trajectory of symptoms over time. Survival analysis (Kaplan–Meier) methods along with Log-rank tests were performed to illustrate the relationship of baseline data to either conversion or non-conversion over time. Cox regression was performed to identify baseline predictors of conversion by the 2-year follow-up.
In total 25 (29.1%) of 86 completers transitioned to a psychotic disorder over the course of follow-up. Among the CHR sample, the mean time between attenuated symptom onset and professional help-seeking was about 4 months on average, and converters developed fully psychotic symptoms about 12 months after symptom onset. Compared with those CHR participants whose risk syndromes remitted over the course of the study, converters had significantly longer delays (p = 0.029) for their first visit to a professional in search of help. At baseline assessment, the conversion subgroup was younger, had poorer functioning, higher total SIPS positive symptom scores, longer duration of untreated prodromal symptoms, and were more often given psychosis-related diagnoses and subsequently prescribed antipsychotics in the clinic.
Conclusions.
Chinese CHR identified primarily by a novel clinical screening approach had a 2-year transition rate comparable with those of specialised help-seeking samples world-wide. Early clinical intervention with this functionally deteriorating clinical population who are suffering from attenuated psychotic symptoms, is a next step in applying the CHR construct in China.
Characterization of Trapped Charge in Ge/LixGe Core/Shell Structure during Lithiation using Off-axis Electron Holography
Z. Gan, M. Gu, J. Tang, C. Y. Wang, K. L. Wang, C. M. Wang, D. J. Smith, M. R. McCartney
Journal: Microscopy and Microanalysis / Volume 21 / Issue S3 / August 2015
Published online by Cambridge University Press: 23 September 2015, pp. 1397-1398
Demonstration of laser pulse amplification by stimulated Brillouin scattering
E. Guillaume, K. Humphrey, H. Nakamura, R. M. G. M. Trines, R. Heathcote, M. Galimberti, Y. Amano, D. Doria, G. Hicks, E. Higson, S. Kar, G. Sarri, M. Skramic, J. Swain, K. Tang, J. Weston, P. Zak, E. P. Alves, R. A. Fonseca, F. Fiúza, H. Habara, K. A. Tanaka, R. Bingham, M. Borghesi, Z. Najmudin, L. O. Silva, P. A. Norreys
Journal: High Power Laser Science and Engineering / Volume 2 / 01 July 2014
Published online by Cambridge University Press: 25 September 2014, e33
Print publication: 01 July 2014
The energy transfer by stimulated Brillouin backscatter from a long pump pulse (15 ps) to a short seed pulse (1 ps) has been investigated in a proof-of-principle demonstration experiment. The two pulses were both amplified in different beamlines of a Nd:glass laser system, had a central wavelength of 1054 nm and a spectral bandwidth of 2 nm, and crossed each other in an underdense plasma in a counter-propagating geometry, off-set by $10^\circ$. It is shown that the energy transfer and the wavelength of the generated Brillouin peak depend on the plasma density, the intensity of the laser pulses, and the competition between two-plasmon decay and stimulated Raman scatter instabilities. The highest obtained energy transfer from pump to probe pulse is 2.5%, at a plasma density of $0.17 n_{cr}$, and this energy transfer increases significantly with plasma density. Therefore, our results suggest that much higher efficiencies can be obtained when higher densities (above $0.25 n_{cr}$) are used.
Optically Active Nanoparticle Coated Polystyrene Spheres
Brandy Kinkead, Abdiwali A. Ali, John-C. Boyer, Byron D. Gates
Published online by Cambridge University Press: 13 May 2013, mrss13-1546-l06-26
Nanoparticles (NPs) with either plasmonic or upconverting properties have been selectively coated onto the surfaces of polystyrene (PS) spheres, imparting their optical properties to the PS colloids. These NP coated PS spheres have many potential applications, such as in medicine as drug-delivery systems or diagnostic tools. To prepare the NP coated PS spheres, gold or core-shell NaYF4Tm0.5Yb30/NaYF4 NPs were synthesized and separately combined with amino-functionalized PS spheres. The mechanism by which the NPs adhered to the PS spheres is attributed to interactions of the NP and a polyvinylpyrrolidone additive with the surfaces of the PS spheres. Two-photon fluorescence microscopy and SERS analysis demonstrate the potential applications of these NP coated PS spheres.
Deriving fractional rate of degradation of logistic-exponential (LE) model to evaluate early in vitro fermentation
M. Wang, X. Z. Sun, S. X. Tang, Z. L. Tan, D. Pacheco
Journal: animal / Volume 7 / Issue 6 / June 2013
Water-soluble components of feedstuffs are mainly utilized during the early phase of microbial fermentation, which could be deemed an important determinant of gas production behavior in vitro. Many studies proposed that the fractional rate of degradation (FRD) estimated by fitting gas production curves to mathematical models might be used to characterize the early incubation for in vitro systems. In this study, the mathematical concept of FRD was developed on the basis of the Logistic-Exponential (LE) model, with initial gas volume being zero (LE0). The FRD of the LE0 model exhibits a continuous increase from the initial value (FRD0) toward the final asymptotic value (FRDF) with longer incubation time. The relationships between the FRD and gas production at incubation times 2, 4, 6, 8, 12 and 24 h were compared for four models: in addition to LE0, the Generalization of the Mitscherlich (GM), the cth order Michaelis–Menten (MM) and the Exponential with a discrete LAG (EXPLAG) models. A total of 94 in vitro gas curves from four subsets with a wide range of feedstuffs from different laboratories and incubation periods were used for model testing. Results indicated that compared with the GM, MM and EXPLAG models, the FRD of the LE0 model consistently had stronger correlations with gas production across the four subsets, especially at incubation times 2, 4, 6, 8 and 12 h. Thus, the LE0 model was deemed to provide a better representation of the early fermentation rates. Furthermore, the FRD0 also exhibited strong correlations (P < 0.05) with gas production at early incubation times 2, 4, 6 and 8 h across all four subsets. In summary, the FRD of the LE0 model provides an alternative way to quantify the rate of the early stage of incubation, and its initial value could be an important starting parameter for the rate.
Optical Limiting in Fullerene Materials
B. Z. Tang, H. Peng, S. M. Leung, N.-T. Yu, H. Hiraoka, W. Fok
Published online by Cambridge University Press: 03 September 2012, 69
Fullerene chemistry is booming, but how chemical reactions affect fullerene's materials properties has seldom been studied. We have investigated the optical limiting behavior of a series of fullerene derivatives, polymers, and glasses and have observed the following structure-property relationships for optical limiting in the fullerene materials: (i) The fullerene polymers with aromatic and chlorine moieties, i.e., C60-containing polycarbonate (C60-PC), polystyrene (C60-PS), and poly(vinyl chloride) (C60-PVC), limit the 8-ns pulses of 532-nm laser light more effectively than does the parent C60; (ii) the fullerene polymers with carbonyl groups, i.e., C60-containing CR-39 (C60-CR-39) and poly(methyl methacrylate) (C60-PMMA), do not enhance C60's limiting power; and (iii) the aminated fullerene derivatives, i.e., HxC60(NHR)x [R = -(CH2CH2O)2H (1), x = 11; -(CH2)6OH (2), x = 7; -cyclo-C6H11 (3), x = 11; -(CH2)3Si(OC2H5)3 (4), x = 4], and their sol-gel glasses, i.e., 1–3/SiO2 (physical blending) and 4-SiO2 (chemical bonding), show complex limiting responses, with 4(-SiO2) performing consistently better than 1–3(/SiO2). The fullerene glasses are optically stable and their optical limiting properties remain unchanged after being subjected to continuous attack by the strong laser pulses for ca. 1 h.
Structure Characterization and Electrochemical Characteristics of Carbon Nanotube- Spinel Li4Ti5O12 Nanoparticles
Xiangcheng Sun, A. Iqbal, I. D. Hosein, M. J. Yacaman, Z. Y. Tang, P. V. Radovanovic, B. Cui
Published online by Cambridge University Press: 09 August 2012, mrss12-1440-o09-34
Carbon nanotube-spinel lithium titanate (CNT-Li4Ti5O12) nanoparticles have been synthesized by hydrothermal reaction and higher-temperature calcination with LiOH·H2O and TiO2 precursors in the presence of carbon nanotube sources. The CNT-Li4Ti5O12 nanoparticles have been characterized by X-ray diffraction (XRD), high angle annular dark field (HAADF) imaging, and selected area electron diffraction (SAED). The particles exhibited a spinel cubic crystal phase and a homogeneous size distribution, with sizes around 50-70 nm. HAADF imaging confirmed that carbon exists on the surface of the CNT-Li4Ti5O12 nanoparticles as a graphitic carbon coating of 3-5 nm thickness formed at 800°C under Ar gas. The graphitic carbon phase was further confirmed with Raman spectroscopy analysis on powder samples. Electrochemical characteristics were evaluated with galvanostatic discharge/charge tests, which showed that the initial discharge capacity is 172 mA·h/g at 0.1C. The nanoscale carbon layers uniformly coat the particles, and the interconnected carbon nanotube network is responsible for the improved charge rate capability and conductivity.
Energetic electron generation by magnetic reconnection in laboratory laser-plasma interactions
Q.-L. DONG, D.-W. YUAN, S.-J. WANG, Y. T. LI, X. LIU, S. E. JIANG, Y. K. DING, K. DU, M.-Y. YU, X.-T. HE, Y. J. TANG, J. Q. ZHU, G. ZHAO, Z.-M. SHENG, J. ZHANG
Journal: Journal of Plasma Physics / Volume 78 / Issue 4 / August 2012
The magnetic reconnection (MR) configuration was constructed by using two approaching laser-produced plasma bubbles. The characteristics of the MR current sheet were investigated. The driving energy of the laser pulse affects the type of the current sheet. The experiments present "Y-type" and "X-type" current sheets for larger and smaller driving energy, respectively. The energetic electrons were found to be well-collimated. The formation and ejection of plasmoid from the "Y-type" current sheet was expected to enhance the number of accelerated electrons.
The use of albendazole and diammonium glycyrrhizinate in the treatment of eosinophilic meningitis in mice infected with Angiostrongylus cantonensis
Y. Li, J.-P. Tang, D.-R. Chen, C.-Y. Fu, P. Wang, Z. Li, W. Wei, H. Li, W.-Q. Dong
Journal: Journal of Helminthology / Volume 87 / Issue 1 / March 2013
Published online by Cambridge University Press: 13 December 2011, pp. 1-11
Angiostrongylus cantonensis (A. cantonensis) infection causes eosinophilic meningitis in humans. Eosinophilia and a Th2-type immune response are the crucial immune mechanisms for eosinophilic meningitis. CD4+CD25+ regulatory T cells (Treg) are involved in the pathogenesis of A. cantonensis. Diammonium glycyrrhizinate (DG) is a compound related to glycyrrhizin (GL), a triterpene glycoside extracted from liquorice root. We investigated the curative effects and probable mechanisms of therapy involving a combination of albendazole and DG in BALB/c mice infected with A. cantonensis, and compared these with therapy involving albendazole and dexamethasone. We analysed survival time, body weight, signs, eosinophil numbers, immunoglobulin E (IgE), interleukin-5 (IL-5), and eotaxin concentrations, numbers and Foxp3 expression of CD4+CD25+ Treg, worm recovery and histopathology. The present results demonstrated that the combination of albendazole and DG could increase survival time more efficiently and relieve neurological dysfunction; decrease weight loss, eosinophil numbers, concentrations of IgE, IL-5 and eotaxin, the number and expression of Foxp3 of CD4+CD25+ Treg; and improve worm recovery and histopathology changes in treated animals, compared with the combination of albendazole and dexamethasone. The observations presented here suggest that the albendazole and dexamethasone combination could be replaced by the combination of albendazole and DG.
Preparation of Protonated Titanate Nanotube Films with an Extremely Large Wetting Contrast
Y. K. Lai, Y. X. Tang, D. G. Gong, J. J. Gong, Y. C. Chen, C. J. Lin, Z. Chen
Published online by Cambridge University Press: 19 April 2011, mrsf10-1309-ee09-19
A facile anodic electrophoretic deposition (EPD) process has been developed to prepare thin uniform films consisting of titanate nanotubes (TNTs) that were synthesized by a hydrothermal approach. Such an EPD process offers easy control of the film thickness, and the adhesion to the substrate was found to be strong. The chemical composition and structure of the products have been characterized by HRTEM, FESEM, XRD and TG/DTA. It was found that the functionalization of TNTs plays a key role in the electrolyte stability and the successful formation of a uniform TNT film with good adhesion. The as-prepared TNT films show exceptional superhydrophilic behavior with ultra-fast spreading, while they convert to superhydrophobicity, yet with high adhesion, after 1H,1H,2H,2H-perfluorooctyl-triethoxysilane modification. This study provides an interesting method to prepare films with extremely high wettability contrast that are useful for producing different kinds of functional materials.
Synthesis of Layered Titanate Micro/nano-materials for Efficient Pollutant Treatment in Aqueous Media
Y. X. Tang, Y. K. Lai, D. G. Gong, Zhili Dong, Z. Chen
Published online by Cambridge University Press: 21 March 2011, mrsf10-1309-ee03-15
In this work, one-dimensional (1D) titanate nanotubes (TNT)/nanowires (TNW), bulk titanate micro-particles (TMP), and three-dimensional (3D) titanate microsphere particles (TMS) with high specific surface area were synthesized via different approaches. The chemical composition and structure of these products have been characterized by field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM) and Raman scattering spectroscopy. The as-prepared TMS shows excellent adsorption performance compared with TMP, TNW and TNT when methylene blue (MB) and Pb(II) ions are used as representative organic and inorganic pollutants.
Ion Channel Sensor on a Silicon Support
Michael Goryll, Seth Wilk, Gerard M. Laws, Stephen M. Goodnick, Trevor J. Thornton, Marco Saraniti, John M. Tang, Robert S. Eisenberg
Published online by Cambridge University Press: 15 March 2011, O7.2
We are building a biosensor based on ion channels inserted into lipid bilayers that are suspended across an aperture in silicon. The process flow only involves conventional optical lithography and deep Si reactive ion etching to create micromachined apertures in a silicon wafer. In order to provide surface properties for lipid bilayer attachment that are similar to those of the fluorocarbon films that are currently used, we coated the silicon surface with a fluoropolymer using plasma-assisted chemical vapor deposition. When compared with the surface treatment methods using self-assembled monolayers of fluorocarbon chemicals, this novel approach towards modifying the wettability of a silicon dioxide surface provides an easy and fast method for subsequent lipid bilayer formation. Current-Voltage measurements on OmpF ion channels incorporated into these membranes show the voltage dependent gating action expected from a working porin ion channel.
Wetting Pattern on TiO2 Nanostructure Films and its Application as a Template for Selective Materials Growth
Y. K. Lai, Y. Yang, Y. X. Huang, Z. Q. Lin, Y. X. Tang, D. G. Gong, C. J. Lin, Z. Chen
Published online by Cambridge University Press: 09 March 2011, mrsf10-1316-qq06-11
The present paper describes an unconventional approach to fabricate a superhydrophilic-superhydrophobic template on a TiO2 nanotube structured film by a combination of electrochemical anodization and photocatalytic lithography. Based on the template with extreme wetting contrast, various functional nanostructure micropatterns with high resolution have been successfully fabricated. The resultant micropatterns have been characterized with scanning electron microscopy, optical microscopy, and X-ray photoelectron spectroscopy. It is shown that functional nanostructures can be selectively grown at the superhydrophilic areas, which are confined by the hydrophobic regions, indicating that the combined process of electrochemical self-assembly and photocatalytic lithography is a very promising approach for constructing well-defined templates for the growth of various functional materials.
Distinguishing Between Coherent Interdiffusion and Incoherent Roughness in Synthetic Multilayers Using X-Ray Diffraction
Z. Xu, Z. Tang, S. D. Kevan, Thomas Novet, David C. Johnson
ABSTRACT: We have developed a method to separate coherent interfacial interdiffusion from incoherent interfacial roughness by extending an electromagnetic dynamical theory to calculate the reflectivity of a multilayer having an arbitrary interfacial profile with a variable degree of randomness in the repeating layer thicknesses. We find that the intensity of the subsidiary maxima is extremely sensitive to incoherent roughness, while the intensity of the Bragg maxima is largely determined by the interfacial electron density profiles. Experimental data are modeled in a manner similar to that used by Warren and Averbach to determine the domain size of crystallites. We divide the multilayer into coherent domains differing from one another by small deviations from the average layer thicknesses. The diffraction intensity from each of these domains is then added to obtain the experimental pattern. The diffraction spectra of a set of Pt/Co multilayers with similar layer thicknesses but prepared with different sputtering gases illustrate the ability to separate the effects of coherent interdiffusion from incoherent roughness. The extent of incoherent roughness obtained using this model to analyze the diffraction data of these Pt-Co multilayers is in good agreement with TEM and STM results from the same samples. The diffraction patterns could not be simulated with abrupt concentration profiles, and the extent of interdiffusion was found to be correlated with the energy of reflected neutrals present during the synthesis of the multilayers.
PbI2 Confined In The Spaces Of LTA Zeolites
O. Terasaki, Z. K. Tang, Y. Nozue, T. Goto
PbI2 clusters confined in the spaces of LTA zeolite are successfully prepared through the vapour phase. An HREM image showed that the crystallinity of the zeolite was preserved after preparation and showed directly that the clusters were incorporated into the α-cages. Absorption spectra were measured by the diffuse reflection method as a function of the loading density of PbI2 molecules. Several absorption bands from different cluster sizes were observed and showed a remarkable blue shift. At the maximum loading, extra reflections, which are forbidden for Fm3A of LTA, were observed in electron and X-ray diffraction patterns. The appearance of the extra reflections and the dependence of the absorption curve on the loading density suggest that a superlattice of clusters was produced. The characteristic feature of zeolites as containers to make an artificial superlattice of clusters is pointed out.
Poly(styrene-b-butadiene-b-styrene) self-organized Ultrathin films
D. Y. Wang, F. Q. Liu, Y. A. Cao, Z. Q. Liu, D. F. Shen, X. M. Qian, J. Q. Song, Z. F. Liu, Y. B. Bai, T. J. Li, X. Y. Tang
Direct images of the surface morphology of a series of poly(styrene-b-butadiene-b-styrene) (SBS) ultrathin films have been obtained by atomic force microscopy (AFM). These films were formed under kinetic control. The resulting films consist of 25 nm-diameter spherical structures and/or cylindrical ones. These structures are surprisingly different from the alternating lamellar structures which should be formed under annealed conditions because of the nearly equal length of the two blocks of SBS. In nonselective solvents, the surface of the cast solution layer induces the copolymers to self-organize into micelles consisting of a polystyrene (PS) core and a polybutadiene (PB) shell in order to decrease the film surface energy. The final structures depend on the properties of the solvent used and the film-forming conditions.
Thermal Reliability of Pt/Ti/Pt/Au Schottky Contacts on InP with a GaInP Schottky Barrier Enhancement Layer
H. C. Kuo, C. H. Lin, B. G. Moser, H. Hsia, Z. Tang, H. Chen, M. Feng, G. E. Stillman
We present studies of the thermal stability of various metal Schottky contacts, including Au, Ti, Pt, Pd and Pt/Ti/Pt/Au, on strained Ga0.2In0.8P/InP semiconductors. Auger electron spectroscopy (AES) analysis and cross-sectional TEM of the thermally annealed Schottky diodes were performed to investigate the failure mechanism. For Pt/Ti/Pt/Au Schottky contacts on strained GaInP/InP, no significant change was found for samples annealed up to 350°C. However, a drastic degradation of the barrier height and the ideality factor was observed in samples annealed at 400°C, which may be caused by the interdiffusion and penetration of metals into the semiconductor. Finally, InGaAs/InP doped-channel heterojunction FETs (DC-HFETs) with a GaInP Schottky barrier enhancement layer (SBEL) were grown and fabricated. The 0.25 μm gate-length devices showed excellent DC and RF performance, with an fT of 117 GHz and an fmax of 168 GHz.
The determinant bound for discrepancy is almost tight
by Jiří Matoušek
In 1986 Lovász, Spencer, and Vesztergombi proved a lower bound for the hereditary discrepancy of a set system $\mathcal {F}$ in terms of determinants of square submatrices of the incidence matrix of $\mathcal {F}$. As shown by an example of Hoffman, this bound can differ from $\mathrm {herdisc}(\mathcal {F})$ by a multiplicative factor of order almost $\log n$, where $n$ is the size of the ground set of $\mathcal {F}$. We prove that it never differs by more than $O((\log n)^{3/2})$, assuming $|\mathcal {F}|$ bounded by a polynomial in $n$. We also prove that if such an $\mathcal {F}$ is the union of $t$ systems $\mathcal {F}_1,\ldots ,\mathcal {F}_t$, each of hereditary discrepancy at most $D$, then $\mathrm {herdisc}(\mathcal {F})\le O(\sqrt t (\log n)^{3/2}D)$. For $t=2$, this almost answers a question of Sós. The proof is based on a recent algorithmic result of Bansal, which computes low-discrepancy colorings using semidefinite programming.
N. Bansal. Constructive algorithms for discrepancy minimization. http://arxiv.org/abs/1002.2259, also in FOCS'10: Proc. 51st IEEE Symposium on Foundations of Computer Science, pages 3–10, 2010.
Phani Bhushan Bhattacharya, Surender Kumar Jain, and S. R. Nagpaul, First course in linear algebra, A Halsted Press Book, John Wiley & Sons, Inc., New York, 1983. MR 719018
József Beck and Vera T. Sós, Discrepancy theory, Handbook of combinatorics, Vol. 1, 2, Elsevier Sci. B. V., Amsterdam, 1995, pp. 1405–1446. MR 1373682
R. J. Duffin, Infinite programs, Linear inequalities and related systems, Annals of Mathematics Studies, no. 38, Princeton University Press, Princeton, N.J., 1956, pp. 157–170. MR 0087573
B. Gärtner and J. Matoušek, Approximation algorithms and semidefinite programming. Springer, Heidelberg, 2012.
Jeong Han Kim, Jiří Matoušek, and Van H. Vu, Discrepancy after adding a single set, Combinatorica 25 (2005), no. 4, 499–501. MR 2143253, DOI 10.1007/s00493-005-0030-x
L. Lovász, J. Spencer, and K. Vesztergombi, Discrepancy of set-systems and matrices, European J. Combin. 7 (1986), no. 2, 151–160. MR 856328, DOI 10.1016/S0195-6698(86)80041-5
Jiří Matoušek, Geometric discrepancy, Algorithms and Combinatorics, vol. 18, Springer-Verlag, Berlin, 2010. An illustrated guide; Revised paperback reprint of the 1999 original. MR 2683232, DOI 10.1007/978-3-642-03942-3
Dömötör Pálvölgyi, Indecomposable coverings with concave polygons, Discrete Comput. Geom. 44 (2010), no. 3, 577–588. MR 2679054, DOI 10.1007/s00454-009-9194-y
Lieven Vandenberghe and Stephen Boyd, Semidefinite programming, SIAM Rev. 38 (1996), no. 1, 49–95. MR 1379041, DOI 10.1137/1038003
Jiří Matoušek
Affiliation: Department of Applied Mathematics and Institute of Theoretical Computer Science (ITI), Charles University, Malostranské nám. 25, 118 00 Praha 1, Czech Republic
Received by editor(s): January 4, 2011
Received by editor(s) in revised form: July 2, 2011
Published electronically: June 18, 2012
Additional Notes: The author was partially supported by the ERC Advanced Grant No. 267165.
Communicated by: Jim Haglund
The copyright for this article reverts to public domain 28 years after publication.
MSC (2010): Primary 05D99
DOI: https://doi.org/10.1090/S0002-9939-2012-11334-6
Measuring academic entities' impact by content-based citation analysis in a heterogeneous academic network
Fang Zhang & Shengli Wu (ORCID: orcid.org/0000-0003-2008-1736)
Scientometrics volume 126, pages 7197–7222 (2021)
Evaluating the impact of papers, researchers and venues objectively is of great significance to academia and beyond. This may help researchers, research organizations, and government agencies in various ways, such as helping researchers find valuable papers and authoritative venues and helping research organizations identify good researchers. A few studies find that rather than treating citations equally, differentiating them is a promising way for impact evaluation of academic entities. However, most of those methods are metadata-based only and do not consider the contents of cited and citing papers, while the few content-based methods are not sophisticated and further improvement is possible. In this paper, we study the citation relationships between entities by content-based approaches. Especially, an ensemble learning method is used to classify citations into different strength types, and a word-embedding based method is used to estimate topical similarity of the citing and cited papers. A heterogeneous network is constructed with the weighted citation links and several other features. Based on the heterogeneous network that consists of three types of entities, we apply an iterative PageRank-like method to rank the impact of papers, authors and venues at the same time through mutual reinforcement. Experiments are conducted on an ACL dataset, and the results demonstrate that our method greatly outperforms state-of-the-art competitors in improving ranking effectiveness of papers, authors and venues, as well as in being robust against malicious manipulation of citations.
Due to the rapid development of science and technology, the total number of papers published in recent years has increased significantly. According to an STM report (Johnson et al., 2018), there were 33,100 peer-reviewed English journals in mid-2018, and over 3 million articles were published per year. The total number of publications and the number of journals have both grown steadily for over two centuries, at the rates of 3% and 3.5% per year, respectively. Facing such a huge number of publications, academia and other sectors of the society have become keen to find answers to the following questions: How can the importance of a research paper be measured? How can the performance of a researcher or a research organization be evaluated? It is necessary to have an objective evaluation system to measure the performance of papers, authors and venues.
For a long time, many researchers have tried various ways to evaluate academic impact effectively. Citation count plays an important role in evaluating papers and authors. Based on citation count, many metrics, such as the h-index (Hirsch, 2005), the g-index (Egghe, 2006), the journal impact factor (Garfield, 2006), and others, have been proposed. These metrics are straightforward, but some factors, such as citation sources and co-authorship, are not considered. Heterogeneous academic networks, which include multiple types of entities such as papers, authors, and venues, are a very good platform for academic performance evaluation, because all related information is available for us to exploit. Based on such networks, graph-based methods can be used (Jiang et al., 2016; Simkin & Roychowdhury, 2003; Zhang & Wu, 2020). For example, both SCImago Journal Rank (SJR) (González-Pereira et al., 2010, 2012) and the Eigenfactor score (Bergstrom, 2007) use PageRank-like algorithms (Brin & Page, 1998) to evaluate journals. MutualRank (Jiang et al., 2016) and Tri-Rank (Liu et al., 2014) rank papers, authors and venues simultaneously based on heterogeneous academic networks. These graph-based methods have some advantages for ranking academic entities due to their ability to leverage structural information in academic networks and the mutual reinforcement relationship among papers, authors and venues.
Many existing graph-based ranking algorithms treat all citations as equally influential (Chakraborty & Narayanam, 2016; Zhu et al., 2015), without distinguishing that some of them may be more important than others. Such an approach may be questionable. Typically, for many papers, a small number of references play an important role (Chakraborty & Narayanam, 2016; Simkin & Roychowdhury, 2003; Wan & Liu, 2014), while most of the others do not have much impact (Teufel et al., 2006). In order to deal with such a problem, various aspects have been considered to weight citation links. For a given paper, we may consider many different aspects such as who cites the paper, where the citing paper is published, the time gap between the two papers' publications, whether it is a self-citation, and so on. We may also consider the topical similarity of the two papers or how the cited paper is related to the citing paper (referred to as citation strength in this paper). Different rationales are behind those aspects. For example, considering the venue in which the citing paper is published, a citation is valued more if the citing paper is published in a prestigious venue than in an average venue. If it is a self-citation, it will get less credit than the others.
The primary goal of this paper is to investigate the middle to long-term impact of academic entities through a comprehensive framework (Kanellos et al., 2021). Especially we exploit some content-based features such as citation strength and topical similarity between the cited and citing papers, which are used to define weighted citation links. A heterogeneous network of papers, authors, and venues is built to reflect the relationships among them. Three types of entities are ranked at the same time through a PageRank-like algorithm with mutual reinforcement.
One possible problem with PageRank is that it favors older papers over newer papers. This is referred to as the ranking bias (Jiang et al., 2016; Zhang et al., 2019a). It always takes time for a paper to be recognized in the community; a similar situation may also happen to authors. Therefore, a good evaluation system should be able to balance papers published at different times. In the same vein, we apply time-aware weights to all the papers involved.
Moreover, our framework includes a number of good features. In the heterogeneous network generated, seven types of relations are defined and supported. They are paper citation, author citation, venue citation, co-authorship, paper-author, paper-venue, and author-venue relations. For both authors and venues, their performance is evaluated on a yearly basis. Such a fine granularity enables us to catch the dynamics of the entities involved more precisely.
Citation manipulation (e.g., padded, swapped, and coerced citations) usually occurs in citations that do not contribute to the content of an article. Because some government agencies rely heavily on impact factors to evaluate the performance of researchers and research organizations, there is evidence that various types of citation manipulation exist. For example, some scholars add authors to their research papers even though those individuals contribute nothing to the research effort (Fong & Wilhite, 2017). Some journal editors suggest or request that authors cite papers in designated journals to inflate their citation counts (Fong & Wilhite, 2017; Foo, 2011). Peer reviewers may deliberately manipulate the peer-review process to boost their own citation counts (Chawla, 2019). Some scientists may self-cite to an extreme degree (Noorden & Chawla, 2019). Therefore, citation manipulation is a problem that needs to be taken seriously into consideration when ranking academic entities. As an extra benefit of the measures we apply, we believe that the proposed approach is robust and able to mitigate various kinds of citation manipulation (Bai et al., 2016; Chakraborty & Narayanam, 2016; Wan & Liu, 2014).
By consolidating all the measures above-mentioned, in this paper we propose a framework, WCCMR (Weighted Citation Count-based Multi-entity Ranking), to evaluate the impact of multiple entities. There are a number of contributions in this piece of work:
An ensemble learning method is used with three base classifiers to classify citations into five different categories. The fused results are better than those of all the base classifiers, which represent up-to-date technology.
A word embedding-based method is used to measure topical similarity between the citing paper and the cited paper.
The above two content-based features are combined to define weighted citation links. To the best of our knowledge, such a weighting scheme for citations has not been used before.
Apart from the weighted citation scheme, our framework has a number of good features: time-aware weighting, fine granularity for authors and venues, and seven types of relations among the same or different types of entities.
Experiments with the ACL (Association for Computational Linguistics Anthology Network) dataset (Radev et al., 2013) show that the proposed method outperforms other state-of-the-art methods in evaluating the effectiveness of papers, authors and venues, as well as in robustness against malicious manipulations.
The remainder of this paper is organized as follows: Sect. 2 presents related work on performance evaluation of academic entities, mainly by using various types of academic networks. Section 3 describes the framework proposed in this study. Section 4 presents the detailed experimental settings, procedures, and results. Some analysis of the experimental results is also given. Section 5 concludes the paper.
As an important task for the research community and beyond, evaluating scientific papers, authors and venues has been studied by many researchers for a long time. Citation count has been widely used and many citation-based metrics have been proposed (Jiang et al., 2016; Wang et al., 2016). For example, the h-index (Hirsch, 2005) and the g-index (Egghe, 2006) are used to measure researchers, while the Impact Factor (IF) (Garfield, 1972), the 5 year Impact Factor (5 year IF) (Pajić, 2015), and the Source Normalized Impact per Paper (SNIP) (Moed, 2010; Waltman et al., 2013) are used to measure venues. These citation-based metrics are easy to understand and calculate. However, they have some crucial shortcomings. Firstly, much related metadata about any paper, such as its author(s) and venue, is ignored. This may have a negative effect on the accuracy of the evaluation. Secondly, simple citation count lacks immunity to manipulation of citations. This is also an important issue that needs to be addressed.
As a remedy to some of the problems of using simple citation count, applying PageRank-like algorithms into academic networks has been investigated by quite a few researchers in recent years. For instance, the Eigenfactor score (Bergstrom, 2007) and SJR (González-Pereira et al., 2010, 2012) are used to evaluate journals. According to what type of information is used, we may divide those methods into two categories: metadata-based approach (time-aware weighting is a popular sub-category) and content-based approach.
Metadata-based approach has been investigated in (Yan & Ding, 2010; Zhang & Wu, 2018; Zhang et al., 2019a, b; Zhou et al., 2016) among others. To improve paper ranking performance and robustness against malicious manipulation, Zhou et al. (2016) proposed a weight assignment method for citation based on the ratio of common references between the citing and cited papers. Similar to Zhou et al. (2016), Zhang et al. (2019b) considered the reference similarity between the citing and cited papers. They also considered the topical similarity (calculated using titles and abstracts) between the two papers and combined them for weighting. Believing that immediate citations after publication is an indicator of good quality, some researchers allocated heavy weights to those papers that are cited shortly after publication (Yan & Ding, 2010; Zhang & Wu, 2018; Zhang et al., 2019a). For alleviating the ranking bias towards newly published papers, Walker et al. (2006) and Dunaiski et al. (2016) allocated heavier weights to newer papers, while Wang et al. (2019) considered the citations in the first 10 years of any paper since its publication and ignored the later ones. Self-citation, which is given a lighter weight than a "normal" citation, is investigated in (Bai et al., 2016).
Content-based approach has been investigated in (Chakraborty & Narayanam, 2016; Wan & Liu, 2014; Xu et al., 2014). Wan and Liu (2014) and Chakraborty and Narayanam (2016) classified citations into five categories of strength based on content analysis of the citing papers, and then assigned different weights for those citations accordingly. In Wan and Liu (2014), Support Vector Regression is used to estimate the strength of each citation. While in Chakraborty and Narayanam (2016), a graph-based semi-supervised model, GraLap, is used to estimate citation strength. In both cases, dozens of features, either metadata-based or content-based, are used in their model. Xu et al. (2014) proposed a variant of PageRank in which a dynamic damping factor is used instead. At each paper node, its damping factor is decided by the topic freshness and publication age of the paper in question. Topic freshness per year is obtained by analyzing contents of all the papers in the dataset investigated.
To make full use of the information in academic networks and/or evaluate multiple entities at the same time, some researchers have proposed PageRank variants over various heterogeneous networks (Bai et al., 2020; Jiang et al., 2016; Liu et al., 2014; Meng & Kennedy, 2013; Yan et al., 2011; Yang et al., 2020; Zhang & Wu, 2018, 2020; Zhang et al., 2018, 2019a; Zhao et al., 2019; Zhou et al., 2021). Yan et al. (2011) proposed an indicator, P-Rank, to score papers. For each citation, the impacts of the citing paper, the citing authors and the citing journal are considered at the same time. Differentiating each venue year by year, Zhang and Wu (2018) proposed a ranking method, MR-Rank, to evaluate papers and venues simultaneously. Meng and Kennedy (2013) proposed a method, Co-Ranking, for ranking papers and authors. Tri-Rank, proposed by Liu et al. (2014), can rank authors, papers, and journals simultaneously. Especially, Tri-Rank considers the ordering of authors and self-citation problems. Jiang et al. (2016) proposed a ranking model, MutualRank, which is a modified version of randomized HITS for ranking papers, authors and venues simultaneously. Zhang et al. (2018) proposed a classification-based method to predict authors' influence. They firstly classified authors into different types according to their citation dynamics and then applied modified random walk algorithms in a heterogeneous temporal academic network for prediction. Based on a heterogeneous network that includes both paper citation and paper-author relations, Zhao et al. (2019) measured the influence of authors on two large data sets, one of which included 500 million citation links. By assigning weights to the links of the citation network and the authorship network according to citation relevance and author contribution, Zhang et al. (2019a) ranked scientific papers by integrating the impact of papers, authors, venues and time awareness. By differentiating each venue and researcher on a yearly basis, Zhang and Wu (2020) proposed a framework, WMR-Rank, to predict the future influence of entities including papers, authors, and venues simultaneously. For balanced treatment of old and new papers, they considered both the publication age and recent citations of all the papers involved at the same time. Bai et al. (2020) measured the impact of institutes and papers simultaneously based on a heterogeneous institution-citation network. Based on a heterogeneous network that includes co-authorship, author-paper and paper citation relations, Zhou et al. (2021) proposed an improved random walk algorithm to recommend research collaborators. Especially, they considered both time awareness and topic similarity. Similar to Zhou et al. (2021), Yang et al. (2020) recommended research collaborators by using an improved random walk algorithm over a heterogeneous network that combines a co-author network and an institution network.
The work in Wan and Liu (2014) and Chakraborty and Narayanam (2016) is probably the most relevant to our work in this paper; however, there are considerable differences between our work and either of them. First, we use an ensemble learning method for citation strength estimation, and the results show that it is more effective than the methods used in those two papers. Besides, topical similarity is also included for determining the weight of a citation link. This is not considered in either Wan and Liu (2014) or Chakraborty and Narayanam (2016). Lastly, a sophisticated network with multiple types of entities is built and used in this paper to evaluate their impact at the same time. As we will see later in the experimental part, it works with other components to achieve very good results.
The proposed method
In this section, we introduce all the components required and then present the multi-entity ranking algorithm. The symbols used in this paper and their meanings are summarized in Table 1.
Table 1 Some symbols used in this paper and their meanings
Citation strength and topical similarity
When researchers write papers, they usually need to cite other papers for various reasons, such as pointing to a baseline method for comparison, applying a proposed method or making some improvement of it, referring to the definition of an evaluation metric, providing evidence to support a point of view, and so on. Among all those different purposes of citation, some may be more important than others. Therefore, in line with the work of Wan and Liu (2014) and Chakraborty and Narayanam (2016), we define five levels of citation strength as follows.
Level 1 The cited reference has the lowest importance to the citing paper. It is related to the citing paper casually. It usually follows words like "such as", "for example", "note" in the text, and can be removed or replaced without hurting the competence of the references.
Level 2 The cited reference is related to the citing paper to some extent. For example, it is cited to support a point of view or to introduce the development of research fields related to the citing paper. It is usually mentioned together with other references and appears in parts such as "introduction", "related work", or "conclusion and future work".
Level 3 The cited reference is important and related to the citing paper. For example, it may serve as a baseline method. It is usually mentioned several times in the paper with long citation sentences and may appear in more than one part of the paper.
Level 4 The cited reference is very important to the citing paper. It is usually mentioned separately in one or more sentences and appears in the methodology section, such as algorithms or models used in the citing paper. It can be an integral part of the model proposed in the paper.
Level 5 The cited reference is extremely important and highly related to the citing paper. For example, the citing paper makes an improvement based on the cited reference or borrows its main idea from the cited reference. It is usually mentioned multiple times, sometimes following "this method is influenced by", "we extend", etc., and very likely appears in multiple parts of the paper such as "introduction", "related work", "method", "experiment", "discussion", or "conclusion".
Citation topical similarity refers to the topical similarity between the cited paper and the citing paper. It is independent from citation strength. A word-embedding based approach is used for this. It is also a good indicator of proper citation. The higher the similarity is between the citing paper and the cited paper, the lower the likelihood that the cited paper is artificially manipulated. A linear combination of them is set to be the weight of the citation. See Eq. (1) later in this paper. Based on that, a heterogeneous network can be built with the desirable properties. We consider that differentiating citations instead of taking simple citation counts may produce more reliable evaluation results.
A heterogeneous academic network
A heterogeneous academic network is composed of nodes and edges. Each node represents an entity and each edge between two nodes represents the relation between the two entities. There are three types of nodes: papers, authors, and venues, and seven types of relations: paper citation, paper-author relation, paper-venue relation, coauthor relation, author citation, author-venue relation and venue citation. A suitable weight needs to be assigned to each of the edges involved. In the following we discuss these seven types of relations one by one, in which weight assignment for each type of edges is the key issue.
Paper citation relation
A paper citation relation exists when one paper cites another paper. If paper \({p}_{j}\) cites paper \(p_{i}\), the weight is defined as
$$ W_{PP} \left( {p_{i} ,p_{j} } \right) = \left\{ {\begin{array}{*{20}l} {{\text{strength}}(p_{i} ,p_{j} ) + {\text{sim}}(p_{i} ,p_{j} )} \hfill & {p_{i} \leftarrow p_{j} } \hfill \\ 0 \hfill & { otherwise} \hfill \\ \end{array} } \right. $$
where \(\mathrm{strength}\left({p}_{i},{p}_{j}\right)\) and \(\mathrm{sim}\left({p}_{i},{p}_{j}\right)\) are the citation strength and topical similarity between \({p}_{i}\) and \({p}_{j}\), respectively. \({p}_{i}\leftarrow {p}_{j}\) denotes that paper \({p}_{i}\) is cited by paper \({p}_{j}\). It is required that both \(\mathrm{strength}\left({p}_{i},{p}_{j}\right)\) and \(\mathrm{sim}\left({p}_{i},{p}_{j}\right)\) are defined in the same range. Otherwise, normalization may be required to make them comparable.
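As an illustration, the following minimal Python sketch computes the weight of Eq. (1). The function name and the dictionary-based data layout are assumptions made for this example, and both inputs are assumed to be pre-normalized to the same range.

```python
def paper_citation_weight(p_i, p_j, strength, sim, citations):
    """Weight of the citation link p_i <- p_j, as in Eq. (1).

    `strength` and `sim` map a (cited, citing) paper pair to its citation
    strength and topical similarity, both assumed to be normalized to the
    same range; `citations` is the set of (cited, citing) pairs.
    """
    if (p_i, p_j) in citations:  # p_i is cited by p_j
        return strength[(p_i, p_j)] + sim[(p_i, p_j)]
    return 0.0
```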
Author citation relation
Through paper citation, we can set up an indirect relation of author citation. If paper \({p}_{i}\) is cited by paper \({p}_{j}\), \({\overline{a} }_{m}\) is the only author or one of the authors of \({p}_{i}\), and \({\overline{a} }_{n}\) is the only author or one of the authors of \({p}_{j}\), then \({\overline{a} }_{m}\) is cited by \({\overline{a} }_{n}\) \(({\overline{a} }_{m}\leftarrow {\overline{a} }_{n})\). As in Zhang and Wu (2020), we differentiate each author year by year and allocate the credit that author \({\overline{a} }_{m}\), who published paper \({p}_{i}\) in year \({t}_{{\overline{a} }_{m}}\), obtains from \({\overline{a} }_{n}\), who published paper \({p}_{j}\) in year \({t}_{{\overline{a} }_{n}}\), through paper citation \({p}_{i}\leftarrow {p}_{j}\) as
$$ W_{{C\overline{A}\_raw}} \left( {\overline{a}_{m} ,\overline{a}_{n} ,p_{i} ,p_{j} } \right) = \frac{1}{{{\text{order}}\left( {\overline{a}_{m} ,p_{i} } \right) \times {\text{order}}\left( {\overline{a}_{n} ,p_{j} } \right)}} $$
where \(\mathrm{order}(a,p)\) is the position of author \(a\) in paper \(p\). Normalization is required for all the authors involved.
$${W}_{C\overline{A}}\left({\overline{a}}_{m},{\overline{a}}_{n},{p}_{i},{p}_{j}\right)={W}_{PP}({p}_{i},{p}_{j})\frac{{W}_{C\overline{A}\_raw}\left({\overline{a}}_{m},{\overline{a}}_{n},{p}_{i},{p}_{j}\right)}{{\sum }_{\begin{array}{c}{p}_{i}\leftarrow {p}_{j}\\ {\overline{a}}_{k}\in {S}_{A}\left({p}_{i}\right)\\ {\overline{a}}_{l}\in {S}_{A}\left({p}_{j}\right)\end{array}}{W}_{C\overline{A}\_raw}\left({\overline{a}}_{k},{\overline{a}}_{l},{p}_{i},{p}_{j}\right)}$$
where \({S}_{A}\left(p\right)\) is the set of all the authors of paper \(p\).
An author \({\overline{a} }_{n}\) may cite another author \({\overline{a} }_{m}\) multiple times. The total credit that \({\overline{a} }_{m}\) in year \({t}_{{\overline{a} }_{m}}\) obtains from \({\overline{a} }_{n}\) in year \({t}_{{\overline{a} }_{n}}\) is the summation of all the papers involved.
$${W}_{C\overline{A} }\left({\overline{a} }_{m},{\overline{a} }_{n}\right)=\sum_{\begin{array}{c}{p}_{i}\in {S}_{P}\left({\overline{a} }_{m}\right)\\ {p}_{j}\in {S}_{P}({\overline{a} }_{n})\\ {p}_{i}\leftarrow {p}_{j}\end{array}}{W}_{C\overline{A} }\left({\overline{a} }_{m},{\overline{a} }_{n},{p}_{i},{p}_{j}\right)$$
where \({S}_{P}\left(a\right)\) is the set of papers written by author \(a\).
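The following Python sketch shows how the weight of each citation is distributed over the author pairs involved and then accumulated per author pair, as defined above. The dictionary-based data layout is an assumption made for this example, and the year-by-year split of authors is omitted for brevity.

```python
from collections import defaultdict

def author_citation_credit(citations, authors_of, w_pp):
    """Distribute each citation's weight over author pairs and accumulate.

    citations  : iterable of (p_i, p_j) pairs, where p_i is cited by p_j
    authors_of : dict mapping a paper to its ordered author list
    w_pp       : dict mapping (p_i, p_j) to the citation weight of Eq. (1)
    Returns a dict mapping (cited author, citing author) to total credit.
    """
    credit = defaultdict(float)
    for p_i, p_j in citations:
        raw = {}
        for r_m, a_m in enumerate(authors_of[p_i], start=1):
            for r_n, a_n in enumerate(authors_of[p_j], start=1):
                raw[(a_m, a_n)] = 1.0 / (r_m * r_n)      # raw credit
        total = sum(raw.values())
        for pair, value in raw.items():                   # normalize and scale
            credit[pair] += w_pp[(p_i, p_j)] * value / total
    return credit
```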
Coauthorship relation
A coauthorship relation exists in the network if two or more author nodes connect to the same paper node. Any author obtains certain credit from all other authors if they write a paper together. The credit that \({\overline{a} }_{i}\) who has published papers in year \({t}_{{\overline{a} }_{i}}\) obtains from her coauthor \({\overline{a} }_{j}\) through paper \(p\) is defined as
$${W}_{CO\overline{A }\_raw}\left({\overline{a} }_{i},{\overline{a} }_{j},p\right)=\frac{1}{\mathrm{order}({\overline{a} }_{i},p)\times \mathrm{order}({\overline{a} }_{j},p)}$$
which needs to be normalized. We have
$${W}_{CO\overline{A} }\left({\overline{a} }_{i},{\overline{a} }_{j},p\right)=\frac{{W}_{CO\overline{A }\_raw}\left({\overline{a} }_{i},{\overline{a} }_{j},p\right)}{{\sum }_{{\overline{a} }_{k, }{\overline{a} }_{l}\in {S}_{A}\left(p\right)}{W}_{CO\overline{A }\_raw}\left({\overline{a} }_{k},{\overline{a} }_{l},p\right)}$$
Two authors may co-write more than one paper. Hence, the credit that \({\overline{a} }_{i}\) in year \({t}_{{\overline{a} }_{i}}\) obtains from \({\overline{a} }_{j}\) over all co-authored papers is
$${W}_{CO\overline{A} }\left({\overline{a} }_{i},{\overline{a} }_{j}\right)=\sum_{\begin{array}{c}p\in {S}_{P}\left({\overline{a} }_{i}\right)\\ p\in {S}_{P}\left({\overline{a} }_{j}\right)\end{array}}{W}_{CO\overline{A} }\left({\overline{a} }_{i},{\overline{a} }_{j},p\right)$$
where \({S}_{P}\left({\overline{a} }_{i}\right)\) denotes all the papers written by \({\overline{a} }_{i}\).
Venue citation relation
Similar to author citation, we may define venue citation. For venues \({v}_{i}\) and \({v}_{j}\), if \({v}_{i}\leftarrow {v}_{j}\), the weight between \({v}_{i}\) and \({v}_{j}\) can be denoted as
$${W}_{VV}\left({v}_{i},{v}_{j}\right)=\sum_{\begin{array}{c} {p}_{k}\leftarrow {p}_{l}\\ {p}_{k}\in {S}_{P}\left({v}_{i}\right)\\ {p}_{l}\in {S}_{P}\left({v}_{j}\right)\end{array}}{W}_{PP}\left({p}_{k},{p}_{l}\right)$$
Paper-author relation
Paper coauthorship happens very often. However, for one paper written by a group of coauthors, their contributions to the paper are differentiated by their ordered positions (Abbas, 2011; Du & Tang, 2013; Egghe et al., 2000; Stallings et al., 2013). More specifically, we adopt a geometric counting approach (Egghe et al., 2000) for the paper-author relation. Suppose author \({a}_{i}\) is in the Rth position among all T coauthors in paper \({p}_{j}\); then, the amount of credit that author \({a}_{i}\) and paper \({p}_{j}\) obtain from each other is as follows:
$${W}_{AP}\left({a}_{i},{p}_{j}\right)={W}_{PA}\left({p}_{j},{a}_{i}\right)=\frac{{2}^{T-R}}{{2}^{T}-1}$$
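For example, the geometric counting formula above can be computed as in the following small Python sketch; the function name is illustrative.

```python
def author_paper_credit(position, num_authors):
    """Geometric counting: credit shared between a paper and the author
    in the given 1-based position among num_authors authors."""
    return 2 ** (num_authors - position) / (2 ** num_authors - 1)

# For a three-author paper the shares are 4/7, 2/7 and 1/7 (summing to 1).
shares = [author_paper_credit(r, 3) for r in (1, 2, 3)]
```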
Paper-venue relation
If paper \({p}_{i}\) is published in venue \({v}_{j}\), then there is an edge between paper \({p}_{i}\) and venue \({v}_{j}\); thus, paper \({p}_{i}\) and venue \({v}_{j}\) get credit from each other. We let
$$ W_{VP} \left( {v_{j} ,p_{i} } \right) = W_{PV} \left( {p_{i} ,v_{j} } \right) = \left\{ {\begin{array}{*{20}l} 1 \hfill & {p_{i} \in S_{P} \left( {v_{j} } \right)} \hfill \\ 0 \hfill & {otherwise} \hfill \\ \end{array} } \right. $$
Author-venue relation
If author \({a}_{i}\) publishes more than one paper in venue \({v}_{j}\), then the credit that \({a}_{i}\) obtains from \({v}_{j}\) is the sum of the credit she obtains from all the papers published in \({v}_{j}\). The same is true for the credit \({v}_{j}\) obtains from \({a}_{i}\).
$${W}_{AV}\left({a}_{i},{v}_{j}\right)={W}_{VA}\left({v}_{j}{,a}_{i}\right)=\sum_{\begin{array}{c}{p}_{k}\in {S}_{P}({a}_{i})\\ {p}_{k}\in {S}_{P}({v}_{j})\end{array}}{W}_{AP}({a}_{i},{p}_{k})$$
Recent citation bonus
An entity (paper or author) obtains a score from a citation and its final score is the sum of these individual scores. In order to mitigate the ranking bias toward old papers (Jiang et al., 2016) and treat all the papers in a balanced way, it is necessary to consider the recent citations of entities including papers and authors. Therefore, besides the normal scores, an entity obtains an extra bonus if the citation is very close to the evaluation year.
For an entity \({e}_{i}\), assume that \({e}_{i}\) has been cited in the most recent N years (including the evaluation year), and the evaluation year is \({t}_{evaluate}\). A bonus is given to entity \({e}_{i}\) as
$$\mathrm{RCB}\left({e}_{i}\right)=\sum_{{e}_{i}\leftarrow {e}_{j}}\mathrm{score}\left({e}_{j}\right)\times W({e}_{i},{e}_{j})\times f({t}_{j})$$
where \(score\left({e}_{j}\right)\) is the score of \({e}_{j}\) that is calculated based on some other aspects of the entity, \(W({e}_{i},{e}_{j})\) is the weight between \({\mathrm{e}}_{i}\) and \({e}_{j}\), \(f({t}_{j})\) is a time-related function.
$$f\left({t}_{j}\right)=\left\{\begin{array}{ll}{\theta }^{{t}_{\mathrm{evaluate}}-{t}_{j}} & {t}_{\mathrm{evaluate}}-{t}_{j}\le N\\ 0 & otherwise\end{array}\right.$$
where \(\theta \) is a parameter. In this paper, we set \(\theta \hspace{0.17em}\)= 0.8 and N = 5. \(W({e}_{i},{e}_{j})\times f({t}_{j})\) is the bonus weight of entities.
For papers, the bonus weight \({W}_{RP}\) is defined as
$${W}_{RP}\left({p}_{i},{p}_{j}\right)={W}_{PP}\left({p}_{i}{,p}_{j}\right)\times f({t}_{j})$$
For authors, the bonus weight \({W}_{R\overline{A} }\) is defined as
$${W}_{R\overline{A} }\left({\overline{a} }_{i},{\overline{a} }_{j}\right)={W}_{C\overline{A} }\left({\overline{a} }_{i},{\overline{a} }_{j}\right)\times f({t}_{j})$$
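A minimal sketch of the time-decay function and the recent-citation bonus defined above, assuming θ = 0.8 and N = 5 as stated in the text; the data layout (a mapping from an entity to its recent citers and citation years) is an assumption made for this example.

```python
def time_decay(t_cite, t_evaluate, theta=0.8, n_years=5):
    """Exponential time-decay factor for a citation made in year t_cite,
    evaluated at year t_evaluate; zero outside the recent window."""
    age = t_evaluate - t_cite
    return theta ** age if 0 <= age <= n_years else 0.0

def recent_citation_bonus(entity, recent_citers, weight, score, t_evaluate):
    """Recent-citation bonus of an entity.

    recent_citers[entity] : list of (citing entity, citation year) pairs
    weight[(e_i, e_j)]    : link weight between the two entities
    score[e_j]            : current score of the citing entity
    (The data layout is assumed for illustration only.)
    """
    return sum(score[e_j] * weight[(entity, e_j)] * time_decay(t_j, t_evaluate)
               for e_j, t_j in recent_citers[entity])
```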
Self-connections between same type of entities
In this framework, both authors and venues may be considered as a whole or on a yearly basis. Therefore, we need to connect them in some situations. For example, for an author \({a}_{j}\in A\), there are a group of \({\overline{a} }_{i}\in \overline{A }\) (for 1 ≤ i ≤ n), both \({a}_{j}\) and \({\overline{a} }_{i}\) refer to the same author. Each \({\overline{a} }_{i}\) refers to \({a}_{j}\) in a specific year. \({W}_{\overline{A}A }\left({\overline{a} }_{i},{a}_{j}\right)\) is defined as
$$ W_{{\overline{A}A}} \left( {\overline{a}_{i} ,a_{j} } \right) = \left\{ {\begin{array}{*{20}l} 1 \hfill & {if \overline{a}_{i} and a_{j} refer to the same author} \hfill \\ 0 \hfill & {otherwise} \hfill \\ \end{array} } \right. $$
The second one assigns time-dependent weights to an author's yearly instances according to the years in which the corresponding papers were published.
$$ W_{{T\overline{A}}} \left( {a_{i} ,\overline{a}_{j} } \right) = \left\{ {\begin{array}{*{20}l} {e^{{\mu (t_{{\overline{a}_{j}}} - t_{{{\text{evaluate}}}} )}} } \hfill & {if a_{i} and \overline{a}_{j} refer to the same author} \hfill \\ 0 \hfill & {otherwise} \hfill \\ \end{array} } \right. $$
where \(\mu \) is a parameter, \({t}_{{\overline{a} }_{j}}\) is the year at which \({\overline{a} }_{j}\) is published.
Venues are considered on a yearly basis. However, there is a need to consider a venue's previous performance over \({t}_{v}\) years. Suppose \({v}_{i}\) and \({v}_{j}\) are the same conference held in different years, and \({v}_{i}\) is held later than \({v}_{j}\) but within \({t}_{v}\) years; the corresponding weight is defined as
$$ W_{{\tilde{V}V}} \left( {v_{i} ,v_{j} } \right) = \left\{ {\begin{array}{*{20}l} {\frac{1}{{t_{v} + 1}}} \hfill & {v_{j} and v_{i} satisfy the condition} \hfill \\ 0 \hfill & {otherwise} \hfill \\ \end{array} } \right. $$
The WCCMR method
The proposed method, WCCMR, works with the abovementioned heterogeneous academic network. After setting initial values for all the entities, an iterative process is applied to them, and at each step every entity obtains an updated score. Note that all the entities involved affect each other and all the scores converge after enough iterations. The algorithm stops when a threshold \(\upvarepsilon \) for the difference between two consecutive iterations is satisfied. Algorithm 1 gives the details of the proposed method.
Initially, the rank vectors of papers P, authors A (without considering time), and venues V are set to \({I}_{P}/|{V}_{P}|\), \({I}_{A}/|{V}_{A}|\), and \({I}_{V}/|{V}_{V}|\). \({I}_{P}\), \({I}_{A}\) and \({I}_{V}\) are unit vectors, and \(\left|{V}_{P}\right|\), \(|{V}_{A}|\) and \(|{V}_{V}|\) are the numbers of papers, authors and venues.
The main part of the algorithm is included in a while loop. Inside the loop (lines 1–13), the scores for all the nodes involved are updated. All papers' new scores are calculated in lines 3–4. Four factors are considered: authors (line 3), venues (line 3), citations (line 4), and the recent citation bonus (line 4). All authors' new scores are calculated in lines 5–7. Five factors are considered: published papers (line 5), coauthors of the published papers (line 5), the venues in which the papers are published (line 5), author citations (line 6), and the recent citation bonus (line 6). Finally, we sum up all the yearly scores by using a time function to obtain the total score for each author (line 7). All venues' new scores are calculated in lines 8–9. Three factors are considered: published papers (line 8), authors (line 8), and venue citations (line 9). Although multiple types of entities are involved in the algorithm, it still converges quite quickly. For example, with the dataset used in this study and \(\upvarepsilon \) set to 1e-6, the algorithm stops after 13 iterations.
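Since the transition matrices themselves are not reproduced here, the following Python sketch only illustrates the overall structure of the iterative mutual-reinforcement loop of Algorithm 1: initialization of the three score vectors, alternating updates, and the convergence test on ε. The update functions are placeholders standing for the paper, author and venue update rules described above, and the exact update order and normalization are assumptions of this sketch.

```python
import numpy as np

def wccmr_iterate(update_p, update_a, update_v, n_p, n_a, n_v,
                  eps=1e-6, max_iter=100):
    """Schematic mutual-reinforcement loop over papers, authors and venues.

    update_p, update_a, update_v take the current score vectors (P, A, V)
    and return the new vector for their entity type, combining citation,
    authorship, venue and recent-citation-bonus terms.
    """
    P = np.full(n_p, 1.0 / n_p)   # initial paper scores
    A = np.full(n_a, 1.0 / n_a)   # initial author scores
    V = np.full(n_v, 1.0 / n_v)   # initial venue scores
    for _ in range(max_iter):
        P_new = update_p(P, A, V)
        A_new = update_a(P_new, A, V)
        V_new = update_v(P_new, A_new, V)
        diff = (np.abs(P_new - P).sum() + np.abs(A_new - A).sum()
                + np.abs(V_new - V).sum())
        P, A, V = P_new, A_new, V_new
        if diff < eps:            # convergence test on epsilon
            break
    return P, A, V
```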
Experimental setting
In this experiment, we use the ACL Anthology Network (AAN) dataset (Radev et al., 2013), which is constructed from papers published in natural language processing venues (including journals, conferences and workshops from 1965 to 2011). We choose AAN because it provides both citations and full text for almost all the papers involved.
In order to make it suitable for the experiment, the dataset is pre-processed as follows. First, those papers that neither cite any other papers nor are cited by any other papers are removed, because they have no impact on the investigation in this paper. Those papers that have no full text are also removed, because we need full text for citation strength analysis and estimation. Second, any joint conferences are considered to have dual identity. For example, COLING-ACL'2006 is a joint conference of COLING and ACL. Third, in addition to regular papers, many conferences publish short papers, student papers, demos, posters, tutorials, etc. Usually, the quality of non-regular papers is not as good as that of regular papers. Therefore, we let all regular papers remain in the main conference while putting all non-regular papers into its companion, a separate venue. Finally, for those papers with more than 5 authors, we retained the first five authors and ignored the rest. After the above-mentioned pre-processing, 13,591 papers remain with an average of 5.26 references for each of them, 10,140 authors and 248 venues without considering time, or 437 venues if taking each venue per year as a separate entity. Table 2 shows the general statistics of the dataset.
Table 2 Statistical information of experimental data sets
Calculating citation strength and topical similarity
Machine learning methods are good options for estimating citation strength because they have been very successful in many such applications. The stacking technique can combine classifiers via a meta-classifier to achieve better performance. In this study, we classify citation strength by using the stacking technique with the features used in Chakraborty and Narayanam (2016). Random Forest (RF), Support Vector Classifier (SVC) and GraLap (Chakraborty & Narayanam, 2016) are selected as base classifiers because they are very good and represent up-to-date technology. Figure 1 shows the major steps involved in a meta-classifier. First, a training data set is required to train the base models and the meta-model. Then the trained model can be used to classify instances in the test set.
The major processes in stacking classification
First, we randomly select a group of 96 papers from the whole data set. From them we get 2735 valid references whose full texts are available in the data set. By using the Parscit package (Councill et al., 2008) plus a few hand-coded rules, we extracted 4993 citation sentences and the sections in which the sentences are located. Such information, along with the original papers, is provided to a group of 15 annotators, all of whom are graduate research students in computer science in our school. Among all 2735 references, 215 are annotated at level 1, 2046 are at level 2, 287 are at level 3, 142 are at level 4, and 45 are at level 5.
Then, as in Chakraborty and Narayanam (2016) and Wan and Liu (2014), we extracted citation features, such as the number of occurrences, the sections in which a citation appears, and the similarity between the citing and cited papers, for all 2735 citations. They are divided into five groups, each of which includes one fifth of the citations at each individual level. This was done by running a random selection process on the citations at each level separately.
A five-fold cross-validation is carried out to validate the performance of the stacking approach. We find that classification of the instances at level 5 is the least accurate, while level 2 instances reach the highest classification accuracy of more than 0.8. Note that level 2 has the largest number of instances while level 5 has the smallest. One possible explanation is that for level 2 we have enough instances for the base classifiers and the stacking method to learn a good model, whereas for level 5 there are not enough. Table 3 compares its performance with two other approaches, SVR (Support Vector Regression) (Wan & Liu, 2014) and GraLap (Chakraborty & Narayanam, 2016). Note that SVR is slightly different from SVC: both use support vector machines but treat the same problem as either a classification problem or a regression problem. We can see that the stacking classifier is slightly better than the two other methods when any of the three measures is used for evaluation.
Table 3 Performance comparison of three citation strength estimation methods
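A possible realization of the stacking pipeline described above with scikit-learn is sketched below; the choice of meta-classifier and the hyperparameters are assumptions of this example, and GraLap, being a custom graph-based semi-supervised model, would need to be wrapped as a third scikit-learn-compatible base estimator.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Two of the three base classifiers; GraLap would be added here once
# wrapped as a scikit-learn estimator.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
]
stacking = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=5)

# X: matrix of citation features, y: annotated strength levels (1-5)
# scores = cross_val_score(stacking, X, y, cv=5)   # five-fold validation
```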
For topical similarity, we extract the title and abstract of each paper and calculate the topic similarity based on word2vec after performing stemming. In the experiment, the dimension of the word vector is set to 200, and the context window is set to 5.
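A sketch of the topical similarity computation with gensim is given below. The paper does not state how word vectors are aggregated into a document vector, so averaging is used here as one common, assumed choice; function names are illustrative.

```python
import numpy as np
from gensim.models import Word2Vec

def train_embedding(tokenized_docs):
    """Train word vectors on the stemmed title+abstract corpus
    (vector dimension 200, context window 5, as stated above)."""
    return Word2Vec(sentences=tokenized_docs, vector_size=200, window=5,
                    min_count=1, workers=4)

def topical_similarity(model, tokens_a, tokens_b):
    """Cosine similarity between the averaged word vectors of two papers."""
    def doc_vec(tokens):
        vecs = [model.wv[t] for t in tokens if t in model.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)
    va, vb = doc_vec(tokens_a), doc_vec(tokens_b)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0
```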
Ranking benchmarks
For papers, rather than calculating citation count of each paper, we consider that experts' opinion is a more authoritative measure to decide the impact of papers in the scientific community. Therefore, in this article, we use the gold standard papers provided in Jiang et al. (2016). A collection of gold standard papers, named GoldP, is assembled as recommended papers from the reading lists of graduate-level courses in natural language processing or computational linguistics and the reference lists of two best-selling natural language processing textbooks. Only those papers taken from the AAN dataset with at least two recommendations are selected. In total, 93 papers are selected in GoldP. The statistical information of those selected papers is shown in Table 4.
Table 4 Statistical information of the gold standard papers
In the same vein as gold standard papers, we use WRT (weighted recommendation times) to measure the influence of authors. The influence score of author \({a}_{i}\) is defined as
$$\mathrm{WRT}({a}_{i})=\sum_{{p}_{j}\in {A}_{P}\left({a}_{i}\right) \& {p}_{j}\in GoldP}{W}_{AP}\left({a}_{i},{p}_{j}\right)\times \mathrm{RT}({p}_{j})$$
where \(RT({p}_{j})\) is the number of recommendations that paper \({p}_{j}\) receives and \({W}_{AP}\left({a}_{i},{p}_{j}\right)\) is related to the ordering position of the author in question; see Eq. (12) in the "Paper-author relation" section for the definition of \({W}_{AP}\left({a}_{i},{p}_{j}\right)\). The final score that \({a}_{i}\) obtains, \(WRT({a}_{i})\), is the sum of the scores of all the papers in GoldP written by \({a}_{i}\). We consider this measure to be better than the citation count for authors because the inflationary effect can be mitigated. An author is regarded as an influential author (GoldA) if he/she wrote one or more gold standard papers. In this way, we obtain 149 authors in total.
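The WRT score defined above can be computed as in the following sketch; the data structures are illustrative assumptions, and the geometric counting factor is inlined.

```python
def weighted_recommendation_times(author, papers_of, rec_times, author_position):
    """WRT score of an author based on his/her gold-standard papers.

    papers_of[author]            : papers written by the author
    rec_times[p]                 : recommendation count RT(p) of a gold paper
    author_position[(author, p)] : (position R, number of authors T) in paper p
    (All data structures are assumed for illustration only.)
    """
    score = 0.0
    for p in papers_of[author]:
        if p in rec_times:                         # p is a gold standard paper
            r, t = author_position[(author, p)]
            w_ap = 2 ** (t - r) / (2 ** t - 1)     # geometric counting
            score += w_ap * rec_times[p]
    return score
```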
For any venue, if it has two or more recommended papers in GoldP, then we set it as a recommended venue, GoldV. It includes 55 venues in total. The statistical information of GoldV is shown in Table 5.
Table 5 Statistical information of the gold standard venue collection
The influence score of venue \({v}_{i}\) is defined as
$$InS({v}_{i})=\sum_{{p}_{i}\in {V}_{P}\left({v}_{i}\right) \& {p}_{i}\in GoldP} RT({p}_{i})$$
It summarizes the recommendations received by all the papers in the venue.
Evaluation metrics
We use two evaluation metrics: precision at a given ranking level and a modified version of NDCG (Jiang et al., 2016). They are used to evaluate the effectiveness of a ranked list of entities E = {\({e}_{1}\), \({e}_{2}\),…,\({e}_{n}\)}.
Precision \(P@K\) is defined as
$$P@K=\frac{\sum_{i=1}^{K}{inf(e}_{i})}{K}$$
where \({inf(e}_{i})\) takes binary values of 0 or 1. If \({e}_{i}\) is an influential entity, then \({inf(e}_{i})\) is 1, otherwise, \({inf(e}_{i})\) is 0.
For a number of entities, the best ranking must exist, and it ranks all the entities in descending order of a given metric value. A group of papers can be ranked according to the number of recommendations received; WRT scores and the number of recommended papers can be used for author and venue ranking, respectively. For a ranked list of entities \(E = \left\{ {e_{1} ,e_{2} , \ldots ,e_{K} } \right\}\), assume that its corresponding best ranking list is \(E^{\prime} = \left\{ {e^{\prime}_{1} ,e^{\prime}_{2} , \ldots ,e^{\prime}_{K} } \right\}\); we let \(credit()\) denote the metric value that entity \(e_{k}\) obtains, and \(best\_credit()\) the metric value that entity \(e^{\prime}_{k}\) obtains. \(\mathrm{NDCG}@\mathrm{K}\) is defined as
$$\mathrm{NDCG}@\mathrm{K}=\frac{\sum_{k=1}^{K}\frac{credit({e}_{k})}{{log}_{2}(k+1)}}{\sum_{k=1}^{K}\frac{best\_credit({e^{\prime}}_{k})}{{log}_{2}(k+1)}}$$
In Eq. (22), the top-ranked entities are given a weight of 1, then the weights decrease with rank by a factor \({1/log}_{2}(k+1)\).
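The two metrics defined above can be computed as in the following sketch; function names and the data layout are assumptions of this example.

```python
import math

def precision_at_k(ranked, influential, k):
    """P@K: fraction of the top-k entities that are influential."""
    return sum(1 for e in ranked[:k] if e in influential) / k

def ndcg_at_k(ranked, credit, k):
    """Modified NDCG@K: discounted credit of the produced ranking divided
    by that of the ideal ranking (entities sorted by descending credit)."""
    def dcg(order):
        return sum(credit.get(e, 0.0) / math.log2(i + 2)
                   for i, e in enumerate(order[:k]))
    ideal = sorted(credit, key=credit.get, reverse=True)
    best = dcg(ideal)
    return dcg(ranked) / best if best else 0.0
```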
Methods for comparison
The ranking algorithms used for comparison are as follows:
Citation Count (CC). It is widely used to assess the influence of papers because it is single-valued and easy to understand (Zhu et al., 2015).
SVR-based Weighted Citation Count (WCC-SVR). It provides each citation with a citation strength value calculated by SVR (Wan & Liu, 2014).
GraLap-based Weighted Citation Count (WCC-GraLap). It provides each citation with a citation strength value calculated by GraLap (Chakraborty & Narayanam, 2016).
MutualRank (MR). A state-of-the-art method that ranks papers, authors and venues simultaneously in heterogeneous networks (Jiang et al., 2016).
Tri-Rank (Tri). Similar to MutualRank, Tri-Rank also ranks papers, authors and venues simultaneously in heterogeneous networks (Liu et al., 2014).
PageRank with SVR_based network (PR-SVR). The PageRank algorithm runs over a modified citation network in which each citation has a specific weight calculated by SVR (Wan & Liu, 2014).
PageRank with GraLap-based network (PR-GraLap). The PageRank algorithm runs over a modified citation network in which each citation has a specific weight calculated by GraLap (Chakraborty & Narayanam, 2016).
WCCMR. The method proposed in this paper (see Algorithm 1).
Parameter setting
There are five parameters in the proposed ranking model: \({\alpha }_{1}\), \({\alpha }_{2}\), \({\alpha }_{3}\), \({\alpha }_{4}\) and \(\upvarepsilon \). We set \(\upvarepsilon \) to 1e-6. For \({\alpha }_{1}\), \({\alpha }_{2}\), \({\alpha }_{3}\) and \({\alpha }_{4}\), we first set an intuitively reasonable value for each parameter: \({\alpha }_{1}\hspace{0.17em}\)= 0.50, \({\alpha }_{2}={\alpha }_{3}\hspace{0.17em}\)= 0.33, and \({\alpha }_{4}\hspace{0.17em}\)= 0.50. Then, we fix three of them and let the remaining one vary to see its effect; Fig. 2 shows the results (P@100 is used for performance evaluation).
Effect of different parameter values on ranking performance. a Effect of α1 on papers. b Effect of α2 and α3 on authors. c Effect of α4 on venues
From Fig. 2a, one can see that paper evaluation performance is quite stable when \({\alpha }_{1}\) is in the range of 0.00 and 1.00. The best performance is achieved when \({\alpha }_{1}\hspace{0.17em}\)= 0.90. Similarly, from Fig. 2b, c we can see that \({\alpha }_{2}\hspace{0.17em}\)= 0.35, \({\alpha }_{3}\hspace{0.17em}\)= 0.35, and \({\alpha }_{4}\hspace{0.17em}\)= 0.5 are also good for these parameters.
Note that the parameters \({\alpha }_{1}\) and (1 − \({\alpha }_{1}\)) are used to adjust the relative weights of authors and venues. A larger \({\alpha }_{1}\) value does not necessarily mean that authors are more important than venues, because these two components are not directly comparable; \({\alpha }_{1}\) partially serves as a normalization measure. The same conclusion holds for the other parameters \({\alpha }_{2}\), \({\alpha }_{3}\) and \({\alpha }_{4}\).
Ranking performance
In this section, we present the evaluation results of the proposed algorithm, along with those of a group of state-of-the-art baseline methods.
Ranking effectiveness for papers
We first study the paper ranking effectiveness of the proposed algorithm. Figure 3 shows the effectiveness curves of the different algorithms for ranking papers measured by P@K and NDCG@K. We can see that the proposed method, WCCMR, consistently outperforms all the other methods when either P@K or NDCG@K is used. Tri and CC are close. They are not as good as WCCMR but better than the others. It is also noticeable that the curves of PR-SVR and PR-GraLap are always very close. This is not surprising because both run PageRank. The difference between them is the way of setting citation weights in the heterogeneous network.
Effectiveness of different algorithms for ranking papers. a Measured by P@K. b Measured by NDCG@K
To investigate the properties of all the methods involved for top-ranked papers, we list the top 20 papers returned by WCCMR and its competitors in Table 6. We can see that 18 of the top 20 WCCMR papers are influential papers, while the numbers for Citation Count, MutualRank, Tri-Rank, PR-SVR, and PR-GraLap are 16, 15, 16, 7, and 8, respectively. All the methods fail to identify the most influential paper, but all of them successfully identify the second most influential paper in the top 20.
Table 6 Top 20 papers ranked by WCCMR and other baseline methods (compared with the Gold standard ranking in descending order of the times of recommendation received, each number indicates the ranking position of that paper in the Gold standard ranking, an interval is given if two or more papers share the same ranking position inside the Gold standard ranking)
Ranking effectiveness for authors
We use both GoldA and WRT for influence evaluation of authors (see Eq. 19 in the "Ranking benchmarks" section for its definition). Figure 4 shows the effectiveness curves of the different algorithms for ranking authors measured by precision and NDCG. From Fig. 4, we can see that the proposed method, WCCMR, is better than all the other methods when NDCG is used; MutualRank is the worst, while the other four are very close. However, when P@K is used, the performances of all the methods are closer. When K is 50 or more, WCCMR is a little better than the others. MutualRank is the worst in most of the cases, although the difference between it and the others is small.
Effectiveness of different algorithms for ranking authors. a Measured by P@K. b Measured by NDCG@K
To have a close look at the top 20 authors ranked by all the methods involved, we list them in Table 7 together with their corresponding ranking positions in GoldA by WRT score. MutualRank identifies 17 influential authors, while all other methods reach 19. The results show that all the algorithms are very good at identifying influential authors. Therefore, P@20 is very good for all the methods involved.
Table 7 Top 20 authors ranked by WCCMR and other baseline methods (compared with the Gold standard ranking in descending order of WRT scores; each number indicates the ranking position of that author in the Gold standard ranking, and an interval is given if two or more authors share the same ranking position inside the Gold standard ranking)
Ranking effectiveness for venues
Figure 5 shows the effectiveness curves of different algorithms for ranking venues measured by precision and NDCG. From Fig. 5, we can see that WCCMR performs better than the other algorithms when either precision or NDCG is used. However, the difference between WCCMR and the four others besides MutualRank is small. MutualRank is the worst and is much worse than all the others.
Effectiveness of different algorithms for ranking venues. a Measured by Precision. b Measured by NDCG
For the top 20 venues returned by WCCMR and all other algorithms, we also list their corresponding ranking positions by the number of recommended papers in Table 8. It shows that the five algorithms besides MutualRank are equally good, each identifying the same number of influential venues (16), while MutualRank is not as good as the others and secures only 12.
Table 8 Top 20 venues ranked by WCCMR and other baseline methods (compared with the Gold standard ranking in descending order of recommended paper numbers, each number in the table indicates the corresponding ranking position of that venue in the Gold standard ranking, an interval is given if two or more venues share the same ranking position inside the Gold standard ranking)
Average and median ranking positions of all influential entities
It is generally accepted that a good ranking algorithm should be effective in identifying all the influential entities in a comprehensive style (Wang et al., 2019). For the ranked list from a given ranking method, we find out the ranking positions of all those influential entities (e.g., all the papers in GoldP) and calculate the average rank and median rank of them. In this way, we are able to evaluate the general performance of the algorithm by using a single metric. Figure 6 shows the results.
Performance of different ranking methods by identifying the positions of all influential entities. a Measured by average ranking positions. b Measured by median ranking positions
From Fig. 6, we can see that the average rank and the median rank for WCCMR are the smallest in all the cases. In five out of six cases, the difference between it and the others is significant. However, the difference is very small in the case of average rank for venues. On the other hand, considering the performance variance of all the algorithms involved, paper ranking is the highest, venue ranking is the lowest, while author ranking is in the middle. Especially when average rank is considered for author ranking, all the algorithms are very close.
Evaluation of several variants of WCCMR
WCCMR incorporates a few factors such as variable citation weights and bonus for recent citations. It is interesting to find how these two factors impact ranking performance. To achieve this goal, we define some variants that implement none or one of the features of WCCMR.
WCCMR-R. It is a variant of WCCMR that sets equal weight to all the citations.
WCCMR-S. It is a variant of WCCMR that does not implement bonus for recent citations.
WCCMR-N. It is a variant of WCCMR. It sets all citation weights equally and does not implement bonus for recent citations.
Now let us look at how these variants perform compared with the original algorithm; see Fig. 7 for the results. It is not surprising that WCCMR performs better than all three of its variants, while the variant with neither of the two components performs worst for all three types of academic entities. This demonstrates that both components are useful for entity ranking, whether used separately or in combination. However, the two components are not equally useful: in most cases WCCMR-S performs better than WCCMR-R, which means that variable citation weights have a larger impact than the bonus for recent citations.
Comparison of three feature-based variants of WCCMR with the original algorithm. a Paper ranking. b Author ranking. c Venue ranking
Some types of abnormality may occur in citation networks; they can be caused, for example, by citation manipulation. Such a phenomenon certainly affects the ranking of scientific entities, especially for PageRank-like algorithms. Therefore, robustness is a desirable property for ranking algorithms, allowing them to withstand inappropriate citations. Of course, if there is no way to distinguish important citations from trivial ones, then we cannot do much to mitigate this problem. We therefore assume that citation manipulation is more likely to involve citations with low to moderate citation strength and/or topical similarity, and recently published papers.
To investigate the robustness of WCCMR when working with an abnormal network, we need a proper data set. AAN is not well suited to this purpose without modification. Instead of switching to another data set, we make AAN suitable by adding some fake citations to it. Let us look at the settings for paper, author, and venue ranking separately.
For paper ranking, we select a target paper \(p_t\) from the data set and generate up to 50 fake papers, each of which cites \(p_t\) and a number of other papers chosen at random.
For author ranking, we select a target author \(a_t\) from the data set and generate up to 50 fake papers, each of which cites a randomly chosen paper written by \(a_t\) and a number of other papers not written by \(a_t\).
For venue ranking, we select a target venue \(v_t\) from the data set and generate up to 50 fake papers, each of which cites a randomly selected paper published in \(v_t\) and a number of other papers not published in \(v_t\).
For a target entity, we observe how its ranking position changes as more fake citations are added to the network. Obviously, if an entity already has a relatively large number of citations, then adding a few more may not affect its ranking position much, while entities with very few citations are more sensitive to such changes. In order to investigate the robustness of our algorithm, we therefore choose entities with very few citations (0 citations for a paper or an author, and up to 10 citations for a venue). For all added fake citations, both citation strength and topical similarity are set to small to moderate values. We use the rank difference \({\Delta R}_{h}={R}_{0}-{R}_{h}\) to measure the robustness of an algorithm, where \({R}_{0}\) is the initial rank of the entity and \({R}_{h}\) is its rank after \(h\) fake citations are added. Naturally, a smaller rank difference indicates better robustness (Zhou et al., 2016).
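The following sketch outlines how the rank difference is tracked as fake citations are injected; `rank_of` and `add_fake_citing_paper` are hypothetical placeholders for the ranking routine and the citation-injection step, rather than functions from the actual implementation.

```python
def rank_differences(network, target, rank_of, add_fake_citing_paper, max_fake=50):
    """Track the rank difference Delta R_h = R_0 - R_h as fake citing papers are added.

    rank_of(network, target) returns the 1-based rank of the target entity under the
    algorithm being tested; add_fake_citing_paper(network, target) injects one fake
    paper that cites the target (with small-to-moderate citation strength and topical
    similarity) plus a few randomly chosen other papers.
    """
    r0 = rank_of(network, target)
    differences = []
    for h in range(1, max_fake + 1):
        add_fake_citing_paper(network, target)
        differences.append(r0 - rank_of(network, target))  # smaller = more robust
    return differences
```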
Figure 8 shows the results for a group of algorithms, averaged over 50 trials. The curves of WCC-SVR and WCC-GraLap always overlap, because the two are implemented in very similar ways with only small differences. Not surprisingly, Citation Count is the most sensitive to the added citations and WCCMR is the least sensitive, while WCC-SVR, WCC-GraLap, and Tri-Rank are in the middle.
Robustness of different ranking algorithms against citation manipulation. a Paper ranking. b Author ranking. c Venue ranking
In this paper, we have presented a method for ranking the impact of papers, authors, and venues in a heterogeneous academic network. Its main characteristic is that, rather than assigning equal weights to all citations, we assign a variable weight to each citation based on its strength and on the topical similarity between the citing and the cited paper. Both of these values are determined through content analysis of the papers involved; in particular, an ensemble learning technique is used to estimate the citation strength between two papers. Experiments carried out on the publicly available AAN data set show that the proposed ranking algorithm, WCCMR, outperforms the baseline algorithms, including MutualRank, Tri-Rank, and GraLap.
Based on the AAN data set with some fake citations added, we demonstrate that WCCMR is more robust than the others. Although the data set used for this purpose is not completely real, the assumptions behind the artificial citations are reasonable.
As future work, we plan to go further in a few directions. The first is to study appropriate approaches to deal with the missing citation information in the data set used; for example, for many papers in the AAN data set, the citation information is incomplete. External resources such as Google Scholar and Microsoft Academic could be used to enhance it. How to incorporate such extra information into the academic network and the ranking framework efficiently and effectively is a challenging issue. The second is how to evaluate academic entities across disciplines. For example, Biology and Mathematics are very different: one can expect that, on average, a Biology research paper attracts more citations than a Mathematics research paper. Even within one discipline, different research areas may have different properties; in computer science, for instance, a machine learning paper can be expected to attract more citations on average than an information retrieval paper. How to balance such disparities among disciplines or areas is also a challenging research problem. The third is to further study machine learning methods for content-based citation strength estimation; two major subtasks are identifying useful features and selecting effective machine learning models.
https://publicationethics.org/files/COPE_DD_A4_Citation_Manipulation_Jul19_SCREEN_AW2.pdf. Accessed 30 July 2020.
See http://clair.eecs.umich.edu/aan/index.php.
Note that the dataset we use does not include papers published in 2011, just as in Jiang et al. (2016).
Abbas, A. M. (2011). Weighted indices for evaluating the quality of research with multiple authorship. Scientometrics, 88(1), 107–131.
Bai, X., Xia, F., & Lee, I. (2016). Identifying anomalous citations for objective evaluation of scholarly article impact. PLoS ONE, 11(9), e0162364.
Bai, X., Zhang, F., Ni, J., Shi, L., & Lee, I. (2020). Measure the impact of institution and paper via institution-citation network. IEEE Access, 8, 17548–17555.
Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College and Research Libraries News, 68(5), 314–316.
Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1–7), 107–117.
Chakraborty, T., & Narayanam, R. (2016). All fingers are not equal: Intensity of references in scientific articles. In Conference on empirical methods in natural language processing (pp. 1348–1358).
Chawla, D. S. (2019). Elsevier investigates hundreds of peer reviewers for manipulating citations. Nature, 573, 174.
Councill, I. G., Giles, C. L., & Kan, M.-Y. (2008). ParsCit: An open-source CRF reference string parsing package. In Proceedings of the Language Resources and Evaluation Conference (pp. 661–667).
Du, J., & Tang, X. (2013). Potential of harmonic counts for encouraging ethical co-authorship practices. Scientometrics, 96(1), 277–295.
Dunaiski, M., Visser, W., & Geldenhuys, J. (2016). Evaluating paper and author ranking algorithms using impact and contribution awards. Journal of Informetrics, 10(2), 392–407.
Egghe, L. (2006). Theory and practise of the g-index. Scientometrics, 69(1), 131–152.
Egghe, L., Rousseau, R., & Hooydonk, G. V. (2000). Methods for accrediting publications to authors or countries: Consequences for evaluation studies. Journal of the American Society for Information Science, 51(2), 145–157.
Fong, E. A., & Wilhite, A. W. (2017). Authorship and citation manipulation in academic research. PLoS One. https://doi.org/10.1371/journal.pone.0187394
Foo, J. (2011). Impact of excessive journal self-citations: A case study on the Folia Phoniatrica et Logopaedica journal. Science and Engineering Ethics, 17(1), 65–73.
Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178(4060), 471–479.
Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA, 295(1), 90–93.
González-Pereira, B., Guerrero-Bote, V. P., & Moya-Anegón, F. (2010). A new approach to the metric of journals scientific prestige: The SJR indicator. Journal of Informetrics, 4(3), 379–391.
González-Pereira, B., Guerrero-Bote, V. P., & Moya-Anegón, F. (2012). A further step forward in measuring journals scientific prestige: The SJR2 indicator. Journal of Informetrics, 6(4), 674–688.
Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572.
Jiang, X. R., Sun, X. P., Yang, Z., Zhuge, H., & Yao, J. M. (2016). Exploiting heterogeneous scientific literature networks to combat ranking bias: Evidence from the computational linguistics area. Journal of the Association for Information Science and Technology, 67(7), 1679–1702.
Johnson, R., Watkinson, A. & Mabe, M. (2018). The STM report: an overview of scientific and scholarly publishing. https://www.stm-assoc.org/2018_10_04_STM_Report_2018.pdf. Accessed June 2019.
Kanellos, I., Vergoulis, T., Sacharidis, D., Dalamagas, T., & Vassiliou, Y. (2021). Impact-based ranking of scientific publications: A survey and experimental evaluation. IEEE Transactions on Knowledge and Data Engineering, 33(4), 1567–1584.
Liu, Z. R., Huang, H. Y., Wei, X. C., & Mao, X. L. (2014). Tri-Rank: An authority ranking framework in heterogeneous academic networks by mutual reinforce. In 26th IEEE international conference on tools with artificial intelligence (ICTAI 2014) (pp. 493–500).
Meng, Q., & Kennedy, P. J. (2013). Discovering influential authors in heterogeneous academic networks by a co-ranking method. In Proceedings of the 22nd ACM international conference on information & knowledge management (pp. 1029–1036).
Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277.
Noorden, R. V., & Chawla, D. S. (2019). Hundreds of extreme self-citing scientists revealed in new database. Nature, 572, 578–579.
Pajić, D. (2015). On the stability of citation-based journal rankings. Journal of Informetrics, 9(4), 990–1006.
Radev, D. R., Muthukrishnan, P., Qazvinian, V., & Abu-Jbara, A. (2013). The ACL anthology network corpus. Language Resources and Evaluation, 47(4), 919–944.
Simkin, M. V., & Roychowdhury, V. P. (2003). Read before you cite! Complex System, 14(2003), 269–274.
Stallings, J., Vance, E., Yang, J., Vannier, M., Liang, J., Pang, L., Dai, L., Ye, I., & Wang, G. (2013). Determining scientific impact using a collaboration index. Proceedings of the National Academy of Sciences of the United States of America, 110(24), 9680–9685.
Teufel, S., Siddharthan, A., & Tidhar, D. (2006). Automatic classification of citation function. In Conference on empirical methods in natural language processing (pp. 103–110).
Walker, D., Xie, H., Yan, K., & Maslov, S. (2006). Ranking scientific publications using a simple model of network traffic. Journal of Statistical Mechanics-Theory and Experiment, 6(6), P06010–P06015.
Waltman, L., Eck, N. J. V., Leeuwen, T. N. V., & Visser, M. S. (2013). Some modifications to the snip journal impact indicator. Journal of Informetrics, 7(2), 272–285.
Wan, X. J., & Liu, F. (2014). Are all literature citations equally important? Automatic citation strength estimation and its applications. Journal of the Association for Information Science and Technology, 65(9), 1929–1938.
Wang, S. Z., Xie, S. H., Zhang, X. M., Li, Z. J., Yu, P. S., & He, Y. Y. (2016). Coranking the future influence of multi-objects in bibliographic network through mutual reinforcement. ACM Transactions on Intelligent Systems and Technology, 7(4), 1–28.
Wang, Y., Zeng, A., Fan, Y., & Di, Z. (2019). Ranking scientific publications considering the aging characteristics of citations. Scientometrics, 120(3), 155–166.
Xu, H., Martin, E., & Mahidadia, A. (2014). Contents and time sensitive document ranking of scientific literature. Journal of Informetrics, 8(3), 546–561.
Yang, C., Liu, T., Chen, X., Bian, Y., & Liu, Y. (2020). HNRWalker: Recommending academic collaborators with dynamic transition probabilities in heterogeneous networks. Scientometrics, 123(1), 429–449.
Yan, E., & Ding, Y. (2010). Weighted citation: An indicator of an article's prestige. Journal of the American Society for Information Science and Technology, 61(8), 1635–1643.
Yan, E., Ding, Y., & Sugimoto, C. R. (2011). P-Rank: An indicator measuring prestige in heterogeneous scholarly networks. Journal of the American Society for Information Science and Technology, 62(3), 467–477.
Zhang, F., & Wu, S. (2018). Ranking scientific papers and venues in heterogeneous academic networks by mutual reinforcement. In ACM/IEEE joint conference on digital libraries (JCDL) (pp. 127–130).
Zhang, F., & Wu, S. (2020). Predicting future influence of papers, researchers, and venues in a dynamic academic network. Journal of Informetrics, 14(2), 101035.
Zhang, J., Xu, B., Liu, J., Tobla, A., Al-Makhadmeh, Z., & Xia, F. (2018). PePSI: Personalized prediction of scholars' impact in heterogeneous temporal academic networks. IEEE Access, 6, 55661–55672.
Zhang, L., Fan, Y., Zhang, W., Zhang, S., Yu, D., & Zhang, S. (2019a). Measuring scientific prestige of papers with time-aware mutual reinforcement ranking model. Journal of Intelligent and Fuzzy Systems, 36, 1505–1519.
Zhang, Y., Wang, M., Gottwalt, F., Saberi, M., & Chang, E. (2019b). Ranking scientific articles based on bibliometric networks with a weighting scheme. Journal of Informetrics, 13(2), 616–634.
Zhao, F., Zhang, Y., Lu, J., & Shai, O. (2019). Measuring academic influence using heterogeneous author-citation networks. Scientometrics, 118(3), 1119–1140.
Zhou, J., Zeng, A., Fan, Y., & Di, Z. (2016). Ranking scientific publications with similarity-preferential mechanism. Scientometrics, 106(2), 805–816.
Zhou, X., Liang, W., Wang, K., Huang, R., & Jin, Q. (2021). Academic influence aware and multidimensional network analysis for research collaboration navigation based on scholarly big data. IEEE Transactions on Emerging Topics in Computing, 9(1), 246–257.
Zhu, X. D., Turney, P., Lemire, D., & Vellino, A. (2015). Measuring academic influence: Not all citations are equal. Journal of the American Society for Information Science and Technology, 66(2), 408–427.
School of Computer Science, Jiangsu University, Zhenjiang, China
Fang Zhang & Shengli Wu
School of Computing, Ulster University, Belfast, UK
Shengli Wu
Fang Zhang
Correspondence to Shengli Wu.
Zhang, F., Wu, S. Measuring academic entities' impact by content-based citation analysis in a heterogeneous academic network. Scientometrics 126, 7197–7222 (2021). https://doi.org/10.1007/s11192-021-04063-1
Issue Date: August 2021
Scientific impact evaluation
Heterogeneous network
Content-based citation analysis
Citation strength
Topical similarity
Search results for: M. Abbrescia
Items from 1 to 20 out of 683 results
Measurements of triple-differential cross sections for inclusive isolated-photon+jet events in $$\mathrm{p}\mathrm{p}$$ collisions at $$\sqrt{s} = 8\,\text{TeV}$$
A. M. Sirunyan, A. Tumasyan, W. Adam, F. Ambrogi, more
The European Physical Journal C > 2019 > 79 > 11 > 1-24
Measurements are presented of the triple-differential cross section for inclusive isolated-photon+jet events in $$\mathrm{p}\mathrm{p}$$ collisions at $$\sqrt{s} = 8$$ TeV as a function of photon transverse momentum ($$p_{\mathrm{T}}^{\upgamma}$$), photon pseudorapidity ($$\eta^{\upgamma}$$), and jet pseudorapidity ($$\eta^{\text{jet}}$$). The data correspond...
Measurement of the average very forward energy as a function of the track multiplicity at central pseudorapidities in proton-proton collisions at $$\sqrt{s}=13\,\text{TeV}$$
The average total energy as well as its hadronic and electromagnetic components are measured with the CMS detector at pseudorapidities $$-6.6<\eta<-5.2$$ in proton-proton collisions at a centre-of-mass energy $$\sqrt{s}=13\,\text{TeV}$$. The results are presented as a function of the charged particle multiplicity in the region $$|\eta|<2$$. This measurement...
Search for new physics in top quark production in dilepton final states in proton-proton collisions at $$\sqrt{s} = 13\,\text{TeV}$$
A search for new physics in top quark production is performed in proton-proton collisions at $$13\,\text{TeV}$$. The data set corresponds to an integrated luminosity of $$35.9\,\text{fb}^{-1}$$ collected in 2016 with the CMS detector. Events with two opposite-sign isolated leptons (electrons or muons), and $$\mathrm{b}$$ quark jets in the final state are selected. The search...
Search for supersymmetry in proton-proton collisions at 13 TeV in final states with jets and missing transverse momentum
The CMS collaboration, A. M. Sirunyan, A. Tumasyan, W. Adam, more
Journal of High Energy Physics > 2019 > 2019 > 10 > 1-61
Abstract Results are reported from a search for supersymmetric particles in the final state with multiple jets and large missing transverse momentum. The search uses a sample of proton-proton collisions at $$\sqrt{s}$$ = 13 TeV collected with the CMS detector in 2016–2018, corresponding to an integrated luminosity of 137 fb−1, representing essentially the full LHC Run 2 data sample. The...
Search for dark photons in decays of Higgs bosons produced in association with Z bosons in proton-proton collisions at $$\sqrt{s}$$ = 13 TeV
Abstract A search is presented for a Higgs boson that is produced in association with a Z boson and that decays to an undetected particle together with an isolated photon. The search is performed by the CMS Collaboration at the Large Hadron Collider using a data set corresponding to an integrated luminosity of 137 fb−1 recorded at a center-of-mass energy of 13 TeV. No significant excess of events...
Search for resonances decaying to a pair of Higgs bosons in the $$\mathrm{b}\overline{\mathrm{b}}\mathrm{q}\overline{\mathrm{q}}'\ell\nu$$ final state in proton-proton collisions at $$\sqrt{s}$$ = 13 TeV
Abstract A search for new massive particles decaying into a pair of Higgs bosons in proton-proton collisions at a center-of-mass energy of 13 TeV is presented. Data were collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. The search is performed for resonances with a mass between 0.8 and 3.5 TeV using events in which one Higgs boson decays into a bottom...
Azimuthal separation in nearly back-to-back jet topologies in inclusive 2- and 3-jet events in $$\text{p}\text{p}$$ collisions at $$\sqrt{s}=13\,\text{TeV}$$
The European Physical Journal C > 2019 > 79 > 9 > 1-24
A measurement for inclusive 2- and 3-jet events of the azimuthal correlation between the two jets with the largest transverse momenta, $$\varDelta\phi_{12}$$, is presented. The measurement considers events where the two leading jets are nearly collinear ("back-to-back") in the transverse plane and is performed for several ranges of the leading jet transverse momentum. Proton-proton collision...
Search for supersymmetry with a compressed mass spectrum in the vector boson fusion topology with 1-lepton and 0-lepton final states in proton-proton collisions at $$\sqrt{s}$$ = 13 TeV
Journal of High Energy Physics > 2019 > 2019 > 8 > 1-45
Abstract A search for supersymmetric particles produced in the vector boson fusion topology in proton-proton collisions is presented. The search targets final states with one or zero leptons, large missing transverse momentum, and two jets with a large separation in rapidity. The data sample corresponds to an integrated luminosity of 35.9 fb−1 of proton-proton collisions at $$\sqrt{s}$$...
Measurement of exclusive $$\uprho(770)^{0}$$ photoproduction in ultraperipheral pPb collisions at $$\sqrt{s_{_{\mathrm{NN}}}} = 5.02\,\text{TeV}$$
Exclusive $$\uprho(770)^{0}$$ photoproduction is measured for the first time in ultraperipheral pPb collisions at $$\sqrt{s_{_{\mathrm{NN}}}} = 5.02\,\text{TeV}$$ with the CMS detector. The cross section $$\sigma(\upgamma\mathrm{p}\rightarrow\uprho(770)^{0}\mathrm{p})$$...
Search for charged Higgs bosons in the H± → τ±ντ decay channel in proton-proton collisions at $$\sqrt{s}=13$$ TeV
Abstract A search is presented for charged Higgs bosons in the H± → τ±ντ decay mode in the hadronic final state and in final states with an electron or a muon. The search is based on proton-proton collision data recorded by the CMS experiment in 2016 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb−1. The results agree with the background expectation from...
HE-LHC: The High-Energy Large Hadron Collider
A. Abada, M. Abbrescia, S. S. AbdusSalam, I. Abdyukhanov, more
The European Physical Journal Special Topics > 2019 > 228 > 5 > 1109-1382
In response to the 2013 Update of the European Strategy for Particle Physics (EPPSU), the Future Circular Collider (FCC) study was launched as a world-wide international collaboration hosted by CERN. The FCC study covered an energy-frontier hadron collider (FCC-hh), a highest-luminosity high-energy lepton collider (FCC-ee), the corresponding 100 km tunnel infrastructure, as well as the physics opportunities...
FCC-hh: The Hadron Collider
The European Physical Journal Special Topics > 2019 > 228 > 4 > 755-1107
Search for a heavy pseudoscalar boson decaying to a Z and a Higgs boson at $$\sqrt{s}=13\,\text{TeV}$$
A search is presented for a heavy pseudoscalar boson $$\text{A}$$ decaying to a Z boson and a Higgs boson with mass of 125$$\,\text{GeV}$$. In the final state considered, the Higgs boson decays to a bottom quark and antiquark, and the Z boson decays either into a pair of electrons, muons, or neutrinos. The analysis is performed using a data sample corresponding to an integrated luminosity...
Search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at 13 TeV
Abstract Results are reported of a search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb−1 collected at a center-of-mass energy of 13 TeV using the CMS detector. The results are interpreted in the context of models of gauge-mediated supersymmetry breaking. Production...
Search for the associated production of the Higgs boson and a vector boson in proton-proton collisions at $$\sqrt{s}$$ = 13 TeV via Higgs boson decays to τ leptons
Abstract A search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to a pair of τ leptons is performed. A data sample of proton-proton collisions collected at $$\sqrt{s}$$ = 13 TeV by the CMS experiment at the CERN LHC is used, corresponding to an integrated luminosity of 35.9 fb−1. The signal strength is measured relative to the expectation...
FCC Physics Opportunities
The European Physical Journal C > 2019 > 79 > 6 > 1-161
We review the physics opportunities of the Future Circular Collider, covering its e+e-, pp, ep and heavy ion programmes. We describe the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions, the top quark and flavour, as well as phenomena beyond the Standard Model. We highlight the synergy and complementarity of the different colliders,...
FCC-ee: The Lepton Collider
The European Physical Journal Special Topics > 2019 > 228 > 2 > 261-623
In response to the 2013 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) study was launched, as an international collaboration hosted by CERN. This study covers a highest-luminosity high-energy lepton collider (FCC-ee) and an energy-frontier hadron collider (FCC-hh), which could, successively, be installed in the same 100 km tunnel. The scientific capabilities...
Search for a low-mass τ−τ+ resonance in association with a bottom quark in proton-proton collisions at s $$ \sqrt{s} $$ = 13 TeV
Abstract A general search is presented for a low-mass τ−τ+ resonance produced in association with a bottom quark. The search is based on proton-proton collision data at a center-of-mass energy of 13 TeV collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. The data are consistent with the standard model expectation. Upper limits at 95% confidence level...
Search for supersymmetry in events with a photon, jets, $$\mathrm{b}$$-jets, and missing transverse momentum in proton–proton collisions at 13$$\,\text{TeV}$$
A search for supersymmetry is presented based on events with at least one photon, jets, and large missing transverse momentum produced in proton–proton collisions at a center-of-mass energy of 13$$\,\text{TeV}$$. The data correspond to an integrated luminosity of 35.9$$\,\text{fb}^{-1}$$ and were recorded at the LHC with the CMS detector in 2016. The analysis characterizes signal-like...
Combined measurements of Higgs boson couplings in proton–proton collisions at $$\sqrt{s}=13\,\text{TeV}$$
Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented. The analysis uses the LHC proton–proton collision data set recorded with the CMS detector in 2016 at $$\sqrt{s}=13\,\text{TeV}$$, corresponding to an integrated luminosity of 35.9$$\,\text{fb}^{-1}$$. The combination is based...
The Mathematical Pirate and the Formula Triangle
Written by Colin+ in algebra, cav being wrong, ninja maths, pirate maths, ranting.
The Mathematical Pirate took one look at the piece of paper attached to the dock.
"They're BANNING formula triangles?! By order of @srcav?!" He swished his sword around. "Let me figure out where he lives, I'll show him."
"He lives… inshore, cap'n" said the $n$th mate. "It's too dangerous."
"Can we send Ninja-chops?" He was pretty sure the Mathematical Ninja wasn't within earshot, but lowered his voice all the same.
"I'll see what we can do," said the $n$th mate.
The scene went wibble-wibble-wibble as the Mathematical Pirate thought back to his time at Pirate Maths school. The Pirate Teacher was sneakily asking physics questions: "What's the potential difference? You, yes, you?"
"$V$?" said the young Mathematical Pirate. "… Aye! Arrr!"
"Well done," said the Pirate Teacher. "$V = IR$. So what's $I$? You, there!" He pointed at the $n$th mate.
"$\frac{V}{R}$," he said without even a beat.
"Blistering barnacles!" said the Mathematical Pirate. "How did you rearrange that so quickly?"
The $n$th mate drew out a formula triangle, like this:
"How does that help?"
"If two things are on the same line, you can multiply them together to get the other."
"So, $V=IR$. I knew that."
"But if they're on different lines, you can divide the top one by a lower one to get the third."
"So $I = \frac{V}{R}$, like you said."
"Exactly. And it works any time you have $a = bc$ (whatever is alone goes on top) or $d = \frac{e}{f}$ (whatever's on top stays on top) - so you can use it for SOH CAH TOA, density, speed, constant acceleration… all of the things you need in your day-to-day pirating adventures."
"Arr," said the Mathematical Pirate. "I think I'll steal that."
The Mathematical Pirate is a passionate - you might say violent - advocate of the formula triangle, but it's not as well-received - generally - among teachers, several of whom would ban them given the chance. There are reasons for this antipathy, some better than others - which I thought I'd put to the Mathematical Pirate.
I don't like them
"Arr. That's just stupid," said the Mathematical Pirate. "I don't like the difference of two squares (just learn to factorise properly!) but I realise it works for some people. They're welcome to use it. Doesn't bother me. I'm a pirate, not a fascist."
"Right", I said, "you could make the same argument about learning $7 \times 8 = 56$ when you 'should' be working it out as $(10-2) (10-3)$ every time."
They can go wrong
The Mathematical Pirate tapped his cutlass, absent-mindedly, on the table. "And algebra never does, I suppose?"
"They say algebra, done properly, is infallible."
A derisive snort. "A formula triangle, used appropriately, works just fine. It's based on algebra."
They're gimmicky
"@shahlock says he doesn't teach them because they're gimmicky, like singing the quadratic equation."
The Mathematical Pirate looked slightly hurt. "That's my favourite shanty! Hoist up the minus b's sails! See how the square root's set!"
"Erm… yeah, sorry. Is that one of those Nix the Tricks arguments? That gimmicks are harmful and we should only work from a spirit of true understanding? Because that's bollocks in the real world."
"Language!"
"Oh!" said the Mathematical Pirate. "C'est débile en réalite."
"That's better. I agree, in an ideal world, every student would generate the rules of algebra starting from a few self-evident axioms and apply them rigorously to appropriate problems, but that's not how it works, however much we'd like it to. 'Gimmicks' like the triangle, the Table of Joy1 , subtracting simultaneous equations, difference of two squares, and dozens of others, make some problems accessible to students who'll otherwise get discouraged and think "I've no idea what's going on here, I must be stupid and rubbish at maths."
They avoid understanding and cementing algebraic skills
"Wait, what?" said the Mathematical Pirate. "The mechanical shuffling of abstract symbols around an equals sign somehow shows more 'understanding' than setting up a triangle? Give me a break."
"Well, I'm sure some teachers would argue that students ought to know what the equals sign means, and that whatever you do to one side you need to do to the other."
"They ought to, aye," said the Mathematical Pirate. "And I can see that it's just for a special case - but it's a special case that crops up ALL THE SODDING TIME."
"Thinking about it, there's one big distinction that got missed in the whole twitter spat the other day: the difference between numeracy and maths. The triangle is a great tool for numeracy: it gives you the right answer quickly. It's not quite so great if you're trying to train a mathematician (or someone who'll use mathematical skills for science or whatever) as they really ought to be able to rearrange."
"But does that mean they avoid understanding?"
"I use formula triangles to rearrange things all the time, especially something like $a = \frac bc$ - if I want $c$, I can visualise the triangle much more quickly than multiplying by $c$ and dividing by $a$. And I like to think that I understand algebra pretty well, thank you very much."
They avoid interconnectivity
"So, one method that works for problems across maths, physics, chemistry, biology, economics, navigation and 3,000 other things is somehow anti-connectivity? I don't get it."
"I think the argument is that using a triangle isn't connected to the rest of algebra."
"Except that it's based on algebra and a nifty way of dealing with a special case, like all the ones we outlined earlier? Except that it gives a second check on an algebraic method? Except that it gives students a hook to see that algebra works the same way as another method they're comfortable with?"
"I know. The whole thing smacks of people wanting to teach the One True Path To Mathematics - but there's no such thing. By necessity, maths teaching involves a hodge-podge of methods that start out as mystical recipes before eventually, with any luck, taking part of a strong mathematical structure."
They eliminate the need for thinking
"Arrr!" said the Mathematical Pirate. "That they do. I've got better things to do with my brain than think about the mechanics of simple algebraic equations. Like give sanity checks to my answers. The first thing I learned in my navigation PhD was that the correct order of thinking is a) dredge it up from memory, b) look it up; c) work it out. That frees your brain up to do important things like interpret the question, interpret the answer, swash buckles, and so on."
"Isn't the whole point of maths to make things simpler? Saying 'thou shalt rearrange' feels like one of those arbitrary school rules that, even though there might have been good reasons for putting them in place, just serve to annoy people and end up being ignored."
"Rules?" said the Mathematical Pirate, with a quizzical look on his face.
"In any event, taking a perfectly valid method a student knows how to use and labelling it 'wrong' is a sure-fire way to kill their confidence and mathematical creativity."
They're not proper maths
The cutlass handle wobbled, as its blade was several centimetres deep in the fo'c's'le. "Arr," said the Mathematical Pirate, "I'll never get that out now. Proper maths? PROPER MATHS? Exploiting a recurring pattern isn't proper maths?" He breathed out.
"I know, I know. It's a bizarre argument: one of the main ideas of 'proper' maths is to take something tricky, spot a pattern you know how to deal with, and use it to make the tricky thing simpler."
"Hello, the @srcav residence?" said the voice on the phone.
"The Mathematical Pirate tells me you want to ban formula triangles."
"… well, not nece…"
"SHUT IT! They're not my preferred way of doing things, either. But they're really useful to some students."
"I didn't me…"
"THE MATHEMATICAL NINJA HAS SPOKEN."
Click.
* Edited 2014-10-15 to put in missing image, and again to stop the author confusing Ninjas with Pirates.
See Basic Maths For Dummies, available at all good bookstores and some rotten ones [↩]
7 comments on "The Mathematical Pirate and the Formula Triangle"
SherriBurroughs
At first your pirate formula confused me.But slowly I grasped it.I must say well written.
Joshua Zucker
I have a lot of hostility toward tricks — which to me means methods that work only in certain special cases, even special cases that come up really often. Learning "do the same thing to both sides of an equation" is a very general idea, useful in so many cases. Learning this is useful in fewer cases.
Similarly when people learn FOIL as the way of multiplying two binomials, I worry that this means they're that much less likely to know what to do when they're faced with three binomials or two trinomials or whatever. If they learn how to distribute, they can apply it everywhere.
In other words, these methods are *another* thing to learn, not an alternative thing to learn. I won't go so far as to tell kids "you can't use that" but I will say "you need to learn ideas that apply more broadly and I hope that when you do you'll decide that you don't need this particular tool"
Reading the Pirate's opinions here, though, made me feel like it's more of a tradeoff between "learn fewer things and work harder to apply them" and "learn more things and find it easier to apply each one". I lean much more toward the first because I think a big part of the value of learning math is in learning how to take as many steps back toward abstraction as we can handle. I think the Pirate's argument here is that in each individual case (individual person, and individual idea or technique) it may not be worth handling, and for some people learning more things may have the net effect of freeing up some mindspace instead of costing it.
One huge advantage of formula triangles is that they give a visual connection to the rearrangement of formulas as opposed to a symbolic or linguistic one only. In the case I think about the most, of "FOIL" for binomials, the distributing method gives an area model visual to go with it as well as being more general, so it's a much clearer win.
I broadly agree – I think the pirate is mainly saying that a one-size-fits-all approach does some students a disservice. Like you say, there's a balance between Things You Can Work Out and Things You Know – for example, the Mathematical Ninja has internalised a lot of Taylor series to a couple of terms, which most students would work out laboriously. For another, it usually pays to memorise the multiplication tables rather than distribute over $(1+1+1)(1+1+1+1+1+1+1)$.
Part of the problem (at least in the UK) is that the mathematics curriculum has to serve for both numeracy (fluency in numbers for everyday use) and maths proper (fluency in more advanced maths, as well as the kind of abstraction you're talking about). Students with different goals may respond better to different methods.
All of which is to say: I think tricks can (and do) have a place, even if they're not necessarily the ideal way to teach.
The Dirichlet Space and Related Function Spaces
Nicola Arcozzi : University of Bologna, Bologna, Italy
Richard Rochberg : Washington University in Saint Louis, Saint Louis, MO
Eric T. Sawyer : McMaster University, Hamilton, ON, Canada
Brett D. Wick : Washington University in Saint Louis, Saint Louis, MO
MSC: Primary 30; 31; 32; 39; 46; 47;
The study of the classical Dirichlet space is one of the central topics at the intersection of the theory of holomorphic functions and functional analysis. It was introduced about 100 years ago and continues to be an area of active current research. The theory is related to such important themes as multipliers, reproducing kernels, and Besov spaces, among others. The authors present the theory of the Dirichlet space and related spaces starting with classical results and including some quite recent achievements like Dirichlet-type spaces of functions in several complex variables and the corona problem.
The first part of this book is an introduction to the function theory and operator theory of the classical Dirichlet space, a space of holomorphic functions on the unit disk defined by a smoothness criterion. The Dirichlet space is also a Hilbert space with a reproducing kernel, and is the model for the dyadic Dirichlet space, a sequence space defined on the dyadic tree. These various viewpoints are used to study a range of topics including the Pick property, multipliers, Carleson measures, boundary values, zero sets, interpolating sequences, the local Dirichlet integral, shift invariant subspaces, and Hankel forms. Recurring themes include analogies, sometimes weak and sometimes strong, with the classical Hardy space; and the analogy with the dyadic Dirichlet space.
The final chapters of the book focus on Besov spaces of holomorphic functions on the complex unit ball, a class of Banach spaces generalizing the Dirichlet space. Additional techniques are developed to work with the nonisotropic complex geometry, including a useful invariant definition of local oscillation and a sophisticated variation on the dyadic Dirichlet space. Descriptions are obtained of multipliers, Carleson measures, interpolating sequences, and multiplier interpolating sequences; \(\overline\partial\) estimates are obtained to prove corona theorems.
Graduate students and researchers interested in classical functional analysis.
The Dirichlet space; Foundations
Geometry and analysis on the disk
Hilbert spaces of holomorphic functions
Intermezzo: Hardy spaces
Carleson measures
Analysis on trees
The Pick property
The Dirichlet space; Selected topics
Onto interpolation
Boundary values
Alternative norms and applications
Shift operators and invariant subspaces
Invariant subspaces of the Dirichlet shift
Bilinear forms on $\mathcal {D}$
Besov spaces on the ball
Besov spaces on balls and trees
Interpolating sequences
Spaces on trees
Corona theorems for Besov spaces in $\mathbb {C}^n$
Some functional analysis
Schur's test
AutoCAD 20.1 Crack With Keygen [Win/Mac] (Latest)
Equipped with the right applications, a computer can be of great help in virtually any domain of activity. When it comes to designing and precision, no other tool is as accurate as a computer. Moreover, specialized applications such as AutoCAD give you the possibility to design nearly anything ranging from art, to complex mechanical parts or even buildings.
Suitable for business environments and experienced users
After a decent amount of time spent installing the application on your system, you are ready to fire it up. Thanks to the office suite like interface, all of its features are cleverly organized in categories. At a first look, it looks easy enough to use, but the abundance of features it comes equipped with leaves room for second thoughts.
Create 2D and 3D objects
You can make use of basic geometrical shapes to define your objects, as well as draw custom ones. Needless to say that you can take advantage of a multitude of tools that aim to enhance precision. A grid can be enabled so that you can easily snap elements, as well as adding anchor points to fully customize shapes.
With a little imagination and patience on your behalf, nearly anything can be achieved. Available tools allow you to create 3D objects from scratch and have them fully enhanced with high-quality textures. A powerful navigation pane is put at your disposal so that you can carefully position the camera to get a clearer view of the area of interest.
Various export possibilities
Similar to a modern web browser, each project is displayed in its own tab. This comes in handy, especially for comparison views. Moreover, layouts and layers also play important roles, as it makes objects handling a little easier.
Since the application is not the easiest to carry around, requiring a slightly sophisticated machine to run properly, there are several export options at your disposal so that the projects themselves can be moved around.
Aside from the application specific format, you can save as an image file of multiple types, PDF, FBX and a few more. Additionally, it can be sent via email, directly printed out on a sheet of paper, or even sent to a 3D printing service, if available.
To end with
All in all, AutoCAD remains one of the top applications used by professionals to achieve great precision with projects of nearly any type. It encourages usage with incredible offers for student licenses so you get acquainted with its abundance of features early on. A lot can be said about what it can and can't do, but the true surprise lies in discovering it step-by-step.
AutoCAD 20.1 [Mac/Win] [Updated]
AutoCAD began development as a drafting program for microcomputers running Microsoft's Windows operating system. Development began in 1983 by a small group of former Trane Corporation software developers working at the United Kingdom's Mullard Microcomputers (later to become Autodesk), originally led by Stuart David Lane (co-founder of The Lane-Foundation for Applied Technology, creator of the popular Mapping package in LISP on the Mac OS), and Jim Barrett (later of Autodesk).
AutoCAD shipped in December, 1982, and was initially priced at about £1,600. The program was originally bundled with an Epson graphics tablet.
AutoCAD R12 (1985)
Early versions of AutoCAD were built on the cross-platform language Assembly. Assembly is similar to COBOL, and was originally used as an object-oriented language that compiled to machine code.
In 1985, Autodesk began to make AutoCAD compatible with the then-new Windows-only operating system, MS-DOS. AutoCAD R12 was released in 1985. The new program's name was taken from the release of the Apple Macintosh computer in 1984, and to make the software more compatible with Windows, the program was redesigned from the ground up as a Windows application.
AutoCAD, originally an assembly-based program, was not rewritten in C++ until 1987. C++ is a general-purpose programming language in the object-oriented programming paradigm.
AutoCAD was originally written in the assembly language developed by Autodesk, Inc. and Stuart Lane (co-founder of the Lane Foundation for Applied Technology) at The Lane-Foundation for Applied Technology.
Lane had left the Autodesk company in 1984, and was replaced as AutoCAD's lead developer by Jim Barrett. Over the next few years, AutoCAD grew to become the leading CAD program for architects and engineers.
In 1992, Autodesk began work on a new version of AutoCAD, to take advantage of the increasing computing power available on PC platforms. The new program, AutoCAD 1992, was released as an upgrade to the existing software.
Over the following year, it was realized that the Windows 32-bit operating system was operating at maximum capacity, and that the only way to increase the performance of the program was to move to C
AutoCAD 20.1 With License Code Free
Information regarding computer-aided design in Autodesk is available at: A-Z Index of Autodesk products
To help designers with non-technical issues, Autodesk has put together a set of videos called the Design Cafe. The topics include using tools, using templates, storing documents, storyboards, and presentations.
Autodesk also publishes a number of architectural building information modeling (BIM) packages.
AutoCAD Architecture 2D (from Autodesk Architecture Studio)
Autodesk Architectural Desktop (from Autodesk Architectural Desktop)
The Architectural desktop was originally only available as a Windows-based application. As of the 2010 release, it is also available as a tablet application in the Apple Store for iPad.
When users create and edit drawings in Architectural Desktop, data are stored in DWG (AutoCAD Drawing), which was originally a type of CAD (Computer-Aided Design) format. The format's advantage over CAD is that it's easier to edit and work with, and can be edited and imported into many other applications. Architectural Desktop is also available on the iPad.
Autodesk Architectural Desktop (for desktop and mobile devices)
Autodesk Architectural Design (for mobile devices)
Autodesk Architectural Modeling and Design (for mobile devices)
Autodesk Architectural Studio (for mobile devices)
Autodesk Architectural Visualization (for mobile devices)
Autodesk BIM 360 (for desktop and mobile devices)
Autodesk BIM 360 Construction (for desktop and mobile devices)
Autodesk Revit (for desktop and mobile devices)
Autodesk Revit Architecture (for desktop and mobile devices)
Autodesk Revit MEP (for desktop and mobile devices)
Autodesk Revit MEP 360 (for desktop and mobile devices)
Autodesk Revit Steel (for desktop and mobile devices)
Autodesk Revit Structure (for desktop and mobile devices)
Autodesk Revit Structure 360 (for desktop and mobile devices)
Autodesk Revit Transportation (for desktop and mobile devices)
Autodesk Revit Utility (for desktop and mobile devices)
Autodesk Revit Virtual Design
5b5f913d15
AutoCAD 20.1 Crack
Load the exe file Autodesk Keygen 2020 [64-bit] into Autodesk
Open Autodesk Autocad and then select the following features:
Vector:
Extras,
Shapes,
3D Printing:
Mesh 2D:
Mesh Wireframe:
Mesh Boundary,
Solid:
Trajectory,
Graphs,
Trajectory:
Multiple Tubes,
Block Tracking,
Graphic Workflow:
Multiple Parts,
Stereolithography:
Keygen,
Draw on the fly in AutoCAD. With AutoCAD 2023, you can draw, edit, and annotate using 2D and 3D tools while connected to a device.
Edit connected views with a single click. Want to make changes to two parts at the same time? It's now a simple click away.
Intuitive tools that save you time and effort. With the new Project workspace, you can create, track, manage, and edit projects all from the same, intuitive workspace.
With new library management, you can easily collaborate on files and receive feedback on your designs.
The Sync and Share tool enables you to share drawings, annotations, and annotations with other users.
A full set of digital drawing tools for more options. AutoCAD 2023 is the most sophisticated, most flexible and feature-rich version of AutoCAD ever released, and we are committed to delivering the best workflows and tools for you.
With a workflow that incorporates the whole design process, you can work your way from conceptual design and construction documents to fabricated components, right through to part reviews and approvals.
Edit your drawing automatically from SketchUp. A new extension to the Autodesk Subscription services, DesignSync, brings SketchUp into Autodesk, making it easier than ever to share SketchUp models with AutoCAD and Inventor and to create a cohesive look and feel between 2D and 3D models.
With Dynamic Input, you can create dynamic 3D models and use your existing model as a guide in AutoCAD. It is the easiest way to add geometry to your drawing and reuse previous geometry for new designs.
Design animation:
Create interactive 3D models with the DesignSphere. Move your models anywhere on the screen to view them from different angles and explore their design options.
Create and manage parametric entities in DesignSpace, including multi-resolution parametric surfaces, groups, and polyline nets.
Create designs with a streamlined workflow. You can now start creating your design quickly and easily with the help of parametric entities. Create and edit lines, surfaces, arcs, and solids directly from the DesignSpace workspace and then animate them with the included DesignAnimation tool.
Create a new way of working with models. Include parametric entities in your modeling process and work with them as you create and edit your models. Export
System Requirements For AutoCAD:
When comparing supplements, consider products with a score above 90% to get the greatest benefit from smart pills to improve memory. Additionally, we consider the reviews that users send to us when scoring supplements, so you can determine how well products work for others and use this information to make an informed decision. Every month, our editor puts her name on that month's best smart pill, in terms of results and value offered to users.
Do you want to try Nootropics, but confused with the plethora of information available online? If that's the case, then you might get further confused about what nootropic supplement you should buy that specifically caters to your needs. Here is a list of the top 10 Nootropics or 10 best brain supplements available in the market, and their corresponding uses:
There is evidence to suggest that modafinil, methylphenidate, and amphetamine enhance cognitive processes such as learning and working memory...at least on certain laboratory tasks. One study found that modafinil improved cognitive task performance in sleep-deprived doctors. Even in non-sleep deprived healthy volunteers, modafinil improved planning and accuracy on certain cognitive tasks. Similarly, methylphenidate and amphetamine also enhanced performance of healthy subjects in certain cognitive tasks.
(In particular, I don't think it's because there's a sudden new surge of drugs. FDA drug approval has been decreasing over the past few decades, so this is unlikely a priori. More specifically, many of the major or hot drugs go back a long time. Bacopa goes back millennia, melatonin I don't even know, piracetam was the '60s, modafinil was '70s or '80s, ALCAR was '80s AFAIK, Noopept & coluracetam were '90s, and so on.)
In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow and the effect size smaller than in the pilot experiment; but by buying my own supply & using powder I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg) and so the effect size larger than in the pilot experiment.
Took pill 12:11 PM. I am not certain. While I do get some things accomplished (a fair amount of work on the Silk Road article and its submission to places), I also have some difficulty reading through a fiction book (Sum) and I seem kind of twitchy and constantly shifting windows. I am weakly inclined to think this is Adderall (say, 60%). It's not my normal feeling. Next morning - it was Adderall.
AMP and MPH increase catecholamine activity in different ways. MPH primarily inhibits the reuptake of dopamine by pre-synaptic neurons, thus leaving more dopamine in the synapse and available for interacting with the receptors of the postsynaptic neuron. AMP also affects reuptake, as well as increasing the rate at which neurotransmitter is released from presynaptic neurons (Wilens, 2006). These effects are manifest in the attention systems of the brain, as already mentioned, and in a variety of other systems that depend on catecholaminergic transmission as well, giving rise to other physical and psychological effects. Physical effects include activation of the sympathetic nervous system (i.e., a fight-or-flight response), producing increased heart rate and blood pressure. Psychological effects are mediated by activation of the nucleus accumbens, ventral striatum, and other parts of the brain's reward system, producing feelings of pleasure and the potential for dependence.
Cognition is a suite of mental phenomena that includes memory, attention and executive functions, and any drug would have to enhance executive functions to be considered truly 'smart'. Executive functions occupy the higher levels of thought: reasoning, planning, directing attention to information that is relevant (and away from stimuli that aren't), and thinking about what to do rather than acting on impulse or instinct. You activate executive functions when you tell yourself to count to 10 instead of saying something you may regret. They are what we use to make our actions moral and what we think of when we think about what makes us human.
as scientific papers become much more accessible online due to Open Access, digitization by publishers, and cheap hosting for pirates, the available knowledge about nootropics increases drastically. This reduces the perceived risk by users, and enables them to educate themselves and make much more sophisticated estimates of risk and side-effects and benefits. (Take my modafinil page: in 1997, how could an average person get their hands on any of the papers available up to that point? Or get detailed info like the FDA's prescribing guide? Even assuming they had a computer & Internet?)
Smart pills have revolutionized the diagnosis of gastrointestinal disorders and could replace conventional diagnostic techniques such as endoscopy. Traditionally, an endoscopy probe is inserted into a patient's esophagus, and subsequently the upper and lower gastrointestinal tract, for diagnostic purposes. There is a risk of perforation or tearing of the esophageal lining, and the patient faces discomfort during and after the procedure. A smart pill or wireless capsule endoscopy (WCE), however, can easily be swallowed and maneuvered to capture images, and requires minimal patient preparation, such as sedation. The built-in sensors allow the measurement of all fluids and gases in the gut, giving the physician a multidimensional picture of the human body.
Factor analysis. The strategy: read in the data, drop unnecessary data, impute missing variables (data is too heterogeneous and collected starting at varying intervals to be clean), estimate how many factors would fit best, factor analyze, pick the ones which look like they match best my ideas of what productive is, extract per-day estimates, and finally regress LLLT usage on the selected factors to look for increases.
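A minimal Python sketch of that pipeline, for concreteness (the file name, the column names such as lllt, the median imputation, and the choice to keep the first two factors are illustrative assumptions, not the author's actual analysis):

```python
# Schematic version of the described strategy: read, drop, impute, pick a factor
# count, factor-analyze, extract per-day scores, regress LLLT usage on them.
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis
from sklearn.impute import SimpleImputer

df = pd.read_csv("data.csv", parse_dates=["date"])               # hypothetical file
predictors = df.drop(columns=["date", "lllt"]).select_dtypes("number")
X = SimpleImputer(strategy="median").fit_transform(predictors)   # impute missing values

# Crude choice of factor count: maximize in-sample model log-likelihood.
best_k = max(range(1, 9), key=lambda k: FactorAnalysis(n_components=k).fit(X).score(X))
scores = FactorAnalysis(n_components=best_k).fit_transform(X)    # per-day factor estimates

# Keep the factors that look most like "productivity" (here simply the first two),
# then regress LLLT usage on those per-day factor scores.
fit = sm.OLS(df["lllt"].values, sm.add_constant(scores[:, :2])).fit()
print(fit.summary())
```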
My first time was relatively short: 10 minutes around the F3/F4 points, with another 5 minutes to the forehead. Awkward holding it up against one's head, and I see why people talk of LED helmets, it's boring waiting. No initial impressions except maybe feeling a bit mentally cloudy, but that goes away within 20 minutes of finishing when I took a nap outside in the sunlight. Lostfalco says "Expectations: You will be tired after the first time for 2 to 24 hours. It's perfectly normal.", but I'm not sure - my dog woke me up very early and disturbed my sleep, so maybe that's why I felt suddenly tired. On the second day, I escalated to 30 minutes on the forehead, and tried an hour on my finger joints. No particular observations except less tiredness than before and perhaps less joint ache. Third day: skipped forehead stimulation, exclusively knee & ankle. Fourth day: forehead at various spots for 30 minutes; tiredness. 5/6/7/8th day (11/12/13/4): skipped. Ninth: forehead, 20 minutes. No noticeable effects.
1 PM; overall this was a pretty productive day, but I can't say it was very productive. I would almost say even odds, but for some reason I feel a little more inclined towards modafinil. Say 55%. That night's sleep was vile: the Zeo says it took me 40 minutes to fall asleep, I only slept 7:37 total, and I woke up 7 times. I'm comfortable taking this as evidence of modafinil (half-life 10 hours, 1 PM to midnight is only 1 full halving), bumping my prediction to 75%. I check, and sure enough - modafinil.
2 break days later, I took the quarter-pill at 11:22 PM. I had discovered I had for years physically possessed a very long interview not available online, and transcribing that seemed like a good way to use up a few hours. I did some reading, some Mnemosyne, and started it around midnight, finishing around 2:30 AM. There seemed a mental dip around 30 minutes after the armodafinil, but then things really picked up and I made very good progress transcribing the final draft of 9000 words in that period. (In comparison, The Conscience of the Otaking parts 2 & 4 were much easier to read than the tiny font of the RahXephon booklet, took perhaps 3 hours, and totaled only 6500 words. The nicotine is probably also to thank.) By 3:40 AM, my writing seems to be clumsier and my mind fogged. Began DNB at 3:50: 61/53/44. Went to bed at 4:05, fell asleep in 16 minutes, slept for 3:56. Waking up was easier and I felt better, so the extra hour seemed to help.
Much better than I had expected. One of the best superhero movies so far, better than Thor or Watchmen (and especially better than the Iron Man movies). I especially appreciated how it didn't launch right into the usual hackneyed creation of the hero plot-line but made Captain America cool his heels performing & selling war bonds for 10 or 20 minutes. The ending left me a little nonplussed, although I sort of knew it was envisioned as a franchise and I would have to admit that showing Captain America wondering at Times Square is much better an ending than something as cliche as a close-up of his suddenly-opened eyes and then a fade out. (The movie continued the lamentable trend in superhero movies of having a strong female love interest… who only gets the hots for the hero after they get muscles or powers. It was particularly bad in CA because she knows him and his heart of gold beforehand! What is the point of a feminist character who is immediately forced to do that?)↩
Taken together, these considerations suggest that the cognitive effects of stimulants for any individual in any task will vary based on dosage and will not easily be predicted on the basis of data from other individuals or other tasks. Optimizing the cognitive effects of a stimulant would therefore require, in effect, a search through a high-dimensional space whose dimensions are dose; individual characteristics such as genetic, personality, and ability levels; and task characteristics. The mixed results in the current literature may be due to the lack of systematic optimization.
…researchers have added a new layer to the smart pill conversation. Adderall, they've found, makes you think you're doing better than you actually are….Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job….But the results of the new University of Pennsylvania study, funded by the U.S. Navy and not yet published but presented at the annual Society for Neuroscience conference last month, are consistent with much of the existing research. As a group, no overall statistically-significant improvement or impairment was seen as a result of taking Adderall. The research team tested 47 subjects, all in their 20s, all without a diagnosis of ADHD, on a variety of cognitive functions, from working memory-how much information they could keep in mind and manipulate-to raw intelligence, to memories for specific events and faces….The last question they asked their subjects was: How and how much did the pill influence your performance on today's tests? Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job on the tasks they'd been given, even though their performance did not show an improvement over that of those who had taken the placebo. According to Irena Ilieva…it's the first time since the 1960s that a study on the effects of amphetamine, a close cousin of Adderall, has asked how subjects perceive the effect of the drug on their performance.
This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, that suggests >38 hours of work, and 38 × $7.25 = $275.50. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so (365.25/120) × $9 × 5 = $137.
By which I mean that simple potassium is probably the most positively mind altering supplement I've ever tried…About 15 minutes after consumption, it manifests as a kind of pressure in the head or temples or eyes, a clearing up of brain fog, increased focus, and the kind of energy that is not jittery but the kind that makes you feel like exercising would be the reasonable and prudent thing to do. I have done no tests, but feel smarter from this in a way that seems much stronger than piracetam or any of the conventional weak nootropics. It is not just me – I have been introducing this around my inner social circle and I'm at 7/10 people felt immediately noticeable effects. The 3 that didn't notice much were vegetarians and less likely to have been deficient. Now that I'm not deficient, it is of course not noticeable as mind altering, but still serves to be energizing, particularly for sustained mental energy as the night goes on…Potassium chloride initially, but since bought some potassium gluconate pills… research indicates you don't want to consume large amounts of chloride (just moderate amounts).
Adrafinil is Modafinil's predecessor, because the scientists tested it as a potential narcolepsy drug. It was first produced in 1974 and immediately showed potential as a wakefulness-promoting compound. Further research showed that Adrafinil is metabolized into its component parts in the liver, that is into inactive modafinil acid. Ultimately, Modafinil has been proclaimed the primary active compound in Adrafinil.
He recommends a 10mg dose, but sublingually. He mentions COLURACETAM's taste is more akin to that of PRAMIRACETAM than OXIRACETAM, in that it tastes absolutely vile (not a surprise), so it is impossible to double-blind a sublingual administration - even if I knew of an inactive equally-vile-tasting substitute, I'm not sure I would subject myself to it. To compensate for ingesting the coluracetam, it would make sense to double the dose to 20mg (turning the 2g into <100 doses). Whether the effects persist over multiple days is not clear; I'll assume it does not until someone says it does, since this makes things much easier.
I have a needle phobia, so injections are right out; but from the images I have found, it looks like testosterone enanthate gels using DMSO resemble other gels like Vaseline. This suggests an easy experimental procedure: spoon an appropriate dose of testosterone gel into one opaque jar, spoon some Vaseline gel into another, and pick one randomly to apply while not looking. If one gel evaporates but the other doesn't, or they have some other difference in behavior, the procedure can be expanded to something like and then half an hour later, take a shower to remove all visible traces of the gel. Testosterone itself has a fairly short half-life of 2-4 hours, but the gel or effects might linger. (Injections apparently operate on a time-scale of weeks; I'm not clear on whether this is because the oil takes that long to be absorbed by surrounding materials or something else.) Experimental design will depend on the specifics of the obtained substance. As a controlled substance (Schedule III in the US), supplies will be hard to obtain; I may have to resort to the Silk Road.
If you haven't seen the movie, imagine unfathomable brain power in capsule form. Picture a drug from another universe. It can transform an unsuccessful couch potato into a millionaire financial mogul. Ingesting the powerful smart pill boosts intelligence and turns you into a prodigy. Its results are instant. Sounds great, right? If only it were real.
Most of the most solid fish oil results seem to meliorate the effects of age; in my 20s, I'm not sure they are worth the cost. But I would probably resume fish oil in my 30s or 40s when aging really becomes a concern. So the experiment at most will result in discontinuing for a decade. At $X a year, that's a net present value of sum $ map (\n -> 70 / (1 + 0.05)^n) [1..10] = $540.5.
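The inline expression above is Haskell; the same net-present-value arithmetic in Python, using the $70/year cost and 5% discount rate that the expression itself assumes:

```python
# NPV of ten years of forgone fish-oil spending at ~$70/year, discounted at 5%/year.
annual_cost = 70.0
discount_rate = 0.05
npv = sum(annual_cost / (1 + discount_rate) ** year for year in range(1, 11))
print(round(npv, 1))  # 540.5, matching the figure quoted in the text
```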
I bought 500g of piracetam (Examine.com; FDA adverse events) from Smart Powders (piracetam is one of the cheapest nootropics and SP was one of the cheapest suppliers; the others were much more expensive as of October 2010), and I've tried it out for several days (started on 7 September 2009, and used it steadily up to mid-December). I've varied my dose from 3 grams to 12 grams (at least, I think the little scoop measures in grams), taking them in my tea or bitter fruit juice. Cranberry worked the best, although orange juice masks the taste pretty well; I also accidentally learned that piracetam stings horribly when I got some on a cat scratch. 3 grams (alone) didn't seem to do much of anything while 12 grams gave me a nasty headache. I also ate 2 or 3 eggs a day.
"I think you can and you will," says Sarter, but crucially, only for very specific tasks. For example, one of cognitive psychology's most famous findings is that people can typically hold seven items of information in their working memory. Could a drug push the figure up to nine or 10? "Yes. If you're asked to do nothing else, why not? That's a fairly simple function."
QUALITY : They use pure and high quality Ingredients and are the ONLY ones we found that had a comprehensive formula including the top 5 most proven ingredients: DHA Omega 3, Huperzine A, Phosphatidylserine, Bacopin and N-Acetyl L-Tyrosine. Thrive Natural's Super Brain Renew is fortified with just the right ingredients to help your body fully digest the active ingredients. No other brand came close to their comprehensive formula of 39 proven ingredients. The "essential 5" are the most important elements to help improve your memory, concentration, focus, energy, and mental clarity. But, what also makes them stand out above all the rest was that they have several supporting vitamins and nutrients to help optimize brain and memory function. A critical factor for us is that this company does not use fillers, binders or synthetics in their product. We love the fact that their capsules are vegetarian, which is a nice bonus for health conscious consumers.
Second, users are concerned with the possibility of withdrawal if they stop taking the nootropics. They worry that if they stop taking nootropics they won't be as smart as when they were taking nootropics, and will need to continue taking them to function. Some users report feeling a slight brain fog when discontinuing nootropics, but that isn't a sign of regression.
20 March, 2x 13mg; first time, took around 11:30AM, half-life 3 hours, so halved by 2:30PM. Initial reaction: within 20 minutes, started to feel light-headed, experienced a bit of physical clumsiness while baking bread (dropped things or poured too much thrice); that began to pass in an hour, leaving what felt like a cheerier mood and less anxiety. Seems like it mostly wore off by 6PM. Redosed at 8PM TODO: maybe take a look at the HRV data? looks interestingly like HRV increased thanks to the tianeptine 21 March, 2x17mg; seemed to buffer effects of FBI visit 22 March, 2x 23 March, 2x 24 March, 2x 25 March, 2x 26 March, 2x 27 March, 2x 28 March, 2x 7 April, 2x 8 April, 2x 9 April, 2x 10 April, 2x 11 April, 2x 12 April, 2x 23 April, 2x 24 April, 2x 25 April, 2x 26 April, 2x 27 April, 2x 28 April, 2x 29 April, 2x 7 May, 2x 8 May, 2x 9 May, 2x 10 May, 2x 3 June, 2x 4 June, 2x 5 June, 2x 30 June, 2x 30 July, 1x 31 July, 1x 1 August, 2x 2 August, 2x 3 August, 2x 5 August, 2x 6 August, 2x 8 August, 2x 10 August, 2x 12 August: 2x 14 August: 2x 15 August: 2x 16 August: 1x 18 August: 2x 19 August: 2x 21 August: 2x 23 August: 1x 24 August: 1x 25 August: 1x 26 August: 2x 27 August: 1x 29 August: 2x 30 August: 1x 02 September: 1x 04 September: 1x 07 September: 2x 20 September: 1x 21 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 28 September: 2x 29 September: 2x 5 October: 2x 6 October: 1x 19 October: 1x 20 October: 1x 27 October: 1x 4 November: 1x 5 November: 1x 8 November: 1x 9 November: 2x 10 November: 1x 11 November: 1x 12 November: 1x 25 November: 1x 26 November: 1x 27 November: 1x 4 December: 2x 27 December: 1x 28 December: 1x 2017 7 January: 1x 8 January: 2x 10 January: 1x 16 January: 1x 17 January: 1x 20 January: 1x 24 January: 1x 25 January: 2x 27 January: 2x 28 January: 2x 1 February: 2x 3 February: 2x 8 February: 1x 16 February: 2x 17 February: 2x 18 February: 1x 22 February: 1x 27 February: 2x 14 March: 1x 15 March: 1x 16 March: 2x 17 March: 2x 18 March: 2x 19 March: 2x 20 March: 2x 21 March: 2x 22 March: 2x 23 March: 1x 24 March: 2x 25 March: 2x 26 March: 2x 27 March: 2x 28 March: 2x 29 March: 2x 30 March: 2x 31 March: 2x 01 April: 2x 02 April: 1x 03 April: 2x 04 April: 2x 05 April: 2x 06 April: 2x 07 April: 2x 08 April: 2x 09 April: 2x 10 April: 2x 11 April: 2x 20 April: 1x 21 April: 1x 22 April: 1x 23 April: 1x 24 April: 1x 25 April: 1x 26 April: 2x 27 April: 2x 28 April: 1x 30 April: 1x 01 May: 2x 02 May: 2x 03 May: 2x 04 May: 2x 05 May: 2x 06 May: 2x 07 May: 2x 08 May: 2x 09 May: 2x 10 May: 2x 11 May: 2x 12 May: 2x 13 May: 2x 14 May: 2x 15 May: 2x 16 May: 2x 17 May: 2x 18 May: 2x 19 May: 2x 20 May: 2x 21 May: 2x 22 May: 2x 23 May: 2x 24 May: 2x 25 May: 2x 26 May: 2x 27 May: 2x 28 May: 2x 29 May: 2x 30 May: 2x 1 June: 2x 2 June: 2x 3 June: 2x 4 June: 2x 5 June: 1x 6 June: 2x 7 June: 2x 8 June: 2x 9 June: 2x 10 June: 2x 11 June: 2x 12 June: 2x 13 June: 2x 14 June: 2x 15 June: 2x 16 June: 2x 17 June: 2x 18 June: 2x 19 June: 2x 20 June: 2x 22 June: 2x 21 June: 2x 02 July: 2x 03 July: 2x 04 July: 2x 05 July: 2x 06 July: 2x 07 July: 2x 08 July: 2x 09 July: 2x 10 July: 2x 11 July: 2x 12 July: 2x 13 July: 2x 14 July: 2x 15 July: 2x 16 July: 2x 17 July: 2x 18 July: 2x 19 July: 2x 20 July: 2x 21 July: 2x 22 July: 2x 23 July: 2x 24 July: 2x 25 July: 2x 26 July: 2x 27 July: 2x 28 July: 2x 29 July: 2x 30 July: 2x 31 July: 2x 01 August: 2x 02 August: 2x 03 August: 2x 04 August: 2x 05 August: 2x 06 August: 2x 07 August: 2x 08 August: 2x 09 August: 2x 10 August: 2x 11 August: 2x 12 
August: 2x 13 August: 2x 14 August: 2x 15 August: 2x 16 August: 2x 17 August: 2x 18 August: 2x 19 August: 2x 20 August: 2x 21 August: 2x 22 August: 2x 23 August: 2x 24 August: 2x 25 August: 2x 26 August: 1x 27 August: 2x 28 August: 2x 29 August: 2x 30 August: 2x 31 August: 2x 01 September: 2x 02 September: 2x 03 September: 2x 04 September: 2x 05 September: 2x 06 September: 2x 07 September: 2x 08 September: 2x 09 September: 2x 10 September: 2x 11 September: 2x 12 September: 2x 13 September: 2x 14 September: 2x 15 September: 2x 16 September: 2x 17 September: 2x 18 September: 2x 19 September: 2x 20 September: 2x 21 September: 2x 22 September: 2x 23 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 27 September: 2x 28 September: 2x 29 September: 2x 30 September: 2x October 01 October: 2x 02 October: 2x 03 October: 2x 04 October: 2x 05 October: 2x 06 October: 2x 07 October: 2x 08 October: 2x 09 October: 2x 10 October: 2x 11 October: 2x 12 October: 2x 13 October: 2x 14 October: 2x 15 October: 2x 16 October: 2x 17 October: 2x 18 October: 2x 20 October: 2x 21 October: 2x 22 October: 2x 23 October: 2x 24 October: 2x 25 October: 2x 26 October: 2x 27 October: 2x 28 October: 2x 29 October: 2x 30 October: 2x 31 October: 2x 01 November: 2x 02 November: 2x 03 November: 2x 04 November: 2x 05 November: 2x 06 November: 2x 07 November: 2x 08 November: 2x 09 November: 2x 10 November: 2x 11 November: 2x 12 November: 2x 13 November: 2x 14 November: 2x 15 November: 2x 16 November: 2x 17 November: 2x 18 November: 2x 19 November: 2x 20 November: 2x 21 November: 2x 22 November: 2x 23 November: 2x 24 November: 2x 25 November: 2x 26 November: 2x 27 November: 2x 28 November: 2x 29 November: 2x 30 November: 2x 01 December: 2x 02 December: 2x 03 December: 2x 04 December: 2x 05 December: 2x 06 December: 2x 07 December: 2x 08 December: 2x 09 December: 2x 10 December: 2x 11 December: 2x 12 December: 2x 13 December: 2x 14 December: 2x 15 December: 2x 16 December: 2x 17 December: 2x 18 December: 2x 19 December: 2x 20 December: 2x 21 December: 2x 22 December: 2x 23 December: 2x 24 December: 2x 25 December: 2x ran out, last day: 25 December 2017 –>
In the United States, people consume more coffee than fizzy drink, tea and juice combined. Alas, no one has ever estimated its impact on economic growth – but plenty of studies have found myriad other benefits. Somewhat embarrassingly, caffeine has been proven to be better than the caffeine-based commercial supplement that Woo's company came up with, which is currently marketed at $17.95 for 60 pills.
AMP was first investigated as an asthma medication in the 1920s, but its psychological effects were soon noticed. These included increased feelings of energy, positive mood, and prolonged physical endurance and mental concentration. These effects have been exploited in a variety of medical and nonmedical applications in the years since they were discovered, including to treat depression, to enhance alertness in military personnel, and to provide a competitive edge in athletic competition (Rasmussen, 2008). Today, AMP remains a widely used and effective treatment for ADHD (Wilens, 2006).
These days, young, ambitious professionals prefer prescription stimulants—including methylphenidate (usually sold as Ritalin) and Adderall—that are designed to treat people with attention deficit hyperactivity disorder (ADHD) and are more common and more acceptable than cocaine or nicotine (although there is a black market for these pills). ADHD makes people more likely to lose their focus on tasks and to feel restless and impulsive. Diagnoses of the disorder have been rising dramatically over the past few decades—and not just in kids: In 2012, about 16 million Adderall prescriptions were written for adults between the ages of 20 and 39, according to a report in the New York Times. Both methylphenidate and Adderall can improve sustained attention and concentration, says Barbara Sahakian, professor of clinical neuropsychology at the University of Cambridge and author of the 2013 book Bad Moves: How Decision Making Goes Wrong, and the Ethics of Smart Drugs. But the drugs do have side effects, including insomnia, lack of appetite, mood swings, and—in extreme cases—hallucinations, especially when taken in amounts that exceed standard doses.
Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress.
Finally, all of the questions raised here in relation to MPH and d-AMP can also be asked about newer drugs and even about nonpharmacological methods of cognitive enhancement. An example of a newer drug with cognitive-enhancing potential is modafinil. Originally marketed as a therapy for narcolepsy, it is widely used off label for other purposes (Vastag, 2004), and a limited literature on its cognitive effects suggests some promise as a cognitive enhancer for normal healthy people (see Minzenberg & Carter, 2008, for a review).
Enhanced learning was also observed in two studies that involved multiple repeated encoding opportunities. Camp-Bruno and Herting (1994) found MPH enhanced summed recall in the Buschke Selective Reminding Test (Buschke, 1973; Buschke & Fuld, 1974) when 1-hr and 2-hr delays were combined, although individually only the 2-hr delay approached significance. Likewise, de Wit, Enggasser, and Richards (2002) found no effect of d-AMP on the Hopkins Verbal Learning Test (Brandt, 1991) after a 25-min delay. Willett (1962) tested rote learning of nonsense syllables with repeated presentations, and his results indicate that d-AMP decreased the number of trials needed to reach criterion.
Another important epidemiological question about the use of prescription stimulants for cognitive enhancement concerns the risk of dependence. MPH and d-AMP both have high potential for abuse and addiction related to their effects on brain systems involved in motivation. On the basis of their reanalysis of NSDUH data sets from 2000 to 2002, Kroutil and colleagues (2006) estimated that almost one in 20 nonmedical users of prescription ADHD medications meets criteria for dependence or abuse. This sobering estimate is based on a survey of all nonmedical users. The immediate and long-term risks to individuals seeking cognitive enhancement remain unknown.
"As a brain injury survivor that still deals with extreme light sensitivity, eye issues and other brain related struggles I have found a great diet is a key to brain health! Cavin's book is a much needed guide to eating for brain health. While you can fill shelves with books that teach you good nutrition, Cavin's book teaches you how to help your brain with what you eat. This is a much needed addition to the nutrition section! If you are looking to get the optimum performance out of your brain, get this book now! You won't regret it."
The demands of university studies, career, and family responsibilities leave people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work.
Analysis of the heterogeneous multiscale method for elliptic homogenization problems
by Weinan E, Pingbing Ming and Pingwen Zhang
J. Amer. Math. Soc. 18 (2005), 121-156
A comprehensive analysis is presented for the heterogeneous multiscale method (HMM for short) applied to various elliptic homogenization problems. These problems can be either linear or nonlinear, with deterministic or random coefficients. In most cases considered, optimal estimates are proved for the error between the HMM solutions and the homogenized solutions. Strategies for retrieving the microstructural information from the HMM solutions are discussed and analyzed.
Weinan E
Affiliation: Department of Mathematics and PACM, Princeton University, Princeton, New Jersey 08544 and School of Mathematical Sciences, Peking University, Beijing 100871, People's Republic of China
Email: [email protected]
Pingbing Ming
Affiliation: No. 55, Zhong-Guan-Cun East Road, Institute of Computational Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, People's Republic of China
Email: [email protected]
Pingwen Zhang
Affiliation: School of Mathematical Sciences, Peking University, Beijing 100871, People's Republic of China
Email: [email protected]
Received by editor(s): January 2, 2003
Published electronically: September 16, 2004
Additional Notes: The work of the first author was partially supported by ONR grant N00014-01-1-0674 and the National Natural Science Foundation of China through a Class B Award for Distinguished Young Scholars 10128102.
The work of the second author was partially supported by the Special Funds for the Major State Basic Research Projects G1999032804 and was also supported by the National Natural Science Foundation of China 10201033.
The work of the third author was partially supported by the Special Funds for the Major State Research Projects G1999032804 and the National Natural Science Foundation of China for Distinguished Young Scholars 10225103.
We thank Bjorn Engquist for inspiring discussions on the topic studied here.
The copyright for this article reverts to public domain 28 years after publication.
Journal: J. Amer. Math. Soc. 18 (2005), 121-156
MSC (2000): Primary 65N30, 74Q05; Secondary 74Q15, 65C30
DOI: https://doi.org/10.1090/S0894-0347-04-00469-2
Non-uniform refinement: adaptive regularization improves single-particle cryo-EM reconstruction
Ali Punjani (ORCID: orcid.org/0000-0002-4133-6630), Haowei Zhang & David J. Fleet (ORCID: orcid.org/0000-0003-0734-7114)
Nature Methods volume 17, pages 1214–1221 (2020)
Cryogenic electron microscopy (cryo-EM) is widely used to study biological macromolecules that comprise regions with disorder, flexibility or partial occupancy. For example, membrane proteins are often kept in solution with detergent micelles and lipid nanodiscs that are locally disordered. Such spatial variability negatively impacts computational three-dimensional (3D) reconstruction with existing iterative refinement algorithms that assume rigidity. We introduce non-uniform refinement, an algorithm based on cross-validation optimization, which automatically regularizes 3D density maps during refinement to account for spatial variability. Unlike common shift-invariant regularizers, non-uniform refinement systematically removes noise from disordered regions, while retaining signal useful for aligning particle images, yielding dramatically improved resolution and 3D map quality in many cases. We obtain high-resolution reconstructions for multiple membrane proteins as small as 100 kDa, demonstrating increased effectiveness of cryo-EM for this class of targets critical in structural biology and drug discovery. Non-uniform refinement is implemented in the cryoSPARC software package.
Single-particle cryo-EM has transformed rapidly into a mainstream technique in biological research1. Cryo-EM images individual protein particles, rather than crystals and has therefore been particularly useful for structural studies of integral membrane proteins, which are difficult to crystallize2. These molecules are critical for drug discovery, targeted by more than half of drugs today3. Membrane proteins pose challenges in cryo-EM sample preparation, imaging and computational 3D reconstruction, as they are often of small size, appear in multiple conformations, have flexible subunits and are embedded in a detergent micelle or lipid nanodisc2. These characteristics cause strong spatial variation in structural properties, such as rigidity and disorder, across the target molecule's 3D density. Traditional cryo-EM reconstruction algorithms, however, are based on the simplifying assumption of a uniform, rigid particle.
We develop an algorithm that incorporates such domain knowledge in a principled way, improving 3D reconstruction quality and allowing single-particle cryo-EM to achieve higher-resolution structures of membrane proteins. This expands the range of proteins that can be effectively studied and is especially important for structure-based drug design4,5. We begin by formulating a cross-validation (CV) regularization framework for single-particle cryo-EM refinement and use it to account for the spatial variability in resolution and disorder found in a typical molecular complex. The framework incorporates general domain knowledge about protein molecules, without specific knowledge of any particular molecule and critically, without need for manual user input. Through this framework we derive a new algorithm called non-uniform refinement, which automatically accounts for structural variability, while ensuring that key statistical properties for validation are maintained to mitigate the risk of over-fitting during 3D reconstruction.
With a graphics processing unit-accelerated implementation of non-uniform refinement in the cryoSPARC software package6, we demonstrate improvements in resolution and map quality for a range of membrane proteins. We show results on a 48-kDa membrane protein in lipid nanodisc with a Fab bound, a 180-kDa membrane protein complex with a large detergent micelle and a 245-kDa sodium channel complex with flexible domains. Non-uniform refinement is reliable and automatic, requiring no change in parameters between datasets and is without reliance on hand-made spatial masks or manual labels.
Iterative refinement and regularization
In standard cryo-EM 3D structure determination6,7,8, a generative model describes the formation of two-dimensional (2D) electron microscope images from a target 3D protein density (Coulomb potential). According to the model, the target density is rotated, translated and projected along the direction of the electron beam. The 2D projection is modulated by a microscope contrast transfer function (CTF) and corrupted by additive noise. The goal of reconstruction is to infer the 3D density map from particle images, without knowledge of latent 3D pose variables, that is, the orientation and position of the particle in each image. Iterative refinement methods formulate inference as a form of maximum likelihood or maximum a posteriori optimization6,9,10,11. Such algorithms can be viewed as a form of block-coordinate descent or expectation-maximization12, each iteration comprising an E-step, estimating the pose of each particle image, given the 3D structure, and an M-step, regularized 3D density estimation given the latent poses.
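As a rough illustration of this generative model (not cryoSPARC's implementation), the forward operator for one particle image is: rotate the 3D density, project it along the beam axis, modulate by the CTF in Fourier space, and add noise. The toy blob volume and defocus-only CTF below are illustrative assumptions:

```python
# Toy sketch of cryo-EM image formation: rotate -> project -> CTF -> additive noise.
import numpy as np
from scipy.ndimage import rotate

def simulate_particle(volume, angles_deg, ctf, noise_sigma, rng):
    """Rotate a cubic density, project along axis 0, apply a 2D CTF, add noise."""
    rotated = rotate(volume, angles_deg[0], axes=(0, 1), reshape=False, order=1)
    rotated = rotate(rotated, angles_deg[1], axes=(0, 2), reshape=False, order=1)
    projection = rotated.sum(axis=0)                        # integrate along the beam
    modulated = np.fft.ifft2(np.fft.fft2(projection) * ctf).real
    return modulated + rng.normal(0.0, noise_sigma, modulated.shape)

N = 64
rng = np.random.default_rng(0)
zz, yy, xx = np.mgrid[:N, :N, :N]
volume = np.exp(-((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 28) ** 2) / 40.0)  # blob "protein"
freq = np.fft.fftfreq(N)
k2 = freq[:, None] ** 2 + freq[None, :] ** 2
ctf = np.sin(-np.pi * 1.5e4 * k2)                           # crude defocus-only CTF, no envelope
image = simulate_particle(volume, angles_deg=(30.0, 45.0), ctf=ctf, noise_sigma=0.5, rng=rng)
```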
Like many inverse problems with noisy, partial observations, the quality of cryo-EM map reconstruction depends heavily on regularization. Regularization methods, widely used in computer science and statistics, leverage prior domain knowledge to penalize unnecessary model complexity and avoid over-fitting. In cryo-EM refinement, regularization is needed to mitigate the effects of imaging and sample noise so that protein signal alone is present in the inferred 3D density and so accumulated noise does not contaminate latent pose estimates.
Existing refinement algorithms use an explicit regularizer in the form of a shift-invariant linear filter, typically obtained from Fourier shell correlation (FSC)6,10,13,14,15,16,17. Such filters smooth the 3D structure using the same kernel, and hence the same degree of smoothing, at all locations. As FSC captures the average resolution of the map, such filters presumably under- and over-regularize different regions, allowing noise accumulation in some regions and a loss of resolvable detail in others. This effect should be pronounced with membrane proteins that have highly non-uniform rigidity and disorder across the molecule. As a motivating example, Fig. 1 shows a reconstruction of the TRPA1 membrane protein18 with a relatively low density threshold to help visualize regions with substantial noise levels, which indicate over-fitting (e.g. the disordered micelle and the flexible tail at the bottom of the protein). We hypothesize that under-fitting occurs in the core region where over-regularization attenuates useful signal. As such, accumulated noise and attenuated signal will degrade pose estimates during refinement, limiting final structure quality. For inference problems of this type, the amount and form of regularization depend on regularization parameters. Correctly optimizing these parameters is often critical, but care must be taken to ensure that the optimization itself is not also prone to over-fitting.
Fig. 1: A 3D map from uniform refinement reveals spatial variations in resolution in a prototypical membrane protein (TRPA1 ion channel, EMPIAR-10024).
Following the default FSC-based regularization and B-factor sharpening in cryoSPARC, the density map has been thresholded at a relatively low value to clearly visualize regions with substantial levels of noise. Color depicts local resolution22 as a proxy for local structure properties. Red indicates higher resolution (e.g. the core inner region), yellow indicates moderate resolutions (e.g. the solvent-facing region) and blue indicates poor resolution (e.g. the disordered detergent micelle and the flexible tail at the bottom).
We next outline the formulation of an adaptive form of regularization and with it, a new refinement algorithm called non-uniform refinement. We discuss its properties and demonstrate its application on several membrane protein datasets.
Adaptive cross-validation regularization
With the aim of incorporating spatial non-uniformity into cryo-EM reconstruction, we formulate a family of regularizers denoted rθ, with parameters θ(x) that depend on spatial position x. Given a 3D density map m(x), the regularization operator, evaluated at x, is defined by
$$({r}_{\theta }\circ m)(x)\ =\ \sum _{\xi }h(\xi ;\theta (x))\,m(\xi -x)\,,$$
where h(x; ψ) is a symmetric smoothing kernel whose spatial scale is determined by the parameter ψ.
This family provides greater flexibility than shift-invariant regularizers, but in exchange, requires making the correct choice of a new set of parameters, θ(x). We formulate the selection of the regularization parameters as an optimization subproblem during refinement, for which we adopt a twofold CV objective19,20. The data are first randomly partitioned into two halves. On each of two trials, one part is considered as training data and the other is treated as held-out validation data. To find the regularizer parameters θ, one minimizes the sum of the per-trial validation errors (e), measuring consistency between the model and the validation data; that is,
$$E(\theta )\,=\,e({r}_{\theta }\circ {m}_{1},\ {m}_{2})+e({r}_{\theta }\circ {m}_{2},\ {m}_{1})$$
where m1 and m2 are reconstructions from the two folds of the data. Similar objectives have been used for image de-noising21. We also introduce constraints on the parameters θ(x) to control degrees of freedom. The optimization problem is solved using a discretized search algorithm (Methods).
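A deliberately simplified sketch of the twofold CV search: for each candidate smoothing scale, filter each half-map, score it voxel-wise against the other half-map with squared error, and keep the per-voxel argmin. The Gaussian kernel family and the unconstrained per-voxel search below are simplifications for illustration; the actual algorithm and its constraints on θ(x) are described in Methods:

```python
# Voxel-wise grid search for theta(x) minimizing the twofold CV error
#   E(theta) = |r_theta(m1) - m2|^2 + |r_theta(m2) - m1|^2   (squared-error e).
import numpy as np
from scipy.ndimage import gaussian_filter

def cv_optimal_scales(m1, m2, candidate_sigmas):
    best_err = np.full(m1.shape, np.inf)
    best_sigma = np.zeros(m1.shape)
    for sigma in candidate_sigmas:
        s1 = gaussian_filter(m1, sigma)               # r_theta applied to half-map 1
        s2 = gaussian_filter(m2, sigma)               # r_theta applied to half-map 2
        err = (s1 - m2) ** 2 + (s2 - m1) ** 2         # CV objective, evaluated per voxel
        better = err < best_err
        best_err = np.where(better, err, best_err)
        best_sigma = np.where(better, sigma, best_sigma)
    return best_sigma

rng = np.random.default_rng(1)
half1 = rng.normal(size=(32, 32, 32))                 # stand-ins for half-map reconstructions
half2 = rng.normal(size=(32, 32, 32))
theta = cv_optimal_scales(half1, half2, candidate_sigmas=[0.5, 1.0, 2.0, 4.0])
```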
The resulting CV regularizer automatically identifies regions of a protein density with differing structural properties, optimally removing noise from each region. Fig. 2 illustrates the difference between uniform filtering (FSC-based) and the CV-optimal adaptive regularizer in non-uniform refinement. Shift-invariant regularization smooths all regions equally, while the adaptive regularizer preferentially removes noise from disordered regions, retaining high-resolution signal in well-structured regions that is essential for 2D−3D alignment.
Fig. 2: Images illustrate the difference between shift-invariant regularization and adaptive (CV-optimal) regularization on a membrane protein (PfCRT, EMPIAR-10330).
A central slice through a raw reconstruction (M-step) of a half-map after nine iterations (left). It shows substantial levels of noise. The same reconstruction is shown after uniform isotropic filtering (based on FSC between half-maps). It shows uniform noise attenuation throughout the slice (middle). The same half-map reconstruction after adaptive regularization with the optimal CV regularizer shows greater attenuation of noise in the solvent background and the nanodisc region, while better preserving the high-resolution structure in the well-ordered protein region (right).
Non-uniform refinement algorithm
The non-uniform refinement algorithm takes as input a set of particle images and a low-resolution ab initio 3D map. The data are randomly partitioned into two halves, each of which is used to independently estimate a 3D half-map. This 'gold-standard' refinement17 allows use of FSC for evaluating map quality and for comparison with existing algorithms. The key to non-uniform refinement is the adaptive CV regularization, applied at each iteration of 3D refinement. The regularizer parameters θ(x) are estimated independently for each half-map (Methods), adhering to the 'gold-standard' assumptions. In contrast, in conventional uniform refinement the two half-maps are not strictly independent as the regularization parameters, determined by FSC, are shared by both half-maps. Finally, the optimization and application of the adaptive regularizer cause non-uniform refinement to be approximately two times slower than uniform refinement in cryoSPARC.
Validation with membrane protein datasets
We experimentally compared non-uniform refinement with conventional uniform refinement on three membrane protein datasets, all processed in cryoSPARC (see Supplementary Information for an additional membrane protein with no soluble region). Both algorithms were given the same ab initio structures. Except for the regularizer, all parameters and stages of data analysis, including the 2D−3D alignment and back-projection, were identical in uniform and non-uniform refinements. The default use of spatial solvent masking during 2D−3D alignment in cryoSPARC was used for both algorithms (Supplementary Information). No manual masks were used and no masking was used to identify or separate micelle or nanodisc regions. The same non-uniform refinement default parameters (other than symmetry) were used for all datasets.
When computing gold-standard FSC during the analysis of the reconstructed 3D density maps, for each dataset we used the same mask for uniform and non-uniform refinement. The masks were tested using phase randomization13 to avoid FSC bias. Also, the same B-factor was used to sharpen both uniform and non-uniform refinement maps for each dataset. This consistency helped to ensure that visible differences in 3D structure were due solely to algorithmic differences. To color maps using local resolution, we used a straightforward implementation of Blocres22 for resolution estimation. No local filtering or local sharpening was used for visualization. Uniform and non-uniform refinement density maps were thresholded to contain the same total volume for visual comparison.
We also note that the FSC-based regularizer in conventional uniform refinement is equivalent to a shift-invariant regularizer optimized with CV. That is, one can show that the optimal shift-invariant filter under CV with squared error has a transfer function equivalent to the FSC curve. Thus, the experiments below also capture the differences between adaptive and shift-invariant regularization.
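The equivalence noted above can be illustrated directly. The sketch below (an assumption-laden toy, not cryoSPARC code) computes an FSC curve between two half-maps over spherical frequency shells and then reuses that curve, clipped to [0, 1], as a shift-invariant transfer function; the shell count and the use of the raw FSC value as the filter gain are simplifying assumptions.

```python
import numpy as np

def fsc_curve(m1, m2, n_shells=16):
    """Fourier shell correlation between two half-maps, computed over
    concentric shells of spatial frequency up to Nyquist."""
    F1, F2 = np.fft.fftn(m1), np.fft.fftn(m2)
    n = m1.shape[0]
    freq = np.fft.fftfreq(n)
    fx, fy, fz = np.meshgrid(freq, freq, freq, indexing="ij")
    radius = np.sqrt(fx**2 + fy**2 + fz**2)
    edges = np.linspace(0.0, 0.5, n_shells + 1)
    fsc = np.zeros(n_shells)
    for i in range(n_shells):
        shell = (radius >= edges[i]) & (radius < edges[i + 1])
        num = np.real(np.sum(F1[shell] * np.conj(F2[shell])))
        den = np.sqrt(np.sum(np.abs(F1[shell]) ** 2) * np.sum(np.abs(F2[shell]) ** 2))
        fsc[i] = num / den if den > 0 else 0.0
    return edges, fsc

def fsc_filter(m, edges, fsc):
    """Shift-invariant (uniform) regularization: scale each Fourier shell of a
    map by its clipped FSC value, interpolated between shell centres."""
    F = np.fft.fftn(m)
    n = m.shape[0]
    freq = np.fft.fftfreq(n)
    fx, fy, fz = np.meshgrid(freq, freq, freq, indexing="ij")
    radius = np.sqrt(fx**2 + fy**2 + fz**2)
    centres = 0.5 * (edges[:-1] + edges[1:])
    transfer = np.interp(radius, centres, np.clip(fsc, 0.0, 1.0))
    return np.real(np.fft.ifftn(F * transfer))
```

The same toy half-maps used in the previous sketch can be passed to these functions to see that the resulting filter attenuates all regions of the map equally.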
STRA6-CaM: CV regularization yields improved pose estimates and FSC
Zebrafish STRA6-CaM23 is a 180-kDa C2-symmetric membrane protein complex comprising the 74-kDa STRA6 protein bound to calmodulin (CaM). STRA6 mediates the uptake of retinol in various tissues. We processed a dataset of 28,848 particle images of STRA6-CaM in a lipid nanodisc, courtesy of O. Clarke and F. Mancia (O. Clarke, personal communication). Non-uniform refinement provides a substantial improvement in nominal resolution from 4.0 Å to 3.6 Å (Fig. 3a), indicating an improvement in the average signal-to-noise over the entire structure. However, different regions exhibited different resolution characteristics (Fig. 3c), as is often observed with protein reconstructions22. There was substantial improvement in structural detail in most regions, while peripheral and flexible regions remained at low resolutions. Differences in structure quality, clearly visible in detailed views of α-helices (Fig. 3d), are especially important during atomic model building, where in many cases, protein backbone and side chains can only be traced with confidence in the non-uniform refinement map. Improvements in structure quality coincided with changes in particle alignments (Fig. 3b), approximately 3° on average. While disorder in the lipid nanodisc and nonrigidity of the CaM subunits are problematic for uniform refinement, adaptive regularization in non-uniform refinement reduces the influence of noise on alignments and produces improved map quality, especially in the periphery of the protein.
Fig. 3: Results of uniform and non-uniform refinement from 28,848 particle images of STRA6-CaM in lipid nanodisc (pixel size 1.07 Å, box size 256 pixels, C2 symmetry yielding 57,696 asymmetric units).
a, FSC curves computed with the same mask for both refinements, showing improvement from 4.0 Å to 3.6 Å between uniform and non-uniform methods. b, Histograms of change in particle alignment pose and shift between uniform and non-uniform refinement. c, 3D density maps from uniform and non-uniform refinement are filtered using the corresponding FSC curves and sharpened with the same B-factor, −140 Å2. No local filtering or sharpening was used and thresholds were set to keep the enclosed volume constant. Map differences were due to algorithmic rather than visualization differences. Map color depicts local resolution (Blocres22) on a single-color scale and shows how map improvement depends on the region within the map. d, Individual α-helical segments from the non-uniform map (purple) and uniform map (gray) illustrate differences in resolvability of backbone and side chains. The left-most α-helix is peripheral, whereas the right-most is central. Docked atomic model is courtesy of O. Clarke.
PfCRT: enabling atomic model building from previously unusable data
The Plasmodium falciparum chloroquine resistance transporter (PfCRT) is an asymmetric 48-kDa membrane protein24. Mutations in PfCRT are associated with the emergence of resistance to chloroquine and piperaquine as antimalarial treatments. We processed 16,905 particle images of PfCRT in lipid nanodisc with a Fab bound (EMPIAR-10330 (ref. 24)). For PfCRT, the difference in resolution (Fig. 4a) and map quality (Fig. 4c) between uniform and non-uniform refinement is striking.
Fig. 4: Results of uniform and non-uniform refinement from 16,905 particle images of PfCRT in lipid nanodisc with a single Fab bound (pixel size 0.5175 Å, box size 300 pixels, no symmetry).
a, FSC curves, computed with the same mask, show numerical improvement from 6.9 Å to 3.6 Å. b, Histograms of change in particle alignments between uniform and non-uniform refinement. c, 3D density maps from uniform and non-uniform refinement, both filtered using the corresponding FSC curve and sharpened with the same B-factor of −100 Å2. No local filtering or sharpening is used and thresholds are set to keep the enclosed volume constant. Density differences are thus due to algorithmic rather than visualization differences. Maps are colored by local resolution from Blocres22, all on the same color scale. d, Individual α-helical segments and β-strands from the non-uniform map (purple) and uniform map (gray) illustrate localized differences in resolvability between the maps.
Using uniform refinement, reaching 6.9 Å, transmembrane α-helices are barely resolvable. Non-uniform refinement recovers signal up to 3.6 Å and provides a map from which an atomic model can be built with confidence. Transmembrane α-helices can be directly traced, including side chains. In contrast, the uniform refinement map does not show helical pitch and does not separate β-strands in the Fab domain. Indeed, an early version of the non-uniform refinement algorithm was essential for reconstructing the published high-resolution density map and model24.
On the spectrum of proteins studied with cryo-EM, the PfCRT-Fab complex (100 kDa) is small. The lipid nanodisc (~50 kDa) also accounts for a large fraction of total particle molecular weight (see Fig. 2). We hypothesized that disorder in this relatively large nanodisc region leads to over- and under-regularization in uniform refinement. Most particle images exhibited orientation differences >6° between the two algorithms (Fig. 4b), suggesting that a large fraction of particle images were grossly misaligned by uniform refinement.
Nav1.7 channel: improvement in high-resolution features
Nav1.7 is a voltage-gated sodium channel found in the human nervous system25. It plays a role in the generation and conduction of action potentials and is targeted by toxins and therapeutic drugs (e.g. for pain relief). We processed data of Nav1.7 bound to two Fabs, forming a 245-kDa C2-symmetric complex solubilized in detergent25. Following pre-processing (Methods), as described elsewhere25, we detected both active and inactive conformational states of the channel. We obtained reconstructions with resolutions better than the published literature for both, but here we focus on 300,759 particle images of the active state.
Compared to the preceding datasets, the Nav1.7 complex has a higher molecular weight and, in relative terms, a smaller detergent micelle. However, other regions are disordered or flexible, namely, a central four-helix bundle, peripheral transmembrane domains and the Fabs. For Nav1.7, uniform refinement reaches 3.4 Å resolution and non-uniform refinement reaches an improved 3.1 Å (Fig. 5a). This result is also an improvement over the published result of 3.6 Å (EMDB-0341)25, where the authors performed all processing in cisTEM10.
Fig. 5: Results of uniform and non-uniform refinement on 300,759 particle images of Nav1.7 in a detergent micelle with two Fabs bound (pixel size 1.21 Å, box size 360 pixels, refined with C2 symmetry yielding 601,518 asymmetric units).
a, FSC curves for uniform and non-uniform refinement indicate global numerical resolutions of 3.4 Å and 3.1 Å, respectively. b, Histograms of change in particle alignments between uniform and non-uniform refinement. Optimized adaptive regularization yields improved alignments through multiple iterations. c, The 3D density maps were filtered using corresponding FSC curves, sharpened with a B-factor of −85 Å2. Thresholds were set to keep the enclosed volume constant. No local filtering or sharpening was used. Color depicts local resolution (Blocres22) on the same color scale. d, Individual α-helical segments from the non-uniform map (purple) and uniform map (gray) illustrate localized differences in resolvability in the peripheral transmembrane domain (left, center) and the central core (right). Docked atomic model is PDB 6N4Q25.
With non-uniform refinement, map quality was clearly improved in central transmembrane regions, while some flexible parts of the structure (Fab domains and four-helix bundle) remained at intermediate resolutions (Fig. 5c). In detailed views of α-helices (Fig. 5d) closer to the periphery of the protein, improvement in map quality and interpretability was readily apparent in the non-uniform refinement map, allowing modeling of side-chain rotamers with confidence. Central α-helices showed less improvement, but map quality remained equal or slightly improved, indicating that reconstructions of protein regions without disorder were not harmed by using non-uniform refinement.
Increased data efficiency
Examining map quality as a function of dataset size further helped to explore data efficiency. Figure 6 plots inverse resolution versus the number of particles. Such plots are useful as they relate to image noise and computational extraction of signal15. Across all datasets, non-uniform refinement reached the same resolution as uniform refinement with fewer than half the number of particle images. It has also been argued that higher curves indicate more accurate pose estimates26. Notably, the resolution gap between uniform and non-uniform refinement persists over a wide range of dataset sizes. While this resolution gap may decrease for much larger datasets than those considered here, the collection of more data alone may not allow uniform refinement to match the performance of non-uniform refinement for membrane protein targets until resolution is saturated as one nears fundamental limits.
Fig. 6: Plots of inverse resolution (frequency squared) versus number of particles on a log scale, for each of the STRA6-CaM, PfCRT and Nav1.7 datasets.
Each point represents an independent run of uniform (black) or non-uniform (purple) refinement, using a random sample from the entire dataset. For each dataset, the same mask is used to compute the FSC at all sample sizes. Data points for non-uniform refinement have higher resolution value than for uniform refinement at all sample sizes, for all three datasets (STRA6-CaM23, PfCRT24 and Nav1.7 (ref. 25)).
With adaptive regularization and effective optimization of regularization parameters using CV, non-uniform refinement achieves reconstructions of higher resolution and quality from single-particle cryo-EM data. It is particularly effective for membrane proteins, which exhibit varying levels of disorder, flexibility or occupancy. Here we focused on one specific family of adaptive regularizers (Methods), but it is possible within the same framework to explore other families that may be even more effective. For example, one could look to the extensive de-noising literature for different regularizers.
Spatial variability of structure properties and the existence of over- and under-regularization in uniform refinement are well known. For instance, cryo-EM methods for estimating local map resolution once 3D refinement is complete leverage statistical tests on the coefficients of a windowed Fourier transform22 or some form of wavelet transform27,28. Although non-uniform refinement does not estimate resolution per se, the regularizer parameter θ(x) is related to a local frequency band limit. As such it might be viewed as a proxy for local resolution, but with important differences. Notably, θ(x) is optimized with a CV objective and it does not depend on a specific definition of 'local resolution' nor on an explicit resolution estimator.
Local resolution estimates22,29 or statistical tests30 have also been used to adaptively filter 3D maps, for visualization, to assess local map quality or to aid molecular model building. The family of filters and frequency cutoffs are typically selected to maximize visual quality. They are not optimized for the local resolution estimator nor is consideration given to the number of degrees of freedom in filter parameters or the strict independence of half-maps, all critical for reliable regularization. Thus, while local resolution estimation followed by local filtering is useful for post-processing, its use for regularization during iterative refinement (e.g. EMAN2.2 documentation9), can yield over-fitting (Methods). Map artifacts such as spikes or streaks are especially problematic for datasets with junk particles, structured outliers or small difficult-to-align particles. Non-uniform refinement couples an implicit resolution measure to the choice of regularizer, with optimization designed to control model capacity and avoid over-fitting of regularization parameters (Methods).
Another related technique, used in cisTEM10 and Frealign7 and among the first to acknowledge and address under- and over-fitting, entails manual creation of a mask to label a local region one expects to be disordered (e.g. detergent micelle), followed by low-pass filtering in that region to a pre-set resolution at each refinement iteration. While this shares the motivation for non-uniform refinement, it relies on manual interaction during refinement, often necessitating a tedious trial and error process that can be difficult to replicate.
SIDESPLITTER31 is a recently proposed method to mitigate over-fitting during refinement. Based on estimates of signal-to-noise ratios from voxel statistics, it smooths local regions with low signal-to-noise more aggressively than indicated by a global FSC-based resolution. The method does not directly address under-fitting nor does it maintain independence of half-maps or control for the number of degrees of freedom in the local filter model. More generally, the CV framework here is robust to different noise distributions and model mis-specification, which can be problematic for methods with parametric noise models. Empirical results indicate that SIDESPLITTER mitigates over-fitting and shows improvement in map quality relative to uniform refinement.
Bases other than the Fourier basis may also enable local control of reconstruction. Wavelet bases have been used for local resolution estimation, but less commonly for reconstruction32. Kucukelbir et al.33 proposed the use of an adaptive wavelet basis and a sparsity prior. While similar in spirit to the goal of non-uniform refinement, this method has a single regularizer parameter for the entire 3D map, computed from noise in the corners of particle images. This may not capture variations in noise due to disorder, motion or partial occupancy.
Non-uniform refinement has been successful in helping to resolve several new structures. Examples include multiple conformations of an ABC exporter34, mTORC1 docked on the lysosome35, the respiratory syncytial virus polymerase complex36, a GPCR-G protein-β-arrestin megacomplex37 and the SARS-CoV-2 spike protein38.
Regularization in iterative refinement
In the standard cryo-EM 3D reconstruction problem setup, the target 3D density map m is typically parameterized as a real-space 3D array with density at each voxel, in a Cartesian grid of box size N, and a corresponding discrete Fourier representation, \(\hat{m}=Fm\). The goal of reconstruction is to infer the 3D densities of the voxel array, called model parameters. Representing the 3D density, its 2D projections, the observed images and the noise model in the Fourier domain is common practice for computational efficiency, exploiting the well-known convolution and Fourier-slice theorems6,7,8,9. The unobserved pose variables for each image are latent variables.
Algorithm 1
Iterative refinement (expectation-maximization)
Require: Particle image dataset \({\mathcal{D}}\) and ab initio 3D map
1: Use smoothed ab initio 3D map array as the initial model parameters m(0)
2: while not converged do
3: E-step: Given current estimate of model parameters, m(t−1), from step t − 1, estimate (via marginalization or maximization) the latent variables: \({z}^{(t)}\leftarrow f({m}^{(t-1)},{\mathcal{D}})\)
4: M-step: Given the latent variables z(t), compute raw estimates of the model parameter \({\tilde{m}}^{(t)}\) (without regularization): \({\tilde{m}}^{(t)}\leftarrow h({z}^{(t)},{\mathcal{D}})\)
5: Regularize: Given noisy model parameters \({\tilde{m}}^{(t)}\), apply the regularization operator, rθ, with regularization parameters θ: \({m}^{(t)}\leftarrow {r}_{\theta }({\tilde{m}}^{(t)})\)
6: end while
Iterative refinement methods (Algorithm 1), which provide state-of-the-art results6,9,10,11, can be interpreted as variants of block-coordinate descent or the expectation-maximization algorithm12. In cryo-EM, and more generally in inverse problems with noisy, partial observations, a critical component that modulates the quality of the results is regularization.
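A structural sketch of Algorithm 1 in Python follows; `estimate_poses`, `backproject` and `regularize` are placeholder callables for the E-step, M-step and regularization operator, not real cryoSPARC functions.

```python
def iterative_refinement(images, m0, n_iter, estimate_poses, backproject, regularize):
    """Skeleton of Algorithm 1: alternate pose estimation (E-step), raw
    reconstruction (M-step) and regularization of the 3D map."""
    m = m0.copy()
    for t in range(n_iter):
        poses = estimate_poses(m, images)   # E-step: align images to current map
        m_raw = backproject(images, poses)  # M-step: raw (noisy) reconstruction
        m = regularize(m_raw)               # attenuate noise before the next E-step
    return m
```

The placement of `regularize` inside the loop is the point of interest here: noise left in `m` by a poor regularizer contaminates the next round of pose estimation.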
One can regularize problems explicitly, using a prior distribution over model parameters or implicitly, by applying a regularization operator to the model parameters during optimization. Iterative refinement methods tend to use implicit regularizers, attenuating noise in the reconstructed map at each iteration. In either case, the separation of signal from noise is the crux of many inference problems.
In the cryo-EM refinement problem, like many latent variable inverse problems, there is an additional interplay between regularization, noise buildup and the estimation of latent variables. Retained noise due to under-regularization will contaminate the estimation of latent variables. This contamination is propagated to subsequent iterations and causes over-fitting.
This paper reconsiders the task of regularization based on the observation that common iterative refinement algorithms often systematically under-fit and over-fit different regions of a 3D structure simultaneously. This causes a loss of resolvable detail in some parts of a structure and the accumulation of noise in others. The reason stems from the use of frequency-space filtering as a form of regularization. Some programs, such as cisTEM10, use a strict resolution cutoff, beyond which Fourier amplitudes are set to zero before alignment of particle images to the current 3D structure. In RELION16, regularization was initially formulated with an explicit Gaussian prior on Fourier amplitudes of the 3D structure, with a hand-tuned parameter that controls Fourier amplitude shrinkage. Later versions of RELION17 and cryoSPARC's homogeneous (uniform) refinement6 use a transfer function (or Wiener filter) determined by FSC computed between two half-maps13,14,15.
Such methods presume a Fourier basis and shift-invariance. Although well suited to stationary processes, they are less well suited to protein structures, which are spatially compact and exhibit non-uniform disorder (e.g. due to motion or variations in signal resolution). FSC, for instance, provides an aggregate measure of resolution. To the extent that FSC under- or over-estimates resolution in different regions, FSC-based shift-invariant filtering will over-smooth some regions, attenuating useful signal and under-smooth others, leaving unwanted noise. To address these issues we introduce a family of adaptive regularizers that can, in many cases, find better 3D structures with improved estimates of the latent poses during refinement.
Cross-validation regularization
We formulate a new regularizer for cryo-EM reconstruction in terms of the minimization of a CV objective19,20. CV is a general principle that is widely used in machine learning and statistics for model selection and parameter estimation with complex models. In CV, observed data are randomly partitioned into a training set and a held-out validation set. Model parameters are inferred using the training data, the quality of which is then assessed by measuring an error function applied to the validation data. In k-fold CV, the observations are partitioned into k parts. In each of k trials, one part is selected as the held-out validation set and the remaining k − 1 parts comprise the training set. The per-trial validation errors are summed, providing the total CV error. This procedure measures agreement between the optimized model and the observations without bias due to over-fitting. Rather, over-fitting during training is detected directly as an increase in the validation error. Notably, formulating regularization in a CV setting provides a principled way to design regularization operators that are more complex than the conventional, isotropic frequency-space filters. The CV framework is not restricted to a Fourier basis. One may consider more complex parameterizations, the use of meta-parameters and incorporate cryo-EM domain knowledge.
Given a family of regularizers rθ with parameters θ, the minimization of CV error to find θ is often applied as an outer loop. This requires the optimization of model parameters m to be repeated many times with different values of θ, a prohibitively expensive cost for problems like cryo-EM. Instead, one can also perform CV optimization as an inner loop, while optimization of model parameters occurs in the outer loop. Regularizer parameters θ are then effectively optimized on-the-fly, preventing under- or over-fitting without requiring multiple 3D refinements to be completed.
To that end, consider the use of twofold CV optimization to select the regularization operator, denoted rθ(m) in the regularization step in Algorithm 1 (note that k > 2 is also possible). The dataset \({\mathcal{D}}\) is partitioned into two halves, \({{\mathcal{D}}}_{1}\) and \({{\mathcal{D}}}_{2}\) and two (unregularized) refinements are computed, namely m1 and m2. For each, one half of the data is the 'training set' and the other is held out for validation. To find the regularizer parameters θ we wish to minimize the total CV error E, that is,
$$\begin{array}{lll}\mathop{\min }\limits_{\theta }E(\theta )&=&\mathop{\min }\limits_{\theta }\ e({r}_{\theta }({m}_{1});{{\mathcal{D}}}_{2})+e({r}_{\theta }({m}_{2});{{\mathcal{D}}}_{1})\\ &=&\mathop{\min }\limits_{\theta }\parallel {r}_{\theta }({m}_{1})-{m}_{2}{\parallel }^{2}+\parallel {r}_{\theta }({m}_{2})-{m}_{1}{\parallel }^{2}\end{array}$$
where e is the negative log likelihood of the validation half-set given the regularized half-map. The second line simplifies this expression by using the raw reconstruction from the opposite half-set as a proxy for the actual observed images. Note that assumptions for 'gold-standard' refinement17 are not broken in this procedure. With the L2 norm, equation (3) reduces to a sum of per-voxel squared errors, corresponding to white Gaussian noise between the half-set reconstructions. When the choice of θ causes rθ to remove too little noise from the raw reconstruction, the residual error E will be unnecessarily large. If θ causes rθ to over-regularize, removing too much structure from the raw reconstruction, then E increases as the structure retained by rθ no longer cancels corresponding structure in the opposite half-map. As such, minimizing E(θ) provides the regularizer that optimally separates signal from noise.
We note that similar objectives have been used for general image de-noising21 and adapted for cryo-EM images39,40,41; however, in these methods the aim was to learn a general neural network de-noiser, whereas the goal here is to optimize regularization parameters on a single data sample. It is also worth noting that this formulation can be extended in a straightforward way to compare each half-set reconstruction against images directly (dealing appropriately with the latent pose variables) or to use error functions corresponding to different noise models.
Regularization parameter optimization
The CV formulation in equation (3) provides great flexibility in choosing the family of regularizers rθ, taking domain knowledge into account. For non-uniform refinement, we wish to accommodate protein structures with spatial variations in disorder, motion and resolution. Accordingly, we define the regularizer to be a space-varying linear filter. The filter's spatial extent is determined by the regularization parameter θ(x), which varies with spatial position. Here, we write the regularizer in operator form:
$$({r}_{\theta }\circ m)(x)\,=\,\sum _{\xi }h(x-\xi ;\ \theta (x))\,m(\xi )$$
where h(x; ψ) is a symmetric smoothing kernel, the spatial scale of which is specified by ψ. In practice we let h(x; ψ) be an eighth-order Butterworth kernel. The eighth-order kernel provides a middle ground between the relatively poor frequency resolution of the Gaussian kernel and the sharp cutoff of the sinc kernel, which suffers from spatial ringing. We have experimented with different orders of Butterworth filters and found that an eighth-order kernel performs well on a broad range of particles.
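The claim that the Butterworth kernel is a middle ground can be checked with a small sketch comparing the frequency responses of a Gaussian, an eighth-order Butterworth and an ideal (sinc-kernel) low-pass at the same nominal cutoff; the cutoff value and the pass/stop-band frequencies used for the printout are arbitrary assumptions.

```python
import numpy as np

def butterworth(f, cutoff, order=8):
    """Butterworth low-pass transfer function of the given order."""
    return 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))

def gaussian(f, cutoff):
    """Gaussian transfer function with ~0.5 amplitude at the cutoff."""
    return np.exp(-np.log(2.0) * (f / cutoff) ** 2)

def brickwall(f, cutoff):
    """Ideal (sinc-kernel) low-pass: sharp cutoff, but rings in real space."""
    return (f <= cutoff).astype(float)

f = np.linspace(0.0, 0.5, 200)  # spatial frequency (cycles/voxel)
for name, h in [("butterworth8", butterworth(f, 0.25)),
                ("gaussian", gaussian(f, 0.25)),
                ("brickwall", brickwall(f, 0.25))]:
    # pass-band retention (f <= 0.20) versus stop-band leakage (f >= 0.30)
    print(name, round(float(h[f <= 0.20].min()), 3), round(float(h[f >= 0.30].max()), 3))
```

The printout shows that the eighth-order Butterworth keeps the pass band nearly flat while falling off quickly in the stop band, whereas the Gaussian sacrifices pass-band signal and the brick-wall filter (perfect in frequency) rings in real space.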
When equation (4) is combined with the CV objective for the estimation of θ(x), one obtains
$$\begin{array}{lll}{\theta }^{* }&=&\arg \mathop{\min }\limits_{\theta }E(\theta )\\ &=&\arg \mathop{\min }\limits_{\theta }\sum _{x}| \ ({r}_{\theta }\circ {m}_{1})(x)-{m}_{2}(x)\ {| }^{2}\ +\ | \ ({r}_{\theta }\circ {m}_{2})(x)-{m}_{1}(x)\ {| }^{2}.\end{array}$$
With one regularization parameter at each voxel, that is, θ(x), this reduces to a large set of decoupled optimization problems, one for each voxel. That is, for voxel x one obtains
$${\theta }^{* }(x)\,=\,\arg \mathop{\min }\limits_{\theta (x)}\ | \ ({r}_{\theta }\circ {m}_{1})(x)-{m}_{2}(x)\ {| }^{2}\ +\,| \ ({r}_{\theta }\circ {m}_{2})(x)-{m}_{1}(x)\ {| }^{2}$$
With this decoupling, θ(x) can transition quickly from voxel to voxel, yielding high spatial resolution. On the other hand, the individual sub-problems in equation (6) are not well constrained as each parameter is estimated from data at a single location, so the parameter estimates are not useful. In essence, our regularizer design has two competing goals, namely, reliable signal detection and high spatial resolution (respecting boundaries between regions with different properties). Signal detection improves through aggregation of observations (e.g. neighboring voxels), while high spatial resolution prefers minimal aggregation (equation (6)).
To improve signal detection, we further constrain θ* to be smooth. That is, although in some regions θ should change quickly (solvent–protein boundaries), in most regions we expect it to change slowly (in solvent and regions of rigid protein mass). Smoothness effectively limits the number of degrees of freedom in θ, which is important to ensure that θ itself does not over-fit during iterative refinement. One can encourage smoothness in θ by explicitly penalizing spatial derivatives of θ in the objective (equation (5)), but this yields a Markov random field problem that is hard to optimize. Alternatively, one can express θ in a low-dimensional basis (e.g. radial basis functions), but this requires prior knowledge of the expected degree of smoothness. Instead, we adopt a simple but effective approach. Assuming that θ is smoothly varying, we treat measurements in the local neighborhood of x as additional constraints on θ(x). A window function can be used to give more weight to points close to x. We thereby obtain the following least-squares objective:
$$\mathop{\min }\limits_{\theta (x)}\sum _{\xi }{w}_{\rho }(\xi -x)\,\left[\ | \ ({r}_{\theta (x)}\circ {m}_{1})(\xi )-{m}_{2}(\xi )\ {| }^{2}+| \ ({r}_{\theta (x)}\circ {m}_{2})(\xi )-{m}_{1}(\xi )\ {| }^{2}\ \right]$$
where wρ(x) is positive and integrates to 1, with spatial extent ρ. This allows one to estimate θ at each voxel independently, while the overlapping neighborhoods ensure that θ(x) varies smoothly.
This approach also provides a natural way to allow for variable neighborhood sizes, where ρ(x) depends on location x, so both rigid regions and transition regions are well modeled. Notably, we want ρ(x) to be large enough to reliably estimate the local power of the CV residual error to estimate θ(x) correctly, but small enough to enable rapid local transitions. A reasonable balance can be specified in terms of the highest frequency with substantial signal power, which is captured by the regularization parameter θ(x) itself. In particular, for regularizers that are close to optimal, we expect the residual signal to have its power concentrated at wavelengths near θ(x). In this case a good measure of the local residual power is to aggregate the squared residual over a small number of wavelengths θ(x)42. Thus we can reliably estimate both θ(x) and ρ(x) as long as ρ(x) is constrained to be a small multiple of θ(x).
We therefore adopt a simple heuristic, that ρ(x) > γ θ(x) where γ, the adaptive window factor (AWF), is a constant. With this constraint we obtain the final form of the computational problem solved in non-uniform refinement to regularize 3D electron density at each iteration; that is,
$$\begin{array}{ll}{\theta }^{* }(x)=&{\arg \min }_{\theta (x)}\ \mathop{\min }\limits_{\rho (x)}\sum _{\xi }{w}_{\rho (x)}(\xi -x)\,\\ &\times\left[\ | \ ({r}_{\theta (x)}\circ {m}_{1})(\xi )-{m}_{2}(\xi )\ {| }^{2}\right. \\ \ \ &+ \left. | \ ({r}_{\theta (x)}\circ {m}_{2})(\xi )-{m}_{1}(\xi )\ {| }^{2}\ \right] \,s.t.\ \ \rho (x)> \gamma \,\theta (x)\end{array}$$
We find that as long as γ ≥ 3 we obtain reliable estimates of the local power of the residual signal. For γ < 2, estimates of residual power are noisy and optimization of the regularization parameters therefore suffers. The algorithm is relatively insensitive to values of γ > 3, but there is some loss in the spatial resolution of the adaptive regularizer as γ increases.
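A heavily simplified sketch of the per-voxel search in equation (8) is given below. It substitutes a Gaussian smoothing kernel for the Butterworth regularizer and a Gaussian window for w_ρ, treats the candidate wavelengths and the θ-to-σ conversion as assumptions, and ignores GPU acceleration and the independent handling of half-maps; it is meant only to show the brute-force structure of the search with the window extent ρ tied to θ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def optimize_theta(m1, m2, candidates, gamma=3.0):
    """Per-voxel search over a discretized grid of regularization parameters
    (wavelengths, in voxels).  For each candidate theta the CV residual is
    aggregated over a window of extent rho = gamma * theta."""
    best_err = np.full(m1.shape, np.inf)
    best_theta = np.zeros(m1.shape)
    for theta in candidates:
        sigma = theta / 2.0                      # kernel scale (assumed proxy for theta)
        r1 = gaussian_filter(m1, sigma)          # stand-in for the Butterworth regularizer
        r2 = gaussian_filter(m2, sigma)
        resid = (r1 - m2) ** 2 + (r2 - m1) ** 2  # per-voxel CV residual
        rho = gamma * theta                      # window extent tied to theta
        err = gaussian_filter(resid, rho)        # windowed aggregation (w_rho)
        better = err < best_err
        best_err[better] = err[better]
        best_theta[better] = theta
    return best_theta
```

Because the window grows with the candidate wavelength, coarse (large-θ) candidates are judged over broad neighborhoods, while fine candidates can still track sharp boundaries between ordered and disordered regions.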
Regularization step for non-uniform refinement
Require Particle image dataset \({\mathcal{D}}\) with pose estimates z
1: Randomly partition \({\mathcal{D}}\) into halves, \({{\mathcal{D}}}_{1}\) and \({{\mathcal{D}}}_{2}\) with corresponding poses z1 and z2
2: Reconstruct \({\tilde{m}}_{1}\) and \({\tilde{m}}_{2}\), the raw (noisy) 3D maps from each half-set
3: Estimate regularization parameters θ* by solving equation (8)
4: Reconstruct a single map from \({\mathcal{D}},\ z\) and apply the optimal regularizer \({r}_{{\theta }^{* }}\)
Given a set of particle images and a low-resolution ab initio 3D map, non-uniform refinement comprises three main steps, similar to conventional uniform (homogeneous) refinement (Algorithm 1). The data are randomly partitioned into two halves, each of which is used to independently estimate a 3D half-map. This 'gold-standard' refinement17 allows use of FSC for evaluating map quality, and for comparison with existing algorithms. The alignment of particle images against their respective half-maps, and the reconstruction of the raw 3D density map (the E and M steps in Algorithm 1) are also identical to uniform refinement.
The difference between uniform and non-uniform refinement is in the regularization step. First, in non-uniform refinement, regularization is performed independently in the two half-maps. As such, the estimation of the spatial regularization parameters in Algorithm 2 effectively partitions each half-dataset into quarter-datasets. We often refer to the raw reconstructions in Algorithm 2 as quarter-maps. The non-uniform refinements on half-maps are therefore entirely independent, satisfying the assumptions of a 'gold-standard' refinement17. In contrast, conventional uniform refinement uses FSC between half-maps to determine regularization parameters at each iteration, thereby sharing masks and regularization parameters, both of which contaminate final FSC-based assessment because the two half-maps are no longer reconstructed independently.
Most importantly, non-uniform refinement uses equation (8) to define the optimal parameters with which to regularize each half-set reconstruction at each refinement iteration. Figure 2 shows an example of the difference between uniform filtering (FSC-based) and the new CV-optimal regularizer used in non-uniform refinement. Uniform regularization removes signal and noise from all parts of the 3D map equally. Adaptive regularization, on the other hand, removes more noise from disordered regions, while retaining the high-resolution signal in well-structured regions that is critical for aligning 2D particle images in the next iteration.
In practice, for regularization parameter estimation, equation (8) is solved on a discretized parameter space where a relatively simple discrete search method can be used (e.g. as opposed to continuous gradient-based optimization). The algorithm is implemented in Python within the cryoSPARC software platform6, with most of the computation implemented on graphics processing unit accelerators. An efficient solution to equation (8) is important in practice because this subproblem is solved twice for each iteration of a non-uniform refinement.
Finally, the tuning parameters for adaptive regularization are interpretable and relatively few in number. They include the order of the Butterworth kernel, the discretization of the parameter space and the scalar relating ρ(x) and θ(x), called the AWF. In all experiments, we use an eighth-order Butterworth filter and a fixed AWF parameter γ = 3. We discretize the regularization parameters into 50 possible values, equispaced in the Fourier domain to provide greater sensitivity to small-scale changes at finer resolutions. We find that non-uniform refinement is approximately two times slower than uniform refinement in our current implementation.
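As a small illustration of the discretization choice, the snippet below builds a grid of candidate parameters equispaced in spatial frequency and converts them to resolutions in Å; the pixel size and frequency bounds are arbitrary example values, not values prescribed above.

```python
import numpy as np

pixel_size = 1.07                          # Å per voxel (example value)
freqs = np.linspace(1.0 / 50.0, 0.5, 50)   # 50 candidates, equispaced in cycles/voxel
wavelengths = pixel_size / freqs           # corresponding resolutions in Å
print(wavelengths[:3], wavelengths[-3:])   # coarse spacing at low resolution, fine at high resolution
```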
Over-fitting of regularizer parameters
As mentioned in the Discussion, local resolution estimates or local statistical tests have commonly been used to adaptively filter 3D maps. While these methods are generally satisfactory as one-time post-processing steps for visualization, in our experience they can lead to severe over-fitting when used iteratively within refinement as a substitute for regularization. (Supplementary Fig. 1 illustrates one example with the Nav1.7 dataset.) During iterative refinement, small mis-estimations of local resolution at a few locations (due to high estimator variance22) cause subtle over- or under-fitting, leaving slight density variations. Over multiple iterations of refinement, these errors can produce strong erroneous density that contaminate particle alignments and the local estimation of resolution itself, creating a vicious cycle. A related technique using iterative local resolution and filtering was described briefly in EMAN2.2 documentation9 and may suffer the same problem. The resulting artifacts (e.g. streaking and spikey density radiating from the structure) are particularly prevalent in datasets with junk particles, structured outliers or small particles that are already difficult to align. To mitigate these problems, the approach we advocate couples an implicit resolution measure to a particular choice of local regularizer, with optimization explicitly designed to control model capacity and avoid over-fitting of regularizer parameters.
Experimental results for STRA6-CaM and PfCRT datasets were computed directly from particle image stacks, with no further pre-processing. The original data for the Nav1.7 protein comprise 25,084 raw microscope movies (EMPIAR-10261) from a Gatan K2 Summit direct electron detector in counting mode, with a pixel size of 0.849 Å. We processed the dataset through motion correction, CTF estimation, particle picking, 2D classification, ab initio reconstruction and heterogeneous refinement in cryoSPARC v.2.11 (ref. 6). A total of 738,436 particles were extracted and then curated using 2D classification yielding 431,741 particle images (pixel size 1.21 Å, box size 360 pixels). As described elsewhere25, we detected two discrete conformations corresponding to the active and inactive states of the channel, with 300,759 and 130,982 particles. We obtained reconstructions with resolutions better than the published literature for both states, but for the results in this work we focus solely on the active state (refined with C2 symmetry yielding 601,518 asymmetric units).
Reporting Summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
No new datasets were created in this study. The raw datasets analyzed in this study were either downloaded from the EMPIAR repository (EMPIAR-10024, EMPIAR-10330, EMPIAR-10261) or were provided by authors of other studies, cited in the main text. Density maps and atomic coordinates from EMDB-0341 and Protein Data Bank 6N4Q were used for visualization. All other results and outputs of data analysis in this study are available from the corresponding author on reasonable request.
Code availability
The cryoSPARC software package is freely available in executable form for nonprofit academic use at www.cryosparc.com.
Cheng, Y. Single-particle Cryo-EM at crystallographic resolution. Cell 161, 450–457 (2015).
Cheng, Y. Membrane protein structural biology in the era of single particle cryo-EM. Curr. Opin. Struct. Biol. 52, 58–63 (2018).
Overington, J. P., Al-Lazikani, B. & Hopkins, A. L. How many drug targets are there? Nat. Rev. Drug Discov. 5, 993–996 (2006).
Renaud, J.-P. et al. Cryo-EM in drug discovery: achievements, limitations and prospects. Nat. Rev. Drug Discov. 17, 471–492 (2018).
Scapin, G., Potter, C. S. & Carragher, B. Cryo-EM for small molecules discovery, design, understanding, and application. Cell Chem. Biol. 25, 1318–1325 (2018).
Punjani, A., Rubinstein, J. L., Fleet, D. J. & Brubaker, M. A. CryoSPARC: Algorithms for rapid unsupervised cryo-em structure determination. Nat. Methods 14, 290–296 (2017).
Grigorieff, N. FREALIGN: high resolution refinement of single particle structures. J. Struct. Biol. 157, 117–125 (2007).
Scheres, S. H. W. A Bayesian view on cryo-EM structure determination. J. Mol. Biol. 415, 406–418 (2012).
Bell, J. M., Chen, M., Baldwin, P. R. & Ludtke, S. J. High resolution single particle refinement in EMAN2.1. Methods 100, 25–34 (2016).
Grant, T., Rohou, A. & Grigorieff, N. cisTEM, user-friendly software for single-particle image processing. eLife 7, e35383 (2018).
Zivanov, J. et al. New tools for automated high-resolution cryo-em structure determination in RELION-3. eLife 7, e42166 (2018).
Dempster, A. P., Laird, N. M. & Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B 39, 1–38 (1977).
Chen, S. et al. High-resolution noise substitution to measure overfitting and validate resolution in 3D structure determination by single particle electron cryomicroscopy. Ultramicroscopy 135, 24–35 (2013).
Harauz, G. & van Heel, M. Exact filters for general geometry three dimensional reconstruction. Optik 73, 146–156 (1986).
Rosenthal, P. B. & Henderson, R. Optimal determination of particle orientation, absolute hand, and contrast loss in single-particle electron cryomicroscopy. J. Mol. Biol. 333, 721–745 (2003).
Scheres, S. H. W. RELION: implementation of a Bayesian approach to cryo-EM structure determination. J. Struct. Biol. 180, 519–530 (2012).
Scheres, S. H. W. & Chen, S. Prevention of overfitting in cryo-EM structure determination. Nat. methods 9, 853–854 (2012).
Paulsen, C. E., Armache, J. P., Gao, Y., Cheng, Y. & Julius, D. Structure of the TRPA1 ion channel suggests regulatory mechanisms. Nature 520, 511–517 (2015).
Golub, G. H., Heath, M. & Wahba, G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 21, 215–223 (1979).
Wahba, G. A comparison of GCV and GML for choosing the smoothing parameter in the generalized spline smoothing problem. Ann. Stat. 13, 1378–1402 (1985).
Lehtinen, J. et al. Noise2Noise: learning image restoration without clean data. Proc. Mach. Learn. Res. 7, 4620–4631 (2018).
Cardone, G., Heymann, J. B. & Steven, A. C. One number does not fit all: mapping local variations in resolution in cryo-em reconstructions. J. Struct. Biol. 184, 226–236 (2013).
Chen, Y. et al. Structure of the STRA6 receptor for retinol uptake. Science 353, aad8266 (2016).
Kim, J. et al. Structure and drug resistance of the Plasmodium falciparum transporter PfCRT. Nature https://doi.org/10.1038/s41586-019-1795-x (2019).
Xu, H. et al. Structural basis of Nav1.7 inhibition by a gating-modifier spider toxin. Cell https://doi.org/10.1016/j.cell.2018.12.018 (2019).
Stagg, S., Noble, A., Spilman, M. & Chapman, M. Reslog plots as an empirical metric of the quality of cryo-EM reconstructions. J. Struct. Biol. 185, 418–426 (2014).
Kucukelbir, A., Sigworth, F. J. & Tagare, H. D. Quantifying the local resolution of cryo-EM density maps. Nat. Methods 11, 63–65 (2014).
Vilas, J. L. et al. Monores: automatic and accurate estimation of local resolution for electron microscopy maps. Structure 26, 337–344 (2018).
Felsberg, M. & Sommer, G. The monogenic signal. IEEE Trans. Signal Process. 49, 3136–3144 (2001).
Ramlaul, K., Palmer, C. M. & Aylett, C. H. S. A local agreement filtering algorithm for transmission EM reconstructions. J. Struct. Biol. 205, 30–40 (2019).
Ramlaul, K., Palmer, C. M., Nakane, T. & Aylett, C. H. S. Mitigating local over-fitting during single particle reconstruction with SIDESPLITTER. J. Struct. Biol. 211, 107545 (2020).
Vonesch, C., Wang, L., Shkolnisky, Y. and Singer, A. Fast wavelet-based single particle reconstruction in cryo-EM. Proc. IEEE Int. Symp. Biomed. Imaging https://doi.org/10.1109/ISBI.2011.5872791 (2011).
Kucukelbir, A., Sigworth, F. J. & Tagare, H. D. A Bayesian adaptive basis algorithm for single particle reconstruction. J. Struct. Biol. 179, 56–67 (2012).
Hofmann, S. et al. Conformation space of a heterodimeric ABC exporter under turnover conditions. Nature 571, 580–583 (2019).
Rogala, K. B. et al. Structural basis for the docking of mTORC1 on the lysosomal surface. Science 366, 468–475 (2019).
Gilman, M. S. A. et al. Structure of the respiratory syncytial virus polymerase complex. Cell 179, 193–204 (2019).
Nguyen, A. H. et al. Structure of an endosomal signaling GPCR-G protein-β-arrestin megacomplex. Nat. Struct. Mol. Biol. 26, 1123–1131 (2019).
Wrapp, D. et al. Cryo-EM structure of the 2019-nCoV spike in the prefusion conformation. Science 367, 1260–1263 (2020).
Bepler, T., Noble, A. & Berger, B. Topaz-Denoise: general deep denoising models for cryoEM. Nat. Commun. 11, 5208 (2020).
Tegunov, D. & Cramer, P. Real-time cryo-electron microscopy data preprocessing with warp. Nat. Methods 16, 1146–1152 (2019).
Buchholz, T.-O., Jordan, M., Pigino, G. & Jug, F. cryoCARE: content-aware image restoration for cryo-transmission electron microscopy data. Preprint at arXiv https://arxiv.org/abs/1810.05420 (2018).
Potamianos, A. & Maragos, P. A comparison of the energy operator and the Hilbert transform approach to signal and speech demodulation. Signal Process. 37, 95–120 (1994).
We are extraordinarily grateful to O. Clarke, F. Mancia and Y. Zi Tan for providing valuable cryo-EM data early in this project and for sharing their experience as early adopters of non-uniform refinement and cryoSPARC. We thank the entire team at Structura Biotechnology Inc., which designs, develops and maintains the cryoSPARC software system in which this project was implemented and tested. We thank J. Rubinstein for comments on this manuscript. Resources used in this research were provided, in part, by the Province of Ontario, the Government of Canada through NSERC (Discovery Grant RGPIN 2015-05630 to D.J.F.) and CIFAR and companies sponsoring the Vector Institute.
Department of Computer Sciences, University of Toronto, Toronto, Ontario, Canada
Ali Punjani, Haowei Zhang & David J. Fleet
Vector Institute, Toronto, Ontario, Canada
Ali Punjani & David J. Fleet
Structura Biotechnology Inc., Toronto, Ontario, Canada
Ali Punjani
Haowei Zhang
David J. Fleet
A.P. and D.J.F. designed the algorithm, A.P. and H.Z. implemented the software and performed experimental work. A.P. and D.J.F. contributed expertise and supervision. D.J.F. and A.P. contributed to manuscript preparation.
Correspondence to Ali Punjani or David J. Fleet.
A.P. is CEO of Structura Biotechnology, which builds the cryoSPARC software package. D.J.F. is an advisor to Structura Biotechnology. Novel aspects of the method presented are described in a patent application (WO2019068201A1), with more details available at https://cryosparc.com/patent-faqs.
Peer review information Allison Doerr was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
Supplementary Figs. 1–4 and Supplementary Notes.
Supplementary Video 1
Density maps comparing uniform and non-uniform refinement. The video shows 3D density maps and detail views from three different membrane protein datasets. For each dataset, the entire structure is shown using uniform (conventional) and nonuniform refinement on the same data. Density maps are sharpened with the same B-factor and have thresholds set to enclose the same volume. Detail views show the improvement of reconstructed map quality for individual helical segments within each protein.
Punjani, A., Zhang, H. & Fleet, D.J. Non-uniform refinement: adaptive regularization improves single-particle cryo-EM reconstruction. Nat Methods 17, 1214–1221 (2020). https://doi.org/10.1038/s41592-020-00990-8
Kernel Fisher Discriminant Analysis for Natural Gait Cycle Based Gait Recognition
Jun Huang*, Xiuhui Wang** and Jun Wang**
Corresponding Author: Jun Huang* ([email protected])
Jun Huang*, College of Modern Science and Technology, China Jiliang University, Hangzhou, China, [email protected]
Xiuhui Wang**, College of Information Engineering, China Jiliang University, Hangzhou, China, [email protected]
Jun Wang**, College of Information Engineering, China Jiliang University, Hangzhou, China, [email protected]
Received: January 16 2018
Accepted: March 20 2018
Abstract: This paper studies a novel approach to gait recognition based on natural gait cycles via kernel Fisher discriminant analysis (KFDA), which can effectively compute features from gait sequences and accelerate the recognition process. The proposed approach first extracts gait silhouettes from each gait video through moving object detection and segmentation. Second, a gait energy image (GEI) is calculated for each gait video and used as the gait feature. Third, KFDA is used to refine the extracted gait features, producing a low-dimensional feature vector for each gait video. Finally, a nearest neighbor classifier performs classification. The proposed method is evaluated on the CASIA and USF gait databases, and the results show that it achieves better recognition performance than other existing algorithms.
Keywords: Gait Energy Image , Gait Recognition , Kernel Fisher Discriminant Analysis , Natural Gait Cycle
1. Introduction

Gait recognition is a biometric identification technology that enables long-distance and covert identity authentication, and it has been widely applied in intelligent video surveillance. Existing gait recognition algorithms fall into two main categories: appearance-based methods [1] and model-based methods [2]. The former rely on the spatiotemporal shape and motion characteristics of the gait sequence, while the latter adopt a structural model to measure time-varying gait parameters such as gait cycle, frequency and the direction of key points. Unlike face [3] and fingerprint [4] biometrics, gait features are dynamic, and their significance and robustness are demonstrated over a full walking cycle [5]. Extracting gait features that are significant, robust and compact from a set of gait image sequences is therefore one of the central research problems. The gait energy image (GEI) uses a simple weighted average to synthesize one gait period of silhouette images into a single image. Building on the GEI, the authors of [6,7] proposed two-directional two-dimensional principal component analysis, (2D)2PCA, and its weighted variant, W(2D)2PCA, which combine row and column projections. Compared to traditional feature extraction methods such as principal component analysis (PCA) and linear discriminant analysis (LDA), (2D)2PCA and W(2D)2PCA are more resilient to changes in viewing angle. To better capture local information, Yang et al. [8] proposed the discriminant common vectors (DCV) method to address the small-sample-size problem, and Liu et al. [9] used the local binary pattern (LBP) to extract local features from the GEI and DCV to reduce the LBP features, exploiting the GEI information better than traditional methods. Atta et al. [10] proposed a spatio-temporal gait recognition method based on the Radon transform to overcome the limitations of most existing temporal-template approaches. To treat gait features directly as a matrix, the authors of [11] introduced the concept of a kernel cuboid and proposed a kernel-based image feature extraction method that processes a face image in a block-wise manner and performs kernel discriminant analysis independently in every block set, using a kernel cuboid instead of a kernel matrix. Kernel Fisher discriminant analysis (KFDA) [12] instead uses a kernel function to map nonlinearly separable data into an optimal feature space. In face recognition, KFDA has achieved better results than existing methods such as PCA [13], LDA [14] and locality preserving projection (LPP) [15]. The experimental results in [16] show that KFDA achieves a lower error rate than PCA and FLD.
Flow chart of the proposed gait recognition method.
This paper presents a new gait recognition method based on KFDA, as shown in Fig. 1. First, a GEI is calculated from each gait cycle. KFDA then exploits the high correlation between different GEIs for feature extraction by selecting a suitable kernel function, and a nearest neighbor classifier is used for classification. Finally, the algorithm is tested on the CASIA and USF gait databases, and the experimental results show that the method is effective for gait recognition. The main contributions of this paper are as follows: (1) a new feature extraction algorithm based on natural gait cycles is proposed, and the observation vector set is constructed from the extracted features; to improve robustness, the algorithm combines features from the whole gait image and from the region bounded by the legs. (2) KFDA is used to refine the extracted gait features, yielding a low-dimensional feature vector for each gait video. (3) Experimental results on the CASIA and USF gait databases show that, even when the body shape changes because the subject is carrying a backpack or other objects, the proposed method still achieves better classification rates than other existing methods.
The rest of the paper is organized as follows. Section 2 discusses the details of the proposed method. Section 3 presents the experimental methods and analyzes the results in detail. Finally, conclusions and future work are drawn in Section 4.
2. The Proposed Method
2.1 Gait Silhouettes Extraction
The proposed approach extracts gait silhouettes from each gait video through moving object detection and segmentation. Because external factors during real image acquisition easily introduce problems such as noise and poor contrast, the images must be preprocessed; preprocessing is a precondition for feature extraction and gait recognition. The preprocessing follows the steps below.
- Background reconstruction: Since the scene is approximately stationary and the background corresponds to the low-frequency part of the whole video sequence, the per-pixel median over the image sequence is used to estimate the static background.
- Moving object detection: Background subtraction first extracts the moving foreground, obtained by differencing each frame against the background image; moving shadows in the foreground are then removed with an HSV-based color model [17]. Finally, the binary silhouette is obtained by thresholding with Otsu's algorithm [18].
- Binary image noise removal and normalization: Because threshold segmentation leaves small noise around the human body and in the background, a median filter is used to remove noise and redundant information. In addition, to handle inconsistent silhouette sizes caused by changes in focal length during capture, the silhouettes are normalized and scaled to the same size.
- Gait cycle detection: Gait is periodic, and the body's center of gravity changes continuously during walking. The horizontal position of the center of gravity is largely unaffected by the forward motion of the arms and legs, but its vertical coordinate varies periodically. In this paper, the gait period is determined from changes in this center-of-gravity coordinate (a minimal sketch follows this list).
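A minimal sketch of the cycle-detection step, assuming binary silhouette masks are already available; using the autocorrelation of the centroid's vertical trajectory to obtain a period estimate is one simple choice, not necessarily the paper's exact procedure.

```python
import numpy as np

def gait_period(silhouettes):
    """Estimate the gait period (in frames) from the vertical coordinate of the
    silhouette centroid across frames.  `silhouettes` is a (T, H, W) array of
    binary masks; the period is taken as the first peak of the autocorrelation."""
    ys = []
    for mask in silhouettes:
        rows, _ = np.nonzero(mask)
        ys.append(rows.mean() if rows.size else np.nan)
    y = np.asarray(ys, dtype=float)
    y = np.nan_to_num(y - np.nanmean(y))
    ac = np.correlate(y, y, mode="full")[len(y) - 1:]   # non-negative lags only
    for lag in range(2, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return lag                                   # first local maximum after lag 0
    return len(y)
```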
2.2 Improving GEI Processing and Cycle Detection
The gait energy image synthesizes the silhouette images of one gait period into a single image via a simple weighted average, defined as:
[TeX:] $$G(x, y)=\frac{1}{T} \sum_{t=1}^{T} B_{t}(x, y)$$
where T is the gait cycle length and [TeX:] $$B_{t}(x, y)$$ is the brightness value of pixel (x, y) at time t. Background pixels have brightness 0 and target (silhouette) pixels have brightness 255.
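Equation (1) translates directly into code; the sketch below assumes the cycle's binary silhouettes are stacked in a (T, H, W) array with values 0 and 255.

```python
import numpy as np

def gait_energy_image(cycle_silhouettes):
    """Compute the GEI of one gait cycle as the per-pixel average of the binary
    silhouette images over the T frames of the cycle (Eq. (1))."""
    cycle = np.asarray(cycle_silhouettes, dtype=np.float64)  # shape (T, H, W)
    return cycle.mean(axis=0)
```

The result is a grayscale image in which frequently occupied pixels are bright and rarely occupied pixels are dark.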
Since silhouette extraction was discussed in the previous section, this section focuses on the subsequent gait description and identification and proposes a feature extraction algorithm based on natural gait cycles. According to the configuration of the legs, the key gait states of normal walking can be classified into three kinds:
State 1: The two legs are close together in the same plane of the body; there are three common postures, namely the left foot lifted beside the right foot, the right foot lifted beside the left foot, and both feet standing normally, together marked as K1;
State 2: The left foot in front of the right foot, marked as K2;
State 3: The right foot in front of the left foot, marked as K3.
We define a complete gait cycle as the sequence K1→K2→K1→K3→K1 or K1→K3→K1→K2→K1. Fig. 2 shows an example of a complete cycle K1→K3→K1→K2→K1. After segmenting one complete gait cycle, we process all subsequent frames to find the remaining complete cycles, evenly partition each cycle, and extract NF key gait silhouette images in turn. A GEI is then computed over a short time window centered on each key silhouette to construct the observed state set (a minimal sketch is given after Fig. 2). Finally, KFDA is used to reduce the dimensionality of the observed state set, yielding the corresponding low-dimensional eigenvectors, as discussed in the following section.
An example of a complete gait cycle.
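A sketch of the key-frame extraction described above; the number of key frames NF and the temporal window around each key frame are illustrative assumptions.

```python
import numpy as np

def key_gait_features(cycle_silhouettes, n_key=5, half_window=2):
    """Evenly pick `n_key` key frames across one gait cycle and compute a small
    GEI over a temporal window centred on each key frame, forming the observed
    state set for this cycle (n_key and half_window are assumptions)."""
    cycle = np.asarray(cycle_silhouettes, dtype=np.float64)   # (T, H, W)
    T = cycle.shape[0]
    keys = np.linspace(0, T - 1, n_key).round().astype(int)
    states = []
    for k in keys:
        lo, hi = max(0, k - half_window), min(T, k + half_window + 1)
        states.append(cycle[lo:hi].mean(axis=0))              # local GEI around key frame
    return np.stack(states)                                   # (n_key, H, W)
```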
GEIs of three different conditions at 90º: (a) normal condition, (b) carrying a bag, and (c) with a coat.
From Fig. 3, we can see that the outermost outline of the human body changes when a person walks while carrying a backpack or other objects, which degrades the robustness of gait recognition [19]. Since the upper body changes very little while the lower body (the legs) changes markedly across normal, carrying and clothing conditions, this paper adopts the method in [20] and represents the gait feature by both the whole gait silhouette image and the energy image of the local leg region. Fig. 4 shows the energy image in the local region of the legs (marked by boxes).
Suppose the above feature extraction method is used to extract features from the whole-body gait silhouette images and the corresponding energy images of the local leg region for each gait video G. Then G can be expressed as
[TeX:] $$G=\left[\left\{F_{1}\right\},\left\{F_{2}\right\}\right]$$
where [TeX:] $$F_{1} \text { and } F_{2}$$ are the features extracted from the whole gait silhouette images and from the local leg region, respectively. [TeX:] $$F_{1} \text { and } F_{2}$$ can be calculated by the eigenmatrix mapping method, which is discussed in detail in [20] and is not repeated here; a simplified sketch is given after Fig. 4.
Fig. 4. Energy image of the local leg region: (a) normal, (b) carrying a bag, (c) clothing, and (d) walking at night.
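Below is a simplified sketch of the two-part descriptor G = [{F1}, {F2}]. The leg region is approximated here as the lower part of the GEI (the 45% fraction is an assumption), and plain vectorization stands in for the eigenmatrix mapping of [20].

```python
import numpy as np

LEG_FRACTION = 0.45   # assumed: the lower ~45% of the silhouette covers the legs

def leg_region(gei):
    """Crop the lower part of a GEI as the local leg-region energy image."""
    h = gei.shape[0]
    return gei[int(h * (1.0 - LEG_FRACTION)):, :]

def gait_descriptor(geis):
    """G = [{F1}, {F2}]: F1 from the whole-body GEIs, F2 from the leg region.
    Here both are simple flattened vectors; [20] uses an eigenmatrix mapping."""
    f1 = np.concatenate([g.ravel() for g in geis])
    f2 = np.concatenate([leg_region(g).ravel() for g in geis])
    return np.concatenate([f1, f2])
```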
2.3 Kernel Fisher Discriminant Analysis
The KFDA method maps the input samples into a high-dimensional feature space by a nonlinear mapping and finds the projection space that makes the inter-class scatter matrix largest and the within-class scatter matrix smallest. The main idea of the LDA algorithm is to find the optimal projection matrix [TeX:] $$W_{o p t}$$ by the Fisher criterion, whose objective is to determine the discriminant vectors by maximizing the inter-class scatter matrix [TeX:] $$S_{B}^{\Phi}$$ while minimizing the within-class scatter matrix [TeX:] $$S_{W}^{\Phi}.$$
Suppose there are n samples [TeX:] $$\left\{x_{1}, x_{2}, \ldots, x_{n}\right\}$$ belonging to c classes [TeX:] $$\left\{X_{1}, X_{2}, \ldots, X_{c}\right\}.$$ Following Fisher LDA, a nonlinear function [TeX:] $$\Phi(x)$$ maps the samples onto the feature space F, and the optimal subspace [TeX:] $$W_{o p t}$$ is obtained as:
[TeX:] $$W_{o p t}=\arg \max _{W} \frac{\left|W^{T} S_{B}^{\Phi} W\right|}{\left|W^{T} S_{W}^{\Phi} W\right|}=\left[w_{1}, w_{2}, \ldots, w_{m}\right]$$
Here [TeX:] $$S_{B}^{\Phi} \text { and } S_{W}^{\Phi}$$ represent the inter-class and within-class scatter matrices of the feature space F:
[TeX:] $$S_{B}^{\Phi}=\sum_{i=1}^{C} n_{i}\left(\mu_{i}^{\Phi}-\mu^{\Phi}\right)\left(\mu_{i}^{\Phi}-\mu^{\Phi}\right)^{T}$$
[TeX:] $$S_{W}^{\Phi}=\sum_{i=1}^{C} \sum_{X_{k} \in X_{i}}\left(\Phi\left(x_{k}\right)-\mu_{i}^{\Phi}\right)\left(\Phi\left(x_{k}\right)-\mu_{i}^{\Phi}\right)^{T}$$
where [TeX:] $$\mu_{i}^{\Phi}=\frac{1}{n_{i}} \sum_{x_{k} \in X_{i}} \Phi\left(x_{k}\right), \mu^{\Phi}=\frac{1}{n} \sum_{i=1}^{n} \Phi\left(x_{i}\right),$$ and [TeX:] $$w_{i}$$ is calculated from [TeX:] $$S_{B}^{\Phi} w_{i}=\lambda_{i} S_{W}^{\Phi} w_{i}.$$
According to kernel function theory, [TeX:] $$w_{i} \in \operatorname{span}\left\{\Phi\left(x_{1}\right), \Phi\left(x_{2}\right), \ldots, \Phi\left(x_{n}\right)\right\},$$ that is, [TeX:] $$w_{i}$$ lies in the space spanned by the mapped training samples and can be linearly represented as [TeX:] $$w_{i}=\sum_{j=1}^{n} \alpha_{i}^{j} \Phi\left(x_{j}\right).$$ Therefore, the numerator in (3) can be converted as:
[TeX:] $$w_{i}^{T} S_{B}^{\Phi} w_{i}=\alpha_{i}^{T} M \alpha_{i}$$
where [TeX:] $$M=\sum_{i=1}^{C}\left(M_{i}-\overline{M}\right)\left(M_{i}-\overline{M}\right)^{T}, \overline{M}_{j}=\frac{1}{n} \sum_{i=1}^{n} k\left(x_{j}, x_{i}\right) \text { and } \alpha_{i}=\left[\alpha_{i}^{1}, \alpha_{i}^{2}, \ldots, \alpha_{i}^{n}\right]^{T}$$
Similarly, the denominator in (3) can be converted as:
[TeX:] $$w_{i}^{T} S_{W}^{\Phi} w_{i}=\alpha_{i}^{T} L \alpha_{i}$$
where [TeX:] $$L=\sum_{j=1}^{C} K_{j}\left(I-I_{N_{j}}\right) K_{j}^{T}, K_{j} \text { is an } n \times n_{j}$$ kernel matrix with [TeX:] $$\left(K_{j}\right)_{n m}=k\left(x_{n}, x_{m}\right), x_{m} \in X_{j},$$ I denotes the identity matrix, and [TeX:] $$I_{N_{j}}$$ denotes the matrix whose entries are all [TeX:] $$1 / n_{j}$$.
Eq. (3) can also be simplified as:
[TeX:] $$\alpha_{o p t}=\arg \max _{\alpha} \frac{\left|\alpha^{T} M \alpha\right|}{\left|\alpha^{T} L \alpha\right|}=\left[\alpha_{1}, \alpha_{2}, \ldots, \alpha_{m}\right]$$
The optimal subspace [TeX:] $$W_{o p t}$$ is:
[TeX:] $$W_{o p t}=\left[w_{1}, w_{2}, \ldots, w_{m}\right]=\left[\sum_{i=1}^{n} \alpha_{1}^{i} \Phi\left(x_{i}\right), \sum_{i=1}^{n} \alpha_{2}^{i} \Phi\left(x_{i}\right), \ldots, \sum_{i=1}^{n} \alpha_{m}^{i} \Phi\left(x_{i}\right)\right]$$
Given the nonlinear mapping function [TeX:] $$\Phi(x),$$ the projection of a mapped sample onto the optimal subspace of the feature space F is:
[TeX:] $$\begin{array}{l}{\Phi(x) W_{o p t}=\left[\sum_{i=1}^{n} \alpha_{1}^{i} \Phi\left(x_{i}\right) \Phi(x), \sum_{i=1}^{n} \alpha_{2}^{i} \Phi\left(x_{i}\right) \Phi(x), \ldots, \sum_{i=1}^{n} \alpha_{m}^{i} \Phi\left(x_{i}\right) \Phi(x)\right]} \\ {=\left[\sum_{i=1}^{n} \alpha_{1}^{i} k\left(x_{i}, x\right), \sum_{i=1}^{n} \alpha_{2}^{i} k\left(x_{i}, x\right), \ldots, \sum_{i=1}^{n} \alpha_{m}^{i} k\left(x_{i}, x\right)\right]}\end{array}$$
The KFDA method projects the samples of c classes onto at most c - 1 dimensions, and we can choose the feature vectors corresponding to the first c - 1 eigenvectors to reduce the dimension of the feature space and improve the processing speed. Since the number of training samples is usually small in the recognition process (smaller than the number of pixels in an image), the within-class scatter matrix [TeX:] $$S_{W}^{\Phi}$$ may be singular, that is, [TeX:] $$\operatorname{rank}(Q)=\operatorname{rank}\left(S_{w}^{\Phi}\right) \leq n-1,$$ where Q denotes its kernel-space representation, the matrix L defined above, so the generalized eigenvalue equation cannot be used directly to solve the Rayleigh quotient extremum problem. To address this, we add [TeX:] $$\mu I$$ (I is the identity matrix and μ is a small coefficient) to the Q matrix, i.e., [TeX:] $$Q_{\mu}=Q+\mu I,$$ which makes Q nonsingular so that the generalized eigenvalue equation can be solved; see the sketch below.
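The following sketch implements a standard multi-class kernel Fisher discriminant with an RBF kernel, the μI regularization described above, and a generalized eigensolver from SciPy. The kernel choice, the default values of μ and gamma, and constant factors such as the class-size weighting in M are assumptions and may differ from the paper's exact matrices.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(A, B, gamma=1e-3):
    """k(a, b) = exp(-gamma * ||a - b||^2) for all pairs of rows of A and B."""
    sq = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

def kfda_fit(X, y, n_components=None, mu=1e-3, gamma=1e-3):
    """Kernel Fisher discriminant analysis on training samples X with labels y.

    Returns (training data, expansion coefficients alpha, gamma); the columns
    of alpha are the discriminant directions expressed in the kernel basis."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)

    m_all = K.mean(axis=1)                       # overall mean kernel column
    M = np.zeros((n, n))
    L = np.zeros((n, n))
    for c in classes:
        idx = np.where(y == c)[0]
        nc = len(idx)
        Kc = K[:, idx]                           # n x n_c block for class c
        d = (Kc.mean(axis=1) - m_all)[:, None]
        M += nc * (d @ d.T)                      # inter-class matrix
        center = np.eye(nc) - np.full((nc, nc), 1.0 / nc)
        L += Kc @ center @ Kc.T                  # within-class matrix

    L_reg = L + mu * np.eye(n)                   # Q_mu = Q + mu*I fixes singularity
    vals, vecs = eigh(M, L_reg)                  # generalized eigenproblem
    order = np.argsort(vals)[::-1]
    k = n_components or (len(classes) - 1)       # keep at most c - 1 directions
    alpha = vecs[:, order[:k]]
    return X, alpha, gamma

def kfda_transform(model, Xnew):
    """Project new samples: z_k = sum_i alpha_k^i * k(x_i, x)."""
    Xtrain, alpha, gamma = model
    return rbf_kernel(np.asarray(Xnew, dtype=float), Xtrain, gamma) @ alpha
```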
Suppose there are m individuals, each corresponding to gait sequences from different view angles. The proposed algorithm extracts the gait information contained in each individual's GEI as an input sample vector. Following the KFDA method, assume the number of gait sequences of class i is [TeX:] $$n_{i}, i=1,2, \ldots, q,$$ the sample set is [TeX:] $$\left\{x_{1,1}, x_{1,2}, \ldots, x_{1, n 1}, x_{2,1}, \ldots, x_{2, n 2}, \ldots, x_{q, n q}\right\},$$ and the total number of input samples is [TeX:] $$n=n_{1}+n_{2}+\ldots+n_{q}.$$ To classify a gait sequence of unknown class, we first train on the gait sequences of known classes and find the optimal feature space [TeX:] $$W_{o p t} \text { and } \alpha_{o p t}.$$ Then the projection on [TeX:] $$\alpha_{o p t}$$ and its projected trajectory are calculated. The detailed gait recognition algorithm is described in Algorithm 1.
Algorithm 1. The gait recognition algorithm based on the combination of KFDA and GCI-GEIs.
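Since the full Algorithm 1 listing is not reproduced here, the sketch below outlines its training and recognition steps under stated assumptions: it reuses kfda_fit and kfda_transform from the previous sketch, and nearest-class-centroid matching in the projected space is an assumed classifier rather than one specified by the paper.

```python
import numpy as np

def train_gait_recognizer(features, labels):
    """Train on the known gait classes: fit KFDA and project the training set."""
    model = kfda_fit(features, labels)            # from the previous sketch
    projected = kfda_transform(model, features)
    return model, projected, np.asarray(labels)

def recognize(model, projected_train, train_labels, query_feature):
    """Project an unknown gait descriptor and assign the nearest class centroid."""
    z = kfda_transform(model, np.asarray(query_feature)[None, :])[0]
    best_label, best_dist = None, np.inf
    for c in np.unique(train_labels):
        centroid = projected_train[train_labels == c].mean(axis=0)
        d = np.linalg.norm(z - centroid)
        if d < best_dist:
            best_label, best_dist = c, d
    return best_label
```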
3.1 Experiment on CASIA Gait Database
The CASIA dataset A [21], formerly called the NLPR gait database, contains 20 subjects; each subject has 12 image sequences in 3 walking directions (0º, 45º, and 90º), with 4 image sequences in each direction and 2 gait cycles in each sequence. Our experiments select 2 gait sequences (4 gait cycles) for training and another 2 gait sequences (4 gait cycles) for testing. We carried out 6 experiments on CASIA dataset A and report the average.
The CASIA dataset B is a large multi-view gait dataset containing 124 subjects, captured from 11 views varying from 0º to 180º with 18º between adjacent view directions. There are 10 sequences for each subject: 6 sequences of normal walking (normal), 2 of walking with a bag (bag), and 2 of walking in a coat. Our experiments use 60 subjects at 90º under normal conditions, so each subject has 6 sequences; each sequence contains 2 gait cycles, giving 12 gait cycles per subject. In the experiments, 2 gait sequences (4 gait cycles) are selected for training and another 4 gait sequences (8 gait cycles) for testing. We carried out 15 experiments on CASIA dataset B and report the average.
Our main focus is on feature extraction and identification from human silhouette images. The input samples of the experiments are the column-vector expansions of each subject's GEIs. The training and testing sets are divided with a ratio of 1:1. Then the KFDA method is applied to train the features and obtain [TeX:] $$\alpha_{o p t} \text { and } W_{o p t}.$$ Finally, the sample vectors are projected onto the optimal feature space [TeX:] $$W_{o p t}$$ to obtain the training features Train_Features and the testing features Test_Features, respectively.
Because of the small number of samples, the leave-one-out cross-validation method is adopted to obtain an unbiased estimate of the correct recognition rate. The paper adopts the cumulative match score (CMS) [22] to evaluate the experimental results and reports the recognition rates at Rank 1 and Rank 5; a sketch of the CMS computation is given below. To evaluate the GCI-GEI feature extraction and the KFDA dimension-reduction and classification ability, the recognition rate of our method is compared with other existing algorithms, and the results are shown in Table 1.
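A small helper for the Rank-1/Rank-5 cumulative match scores used here; ranking gallery classes by distance to their centroids in the projected space is an illustrative assumption.

```python
import numpy as np

def cumulative_match_scores(gallery, gallery_labels, probes, probe_labels, max_rank=5):
    """CMS: fraction of probes whose true class appears within the top-r
    ranked gallery classes, for r = 1..max_rank."""
    gallery_labels = np.asarray(gallery_labels)
    classes = np.unique(gallery_labels)
    hits = np.zeros(max_rank)
    for z, true_label in zip(probes, probe_labels):
        # distance from the probe to each gallery class centroid
        dists = [np.linalg.norm(z - gallery[gallery_labels == c].mean(axis=0))
                 for c in classes]
        ranked = classes[np.argsort(dists)]
        for r in range(max_rank):
            if true_label in ranked[:r + 1]:
                hits[r] += 1
    return hits / len(probe_labels)

# Rank-1 and Rank-5 rates as reported in Table 1:
# cms = cumulative_match_scores(...); rank1, rank5 = cms[0], cms[4]
```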
Table 1. Recognition rates of Rank 1 and Rank 5

                                       CASIA dataset A            CASIA dataset B
Method                               Rank 1 (%)  Rank 5 (%)     Rank 1 (%)  Rank 5 (%)
GEI+PCA [13]                            88.04       92.37          85.41       90.31
GEI+LDA [14]                            89.20       93.50          86.94       90.37
GEI+LPP [15]                            90.12       94.37          87.07       91.53
GEI+DCV [8]                             88.07       92.78          83.39       87.66
GEI+LBP+DCV [9]                         86.16       90.38          82.44       87.53
GCI-GEI+KFDA (proposed algorithm)       93.08      100.00          90.78       93.98
As can be seen from Table 1, the average recognition rate of our algorithm is the best. The proposed algorithm achieves more than 90% recognition rate at Rank 1 and Rank 5, which shows that the method based on GCI-GEI and KFDA can better distinguish different human gaits and obtains a better recognition effect than the other algorithms on small-sample gait databases. Furthermore, the experimental data show that the recognition rate of GEI+LBP+DCV is lower than that of GEI+PCA, which indicates that reducing the dimension by PCA after LBP feature extraction from the GEI loses important gait recognition information.
3.2 Experiments on USF Human ID Database
A database consisting of 1,870 sequences of 122 subjects, based on the USF HumanID database [23], is divided into 1 set for training and 10 probe sets labeled from A to J for testing, based on 3 covariates: normal, walking, and carrying conditions. Being different from the experiments on CASIA, this experiment adopts the weighted mean recognition rate to evaluate the results. To demonstrate the advantages of KFDA, we choose the traditional algorithms LDA, PCA, and DCV, and the manifold learning algorithm LPP for comparison. Table 2 reports the recognition rates of this group of experiments on the 10 probes.
Table 2. Recognition rates (%) of different algorithms on the 10 USF probes

Method                                A    B    C    D    E    F    G    H    I    J   Weighted mean
GEI+PCA                              80   87   72   26   22   11   13   47   40   37       39.25
GEI+LDA                              88   89   74   25   25   15   20   52   53   56       45.93
GEI+LPP                              88   91   76   29   32   21   23   53   52   53       47.49
GEI+DCV                              87   93   76   30   35   21   23   53   57   53       48.33
GCI-GEI+KFDA (proposed algorithm)    90   87   80   41   48   27   28   72   63   63       55.64
As can be seen from Table 2, KFDA achieves a higher recognition rate than the traditional GEI approach with different dimensionality-reduction algorithms including PCA, LDA, LPP, and DCV; KFDA features present more discriminating power than the original features. Compared with the other algorithms, the GCI-GEI+KFDA approach improves the recognition rate by about 6%. In addition, since the experiment covers normal, walking, and carrying conditions, the results demonstrate that the proposed algorithm can reduce the effects of walking and carrying conditions and has good robustness.
The different walking conditions of human movement make gait recognition more difficult. This paper proposes a new gait recognition algorithm based on GEI and KFDA. The algorithm first introduces a new feature extraction method based on natural gait cycles, and the observation vector set is constructed from features that combine the gait image and its leg-bounded region. Then the column-vector expansion of the GEI is used as the input to obtain the optimal subspace [TeX:] $$W_{o p t} \text { and } \alpha_{o p t},$$ and the projection on [TeX:] $$\alpha_{o p t}$$ is calculated from the GEI observation vectors. Third, the sample vectors are projected onto [TeX:] $$W_{o p t}$$ to obtain the training and testing features. Finally, the feasibility of the algorithm is verified by experiments on the CASIA and USF databases. Compared with 5 other existing algorithms, the results show that the proposed algorithm achieves a better recognition effect and better robustness on small-sample gait databases.
However, the algorithm has two limitations: first, only 5 other algorithms are compared with the proposed method in the experiments; second, the experimental verification is done only on the CASIA and USF databases. In future work, we will compare with more algorithms and use more publicly available databases to verify the effectiveness and robustness of the proposed algorithm on a larger scale.
Jun Huang
He received his B.S. and M.S. degrees in computer and application from Hunan University and Changjiang University in 1982 and 1992, respectively. He is currently with the College of Modern Science and Technology, China Jiliang University, as an associate professor of computer engineering. His research interests include network communication, digital image processing, and multimedia processing.
Xiuhui Wang
He received his Ph.D. in information and computing science from Zhejiang University in 2007. He is currently with the College of Information Engineering, China Jiliang University, as an associate professor of computer engineering. His research interests focus on computer graphics, computer vision, and computer networks.
Jun Wang
He received his Bachelor's degree from South China Agricultural University, China, in 2014. He is currently a master's student at the College of Information Engineering, China Jiliang University. His research focuses on pattern recognition and computer vision.
1. Q. Yang, D. Xue, "Gait recognition based on sparse representation and segmented frame difference energy image," Information and Control, vol. 42, no. 1, pp. 27-32, 2013. doi: 10.3724/SP.J.1219.2013.00027
2. N. V. Boulgouris, D. Hatzinakos, K. N. Plataniotis, "Gait recognition: a challenging signal processing technology for biometric identification," IEEE Signal Processing Magazine, vol. 22, no. 6, pp. 78-90, 2005. doi: 10.1109/msp.2005.1550191
3. D. Xu, Y. Huang, Z. Zeng, X. Xu, "Human gait recognition using patch distribution feature and locality-constrained group sparse representation," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 316-326, 2011. doi: 10.1109/TIP.2011.2160956
4. N. V. Boulgouris, Z. X. Chi, "Gait recognition using radon transform and linear discriminant analysis," IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 731-740, 2007. doi: 10.1109/TIP.2007.891157
5. K. Wang, T. Yan, Z. Lu, "Feature level fusion method based on the coupled metric learning and its application in gait recognition," Journal of Southeast University (Natural Science), vol. 43, no. S1, pp. 7-10, 2013. doi: 10.3969/j.issn.1001-0505.2013.S1.002
6. J. Huang, Z. Yi, X. Wang, H. Wu, "Gait recognition system based on (2D)2 PCA and HMM," in Proceedings of the 8th International Conference on Digital Image Processing (ICDIP), Chengdu, China, 2016.
7. F. X. Guan, K. J. Wang, J. Y. Liu, H. Ma, "Bi-direction weighted (2D)2 PCA with eigenvalue normalization one for finger vein recognition," Pattern Recognition and Artificial Intelligence, vol. 24, no. 3, pp. 417-424, 2011.
8. X. Yang, J. Dai, Y. Zhou, J. Yang, "Gabor-based discriminative common vectors for gait recognition," in Proceedings of 2008 Congress on Image and Signal Processing, Hainan, China, 2008, pp. 191-195.
9. Z. Liu, G. Feng, W. Chen, "Gait recognition based on local binary pattern and discriminant common vector," Computer Science, vol. 40, no. 9, pp. 262-265, 2013.
10. R. Atta, S. Shaheen, M. Ghanbari, "Human identification based on temporal lifting using 5/3 wavelet filters and radon transform," Pattern Recognition, vol. 69, pp. 213-224, 2017. doi: 10.1016/j.patcog.2017.04.015
11. X. Z. Liu, C. G. Zhang, "Fisher discriminant analysis based on kernel cuboid for face recognition," Soft Computing, vol. 20, no. 3, pp. 831-840, 2016. doi: 10.1007/s00500-015-1794-2